Lisp2Perl - Lisp to perl compiler

Discussion in 'Perl Misc' started by David, Jan 17, 2004.

  1. David

    David Guest

    Hi,

    Is anyone here at all interested in programs to translate lisp into
    perl? I see there's a lisp interpreter on CPAN but I have implemented
    something a bit different.

    Basically it's a program which can translate lisp programs (written in a
    rather scheme-like dialect - even has a single namespace) into efficient
    perl scripts. Perl has enough lisp like features to be able to implement
    most things which can be done in lisp (lexical closures - hurrah!).
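[For illustration - a minimal sketch of the lexical closures being celebrated here, as they look in Perl; `make_counter` is a made-up name, not part of lisp2perl:]

```perl
use strict;
use warnings;

# make_counter returns an anonymous sub that closes over $count -
# the same lexical-closure behaviour a Lisp (lambda ...) relies on
sub make_counter {
    my ($count) = @_;
    return sub { return $count++ };
}

my $c = make_counter(10);
print $c->(), "\n";   # 10
print $c->(), "\n";   # 11
```

Each call advances the captured $count, so the state lives in the closure rather than in a package variable.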

    This lisp2perl translator/compiler works rather well. I've used it for a
    major project at work. It's also self hosting (used to compile itself).

    I've put a bit more info, though not much, and the source on my website:-

    http://www.hhdave.pwp.blueyonder.co.uk

    Any thoughts or comments anyone?

    -- David
     
    David, Jan 17, 2004
    #1

  2. David

    Uri Guttman Guest

    >>>>> "D" == David <> writes:

    D> This lisp2perl translator/compiler works rather well. I've used it
    D> for a major project at work. It's also self hosting (used to
    D> compile itself).

    D> http://www.hhdave.pwp.blueyonder.co.uk

    D> Any thoughts or comments anyone?

    can it translate the entire emacs lisp library? will it integrate with
    emacs? there is a perl in emacs port that sorta worked a bit. i think
    the author dropped it a few years ago. if you integrate it with that and
    make it work on lisp emacs, i bet more than a few emacs users would love
    to break their lisp shackles.

    uri

    --
    Uri Guttman ------ -------- http://www.stemsystems.com
    --Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
    Search or Offer Perl Jobs ---------------------------- http://jobs.perl.org
     
    Uri Guttman, Jan 17, 2004
    #2

  3. David

    Thomas F. Burdick Guest

    David <> writes:

    > Hi,
    >
    > Is anyone here at all interested in programs to translate lisp into
    > perl? I see there's a lisp interpreter on CPAN but I have implemented
    > something a bit different.
    >
    > Basically it's a program which can translate lisp programs (written in a
    > rather scheme-like dialect - even has a single namespace) into efficient
    > perl scripts. Perl has enough lisp like features to be able to implement
    > most things which can be done in lisp (lexical closures - hurrah!).


    I have a grungy Lisp->Perl compiler. The major goal was to produce
    readable Perl code, that looked like it might as well have been
    written by a human, and could be maintained by a normal Perl hacker.
    The lowest-level Lisp dialect is pretty much Perl-in-Lisp, then there
    are enough macros and functions built on top of it to make it fairly
    Lispy.

    I have to say, I'm horrified that you're thinking of a scheme-like
    Lisp. Perl itself has 4 namespaces, and I preserved that at my Lisp
    level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
    inferencing engine generally takes care of picking the correct
    namespace for you). Of course, my Lisp->Perl is hosted on Common
    Lisp, so it was a pretty natural choice.

    > This lisp2perl translator/compiler works rather well. I've used it for a
    > major project at work. It's also self hosting (used to compile itself).
    >
    > I've put a bit more info, though not much, and the source on my website:-
    >
    > http://www.hhdave.pwp.blueyonder.co.uk
    >
    > Any thoughts or comments anyone?


    - Since you're really compiling Scheme to Perl, I'd think scheme2perl
    would be a better name. You might as well pick the most specific
    name (otherwise someone might think you mean elisp, for example :)

    - If you're at all interested in readability of the resulting Perl,
    don't offer gensym, just make-symbol (I accompanied mine with a
    with-gensyms macro that takes the symbol names from the vars that
    are bound to the new symbols). The compiler has to pay attention
    to scoping anyway, so you can have it resolve the names. Hmm, that
    wasn't a very comprehensible sentence, was it? How about an
    example:

    (let ((a (make-symbol "a"))
          (b (make-symbol "a"))
          (c (make-symbol "a")))
      `(let ((,a 0))
         (let ((,b 100))
           (setf ,a (+ ,b 1)))
         (let ((,c "hi"))
           (print ,c))
         (print ,a)))
    => (let ((#1=#:a 0))
         (let ((#2=#:a 100))
           (setf #1# (+ #2# 1)))
         (let ((#3=#:a "hi"))
           (print #3#))
         (print #1#))
    => { my $a = 0;
         my $a2 = 100;
         $a = $a2 + 1;
         {
           my $a = "hi";
           print $a;
         }
         print $a; }

    - I'm not sure how you can get better undefined-function error
    reporting if you use the scheme approach. If you use multiple
    namespaces in your Lisp, the function namespace maps directly from
    Lisp to Perl, so you get normal Perl error reporting.

    One thing you might consider here is losing the Lisp1-ness, but
    keeping the Scheme-like handling (normal evaluation) of the first
    position in a form. IE:

    (defun foo (x) ...)
    (let ((foo (lambda () ...)))
      ((complement (function foo)) ((var foo))))

    For an unadorned symbol in the first position, you could have the
    compiler infer the namespace based on the innermost lexical
    binding. EG:

    (let ((x 1))
      (foo x)
      (let ((foo (lambda (x) ...)))
        (foo x)))
    <==>
    (let ((x 1))
      ((function foo) x)
      (let ((foo (lambda (x) ...)))
        ((var foo) x)))
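[For illustration - Perl itself exhibits the function/scalar namespace split that the FUNCTION and VAR forms above compile into; a minimal sketch with hypothetical names foo and $foo:]

```perl
use strict;
use warnings;

# a sub in the function namespace and a code ref in the scalar
# namespace can share the name "foo" without clashing
sub foo { return "function-namespace foo" }
my $foo = sub { return "scalar-namespace foo" };

print foo(), "\n";      # calls the package sub, like (function foo)
print $foo->(), "\n";   # calls through the lexical, like (var foo)
```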

    --
    /|_ .-----------------------.
    ,' .\ / | No to Imperialist war |
    ,--' _,' | Wage class war! |
    / / `-----------------------'
    ( -. |
    | ) |
    (`-. '--.)
    `. )----'
     
    Thomas F. Burdick, Jan 17, 2004
    #3
  4. David

    Ben Morrow Guest

    Uri Guttman <> wrote:
    > >>>>> "D" == David <> writes:

    >
    > D> This lisp2perl translator/compiler works rather well.
    >
    > can it translate the entire emacs lisp library? will it integrate with
    > emacs? there is a perl in emacs port that sorta worked a bit. i think
    > the author dropped it a few years ago. if you integrate it with that and
    > make it work on lisp emacs, i bet more than a few emacs users would love
    > to break their lisp shackles.


    PleasePleasePleasePleasePlease :)

    I love the functional concepts in Perl (I really can't see how any
    language manages without closures :) but Lisp just makes my eyes go
    funny.

    Ben

    --
    For the last month, a large number of PSNs in the Arpa[Inter-]net have been
    reporting symptoms of congestion ... These reports have been accompanied by an
    increasing number of user complaints ... As of June,... the Arpanet contained
    47 nodes and 63 links. [ftp://rtfm.mit.edu/pub/arpaprob.txt] *
     
    Ben Morrow, Jan 18, 2004
    #4
  5. David

    David Guest

    Thomas F. Burdick wrote:
    > David <> writes:
    >
    >
    >>Hi,
    >>
    >>Is anyone here at all interested in programs to translate lisp into
    >>perl? I see there's a lisp interpreter on CPAN but I have implemented
    >>something a bit different.
    >>
    >>Basically it's a program which can translate lisp programs (written in a
    >>rather scheme-like dialect - even has a single namespace) into efficient
    >>perl scripts. Perl has enough lisp like features to be able to implement
    >>most things which can be done in lisp (lexical closures - hurrah!).

    >
    >
    > I have a grungy Lisp->Perl compiler. The major goal was to produce
    > readable Perl code, that looked like it might as well have been
    > written by a human, and could be maintained by a normal Perl hacker.
    > The lowest-level Lisp dialect is pretty much Perl-in-Lisp, then there
    > are enough macros and functions built on top of it to make it fairly
    > Lispy.
    >

    Producing readable, hackable perl code was not one of the goals of my
    program (as you might have guessed if you've seen any of the output!). I
    had sort of figured that when you started using macros of any degree of
    complexity then, whether you are translating to perl or not, you
    wouldn't want to see the entirety of the macro expanded code anyway.
    This is certainly the case with one largeish program I've written - the
    amount of expanded code is enormous compared to the un-macro-expanded
    code. Also, using (cond) statements, especially nested ones, produces
    horrible looking perl code. They compile down to lots of ternary ifs
    (a?b:c) in perl. The fact is, if I were programming in perl my program
    just wouldn't be structured like that, whereas in lisp it seems a
    natural thing to do.

    I guess readable code is nice to have if you are 'supposed to be'
    programming in perl, but really want to use Lisp :)

    I'd be very interested in seeing the source code to your program if I may.

    > I have to say, I'm horrified that you're thinking of a scheme-like
    > Lisp. Perl itself has 4 namespaces, and I preserved that at my Lisp
    > level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
    > inferencing engine generally takes care of picking the correct
    > namespace for you). Of course, my Lisp->Perl is hosted on Common
    > Lisp, so it was a pretty natural choice.
    >

    I didn't intend it to be horrific! (well, not _too_ horrific :)
    I know it seems an odd thing to use a 1 namespace language to translate
    to a 4 namespace language. The reason I'm doing the scheme thing as
    opposed to the common lisp thing is that I just kind of like the 1
    namespace approach. It seems to be a lot simpler and removes the
    necessity for ways of dealing with different kinds of bindings. I guess
    that's just personal preference. It does produce rather odd looking perl
    code (lots of '$fn->(...)'), but as I say, I don't really care about
    that. As long as it executes fast enough. I don't know if $fn->()
    executes any slower than &fn() - haven't checked.
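[One way to check would be the core Benchmark module - a rough sketch; `add` is a made-up sub, and the relative numbers will vary by perl version:]

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

sub add { return $_[0] + $_[1] }
my $add = \&add;              # the '$fn->(...)' style the compiler emits

die unless add(2, 3) == $add->(2, 3);   # same answer either way

# time a direct named call against a call through a code ref
cmpthese(500_000, {
    named    => sub { add(2, 3) },
    code_ref => sub { $add->(2, 3) },
});
```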

    The other thing I find, with hashes and arrays and such, is that half
    the time in perl I end up using references to those things stored in
    scalars anyway. Particularly when I need nested structures.
    >
    >>This lisp2perl translator/compiler works rather well. I've used it for a
    >>major project at work. It's also self hosting (used to compile itself).
    >>
    >>I've put a bit more info, though not much, and the source on my website:-
    >>
    >>http://www.hhdave.pwp.blueyonder.co.uk
    >>
    >>Any thoughts or comments anyone?

    >
    >
    > - Since you're really compiling Scheme to Perl, I'd think scheme2perl
    > would be a better name. You might as well pick the most specific
    > name (otherwise someone might think you mean elisp, for example :)
    >

    Yeah, I know. It's just that, although it is scheme-like, it certainly
    isn't scheme. It doesn't conform to the standard. It has unschemish
    concepts of truth, for example, and the fundamental datatypes don't
    behave as they should. 'lisp' is a less specific term though,
    encompassing a multitude of similar, but distinct, things. It's its own
    peculiar dialect of lisp :)

    > - If you're at all interested in readability of the resulting Perl,
    > don't offer gensym, just make-symbol (I accompanied mine with a
    > with-gensyms macro that takes the symbol names from the vars that
    > are bound to the new symbols). The compiler has to pay attention
    > to scoping anyway, so you can have it resolve the names. Hmm, that
    > wasn't a very comprehensible sentence, was it? How about an
    > example:
    >
    > (let ((a (make-symbol "a"))
    >       (b (make-symbol "a"))
    >       (c (make-symbol "a")))
    >   `(let ((,a 0))
    >      (let ((,b 100))
    >        (setf ,a (+ ,b 1)))
    >      (let ((,c "hi"))
    >        (print ,c))
    >      (print ,a)))
    > => (let ((#1=#:a 0))
    >      (let ((#2=#:a 100))
    >        (setf #1# (+ #2# 1)))
    >      (let ((#3=#:a "hi"))
    >        (print #3#))
    >      (print #1#))
    > => { my $a = 0;
    >      my $a2 = 100;
    >      $a = $a2 + 1;
    >      {
    >        my $a = "hi";
    >        print $a;
    >      }
    >      print $a; }
    >

    I'm not quite sure I understand this at the moment. I'll have to think
    about it some more. If the generated perl looked like that wouldn't it
    clash with a variable called 'a'? I know I'm probably being thick here.

    > - I'm not sure how you can get better undefined-function error
    > reporting if you use the scheme approach. If you use multiple
    > namespaces in your Lisp, the function namespace maps directly from
    > Lisp to Perl, so you get normal Perl error reporting.
    >

    I know, and it would seem the obvious thing to do wouldn't it? I still
    like the single namespace though, but fear not - I thought of a solution
    [me: looks up solution in files of notes about the program...]
    Ah, here we go: I'm planning to modify the compiler to keep track of
    lexical scope. That way, when it compiles a reference to an undefined
    variable it should know and generate a warning about it. This may be
    related to the gensym issue above. I guess generating symbols can be
    done if I keep track of scope. I'll have to think about that some more.

    Incidentally, can you think of a good argument AGAINST the scheme single
    namespace approach?

    > One thing you might consider here is losing the Lisp1-ness, but
    > keeping the Scheme-like handling (normal evaluation) of the first
    > position in a form. IE:
    >
    > (defun foo (x) ...)
    > (let ((foo (lambda () ...)))
    >   ((complement (function foo)) ((var foo))))
    >
    > For an unadorned symbol in the first position, you could have the
    > compiler infer the namespace based on the innermost lexical
    > binding. EG:
    >
    > (let ((x 1))
    >   (foo x)
    >   (let ((foo (lambda (x) ...)))
    >     (foo x)))
    > <==>
    > (let ((x 1))
    >   ((function foo) x)
    >   (let ((foo (lambda (x) ...)))
    >     ((var foo) x)))
    >

    That is a thought. I suppose it would make it play nicer with 'normal'
    perl code. It would be particularly useful for using built in functions.
    At the moment I have to 'declare' those:-

    (<perl-sub> print)
    which expands to a (defmacro ...)

    I guess the first thing to do in any case is to extend the compilation
    functions so that they keep track of lexical scope.
     
    David, Jan 18, 2004
    #5
  6. David

    Thomas F. Burdick Guest

    David <> writes:

    > Producing readable, hackable perl code was not one of the goals of my
    > program (as you might have guessed if you've seen any of the output!). I
    > had sort of figured that when you started using macros of any degree of
    > complexity then, whether you are translating to perl or not, you
    > wouldn't want to see the entirety of the macro expanded code anyway.
    > This is certainly the case with one largeish program I've written - the
    > amount of expanded code is enormous compared to the un-macro-expanded
    > code.


    It does take some care when writing macros, but it's pretty much the
    same as always being in "I might have to debug this macro" mode. If
    you use meaningful variable names, and prune unused branches, it helps
    a lot. Plus, if you allow the macros to insert comments into the
    resulting code, the volume of Perl code isn't so daunting (kinda like
    if you wrote it by hand).

    > Also, using (cond) statements, especially nested ones, produces
    > horrible looking perl code. They compile down to lots of ternary ifs
    > (a?b:c) in perl. The fact is, if I were programming in perl my program
    > just wouldn't be structured like that, whereas in lisp it seems a
    > natural thing to do.


    Right, whereas in Lisp you might write (setf x (cond ...)), in Perl,
    you'd write:

    if (...) {... $x = 1;}
    elsif (...) {... $x = 2;}
    else {... $x = 3;}

    In my compiler, the translators for compound expressions have three
    modes: producing code for side-effect only, producing code used for
    its value, and producing code to put the value in a specific location.
    So, depending on context, cond might expand into if/elsif/else, or
    a ternary if, or, if it's complicated and/or nested, a call to an
    anonymous lambda:

    (sub { if (...) {... return 1;}
           elsif (...) {... return 2;}
           else {... return 3;} })->();

    > I guess readable code is nice to have if you are 'supposed to be'
    > programming in perl, but really want to use Lisp :)


    Well, it wasn't so much "supposed to be", as much as the final product
    had to be in Perl, so it would be easy to find someone later to
    maintain it. No one had any problem with me using whatever expert
    development tools I wanted, as long as the output was maintainable
    as-is. But, pretty much, yeah :)

    > I'd be very interested in seeing the source code to your program if I may.


    I'm sitting on it, pending my thinking about how much time/effort it
    would take to make it useful to the general public, and if there's a
    market for it or not. And it's a mess of unfactored hacks, because I
    was concentrating on the systems I was supposed to be writing, not the
    compiler itself.

    > Thomas F. Burdick wrote:
    >
    > > I have to say, I'm horrified that you're thinking of a scheme-like
    > > Lisp. Perl itself has 4 namespaces, and I preserved that at my Lisp
    > > level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
    > > inferencing engine generally takes care of picking the correct
    > > namespace for you). Of course, my Lisp->Perl is hosted on Common
    > > Lisp, so it was a pretty natural choice.

    >
    > I didn't intend it to be horrific! (well, not _too_ horrific :)


    Horrific because it's a Lisp->Perl compiler, but not because of the
    namespace issue? :)

    > I know it seems an odd thing to use a 1 namespace language to translate
    > to a 4 namespace language. The reason I'm doing the scheme thing as
    > opposed to the common lisp thing is that I just kind of like the 1
    > namespace approach. It seems to be a lot simpler and removes the
    > necessity for ways of dealing with different kinds of bindings. I guess
    > that's just personal preference. It does produce rather odd looking perl
    > code (lots of '$fn->(...)'), but as I say, I don't really care about
    > that. As long as it executes fast enough. I don't know if $fn->()
    > executes any slower than &fn() - haven't checked.


    I'd imagine it is, but I wouldn't sweat an added indirection when
    you're talking about a bytecode interpreter.

    > The other thing I find, with hashes and arrays and such, is that half
    > the time in perl I end up using references to those things stored in
    > scalars anyway. Particularly when I need nested structures.


    Certainly, references to hashes especially are important, in
    particular for supporting defstruct. But if you want to interact with
    Perl builtins, being able to spread arrays is important. I guess you
    don't need all 4 namespaces for that, it just makes the resulting Perl
    less crazy-looking.

    > >>This lisp2perl translator/compiler works rather well. I've used it for a
    > >>major project at work. It's also self hosting (used to compile itself).


    Oooh, just noticed this. I'm glad I didn't try to go that route, I
    was happy to have all of Common Lisp at my disposal when writing my
    compiler. You might want to reconsider this decision, if you find
    yourself having implementation difficulties -- compilers are a lot
    easier to write in big languages (like CL, or one of the big scheme
    implementations' dialects with all the add-ons).

    > > - If you're at all interested in readability of the resulting Perl,
    > > don't offer gensym, just make-symbol (I accompanied mine with a
    > > with-gensyms macro that takes the symbol names from the vars that
    > > are bound to the new symbols). The compiler has to pay attention
    > > to scoping anyway, so you can have it resolve the names. Hmm, that
    > > wasn't a very comprehensible sentence, was it? How about an
    > > example:
    > >
    > > (let ((a (make-symbol "a"))
    > >       (b (make-symbol "a"))
    > >       (c (make-symbol "a")))
    > >   `(let ((,a 0))
    > >      (let ((,b 100))
    > >        (setf ,a (+ ,b 1)))
    > >      (let ((,c "hi"))
    > >        (print ,c))
    > >      (print ,a)))
    > > => (let ((#1=#:a 0))
    > >      (let ((#2=#:a 100))
    > >        (setf #1# (+ #2# 1)))
    > >      (let ((#3=#:a "hi"))
    > >        (print #3#))
    > >      (print #1#))
    > > => { my $a = 0;
    > >      my $a2 = 100;
    > >      $a = $a2 + 1;
    > >      {
    > >        my $a = "hi";
    > >        print $a;
    > >      }
    > >      print $a; }

    >
    > I'm not quite sure I understand this at the moment. I'll have to think
    > about it some more. If the generated perl looked like that wouldn't it
    > clash with a variable called 'a'? I know I'm probably being thick here.


    The point is that a naive translation would be:

    { my $a1 = 0;
      { my $a2 = 100;
        $a1 = $a2 + 1; # clash
        { my $a3 = "hi";
          print $a3;
        }
      }
      print $a1;
    }

    If you named $a1, $a2, and $a3 all just $a, it would work, except for
    the line labeled "clash", which refers to an $a from two different
    scoping levels. So you can name $a1 and $a3 plain old $a, and only
    need to give $a2 a distinct name. In $a3's scope, it is the only $a
    variable used.

    > > - I'm not sure how you can get better undefined-function error
    > > reporting if you use the scheme approach. If you use multiple
    > > namespaces in your Lisp, the function namespace maps directly from
    > > Lisp to Perl, so you get normal Perl error reporting.

    >
    > I know, and it would seem the obvious thing to do wouldn't it? I still
    > like the single namespace though, but fear not - I thought of a solution
    > [me: looks up solution in files of notes about the program...]
    > Ah, here we go: I'm planning to modify the compiler to keep track of
    > lexical scope. That way, when it compiles a reference to an undefined
    > variable it should know and generate a warning about it. This may be
    > related to the gensym issue above. I guess generating symbols can be
    > done if I keep track of scope. I'll have to think about that some more.


    Yeah, they're definitely related.

    > Incidentally, can you think of a good argument AGAINST the scheme single
    > namespace approach?


    You get to it yourself in a second :)

    > > One thing you might consider here is losing the Lisp1-ness, but
    > > keeping the Scheme-like handling (normal evaluation) of the first
    > > position in a form.

    [snip]
    > That is a thought. I suppose it would make it play nicer with 'normal'
    > perl code. It would be particularly useful for using built in functions.
    > At the moment I have to 'declare' those:-
    >
    > (<perl-sub> print)
    > which expands to a (defmacro ...)


    Yeah, that's a benefit of recognizing at least the function and
    variable namespaces. That way, you can easily use normal Perl
    functions, and your functions aren't second-class citizens (eg, you
    can write a module that Perl coders can use directly, normally).

    > I guess the first thing to do in any case is to extend the compilation
    > functions so that they keep track of lexical scope.


    That is the traditional thing to do when writing scheme compilers :)

    --
    /|_ .-----------------------.
    ,' .\ / | No to Imperialist war |
    ,--' _,' | Wage class war! |
    / / `-----------------------'
    ( -. |
    | ) |
    (`-. '--.)
    `. )----'
     
    Thomas F. Burdick, Jan 19, 2004
    #6
  7. David

    David Guest

    Thomas F. Burdick wrote:

    > David <> writes:
    >
    >
    >>Producing readable, hackable perl code was not one of the goals of my
    >>program (as you might have guessed if you've seen any of the output!). I
    >>had sort of figured that when you started using macros of any degree of
    >>complexity then, whether you are translating to perl or not, you
    >>wouldn't want to see the entirety of the macro expanded code anyway.
    >>This is certainly the case with one largeish program I've written - the
    >>amount of expanded code is enormous compared to the un-macro-expanded
    >>code.

    >
    >
    > It does take some care when writing macros, but it's pretty much the
    > same as always being in "I might have to debug this macro" mode. If
    > you use meaningful variable names, and prune unused branches, it helps
    > a lot. Plus, if you allow the macros to insert comments into the
    > resulting code, the volume of Perl code isn't so daunting (kinda like
    > if you wrote it by hand).
    >
    >
    >>Also, using (cond) statements, especially nested ones, produces
    >>horrible looking perl code. They compile down to lots of ternary ifs
    >>(a?b:c) in perl. The fact is, if I were programming in perl my program
    >>just wouldn't be structured like that, whereas in lisp it seems a
    >>natural thing to do.

    >
    >
    > Right, whereas in Lisp you might write (setf x (cond ...)), in Perl,
    > you'd write:
    >
    > if (...) {... $x = 1;}
    > elsif (...) {... $x = 2;}
    > else {... $x = 3;}
    >
    > In my compiler, the translators for compound expressions have three
    > modes: producing code for side-effect only, producing code used for
    > its value, and producing code to put the value in a specific location.
    > So, depending on context, cond might expand into if/elsif/else, or
    > a ternary if, or, if it's complicated and/or nested, a call to an
    > anonymous lambda:
    >
    > (sub { if (...) {... return 1;}
    >        elsif (...) {... return 2;}
    >        else {... return 3;} })->();
    >
    >

    Sounds much like what I did, except that I had 2 modes, not 3. I did
    have to create an anonymous sub (lambda) in the perl in certain cases
    and immediately eval it. That was cunning, I thought. My compiler
    doesn't do that in the case above though. I actually noticed (I think)
    that ifs which were forced to compile to the ternary operator (since
    the value was used) were somewhat faster than the expression (side
    effect only) ones too. I suppose that's due to not creating a new
    scope in perl.
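[The two compilations being compared might look like this in Perl - a sketch of the value-position (ternary) form against a block form that opens a fresh scope; the variables are made up:]

```perl
use strict;
use warnings;

my $c = 1;

# value position: the ternary the compiler emits when the result is used
my $x = $c ? "yes" : "no";

# statement position: an if inside a do block, which creates (and has
# to tear down) a new lexical scope each time
my $y = do { if ($c) { "yes" } else { "no" } };

print "$x $y\n";   # yes yes
```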

    >>I guess readable code is nice to have if you are 'supposed to be'
    >>programming in perl, but really want to use Lisp :)

    >
    >
    > Well, it wasn't so much "supposed to be", as much as the final product
    > had to be in Perl, so it would be easy to find someone later to
    > maintain it. No one had any problem with me using whatever expert
    > development tools I wanted, as long as the output was maintainable
    > as-is. But, pretty much, yeah :)
    >
    >

    Fair enough. It was good that you managed to find a way of still using
    lisp despite the requirement for maintainability by a perl programmer.

    >>I'd be very interested in seeing the source code to your program if I may.

    >
    >
    > I'm sitting on it, pending my thinking about how much time/effort it
    > would take to make it useful to the general public, and if there's a
    > market for it or not. And it's a mess of unfactored hacks, because I
    > was concentrating on the systems I was supposed to be writing, not the
    > compiler itself.
    >
    >

    I know the feeling. There are lots of improvements I want to make to
    mine as well. For one thing I want to change the way the whole thing
    works as I've used a bit of a hack to get things like this to work:-

    (define (f x)
      ...some function of x...)

    (defmacro my-macro (a b)
      (list 'foo a (f b)))

    Basically, in order that the macro can use the function defined before
    it, the function must be evaluated at compile time as well as at
    runtime. I think the root of this problem is that I want to be able to
    translate the lisp code into a perl script which can be executed in
    the normal way. I don't want to have to read in the lisp code and
    translate and execute each form one by one. This means that the macro
    definitions must be executed at compile time (obviously), hence
    function definitions must be executed at compile time. I don't think
    there should really be such a distinction between compile time and
    runtime. I have found a solution though, which lies in having an
    intermediate lisp representation - just fully macro-expanded lisp code
    - which is generated as a side effect of 'running' a lisp file. This
    means that you can't just compile the lisp to perl as such; you have
    to run it, and it gets compiled (partially) as a side effect. Separate
    modules could then be linked, and fully translated to perl, later. Of
    course, running a file is not a problem if all it does is define
    things (functions and macros).
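[Perl has a direct analogue of this compile-time/runtime split in its BEGIN blocks, which run while the file is still being compiled - a minimal sketch:]

```perl
use strict;
use warnings;

our @phases;

# BEGIN runs as soon as perl finishes compiling the block, before any
# ordinary runtime statement below it - much like a macro definition
# having to be evaluated at compile time
BEGIN { push @phases, 'compile time' }

push @phases, 'run time';

print join(', ', @phases), "\n";   # compile time, run time
```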

    Does this seem sane?

    Even though lisp2perl isn't nearly where I'd like it to be I decided to
    just release it anyway. Where I work, even though I use this and people
    see its benefits, I think it would have been frowned upon if I'd spent
    loads of time on it instead of what I was using it for (what a
    surprise). I wrote much of it at home.

    >>Thomas F. Burdick wrote:
    >>
    >>
    >>>I have to say, I'm horrified that you're thinking of a scheme-like
    >>>Lisp. Perl itself has 4 namespaces, and I preserved that at my Lisp
    >>>level (special-forms called FUNCTION, SCALAR, HASH, and ARRAY, but an
    >>>inferencing engine generally takes care of picking the correct
    >>>namespace for you). Of course, my Lisp->Perl is hosted on Common
    >>>Lisp, so it was a pretty natural choice.

    >>
    >>I didn't intend it to be horrific! (well, not _too_ horrific :)

    >
    >
    > Horrific because it's a Lisp->Perl compiler, but not because of the
    > namespace issue? :)
    >
    >

    Well, quite. There is something 'unsettling' about the concept. Really
    I'm looking forward to the release of Perl 6, because then I could just
    compile lisp to parrot vm code and still get the benefits of using perl
    code from lisp. That doesn't help your perl generation of course. I'm
    surprised to find that someone else has done something similar.

    >>I know it seems an odd thing to use a 1 namespace language to translate
    >>to a 4 namespace language. The reason I'm doing the scheme thing as
    >>opposed to the common lisp thing is that I just kind of like the 1
    >>namespace approach. It seems to be a lot simpler and removes the
    >>necessity for ways of dealing with different kinds of bindings. I guess
    >>that's just personal preference. It does produce rather odd looking perl
    >>code (lots of '$fn->(...)'), but as I say, I don't really care about
    >>that. As long as it executes fast enough. I don't know if $fn->()
    >>executes any slower than &fn() - haven't checked.

    >
    >
    > I'd imagine it is, but I wouldn't sweat an added indirection when
    > you're talking about a bytecode interpreter.
    >
    >
    >>The other thing I find, with hashes and arrays and such, is that half
    >>the time in perl I end up using references to those things stored in
    >>scalars anyway. Particularly when I need nested structures.

    >
    >
    > Certainly, references to hashes especially are important, in
    > particular for supporting defstruct. But if you want to interact with
    > Perl builtins, being able to spread arrays is important. I guess you
    > don't need all 4 namespaces for that, it just makes the resulting Perl
    > less crazy-looking.
    >
    >

    I know what you mean. Interacting with built-ins is something I haven't
    really solved nicely yet.

    >>>>This lisp2perl translator/compiler works rather well. I've used it for a
    >>>>major project at work. It's also self hosting (used to compile itself).

    >
    >
    > Oooh, just noticed this. I'm glad I didn't try to go that route, I
    > was happy to have all of Common Lisp at my disposal when writing my
    > compiler. You might want to reconsider this decision, if you find
    > yourself having implementation difficulties -- compilers are a lot
    > easier to write in big languages (like CL, or one of the big scheme
    > implementations' dialects with all the add-ons).
    >
    >
    >>> - If you're at all interested in readability of the resulting Perl,
    >>> don't offer gensym, just make-symbol (I accompanied mine with a
    >>> with-gensyms macro that takes the symbol names from the vars that
    >>> are bound to the new symbols). The compiler has to pay attention
    >>> to scoping anyway, so you can have it resolve the names. Hmm, that
    >>> wasn't a very comprehensible sentence, was it? How about an
    >>> example:
    >>>
    >>> (let ((a (make-symbol "a"))
    >>>       (b (make-symbol "a"))
    >>>       (c (make-symbol "a")))
    >>>   `(let ((,a 0))
    >>>      (let ((,b 100))
    >>>        (setf ,a (+ ,b 1)))
    >>>      (let ((,c "hi"))
    >>>        (print ,c))
    >>>      (print ,a)))
    >>> => (let ((#1=#:a 0))
    >>>      (let ((#2=#:a 100))
    >>>        (setf #1# (+ #2# 1)))
    >>>      (let ((#3=#:a "hi"))
    >>>        (print #3#))
    >>>      (print #1#))
    >>> => { my $a = 0;
    >>>      my $a2 = 100;
    >>>      $a = $a2 + 1;
    >>>      {
    >>>        my $a = "hi";
    >>>        print $a;
    >>>      }
    >>>      print $a; }

    >>
    >>I'm not quite sure I understand this at the moment. I'll have to think
    >>about it some more. If the generated perl looked like that wouldn't it
    >>clash with a variable called 'a'? I know I'm probably being thick here.

    >
    >
    > The point is that a naive translation would be:
    >
    > { my $a1 = 0;
    >   { my $a2 = 100;
    >     $a1 = $a2 + 1; # clash
    >     { my $a3 = "hi";
    >       print $a3;
    >     }
    >   }
    >   print $a1;
    > }
    >
    > If you named $a1, $a2, and $a3 all just $a, it would work, except for
    > the line labeled "clash", which refers to an $a from two different
    > scoping levels. So you can name $a1 and $a3 plain old $a, and only
    > need to give $a2 a distinct name. In $a3's scope, it is the only $a
    > variable used.
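    The renaming scheme can be sketched as a small name allocator: reuse the
    symbol's print name when nothing visible already uses it, and append a
    counter only on a clash. This is a hypothetical helper, not code from
    either compiler:

    ```perl
    use strict;
    use warnings;

    my %counter;
    # $visible is a hashref of the names already bound in the enclosing scope.
    sub alloc_name {
        my ($base, $visible) = @_;
        return $base unless $visible->{$base};
        my $name;
        do { $name = $base . ++$counter{$base} } while $visible->{$name};
        return $name;
    }

    # The three #:a gensyms from the example, in one flat scope:
    my %scope;
    for (1 .. 3) {
        my $n = alloc_name('a', \%scope);
        $scope{$n} = 1;
        print "$n\n";    # prints a, a1, a2
    }
    ```

    Tracking %scope per lexical level, rather than as one flat hash, is what
    would let a name like $a be reused in disjoint scopes as in the example.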
    >
    >
    >>> - I'm not sure how you can get better undefined-function error
    >>> reporting if you use the scheme approach. If you use multiple
    >>> namespaces in your Lisp, the function namespace maps directly from
    >>> Lisp to Perl, so you get normal Perl error reporting.

    >>
    >>I know, and it would seem the obvious thing to do wouldn't it? I still
    >>like the single namespace though, but fear not - I thought of a solution
    >>[me: looks up solution in files of notes about the program...]
    >>Ah, here we go: I'm planning to modify the compiler to keep track of
    >>lexical scope. That way, when it compiles a reference to an undefined
    >>variable it should know and generate a warning about it. This may be
    >>related to the gensym issue above. I guess generating symbols can be
    >>done if I keep track of scope. I'll have to think about that some more.

    >
    >
    > Yeah, they're def related.
    >
    >
    >>Incidentally, can you think of a good argument AGAINST the scheme single
    >>namespace approach?

    >
    >
    > You get to it yourself in a second :)
    >
    >
    >>> One thing you might consider here is losing the Lisp1-ness, but
    >>> keeping the Scheme-like handling (normal evaluation) of the first
    >>> position in a form.

    >
    > [snip]
    >
    >>That is a thought. I suppose it would make it play nicer with 'normal'
    >>perl code. It would be particularly useful for using built in functions.
    >>At the moment I have to 'declare' those:-
    >>
    >>(<perl-sub> print)
    >>which expands to a (defmacro ...)

    >
    >
    > Yeah, that's a benefit of recognizing at least the function and
    > variable namespaces. That way, you can easily use normal Perl
    > functions, and your functions aren't second-class citizens (eg, you
    > can write a module that Perl coders can use directly, normally).
    >
    >

    Well yes, I know _that_ benefit of the 2 namespace approach. But I mean,
    forgetting about perl (ie if we were compiling to something else) can
    you think of any benefit of 2 namespaces?

    More sane integration with normal perl code would definitely be good.
    I'll do something about it if/when I get chance. I just wish I had more
    time to work on it. Ho hum.
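    For illustration, the two approaches might emit Perl along these lines (a
    hypothetical sketch, not output from either compiler):

    ```perl
    use strict;
    use warnings;

    # Lisp-1 style: one namespace, so functions live in scalars and
    # every call goes through a code reference.
    my $square = sub { my ($x) = @_; $x * $x };
    print $square->(4), "\n";    # 16

    # Lisp-2 style: functions map onto Perl's own sub namespace, so
    # ordinary Perl code can call them directly.
    sub square { my ($x) = @_; $x * $x }
    print square(4), "\n";       # 16
    ```

    The second form is what lets a compiled module look like a normal Perl
    module to its callers.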

    >>I guess the first thing to do in any case is to extend the compilation
    >>functions so that they keep track of lexical scope.

    >
    >
    > That is the traditional thing to do when writing scheme compilers :)
    >

    Yeah, sounds like a good idea. I'm new at this :) It's a slightly unusual
    compilation target too.
     
    David, Jan 20, 2004
    #7
  8. David wrote:

    > Well, quite. There is something 'unsettling' about the concept. Really
    > I'm looking forward to the release of Perl 6, because then I could just
    > compile lisp to parrot vm code and still get the benefits of using perl
    > code from lisp. That doesn't help your perl generation of course. I'm
    > surprised to find that someone else has done something similar.


    Never say never:

    <http://okmij.org/ftp/Scheme/Scheme-in-Perl.txt>

    <http://www.venge.net/graydon/scm2p.html>

    --
    Jens Axel Søgaard
     
    Jens Axel Søgaard, Jan 20, 2004
    #8
  9. Lars Brinkhoff, Jan 20, 2004
    #9
  10. David <> writes:

    > Thomas F. Burdick wrote:
    >
    > > I'm sitting on it, pending my thinking about how much time/effort it
    > > would take to make it useful to the general public, and if there's a
    > > market for it or not. And it's a mess of unfactored hacks, because I
    > > was concentrating on the systems I was supposed to be writing, not the
    > > compiler itself.

    >
    > I know the feeling. There are lots of improvements I want to make to
    > mine as well. For one thing I want to change the way the whole thing
    > works as I've used a bit of a hack to get things like this to work:-
    >
    > (define (f x)
    >   ...some function of x...)
    >
    > (defmacro my-macro (a b)
    >   (list 'foo a (f b)))
    >
    > Basically, in order that the macro can use the function defined before
    > it, the function must be evaluated at compile time as well as at runtime.


    Since your system is self-hosting, that shouldn't be a problem. Do
    you have something like Common Lisp's eval-when? If not, you should
    read the page in the spec

    http://www.lispworks.com/reference/HyperSpec/Body/s_eval_w.htm#eval-when

    especially the Notes section at the bottom. In CL, forms like defun
    and defmacro (when they're toplevel) cause the function to be known at
    compile time by using eval-when. It also lets you write your own
    forms of this type.

    > I
    > think the root of this problem is that I want to be able to translate
    > the lisp code into a perl script which can be executed in the normal
    > way. I don't want to have to read in the lisp code and translate and
    > execute each form one by one. This means that the macro definitions must
    > be executed at compile time (obviously), hence function definitions must
    > be executed at compile time. I don't think there should really be such a
    > distinction between compile time and runtime.


    Actually, I think the answer is probably having a clearer concept of
    time(s) in your Lisp dialect. When the compiler sees a toplevel form
    that macroexpands into (eval-when (compile) ...), it should
    recursively invoke itself, to deal with the eval-when, evaluate that
    in the Perl interpreter, then get back to the job at hand.

    > I have found a solution
    > though, which lies in having an intermediate lisp representation which
    > is just fully macro expanded lisp code which is generated as a side
    > effect of 'running' a lisp file. This means that you can't just compile
    > the lisp to perl as such, you have to run it, and it gets compiled
    > (partially) as a side effect. Separate modules could then be linked, and
    > fully translated to perl, later. Of course, running a file is not a
    > problem if all it does is define things (functions and macros).
    >
    > Does this seem sane?


    It seems messy. You should be able to start Perl, load all the
    functions that your macros use, compile the files defining and using
    them, then quit Perl and load the resulting .pl files. Using a
    CL-like concept of times (compile/macroexpand, load, and eval) would
    help keep things cleaner.

    (In my system, the macros are run in the hosting Common Lisp, so my
    issues were different. You define Perl functions with perlisp:defun,
    but they're not available at compile-time. Macros can use functions
    defined with common-lisp:defun).

    > > Horrific because it's a Lisp->Perl compiler, but not because of the
    > > namespace issue? :)

    >
    > Well, quite. There is something 'unsettling' about the concept. Really
    > I'm looking forward to the release of Perl 6


    I wouldn't hold my breath. And personally, given the
    backwards-incompatibilities of Perl's past, I hope it never comes.

    > > Yeah, that's a benefit of recognizing at least the function and
    > > variable namespaces. That way, you can easily use normal Perl
    > > functions, and your functions aren't second-class citizens (eg, you
    > > can write a module that Perl coders can use directly, normally).

    >
    > Well yes, I know _that_ benefit of the 2 namespace approach. But I mean,
    > forgetting about perl (ie if we were compiling to something else) can
    > you think of any benefit of 2 namespaces?


    Oh, certainly. In the realm of purely style issues, there's the
    nicety of being able to have a type (eg, cons) whose constructor
    function is the same as the type's name (cons), and being able to
    stick an instance of this type in a variable of the same name, while
    still being able to use the constructor function. Eg:

    (let ((cons (assoc 'foo alist)))
      (if cons
          (setf (cdr cons) 'bar)
          (setf alist (cons (cons 'foo 'bar) alist))))

    There are a lot of style issues like this. But the big reason I bring
    up something that I know Schemers would prefer to avoid, is that it's
    important for supporting defmacro-style macros. In a Lisp-1, you
    worry about binding some name with a let, and in the body of that let,
    having a macro expand into a call to that name. This is the problem
    that syntactic closures and Scheme's pattern-matching macros were
    invented to solve. In a Lisp-2, there is still a *possibility* of
    doing this, using flet/labels, but it's much less likely to happen.
    If you add a package system, like in CL, the likelihood of a user
    accidentally shadowing a global function definition is almost nil.
    It's essentially a non-issue.

    And remember, Lisp-2'ness is orthogonal to normal evaluation of the
    first element in forms. You could have a perfectly legit Lisp-2 that
    would allow

    (flet ((f (x) (lambda (y) (+ x y))))
      ((f 10) 5))

    > > That is the traditional thing to do when writing scheme compilers :)

    >
    > Yeah, sounds like a good idea. I'm new at this :) It's a slightly unusual
    > compilation target too.


    No kidding. I definitely get a kick out of the fact that someone else
    is doing it too. BTW, you might be interested to know about Linj, a
    Lisp->Java compiler that produces readable Java from a CL-like dialect
    of Lisp. It's not currently available, but its developer has given
    presentations/demos at the last two International Lisp Conferences.

    --
    /|_ .-----------------------.
    ,' .\ / | No to Imperialist war |
    ,--' _,' | Wage class war! |
    / / `-----------------------'
    ( -. |
    | ) |
    (`-. '--.)
    `. )----'
     
    Thomas F. Burdick, Jan 20, 2004
    #10
  11. On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:

    > This means that the macro definitions must be executed at compile time
    > (obviously) hence function definitions must be executed at compile time.
    > I don't think there should really be such a distinction between compile
    > time and runtime.


    I think there should. Please read
    <http://www.cs.utah.edu/plt/publications/macromod.pdf>

    --
    __("< Marcin Kowalczyk
    \__/
    ^^ http://qrnik.knm.org.pl/~qrczak/
     
    Marcin 'Qrczak' Kowalczyk, Jan 25, 2004
    #11
  12. Marcin 'Qrczak' Kowalczyk wrote:

    > On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
    >
    >>This means that the macro definitions must be executed at compile time
    >>(obviously) hence function definitions must be executed at compile time.
    >>I don't think there should really be such a distinction between compile
    >>time and runtime.

    >
    > I think there should. Please read
    > <http://www.cs.utah.edu/plt/publications/macromod.pdf>


    Oh dear. This is just another one of those bad examples of unbreakable
    abstractions. Unbreakability sucks.

    Generally speaking, layering or staging of approaches is probably a good
    idea. But there might be situations in which I need to mix up the
    layers/stages. If a proposed solution doesn't provide a back door for
    these things, it sucks IMHO.

    In general, computer scientists tend to confuse description and
    prescription. I can describe a good solution and explain why it works
    and what's good about it. But prescribing that everybody else should do
    it exactly the same is the wrong conclusion. Especially when you don't
    give them a way out.


    Pascal

    --
    Tyler: "How's that working out for you?"
    Jack: "Great."
    Tyler: "Keep it up, then."
     
    Pascal Costanza, Jan 26, 2004
    #12
  13. On Mon, 26 Jan 2004 22:18:20 +0100, Pascal Costanza wrote:

    > Generally speaking, layering or staging of approaches is probably a good
    > idea. But there might be situations in which I need to mix up the
    > layers/stages. If a proposed solution doesn't provide a back door for
    > these things, it sucks IMHO.


    The same module can be independently imported to multiple stages,
    so I don't see a problem.

    --
    __("< Marcin Kowalczyk
    \__/
    ^^ http://qrnik.knm.org.pl/~qrczak/
     
    Marcin 'Qrczak' Kowalczyk, Jan 26, 2004
    #13
  14. David

    Joe Marshall Guest

    Pascal Costanza <> writes:

    > Marcin 'Qrczak' Kowalczyk wrote:
    >
    >> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
    >>
    >>>This means that the macro definitions must be executed at compile time
    >>>(obviously) hence function definitions must be executed at compile time.
    >>>I don't think there should really be such a distinction between compile
    >>>time and runtime.

    >> I think there should. Please read
    >> <http://www.cs.utah.edu/plt/publications/macromod.pdf>

    >
    > Oh dear. This is just another one of those bad examples of unbreakable
    > abstractions. Unbreakability sucks.
    >
    > Generally speaking, layering or staging of approaches is probably a
    > good idea. But there might be situations in which I need to mix up the
    > layers/stages. If a proposed solution doesn't provide a back door for
    > these things, it sucks IMHO.


    In this case, you don't want to mix up the layers or stages. Really.

    The macro/module system that Matthew is describing is one that allows
    the system to determine what to load and when to load it in order to
    ensure that you have the macro-expansion code in place *before* you
    attempt to expand the macros that use it.

    Mixing up the layers or stages means that you want to have circular
    dependencies in your macro expansions, i.e., macro FOO needs the
    QUASIQUOTE library to expand, but the QUASIQUOTE library is written
    using the macro FOO.
     
    Joe Marshall, Jan 26, 2004
    #14
  15. Marcin 'Qrczak' Kowalczyk wrote:

    > On Mon, 26 Jan 2004 22:18:20 +0100, Pascal Costanza wrote:
    >
    >>Generally speaking, layering or staging of approaches is probably a good
    >>idea. But there might be situations in which I need to mix up the
    >>layers/stages. If a proposed solution doesn't provide a back door for
    >>these things, it sucks IMHO.

    >
    > The same module can be independently imported to multiple stages,
    > so I don't see a problem.


    I haven't analyzed the issues in all details, but someone reported to me
    that my implementation of dynamically scoped functions wouldn't be
    possible because of the separation of stages in that module system.

    I don't see how my code would cause any real problems, so if a module
    system doesn't allow me to write it, there's something wrong with that
    module system, and not with my code.

    See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that
    also includes the source code.


    Pascal

    --
    Tyler: "How's that working out for you?"
    Jack: "Great."
    Tyler: "Keep it up, then."
     
    Pascal Costanza, Jan 26, 2004
    #15
  16. Joe Marshall <> writes:

    > Pascal Costanza <> writes:
    >
    > > Marcin 'Qrczak' Kowalczyk wrote:
    > >
    > >> On Tue, 20 Jan 2004 00:18:17 +0000, David wrote:
    > >>
    > >>>This means that the macro definitions must be executed at compile time
    > >>>(obviously) hence function definitions must be executed at compile time.
    > >>>I don't think there should really be such a distinction between compile
    > >>>time and runtime.
    > >> I think there should. Please read
    > >> <http://www.cs.utah.edu/plt/publications/macromod.pdf>

    > >
    > > Oh dear. This is just another one of those bad examples of unbreakable
    > > abstractions. Unbreakability sucks.
    > >
    > > Generally speaking, layering or staging of approaches is probably a
    > > good idea. But there might be situations in which I need to mix up the
    > > layers/stages. If a proposed solution doesn't provide a back door for
    > > these things, it sucks IMHO.

    >
    > In this case, you don't want to mix up the layers or stages. Really.
    >
    > The macro/module system that Matthew is describing is one that allows
    > the system to determine what to load and when to load it in order to
    > ensure that you have the macro-expansion code in place *before* you
    > attempt to expand the macros that use it.
    >
    > Mixing up the layers or stages means that you want to have circular
    > dependencies in your macro expansions, i.e., macro FOO needs the
    > QUASIQUOTE library to expand, but the QUASIQUOTE library is written
    > using the macro FOO.


    I agree with Pascal: being able to go only forward in time really sucks.

    :)
     
    Matthias Blume, Jan 26, 2004
    #16
  17. David

    Joe Marshall Guest

    Matthias Blume <> writes:

    >
    > I agree with Pascal: being able to go only forward in time really sucks.


    For you, there's continuations.


    --
    ~jrm
     
    Joe Marshall, Jan 27, 2004
    #17
  18. David

    Joe Marshall Guest

    Pascal Costanza <> writes:

    > I haven't analyzed the issues in all details, but someone reported to
    > me that my implementation of dynamically scoped functions wouldn't be
    > possible because of the separation of stages in that module system.
    >
    > See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that
    > also includes the source code.


    From a quick perusal of your code, I see nothing in it that wouldn't
    easily work with the module system.
     
    Joe Marshall, Jan 27, 2004
    #18
  19. Joe Marshall <> writes:

    > Matthias Blume <> writes:
    >
    > >
    > > I agree with Pascal: being able to go only forward in time really sucks.

    >
    > For you, there's continuations.


    I just hope you are trying to continue (no pun intended) my attempt at
    being sarcastic...
     
    Matthias Blume, Jan 27, 2004
    #19
  20. Joe Marshall wrote:

    > Pascal Costanza <> writes:
    >
    >
    >>I haven't analyzed the issues in all details, but someone reported to
    >>me that my implementation of dynamically scoped functions wouldn't be
    >>possible because of the separation of stages in that module system.
    >>
    >>See my paper about DSF at http://www.pascalcostanza.de/dynfun.pdf that
    >>also includes the source code.

    >
    > From a quick perusal of your code, I see nothing in it that wouldn't
    > easily work with the module system.


    OK, I have checked my email archive to see what the issue was. Here it
    is: In MzScheme, there is obviously no way to access run-time values at
    compile time / macro expansion time. (I hope I have gotten the
    terminology right here.)

    This means that you can't say this:

    (define a 1)
    (define-macro (foo x) `(+ ,a ,x))
    (foo 2)

    This will result in a "reference to undefined identifier" error, because
    the foo macro's expansion can't see the run-time binding of a.

    In my implementation of dynamically scoped functions in Common Lisp, I
    make use of this collapsing of stages in order to be able to redefine
    existing functions as dynamically scoped functions, without changing
    their defined behavior. See my paper for an example how I turn a CLOS
    generic function into a dynamically scoped one.

    Even if I can go only forward in time, I like the fact that I can change
    the decisions I have made in the past. It escapes me why this should be
    an evil thing to do in programs.


    Pascal

    --
    Tyler: "How's that working out for you?"
    Jack: "Great."
    Tyler: "Keep it up, then."
     
    Pascal Costanza, Jan 27, 2004
    #20