the parens in Lisp dialects are there for a reason... macros.


Francis Cianfrocca

M. Edward (Ed) Borasky said:
> First of all, while a Turing machine is a great "experimental animal",
> our "practical computing machines" are Von Neumann machines rather than
> Turing machines. And Von Neumann machines derive from a model of a human
> working with a mechanical desk calculator. In fact, the people -- rooms
> full of people -- who operated them were called "computers". Properly
> speaking, a Von Neumann machine is an "electronic" computer or
> "automatic" computer -- a machine doing what people used to do.
>
> Second, I don't think the actual unification of Church and Turing models
> occurred *long* before digital computers existed. The basic Turing and
> Godel and Church papers were written in the early 1930s, and by the late
> 1930s there were working relay-based digital computers. Vacuum tube
> machines started showing up (in public, anyhow) in the early 1950s and
> in "war rooms" shortly after the end of WW II.


Ed, I'll certainly defer to the memory of a guy who was there ;-). I
wasn't, but here's my recollection of the history: You're absolutely
right about the big stuff being done by Turing and Church in the early
30s (and don't forget Godel's papers published in the middle of the
decade, although he was addressing different questions). Turing came to
Princeton on sabbatical for the 1937-38 academic year, and there he and
Church worked together, and they successfully unified Church's lambda
calculus with Turing's cellular-automaton-based universal machine that
winter.

By 1948, von Neumann had suggested what became known as the "von Neumann
architecture" (the stored-program digital computer), which is the model for
all of the "computers" we recognize as such today. The ten-year interval
between 1938 and 1948 is what I had in mind when I said "long before."

You're also right about the early impetus for mechanized calculation. It
was ballistic equations for artillery, and also the hydrodynamics of
chemical explosives for plutonium-based atomic bombs. The various
machines that were invented to solve these problems basically had
programs hard-wired into them, with solder. von Neumann's suggestion was
to store what later became called "software" directly into the "memory
cells" of the machines, and in so doing he invented the model that made
computers valuable for general use.

But note how completely reminiscent both models (the "ENIAC" style and
the stored-program style) are of classic Turing machines! Computer memory is
just a tessellation structure, and a computer program (even a
hard-soldered one) is a linear sequence of values corresponding
precisely to Turing's "tape."

The fact is that a Turing machine is described with components that not
only have direct physical analogs (his 29-state tape can easily be
modeled with five of what we now call "bits", and you will please pardon
the pun on "analogs"), but these physical analogs (electronic circuits)
turned out to be both easy to make and blazingly fast, once the
transistor came into wide use in the mid-Fifties. None of this is true
of a logical computing machine based on lambda-calculus. And that,
logically enough, is the reason why for the very longest time, languages
like Lisp suffered from horrible performance and incredible memory
requirements. And these problems have only recently been solved, but
only by extreme cleverness on the part of compiler writers.

Whereas almost 50 years ago, John Backus and his crew already had
Fortran compilers that could execute numerical methods at a pretty high
fraction of the available machine speed. (Interestingly, according to
Backus' notes, the first Fortran compiler took 18 man-years to build,
and *most* of that time went into hand-writing the front-end, a job that
became automated and near-trivial by the mid-Seventies.)

When I was a kid, I spent all my allowances buying TTL gates and
hand-wiring digital logic. Then when I first got my hands on a computer,
the process of learning machine language and then assembler was
near-instantaneous, because I knew exactly what those instructions were
doing physically in the hardware. Memory-addressing and pointers made
complete sense from the first moment. I was shocked when I first
encountered C, years later, that it was possible to program computers at
such a high level! My point is that putting values into memory cells,
moving them around, and adding and subtracting them is not only the
easiest known way to build a useful (Turing-complete) computer, but also
may be the easiest way for most people to model computations. As an
economic proposition, then, regardless of all this theory and history,
I'm no longer interested in FP for mainstream use. Ruby, on the other
hand, is a step in the right direction.

Aside: I used to think the fad that swept our university CS departments
10 years or so ago of teaching only Scheme was extraordinarily wasteful
and damaging. They thought that teaching people how to understand
computation using the "more-pure" functional style was preferable to
just teaching people how to program. Well, compared to what they
replaced it with (algorithm cookbooks in Java), it may not have been so
bad.

But now that I've declared Lisp to be a fascinating part of computing's
past (<dons asbestos underwear />), the question is: what lies ahead? It
would be charitable to compare our current computer technology, and the
applications we put it to, to the model T and the candlestick telephone.
The more interesting challenge is to go beyond the solved problem of
Turing-complete computations, and figure out how to linguistically and
metaphorically express the large-scale information-driven processes that
humans work with every day. Someone upthread started hinting at that.
There are already some beautiful but flawed examples of this new
thinking (Erlang for one, the original conception of Smalltalk for
another), but it's still very, very early.
 

Simen Edvardsen

Francis Cianfrocca said:
> Whereas almost 50 years ago, John Backus and his crew already had
> Fortran compilers that could execute numerical methods at a pretty high
> fraction of the available machine speed. (Interestingly, according to
> Backus' notes, the first Fortran compiler took 18 man-years to build,
> and *most* of that time went into hand-writing the front-end, a job that
> became automated and near-trivial by the mid-Seventies.)
>
> When I was a kid, I spent all my allowances buying TTL gates and
> hand-wiring digital logic. Then when I first got my hands on a computer,
> the process of learning machine language and then assembler was
> near-instantaneous, because I knew exactly what those instructions were
> doing physically in the hardware. Memory-addressing and pointers made
> complete sense from the first moment. I was shocked when I first
> encountered C, years later, that it was possible to program computers at
> such a high level! My point is that putting values into memory cells,
> moving them around, and adding and subtracting them is not only the
> easiest known way to build a useful (Turing-complete) computer, but also
> may be the easiest way for most people to model computations. As an
> economic proposition, then, regardless of all this theory and history,
> I'm no longer interested in FP for mainstream use. Ruby, on the other
> hand, is a step in the right direction.

If the "natural" way of modeling computation is that of moving around
bits in memory and doing trivial operations on them, we wouldn't be
building all these kinds of fancy abstractions, now would we? Also,
Ruby is full of higher order functions, so saying FP is not for
mainstream use but Ruby is a step in the right direction seems a bit
odd.
> Aside: I used to think the fad that swept our university CS departments
> 10 years or so ago of teaching only Scheme was extraordinarily wasteful
> and damaging. They thought that teaching people how to understand
> computation using the "more-pure" functional style was preferable to
> just teaching people how to program. Well, compared to what they
> replaced it with (algorithm cookbooks in Java), it may not have been so
> bad.

Only teaching one language is probably not the way to go, but I see
Scheme as an excellent beginner language. It contains a simple core
and yet a bunch of powerful abstractions that are useful to learn. It
doesn't lock someone into a single mindset. Java, on the other hand,
is probably making it more difficult for people to grasp the essence
of programming, by making them think that anything that doesn't have a
hundred classes is a failure.
> But now that I've declared Lisp to be a fascinating part of computing's
> past (<dons asbestos underwear />), the question is: what lies ahead? It
> would be charitable to compare our current computer technology, and the
> applications we put it to, to the model T and the candlestick telephone.
> The more interesting challenge is to go beyond the solved problem of
> Turing-complete computations, and figure out how to linguistically and
> metaphorically express the large-scale information-driven processes that
> humans work with every day. Someone upthread started hinting at that.
> There are already some beautiful but flawed examples of this new
> thinking (Erlang for one, the original conception of Smalltalk for
> another), but it's still very, very early.

There's always room for improvement. Should people not have invented
the clock because someone had solved the problem with an hourglass or
whatever?
 

Francis Cianfrocca

Simen Edvardsen said:
> If the "natural" way of modeling computation is that of moving around
> bits in memory and doing trivial operations on them, we wouldn't be
> building all these kinds of fancy abstractions, now would we? Also,
> Ruby is full of higher order functions, so saying FP is not for
> mainstream use but Ruby is a step in the right direction seems a bit
> odd.

Give that a bit more thought. Most of the "fancy abstractions" are
attempts to map aspects of particular problem domains into language
constructs. And what are the typical things we do with our fancy
abstractions? They are mostly about *transport* and communications. We
create purchase orders and store them in databases, then we email them
to vendors. We make accounting calculations and then generate reports.
We send email and instant messages to each other. These operations are
undertaken primarily for the side effects they generate (or
alternatively, the persistent changes they make to the computational
environment). Constructing mathematical fields to solve these
problems rather than scripting them imperatively is arguably a less
natural approach. True functional languages do not have side effects
at all, or they are considered hybrid if they do (SML for example).

Now you and many others may argue with much justification that true FP
is just as valid for constructing nontrivial programs and in many ways
better. Fair enough. But in the real world, I've become convinced that
more programmers will do "better" work (where "better" is defined in
economic terms as high quality at low cost) with imperative languages.
And the reasons why are certainly of interest, but only of theoretical
interest.

Ruby's "higher order functions": aren't you confusing the true
functional style with the simple ability of Ruby (and many other
imperative languages) to treat functions and closures as objects?
 

Simen Edvardsen

Francis Cianfrocca said:
>> If the "natural" way of modeling computation is that of moving around
>> bits in memory and doing trivial operations on them, we wouldn't be
>> building all these kinds of fancy abstractions, now would we? Also,
>> Ruby is full of higher order functions, so saying FP is not for
>> mainstream use but Ruby is a step in the right direction seems a bit
>> odd.
>
> Give that a bit more thought. Most of the "fancy abstractions" are
> attempts to map aspects of particular problem domains into language
> constructs. And what are the typical things we do with our fancy
> abstractions? They are mostly about *transport* and communications. We
> create purchase orders and store them in databases, then we email them
> to vendors. We make accounting calculations and then generate reports.
> We send email and instant messages to each other. These operations are
> undertaken primarily for the side effects they generate (or
> alternatively, the persistent changes they make to the computational
> environment). Constructing mathematical fields to solve these
> problems rather than scripting them imperatively is arguably a less
> natural approach. True functional languages do not have side effects
> at all, or they are considered hybrid if they do (SML for example).

Understanding why something works and how it works and how we might
make it work more easily seems worth making a science about. For the
record, I find adding complexity to satisfy mathematics at best an
academic exercise.
> Now you and many others may argue with much justification that true FP
> is just as valid for constructing nontrivial programs and in many ways
> better. Fair enough. But in the real world, I've become convinced that
> more programmers will do "better" work (where "better" is defined in
> economic terms as high quality at low cost) with imperative languages.
> And the reasons why are certainly of interest, but only of theoretical
> interest.

Pure FP seems to complicate making complex systems. I've been trying
to learn Haskell, but lifting values in and out of layers of monads to
achieve simple state seems needlessly complicated. I believe Clean has
an alternative solution to state in a pure functional language -- I
don't know if it's any better. Anyway, my point is that non-pure FP
languages produce shorter and more readable -- more "pure" and
"elegant" if you like -- solutions to many problems. I'm no hardcore
FP advocate. There is no one solution that fits all problems.
> Ruby's "higher order functions": aren't you confusing the true
> functional style with the simple ability of Ruby (and many other
> imperative languages) to treat functions and closures as objects?

True, they're not really mathematical functions, but how are map, each,
inject, etc. not higher-order functions? Ruby favors functions (well,
methods) that take blocks (aka closures). The definition of a higher-order
function from Wikipedia:

In mathematics and computer science, higher-order functions or
functionals are functions which do at least one of the following:

* take one or more functions as an input
* output a function

According to this definition, Ruby's standard library is filled with
higher order functions, as are the libraries of Ocaml, Scheme etc.
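Both halves of that definition are easy to demonstrate in Ruby itself. A minimal sketch (the `adder` method name is mine, not something from the thread):

```ruby
# 1. Functions (well, methods) that take a function -- here, a block -- as input:
doubled = [1, 2, 3].map    { |n| n * 2 }        # => [2, 4, 6]
summed  = [1, 2, 3].inject { |acc, n| acc + n } # => 6

# 2. A method that returns a function: the lambda closes over x.
def adder(x)
  lambda { |y| x + y }
end

add_five = adder(5)
add_five.call(3) # => 8
```

By the Wikipedia definition, `map` and `inject` qualify under the first clause and `adder` under the second, even though none of them are "pure" in the mathematical sense.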
 

Francis Cianfrocca

Simen said:
> In mathematics and computer science, higher-order functions or
> functionals are functions which do at least one of the following:
>
> * take one or more functions as an input
> * output a function
>
> According to this definition, Ruby's standard library is filled with
> higher order functions, as are the libraries of Ocaml, Scheme etc.


This comment is extremely interesting. It is actually *not* the case
that Ruby's "functions" are functions in the mathematical sense. They
are actually objects that point to blocks of executable code or to
closures. They have the ability to execute instructions that modify the
contents of memory or produce other side effects so as to make visible
the results of operations that simulate mathematical functions.

But true functional languages directly expose composable mathematical
functions (mappings from possibly infinite domains to ranges). Pure
functional languages don't expose any side effects at all. They are
extremely effective at modeling computations (I once wrote a complete,
correct qsort in 7 lines of ML) but not very good at executing
communications or data transport tasks, which are all about side
effects.
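The poster's qsort was in ML, which isn't quoted here; but the purely applicative style being described can be approximated in Ruby as a sketch of the idiom:

```ruby
# A side-effect-free quicksort: no assignment to existing cells, no
# mutation of the input -- each call just constructs new lists.
def qsort(list)
  return [] if list.empty?
  pivot, *rest = list
  qsort(rest.select { |x| x < pivot }) +
    [pivot] +
    qsort(rest.reject { |x| x < pivot })
end

qsort([3, 1, 4, 1, 5, 9, 2, 6]) # => [1, 1, 2, 3, 4, 5, 6, 9]
```

Note what's absent: no loop counters, no in-place swaps, no writable "little boxes" -- which is exactly the mental shift the imperative-trained programmers in this thread find difficult.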

I've often been mystified when people talk about Ruby's metaprogramming,
closures, lambdas, etc. as constituting a "functional" programming
style. Your comment is striking because it made me realize that perhaps
many people who learn languages like Scheme are actually trying to apply
the imperative, Turing-style computational model to languages that don't
support them. And that reinforces all the more my intuition that pure FP
is not well-suited for mainstream professional programming.

Years ago, I spent a lot of time trying to teach professional
programmers to grasp the functional style. And most people couldn't even
let go of the concept of variables as little boxes holding readable and
writable values. That's when I gave up on FP.
 

ara.t.howard

Francis Cianfrocca said:
> This comment is extremely interesting. It is actually *not* the case
> that Ruby's "functions" are functions in the mathematical sense. They
> are actually objects that point to blocks of executable code or to
> closures. They have the ability to execute instructions that modify the
> contents of memory or produce other side effects so as to make visible
> the results of operations that simulate mathematical functions.
>
> But true functional languages directly expose composable mathematical
> functions (mappings from possibly infinite domains to ranges). Pure
> functional languages don't expose any side effects at all. They are
> extremely effective at modeling computations (I once wrote a complete,
> correct qsort in 7 lines of ML) but not very good at executing
> communications or data transport tasks, which are all about side
> effects.
>
> I've often been mystified when people talk about Ruby's metaprogramming,
> closures, lambdas, etc. as constituting a "functional" programming
> style. Your comment is striking because it made me realize that perhaps
> many people who learn languages like Scheme are actually trying to apply
> the imperative, Turing-style computational model to languages that don't
> support them. And that reinforces all the more my intuition that pure FP
> is not well-suited for mainstream professional programming.
>
> Years ago, I spent a lot of time trying to teach professional
> programmers to grasp the functional style. And most people couldn't even
> let go of the concept of variables as little boxes holding readable and
> writable values. That's when I gave up on FP.

interesting. but then you have ocaml, which is a bastard of oop/fp and
imperative features - what's wrong there?

cheers.

-a
 

Daniel Martin

Francis Cianfrocca said:
> And my (unproved) intuition is that
> mathematically restricted DSLs (whether they are Lisp programs, Ruby
> programs, graphical widgets, physical constructions, or any other such
> equivalent thing) may be so much easier to tool that they can give a
> vast productivity improvement.

I believe that there are certainly programming constructs that can be
easier to reason about than the standard type of programming, but I'm
not sure how ease-of-reasoning relates to Turing completeness. For
instance, Haskell is certainly Turing-complete as a language, yet
chunks of Haskell code seem like they should be very easy to reason
about. On the other hand, to take a pathological example, Befunge-93
is not Turing-complete, but I doubt it's any easier to reason about
than, say, Brainfuck, which is Turing complete.

I have a theory that a big advantage to tool makers is the ability to
perform some sort of detailed type inference, where the types that can
be inferred are useful in the problem domain. (e.g. it does no good
to be able to infer a type of "int", when what you really need to know
is which of twenty different types all typedef'ed to int this is
supposed to be) So far as I can tell, the ease of useful type
inference is at least a different question from whether the language
being considered is Turing complete, and may in fact be completely
orthogonal to the question of Turing completeness.
 

Simen Edvardsen

Francis Cianfrocca said:
> This comment is extremely interesting. It is actually *not* the case
> that Ruby's "functions" are functions in the mathematical sense. They
> are actually objects that point to blocks of executable code or to
> closures. They have the ability to execute instructions that modify the
> contents of memory or produce other side effects so as to make visible
> the results of operations that simulate mathematical functions.

Yes, in the post you quoted I said that Ruby functions are not true
mathematical functions. My point still holds: functions, or methods or
whatever you'd like to call them, that operate on closures (regardless
of whether the closures are themselves purely functional) are a very
functional idea. Whether you call it a "higher-order method" or
something else, it's still a functional idea.
> But true functional languages directly expose composable mathematical
> functions (mappings from possibly infinite domains to ranges). Pure
> functional languages don't expose any side effects at all. They are
> extremely effective at modeling computations (I once wrote a complete,
> correct qsort in 7 lines of ML) but not very good at executing
> communications or data transport tasks, which are all about side
> effects.

What about Erlang? It's not purely functional, but it certainly favors
a functional style and is used extensively at tasks that seem exactly
like the ones you describe.
> I've often been mystified when people talk about Ruby's metaprogramming,
> closures, lambdas, etc. as constituting a "functional" programming
> style. Your comment is striking because it made me realize that perhaps
> many people who learn languages like Scheme are actually trying to apply
> the imperative, Turing-style computational model to languages that don't
> support them. And that reinforces all the more my intuition that pure FP
> is not well-suited for mainstream professional programming.

Perhaps not pure FP. I can agree with you on that. But a functional
style and ideas like closures, higher-order functions, pattern
matching etc. are all good tools which the mainstream could benefit
greatly from.
> Years ago, I spent a lot of time trying to teach professional
> programmers to grasp the functional style. And most people couldn't even
> let go of the concept of variables as little boxes holding readable and
> writable values. That's when I gave up on FP.

But as long as *you* understand FP, there's no reason to give up on
it. Surely there are other intelligent programmers out there who
understand it too. I'm not saying we should abandon imperative
constructs altogether, as you seem to imply.
 

Chad Perrin

Francis Cianfrocca said:
> Ruby's "higher order functions": aren't you confusing the true
> functional style with the simple ability of Ruby (and many other
> imperative languages) to treat functions and closures as objects?

Minor tangential quibble:
A closure, in essence, *is an object* by definition (definition of
"object", that is). Perhaps Jonathan Rees' menu of OOP features would
help make my point: http://mumble.net/~jar/articles/oo.html

I think what you mean is something more like "treat closures as the type
of object specified by a language's non-closure-centered object model",
which is not quite the same thing.
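Rees' point can be sketched in a few lines of Ruby: a closure carries private state plus behavior, which is most of what "object" usually means. (The `make_counter` example is mine, not from his article.)

```ruby
# A closure is state plus behavior: the lambda below privately owns
# `count`, which no other code can read or write directly.
def make_counter
  count = 0
  lambda { count += 1 }
end

counter = make_counter
counter.call # => 1
counter.call # => 2
```

Squint and `make_counter` is a class, `count` an instance variable, and `call` a method -- without any participation from Ruby's class-based object model.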
 

Chad Perrin

Francis Cianfrocca said:
> I've often been mystified when people talk about Ruby's metaprogramming,
> closures, lambdas, etc. as constituting a "functional" programming
> style. Your comment is striking because it made me realize that perhaps
> many people who learn languages like Scheme are actually trying to apply
> the imperative, Turing-style computational model to languages that don't
> support them. And that reinforces all the more my intuition that pure FP
> is not well-suited for mainstream professional programming.

1. A functional "style" is not the same as a functional "language".
The adoption of certain coding techniques that arose in functional
languages for use in a language that is more object oriented and
imperative in syntax in no way changes the fact that those techniques
come from a programming style that arose due in large part to a
functional syntax.

2. I agree that *pure* functional programming is not well-suited to
mainstream professional computing, but then, neither is *pure*
imperative programming. That doesn't change the fact that a whole lot
of functional programming syntactic characteristics can be crammed into
a language before passing the point of diminishing returns. In fact, I
think that something as simple as teaching students to use prefix
arithmetic notation in grade school would turn the tables such that
functional programming syntax would probably be by far the "most
natural" to nascent programmers by quite a wide margin.
 

M. Edward (Ed) Borasky

M. Edward (Ed) Borasky said:
> My apologies if someone has posted this already -- I just received it:
>
> http://www.americanscientist.org/template/AssetDetail/assetid/51982
>
> "The Semicolon Wars -- Every programmer knows there is one true
> programming language. A new one every week" by Brian Hayes
> <http://www.americanscientist.org/template/AuthorDetail/authorid/490;jsessionid=baagDBWk-NRVHf>
Yet another mirroring of our discussions on languages. This one has
some good things to say about Ruby :)

http://t-a-w.blogspot.com/2006/07/programming-language-of-2010.html
 

Francis Cianfrocca

Chad said:
> 1. A functional "style" is not the same as a functional "language".
> The adoption of certain coding techniques that arose in functional
> languages for use in a language that is more object oriented and
> imperative in syntax in no way changes the fact that those techniques
> come from a programming style that arose due in large part to a
> functional syntax.
>
> 2. I agree that *pure* functional programming is not well-suited to
> mainstream professional computing, but then, neither is *pure*
> imperative programming. That doesn't change the fact that a whole lot
> of functional programming syntactic characteristics can be crammed into
> a language before passing the point of diminishing returns. In fact, I
> think that something as simple as teaching students to use prefix
> arithmetic notation in grade school would turn the tables such that
> functional programming syntax would probably be by far the "most
> natural" to nascent programmers by quite a wide margin.


This thread has probably gone on long enough ;-) but as I read it, what
you're saying here amounts simply to "let's take what's good from each
approach and apply it generally." Can't argue with that.

I find it very revealing that you see the primary (borrowable) value in
functional languages as coming primarily from *syntactic* elements.
Again I may be misreading you, but you can get syntactic innovations
from anywhere. The thing that really distinguishes functional programs
is that you can reason mathematically (and one hopes, automatically)
about their correctness, rather than being stuck with the whole sorry
round of unit-test/debug/regress/repeat that we use with Ruby and every
other non-functional language. Apart from the reference to Ada/Diana (a
sub-language with a rigorous formal semantics), I haven't heard anyone
on this thread try to relate this benefit of functional languages to
non-functional ones.

To answer someone else, the ability to automatically process programs is
why I'm interested in mini-languages (DSLs) written in Ruby that are
non-Turing complete, because they may be decidable, if designed
properly. That opens the way (in theory) to the kind of automatic
correctness analyzers that you see in pure functional programming. The
potential economic benefit is staggering. What proportion of your
programming time do you spend debugging today?
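As a rough illustration of the idea (a purely hypothetical toy, not a design from the thread), an internal Ruby DSL can restrict its users to declarations, so that a separate checker could analyze the result without solving the halting problem -- though, as with any internal DSL, the block can still smuggle in arbitrary Ruby:

```ruby
# A declaration-only "workflow" DSL: the spec block is expected to call
# nothing but `step`, so the collected structure is trivially analyzable.
class Workflow
  attr_reader :steps

  def initialize(&spec)
    @steps = []
    instance_eval(&spec)  # run the block with `self` set to this Workflow
  end

  def step(name)
    @steps << name
  end
end

w = Workflow.new do
  step :validate
  step :persist
  step :notify
end

w.steps # => [:validate, :persist, :notify]
```

A checker that only ever inspects `w.steps` is reasoning about a finite list of symbols, not about a Turing-complete program -- which is the decidability payoff being described, under the caveat that nothing here *enforces* the restriction.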

If people see the distinctions between "functional" and "non-functional"
languages purely in syntactic terms, then this whole discussion is
really no deeper than yet another Ruby vs. Python debate.
 

Simen Edvardsen

Francis Cianfrocca said:
> This thread has probably gone on long enough ;-)

Probably.


> I find it very revealing that you see the primary (borrowable) value in
> functional languages as coming primarily from *syntactic* elements.
> Again I may be misreading you, but you can get syntactic innovations
> from anywhere. The thing that really distinguishes functional programs
> is that you can reason mathematically (and one hopes, automatically)
> about their correctness, rather than being stuck with the whole sorry
> round of unit-test/debug/regress/repeat that we use with Ruby and every
> other non-functional language. Apart from the reference to Ada/Diana (a
> sub-language with a rigorous formal semantics), I haven't heard anyone
> on this thread try to relate this benefit of functional languages to
> non-functional ones.

You can reason mathematically about the correctness of a pure FP
program, but not about the time and effort it took to create it. I see
productivity and maintainability as better arguments for using
functional languages.
> To answer someone else, the ability to automatically process programs is
> why I'm interested in mini-languages (DSLs) written in Ruby that are
> non-Turing complete, because they may be decidable, if designed
> properly.

Any internal DSL has access to the whole of the parent language, and
so it cannot be non-Turing-complete as long as the parent language
is. Restricting access to features of the parent language to get this
automatic correctness tester or whatever to work seems plain silly.
Besides, there's no mathematical notation for a programmer's intent,
and probably never will be, so a processor will probably never be able
to prove that a program does what its author intended it to do.
> If people see the distinctions between "functional" and "non-functional"
> languages purely in syntactic terms, then this whole discussion is
> really no deeper than yet another Ruby vs. Python debate.

True.
 

M. Edward (Ed) Borasky

Simen said:
> Probably.
Well ... I've sort of stood back the past day, except for spotting some
semi-hemi-demi-quasi relevant web links courtesy of the ACM and
del.icio.us. :) But the fundamental discussion here is highly
interesting, and certainly relevant if Ruby is to be a "significantly
better language" than its scripting cousins Perl, Python and PHP, or
other programming languages in general. Among the *popular* languages,
Ruby seems to be more "computer-science-aware" than any of the others.

As I've noted, I got interested in proving programs correct in the early
1970s, when I was in graduate school but not as a computer science
student. I continued that interest, collecting books on the subject and
attempting to introduce the concepts at my workplace when I became a
professional programmer again upon my escape from graduate school. I
still have most of the books, although I haven't looked at them recently.

I still look at the literature on this subject occasionally, although
I'm much more interested in the literature on the Markov/queuing/Petri
net/performance modeling arena of computer science now than the
"correctness" arena.

I don't think it's necessarily either functional syntax or semantics
that makes this possible, but restrictions on the size of programs,
strong and static typing, restrictions on non-reversible operations like
assignment, etc. Yes, Dijkstra's maxim, "Testing can only show the
presence of defects, never their absence," is true. But I have seen
(tiny) FORTRAN programs proved correct, (tiny) Lisp programs proved
correct, and as I noted a post or two ago, I think the original effort
applied to a program expressed as a (small) flowchart.
> Any internal DSL has access to the whole of the parent language, and
> so it cannot be non-Turing-complete as long as the parent language
> is. Restricting access to features of the parent language to get this
> automatic correctness tester or whatever to work seems plain silly.
> Besides, there's no mathematical notation for a programmer's intent,
> and probably never will be, so a processor will probably never be able
> to prove that a program does what its author intended it to do.

Well ...

1. No programmer I've ever known has stood by idly while someone took
Turing completeness away from him or her, either deliberately or
accidentally. :)
2. Part of the whole "science" of proving programs correct is designing
ways for the customer to formally specify what a program must do to be
correct. A processor not only has to know what the programmer intended
but what the customer wanted as well. :)

In any event, the academic thread that appears to have the strongest
legs these days in the whole realm of proving reliability of our
software is the Pi-calculus. Most of what I've seen (which is the
abstract of one paper!) is in reference to Erlang. See
http://www.erlang.se/workshop/2006/. I'm not at all familiar with Erlang
the language, although I'm quite familiar with Erlang the queueing
theorist. :) I downloaded and installed it, though.
 

Francis Cianfrocca

1. No programmer I've ever known has stood by idly while someone took
Turing completeness away from him or her, either deliberately or
accidentally. :)
2. Part of the whole "science" of proving programs correct is designing
ways for the customer to formally specify what a program must do to be
correct. A processor not only has to know what the programmer intended
but what the customer wanted as well. :)


Ed, the biggest problem with abstruse conversations like this is when
you get two or three guys who enable each other and keep it going ;-).

To take a step back from CS, here's the high-level thing that really
troubles me: why is there such an impedance mismatch between the
things we need to achieve in software and the available tools? Why do
I always feel like it takes mountains of "code" to do things that can
be described without too much difficulty? I'm not saying that
genuinely complex problems should be reducible beyond what makes
sense. I'm also not underrating the difficulty of good software design
or the difficulty of communicating well with the clients. But I'd
rather that *most* of the effort go into those areas. As things stand,
I have to solve a lot of computer-language-generated problems that do
not contribute value to the solution. It's like our machines have too
much friction.


I would *gladly* give up Turing-completeness in order to get a
(domain-specific) language that was more expressive (less
"frictional") in terms of the problems I need to solve. I've been
trying to present an intuition (which I don't know how to prove or
disprove) that the practical power required to solve circumscribed
problems does not come from Turing completeness, which of course *is*
required to solve general computing problems. Much depends on how you
define (and constrain) the problem space, and we need to get better
and more disciplined about that.

If this sounds like the old Lisp mantra of "design a language to fit
your problem and then just use that language," well, yes. How does one
do it effectively, though? You mentioned techniques for making
general-purpose programs easier to reason about, and they boiled down
to making them "smaller" in multiple dimensions. But our problem
domains are big and getting bigger. So something is missing. Back to
impedance mismatch again.

I'm going to try to have the discipline to stand back from this thread
now because until I can prove some of this crap I've been spouting, I
don't think I'm adding much value.
 

Chad Perrin

This thread has probably gone on long enough ;-) but as I read it, what
you're saying here amounts simply to "let's take what's good from each
approach and apply it generally." Can't argue with that.

Yeah, that one's usually difficult to argue. Since the first time I
started working with a language that allowed (actually encouraged)
prefix notation, I've wished for other languages I've used to allow it
as well (such as Perl, et cetera -- though I suppose I could write a
module to handle that, or possibly even find one in CPAN).

I find it very revealing that you see the primary (borrowable) value in
functional languages as coming primarily from *syntactic* elements.
Again I may be misreading you, but you can get syntactic innovations
from anywhere. The thing that really distinguishes functional programs
is that you can reason mathematically (and, one hopes, automatically)
about their correctness, rather than being stuck with the whole sorry
round of unit-test/debug/regress/repeat that we use with Ruby and every
other non-functional language. Apart from the reference to Ada/Diana (a
sub-language with a rigorous formal semantics), I haven't heard anyone
on this thread try to relate this benefit of functional languages to
non-functional ones.

My commentary about a functional "style" being largely centered around
syntax wasn't meant to indicate that only the syntax of functional
programming should be borrowed. In fact, while there are certain
syntactic features that might be borrowed from functional languages for
certain purposes in Ruby, I think Ruby's syntax is pretty much perfect
for it as it is. It might be nice to borrow a little more from its
semantic structure, too, though.

I'm not really sure how that could be accomplished without causing
problems for some of the excellent characteristics already embodied by
Ruby, though. If we made Ruby too "functional", it would just be a Lisp
dialect anyway -- and while Lisp is great, so is Ruby.

To answer someone else, the ability to automatically process programs is
why I'm interested in mini-languages (DSLs) written in Ruby that are
non-Turing complete, because they may be decidable, if designed
properly. That opens the way (in theory) to the kind of automatic
correctness analyzers that you see in pure functional programming. The
potential economic benefit is staggering. What proportion of your
programming time do you spend debugging today?
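To make the idea concrete, here's a minimal sketch (names and API entirely hypothetical, not from any existing library) of what such a non-Turing-complete internal DSL in Ruby might look like: a validation-rule language that deliberately exposes no loops, no recursion, and no assignment, so every "program" is a finite list of declarations and evaluation is guaranteed to terminate.

```ruby
# A hypothetical non-Turing-complete internal DSL in Ruby.
# A "program" is a finite list of field checks, so evaluation always
# halts and every program can be analyzed exhaustively.
class RuleSet
  Rule = Struct.new(:field, :predicate, :message)

  def initialize(&block)
    @rules = []
    instance_eval(&block)
  end

  # The only constructs the DSL exposes: no branching, no iteration,
  # just declarations. That restriction is what buys decidability.
  def require_field(field, message = "#{field} is required")
    @rules << Rule.new(field, ->(v) { !v.nil? && v != "" }, message)
  end

  def numeric(field, message = "#{field} must be numeric")
    @rules << Rule.new(field, ->(v) { v.to_s =~ /\A\d+\z/ }, message)
  end

  # Evaluation is a single bounded pass over the rules.
  def errors_for(record)
    @rules.reject { |r| r.predicate.call(record[r.field]) }.map(&:message)
  end
end

rules = RuleSet.new do
  require_field :name
  numeric :age
end

p rules.errors_for(name: "Ada", age: "36")  # => []
p rules.errors_for(name: "", age: "x")
```

The point isn't the validation logic; it's that a checker could enumerate every rule in such a program and decide properties of it, which it could never do for arbitrary Ruby.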

That certainly sounds useful.

If people see the distinctions between "functional" and "non-functional"
languages purely in syntactic terms, then this whole discussion is
really no deeper than yet another Ruby vs. Python debate.

I'm really not a fan of Python. It's just . . .

Oh, wait, scratch that -- I don't want to start a flame war.
 

Chad Perrin

To take a step back from CS, here's the high-level thing that really
troubles me: why is there such an impedance mismatch between the
things we need to achieve in software and the available tools? Why do
I always feel like it takes mountains of "code" to do things that can
be described without too much difficulty? I'm not saying that
genuinely complex problems should be reducible beyond what makes
sense. I'm also not underrating the difficulty of good software design
or the difficulty of communicating well with the clients. But I'd
rather that *most* of the effort go into those areas. As things stand,
I have to solve a lot of computer-language-generated problems that do
not contribute value to the solution. It's like our machines have too
much friction.


I would *gladly* give up Turing-completeness in order to get a
(domain-specific) language that was more expressive (less
"frictional") in terms of the problems I need to solve.

For that, you pretty much have to choose the language that is least
"frictional" for the specific task you've undertaken. For certain web
development tasks, nothing beats PHP -- PHP programming mostly consists
of looking up already extant core language functions and gluing them
together with some control structures. For something that doesn't
neatly, and roughly exactly, fit the mold of any of PHP's core
functions, you tend to find yourself battling the language quite a lot,
fighting with its half-baked semantic structure and trying to squeeze
blood from its very anemic syntax. Perl provides a (slightly less, but
still) very accessible analog to the PHP core function bucket in the
form of CPAN, but you have to choose your modules wisely -- meanwhile,
Perl's semantic structure is better designed and more enabling, while
its syntax is far, far richer, but you have to be careful about how you
write programs so that you don't do something you won't understand later
(like, tomorrow) and find yourself unable to maintain it. It's easy to write
maintainable Perl, contrary to the common "wisdom", but it takes
attention to detail to do so.

Lisp is about the best language you'll ever find for metaprogramming.
If you're going to be working on a large project with many people,
you're better off using a language that handles OOP better than Perl's
tacked-on object model. Ruby manages this excellently, as does
Smalltalk. There are other reasons to use each as well, but I'm getting
tired of describing languages, so I'll keep this short.

The point is that you're probably best served for any nontrivial
programming task by choosing the right language for the job from among
Turing-equivalent/Turing-complete languages that already exist. Each
has its own benefits in terms of enabling easier solutions for the
problems at hand, and to get some of that enabling you have to trade off
with other enabling (at least, you do if you want your language
interpreter to finish parsing code this century).
 

Chad Perrin

Well ... I think I disagree with this, having written some moderately
complicated FORTRAN programs and having read through many more. What
they all almost always contain, however, is a domain-specific language
that controls where they find their inputs, what processing they do on
them, where they put their outputs and in what formats, etc., on what we
used to call "control cards". Some of the parsers and DSLs of this
nature are quite complex and rich, but they are in no sense of the word
"Lisp".

I'm not really sure how to address this in a manner that gets my
understanding of Greenspun's Tenth Rule across (that's why it has taken
me this long to reply). I guess, to figure out what's going on with the
complicated FORTRAN programs you describe, you could hand them to a
long-time Lisp programmer who also knows some FORTRAN and ask him or her
to point out any Lisp-like semantics you've implemented in the process
of developing those self-contained domain-specific languages as part of
solving your problem, assuming there are any such examples of
Greenspun's Tenth Rule in action. Perhaps there aren't.

Maybe we should ask Greenspun.

Curiously enough, if you read Chuck Moore's description of the evolution
of FORTH, you'll find something similar -- every sufficiently
complicated FORTRAN program contained a huge "case" statement which was
a "language interpreter".
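The pattern Moore describes transliterates easily into Ruby. Here's a hedged sketch (the "control card" keywords are invented for illustration) of a big case statement acting as an interpreter for a little job-control language, much like the FORTRAN "control card" parsers mentioned above:

```ruby
# Sketch of the "huge case statement as language interpreter" pattern.
# Each input line is a "control card": a keyword plus arguments,
# dispatched through a single case expression.
def run_control_cards(cards)
  state = { inputs: [], outputs: [], format: "A80" }
  cards.each_line do |card|
    keyword, *args = card.split
    case keyword
    when "INPUT"  then state[:inputs].concat(args)
    when "OUTPUT" then state[:outputs].concat(args)
    when "FORMAT" then state[:format] = args.first
    when "END"    then break
    else raise "unknown control card: #{keyword}"
    end
  end
  state
end

deck = <<~CARDS
  INPUT tape1 tape2
  FORMAT F10.4
  OUTPUT printer
  END
CARDS

p run_control_cards(deck)
# => {:inputs=>["tape1", "tape2"], :outputs=>["printer"], :format=>"F10.4"}
```

Scale the case statement up to dozens of keywords, some of which take sub-commands of their own, and you have exactly the ad hoc interpreter that both Moore and Greenspun were pointing at.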

Not to be insulting, or anything, but perhaps Greenspun's Tenth Rule
should have specified that these FORTRAN programs were "well-written".
If you're implementing a nontrivial domain-specific language within a
program entirely by way of case/switch statements, I shudder to
contemplate the source code. Then again, I've never written a complex
program in FORTRAN, nor bothered to figure out how I might do so --
perhaps that's the only way to do it.

By the way, one of the classic "demonstrations of Artificial
Intelligence", ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being "sufficiently complicated",
was in fact "delightfully simple."

I'm pretty sure Greenspun didn't mean to say that creating a (partial)
Lisp implementation was itself complicated -- only that the problems
that necessitate doing so are complicated.
 

M. Edward (Ed) Borasky

Francis said:
Ok, so how about this for a hypothesis?
So here's my hypothesis: find a radically different discipline for building
large systems in which "components" are developed using non-FP languages
like Java and Ruby and even C++. These components are much coarser-grained
than the ones you normally think of from object-orientation. They are more
like relatively large subsystems, but self-contained and fully
unit-testable. The idea is that the upper size limit for these functional
blocks is correlated with the point at which (speaking informally) they
become "too hard" to debug.

Now take these components, which have side effects like writing
databases, sending purchase orders, etc., and knit them together into
larger systems using a pure functional language. This captures the
intuition that no program should ever get too "large" in terms of
function points, but also the distinction between functional components
(which are most easily written imperatively) and large-scale process
definitions (which are most easily written functionally).

1. Hmmm ... can/should the "top-level" part of this be done in Ruby as well?

2. "Can your programming language do this?"

http://www.joelonsoftware.com/items/2006/08/01.html

3. Can "MapReduce" be implemented in Ruby?

http://labs.google.com/papers/mapreduce.html


Why the heck should *Erlang* programmers have all the fun?
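As a partial answer to question 3: the core MapReduce pattern, minus the distribution and fault tolerance that are the real substance of the Google paper, fits in a few lines of Ruby. A toy sketch (function names are hypothetical), using the paper's canonical word-count example:

```ruby
# Toy single-machine MapReduce in plain Ruby: a map phase emits
# key/value pairs, a shuffle groups them by key, and a reduce phase
# folds each group into a result.
def map_reduce(inputs, mapper, reducer)
  # Map: every input record may emit any number of [key, value] pairs.
  pairs = inputs.flat_map { |record| mapper.call(record) }
  # Shuffle: group intermediate values by key.
  groups = pairs.group_by(&:first).transform_values { |ps| ps.map(&:last) }
  # Reduce: fold each key's values into a final result.
  groups.map { |key, values| [key, reducer.call(key, values)] }.to_h
end

mapper  = ->(line) { line.split.map { |word| [word.downcase, 1] } }
reducer = ->(_word, counts) { counts.sum }

docs = ["the cat sat", "the cat ran"]
p map_reduce(docs, mapper, reducer)
# => {"the"=>2, "cat"=>2, "sat"=>1, "ran"=>1}
```

So yes, it can be implemented in Ruby; whether Ruby can match Erlang's story for running the map and reduce phases across thousands of machines is the harder question.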
 
