off-topic: Why is lisp so weird?

David Steuber

David Magda said:
Statistically lawn bowling is the safest I think, followed by fencing
of all things. (I'm not sure whether all styles of fencing (epee,
sabre, foil (what's the one in the Norse tradition?)) were counted
together or whether one style is safer.)

Whatever happened to lawn darts?
 
Rahul Jain

Anyway, if speed of execution is the most important thing to
your application, then Lisp may not be the best choice.
Lisp performs okay (or even "well"), but speed is not
what it is optimized for.

Of course, Lisp is excellent for building compilers for such low-level
languages, and you get the control structures provided by Lisp for free.
 
Rahul Jain

Claudio Puviani said:
On the research side, there are various problems such as searching
through N-spaces, optimizations of various kinds, etc. that also use close to
100% of the CPU of whatever system we run them on. Again, efficiency allows us
to do more with what's available. If we can get 1% better results than our
competitors by being more efficient, this more than pays for the programmer
time. In fact, trying to economize programmer time in such cases can be too
costly.

Of course, the issue of a compiler's quality of code generation is just
a constant-factor improvement. Being able to redirect programmer time
towards improving the algorithm and allowing the programmer to explore
the problem and solution space to discover better algorithms might make
the problem take O(n) time instead of O(n^4) time.
 
Rahul Jain

Besides, most such technical management business decisions (in all
domains, not just the financial services sector) are knowingly based
on non-technical criteria, anyway. Some bozo somewhere has decided
that a particular technology is today's panacea, or that business
needs to be done with a particular vendor because of their reputation,
track record, company stability, or special deals or special contacts.

I know for a fact that many financial institutions base their technology
decisions solely on these factors. In fact, they will replace an
existing system with a _slower_ one that provides no more features
(other than ones that could have been added to the existing system at a
fraction of the man-hours) just because they'll now be using the "latest"
technologies, not that anyone using the system will care.
 
Tom Plunket

Corey said:
Well, I guess it kind of depends on your definitions. Comparison
against machine code is probably more useful, since that should
give us a baseline value to work against. Since C is closest to
that, it's the best baseline we have until something better comes
along.

Maybe so, but at some level the actual machine code is irrelevant
anyway due to multi-pipeline processors and instruction
reordering. Consider that interpreted languages also have the
opportunity to optimize as they execute, while compiled languages
do not.

Lisp offers tail recursion optimization, too, which C does not.
This means that heavily recursive operations can actually be
faster and consume less memory than their C counterparts.
It's close, but it enforces certain rules which I think are
unrealistic. For instance, some languages are much better with
recursive algos than the iterative equivalent.

The point was simply that he assembled this. He actually got all
of this "benchmarking" together. If someone wants to optimize
any of it, they should feel free. :)

I would agree with you though in principle, although I remember
the website author making a compelling case last time I really
read his argument.

I'm not a Lisp zealot. I haven't touched the stuff in ten years.
I do believe that C++ sucks though (even though I code in it
every day) despite the fact that it may compile to something
resembling fast, and despite being really good with it. ;)

-tom!
 
CBFalconer

Tom said:
.... snip ...

Lisp offers tail recursion optimization, too, which C does not.
This means that heavily recursive operations can actually be
faster and consume less memory than their C counterparts.

Not so. Optimization is not language dependent, but
implementation dependent.
 
Alan Mackenzie

Corey Murtagh said:
Christian Lynbech wrote:
Since it helps to prove that Lisp is slow it's hardly vital
information. Feel free to pull it out if you want though.

Is it perhaps worth mentioning that these benchmarks have all been run on
processors optimised for languages like C? If you wrote a C compiler for
a processor optimised for Lisp-like languages, I suspect Lisp would be
faster, quite a bit faster. Has anybody ever written a C compiler for a
Symbolics?

[ .... ]
When was the last time you found a niche market screaming out for a
Lisp solution? A solution that could /only/ be implemented in Lisp?

Er, how about Emacs? I don't think Emacs could have been written in a
C-like language, and RMS was wise enough to realise that 20 years ago.
 
MSCHAEF.COM

Corey Murtagh said:
While the time taken to create a program is important, it can be just as
important to know that the program when completed will execute as
efficiently as possible...

The key word in that phrase is _can_. With Moore's 'law' doubling CPU
performance every 18 months, CPU time gets progressively cheaper and
less and less important. Aside from the one-time gain associated
with moving to offshore development houses, the same cannot be said
for programmer time.

I agree with your thesis that you can't totally sacrifice performance
in the name of developer efficiency, but each incremental increase in CPU
power makes it more and more possible to shift the burden from the
developer to the tools. This is supported by the last 50 years of
computer history, and more generally, by everybody who's ever
spent time developing a tool, jig, or fixture to save time later on
in their work.

If 10% of those CPU cycles are being
wasted because the programs are written in an inefficient language, I'd
be pissed.

I find myself getting a lot more aggravated when a language costs me
a few hours of development time than over piddling little 10% efficiency
penalties. Back to Moore's 'law', 10% represents only about 2.5 months
of CPU development.

-Mike
 
Daniel.

Corey> Brainf*ck has fewer keywords... are you saying that it's a simple
Is it? I wouldn't have thought this was controversial at all.
Not that I know Brainf*ck or that it matters, but I was merely
challenging the thinking that the size of the language translates to
the complexity of it. If there were such a causal link, you could claim
Lisp to be simpler than C, which probably wasn't what anybody was trying
to say.

Hey, you snipped the bit where I was agreeing with you :>

Language complexity can't be readily measured in terms of specification
size, number of keywords, etc. Closer might be the size of the minimum
syntax description in whatever form. I'd tend to add the size of the
required [by the standard] RTL, although I'd weight that fairly low.
After all, you have to know the libraries you're working with to get the
most out of the language, right?

Brainfuck is one of the simplest Turing-complete languages in
existence by any reasonable measure. The syntax, for instance, is:
<bf> ::= <bf> <bf> | "[" <bf> "]" | <other> | ""
where <other> is any single character that isn't '[' or ']'.
The semantics are extremely simple too (the most complex part being
the i/o commands), and the libraries are so small as to be
nonexistent.
All this shows is that the simplest languages are not necessarily the
easiest to perform complex tasks in.

-Daniel Cristofani.
++[<++++++++[<[<++>-]>>[>>]+>>+[-[->>+<<<[<[<<]<+>]>[>[>>]]]<[>>[-]]>[>[-
<<]+<[<+<]]+<<]<[>+<-]>>-]<.[-]>>]http://www.hevanet.com/cristofd/brainfuck/
 
Claudio Puviani

Rahul Jain said:
Of course, the issue of a compiler's quality of code generation is just
a constant-factor improvement. Being able to redirect programmer time
towards improving the algorithm and allowing the programmer to explore
the problem and solution space to discover better algorithms might make
the problem take O(n) time instead of O(n^4) time.

True, but that O(n) algorithm written in FastLanguage might run 10 times as fast
as the same algorithm implemented in SlowLanguage. The company that implements
that algorithm in FastLanguage can do 10 times more than their competitor that
uses SlowLanguage or it can use a system that's 10 times slower to achieve the
same goal.

The nonsensical argument that Pascal was making is that it might take a bit less
time to code the algorithm in one language and that that is the only valid
criterion for selecting the language.

Claudio Puviani
 
Claudio Puviani

Pascal Bourguignon said:
No I'm not.

When you say "the only valid benchmark", you're presuming to think for everyone
else. If you're going to be the conscience of the universe, at least try having
ideas that aren't absurd.
It's the other people who try to apply _their_ criteria to lisp.

People are allowed to apply whatever criteria they choose to select a language
for their own use. You don't get to legislate what they can or cannot consider
to be desirable attributes.

Feel free to love lisp for whatever reason, but your opinion of "the only valid
benchmark" is completely worthless and can only bring you ridicule.

Claudio Puviani
 
Rahul Jain

Tom Plunket said:
Maybe so, but at some level the actual machine code is irrelevant
anyway due to multi-pipeline processors and instruction
reordering. Consider that interpreted languages also have the
opportunity to optimize as they execute, while compiled languages
do not.

Consider that languages themselves are neither interpreted nor compiled;
those are merely implementation strategies. You can compile something,
keep the source around, and recompile at runtime with optimizations
based on dynamic observations.
Lisp offers tail recursion optimization, too, which C does not.
This means that heavily recursive operations can actually be
faster and consume less memory than their C counterparts.

Eh? GCC does this. The only tail recursion that can be optimized is
recursion that is used to express an iterative process, anyway. TCO is
not anything unique to Lisp.

In fact, Common Lisp does not require it, as that would destroy
information useful for debugging and prevent redefinition of a function
from taking effect as a programmer might expect. Not to mention that no
one has yet given a universal definition of what is and is not a tail
call in Common Lisp because the details of handling stack vs. heap vs.
multiple-stack storage are left to the implementation to optimize for
the target architecture and the intended customers' needs.

In order to control these behaviors, there are declarations that can be
applied to any block of code to tell the compiler to optimize
(specifically sub-options for debuggability and speed) and to inline or
not inline certain functions. A self-tail call that has been optimized
to a jump can be considered a weak form of inlining.
 
Matthias Blume

CBFalconer said:
Not so. Optimization is not language dependent, but
implementation dependent.

Not so. The presence of certain language features can very well
preclude certain optimizations from being applicable.
 
Michael Sullivan

Corey Murtagh said:
A matter of referents and definitions really. 10% slower than the
equivalent in another language may well fit many definitions of 'slow' :>

I think the point here is that if this fits your definition of "slow"
then nearly every implementation of every high level language aside from
C, C++, Fortran and a few special purpose languages is "slow".

If a 10% drop in execution speed is a significant issue, then C may be
your language for that reason alone. 20 years ago, that was true for a
*lot* of problems. Or assembler might be your language.

These days, it's true for a significant but fairly small subset of
potential development tasks. People do plenty of real work in languages
that are an order of magnitude slower than lisp.


Michael
 
Rahul Jain

Claudio Puviani said:
True, but that O(n) algorithm written in FastLanguage might run 10
times as fast as the same algorithm implemented in SlowLanguage. The
company that implements that algorithm in FastLanguage can do 10 times
more than their competitor that uses SlowLanguage or it can use a
system that's 10 times slower to achieve the same goal.

And what if, by using said FastLanguage, it takes a certain amount of
time to implement the O(n^4) algorithm, but with said SlowLanguage
(which is really only going to be at most 50% slower on the same
algorithm) the programmer is able to experiment with a half-dozen
different algorithms and manages to implement the O(n) algorithm in the
same amount of time? The remaining time could even be used to
reimplement the algorithm in FastLanguage to get the last teeny bit of
speedup, if it's really worth that time.
The nonsensical argument that Pascal was making is that it might take
a bit less time to code the algorithm in one language and that that is
the only valid criterion for selecting the language.

Only because it's just as nonsensical as claiming that programmer time
can't be spent on speeding up the algorithm itself, but instead on
fiddling with minor details of the implementation so that the language
will accept the code.
 
Rahul Jain

Matthias Blume said:
Not so. The presence of certain language features can very well
preclude certain optimizations from being applicable.

That's true. I've been spoiled by lisp where there is a culture of
always being able to flip a switch to turn off some genericity in
exchange for speed. (Except for inlining sealed generic-functions when
called with args that are sealed classes, but that's rather simple to
implement anyway, given a metaobject protocol: just get the effective
method and stick that in the code.)
 
Matthew Danish

Claudio Puviani said:
True, but that O(n) algorithm written in FastLanguage might run 10 times as fast
as the same algorithm implemented in SlowLanguage. The company that implements
that algorithm in FastLanguage can do 10 times more than their competitor that
uses SlowLanguage or it can use a system that's 10 times slower to achieve the
same goal.

However if one company implements a product in 10 months using
traditional-language and another company only takes 2 months to do it
using rapid-devel-language then is there not an advantage for the second
company? They will have a product out and earning money for 8 months
longer than the other company. In that time they can work on a faster
version if it actually becomes necessary. This is a far more likely
situation in real-world production, and I think a better argument, than
the time gain from the faster implementation of an individual algorithm
used in a small portion of some program. Obviously there may be other
considerations too. And the reason why this point is still very
debatable is because it is very difficult to conduct a conclusive and
objective study.


P.S. Languages are not fast and slow, implementations are. However a
language design could inhibit or promote rapid development.

P.P.S. Welcome to the 21st century. High level language compilers are
damn good these days. And yet, people are still satisfied with
so-called scripting languages. How can you complain about Lisp, with
its mature compilers?
 
Raymond Toy

Corey> Now run a game, or a graphics app, or a sound processing app,
Corey> or... you should have the idea by now. My computer often runs at 100%
Corey> CPU because I often do things that require it. If 10% of those CPU
Corey> cycles are being wasted because the programs are written in an
Corey> inefficient language, I'd be pissed. As it is a huge enough number of

Raymond> How would you know 10% is being wasted? How do you know it's not 50%?
Raymond> 0.001%?

Christopher> Or that the CPU meter is really showing you the truth? How much of
Christopher> the machine is that GUI CPU meter in the corner using, anyway?
Christopher> And how do you know that C++ is not wasting those cycles?
Christopher> Or that 10% is significant compared to what's being wasted by the OS?
Christopher> Or that 10% is significant at all?

Christopher> There are certainly some speed-critical things in the world,
Christopher> but analyzing them is more subtle than most people seem to think.

Of course. But I was hoping Corey Murtagh would answer this. I'd
really like to know how he knows 10% (or x%) is being wasted.

In my own work, I know how much is wasted because I look at the
assembly code, measure the actual number of cycles taken on real
hardware, and rewrite the code in either C or assembly and re-measure
to find out if it got better. There's no cache, so the effects are
much easier to measure.

But there are also lots of other places where we use C code because we
can, and we have no idea whether the C function call overhead and
other issues don't swamp out whatever savings I just made to that
assembly routine. [1]

Ray

Footnotes:
[1] Of course, I only go with assembly for known profiled hotspots.
I'm not into assembly-for-the-sake-of-assembly-because-I-can.
 
Claudio Puviani

Matthew Danish said:
However if one company implements a product in 10 months using
traditional-language and another company only takes 2 months to do it
using rapid-devel-language then is there not an advantage for the second
company?

Time to market is an important aspect, and one reason why systems are often
prototyped in one language and then rewritten in another. However, the 5:1
productivity ratio that you're presenting is an extreme exaggeration, except
for very rare cases. The point, if you bother to actually READ what's written
instead of hunting for windmills to tilt at, is that programmer productivity is
only ONE criterion, not the ONLY criterion as the poster to whom I replied
asserted. Time to market is another. The skill set of your staff is yet another.
It's up to whoever is managing a project to weigh the pros and cons of a
particular approach. It is not up to the OP to make those decisions based on his
own distorted philosophies.
P.S. Languages are not fast and slow, implementations are.

Incorrect. Language features can render a language inherently slower than
another. For example, in Java, object composition forces multiple levels of
indirection that no amount of compilation and optimization can remove. The
language itself mandates that. Any language that provides a better approach to
composition, such as C++, has a distinct and measurable advantage. Fortran's
calling conventions offer an inherently more efficient mechanism than that
offered by C or C++, albeit at the cost of flexibility. There's no philosophical
debate to be had on this topic. It's completely quantifiable even in the
abstract.
High level language compilers are damn good these days. And yet,
people are still satisfied with so-called scripting languages.

I use perl all the time where it's appropriate. I also use lisp and other
languages where appropriate. I just happen to know enough to NOT use them when
it's not appropriate and I certainly know enough to disregard absurd
generalizations.
How can you complain about Lisp, with its mature compilers?

I don't know what you're smoking or sniffing, but it can't be good for your
health. Go back and read my posts and you'll see that I never once complained
about lisp, only about the ridiculous assertions of one poster. If you feel a
desperate need to defend lisp, you might want to do so against someone who has a
beef with lisp.

Claudio Puviani
 