How does Ruby compare to Python? How good is the DESIGN of Ruby compared to Python?


Christian Seberino

How does Ruby compare to Python? How good is the DESIGN of Ruby compared to Python?

Python's design is godly. I'm wondering if Ruby's is godly too.

I've heard it has solid OOP design but then I've also heard there are
lots of weird ways to do some things kinda like Perl which is bad for me.

Any other ideas?

Thanks!

Chris
 

Greg Ewing (using news.cis.dfn.de)

Christian said:
Python's design is godly. I'm wondering if Ruby's is godly too.

Actually, Python's design is Guidoly, which seems to be
almost as good in practice.

As for Ruby -- if it is, Japanese gods seem to have somewhat
different tastes in language design.

Personally I much prefer Python. You'll probably get the same
answer from most people here, since this is a Python newsgroup...
I've heard it has solid OOP design

It's more rigidly OO in the sense that there are no stand-alone
functions, only methods. But that's just a surface issue. As far
as I can see, Python's foundation is as solidly OO as anything
can get, no less so than Ruby's.
but then I've also heard there are
lots of weird ways to do some things kinda like Perl which is bad for me.

Ruby code is liberally sprinkled with @-signs, which tends to
make it look slightly Perl-ish. But again that's a surface
issue, and Ruby is really no more like Perl than Python is.

Some areas of real, important differences I can see are:

* Ruby makes heavy use of passing code blocks around as
parameters, to implement iteration constructs and so forth.
Ruby is very much like Smalltalk in this respect. Python
uses a different mechanism (the iteration protocol) to achieve
these things. Python's way is both more and less powerful
than Ruby's. Ruby makes it easy to define new control
structures which look just like the built-in ones, which
you can't do with Python. On the other hand, Python has
its amazingly powerful generators, for which there is no
direct equivalent in Ruby.

* In Python, functions are first-class, and
methods are implemented in terms of functions. In Ruby,
methods are the fundamental concept, and there are no
first-class functions. The result is that Python lets
you obtain a bound method from an object and use it like
any other function. You can't do that in Ruby. You can
get a method object in Ruby, but you can't call it using
normal calling syntax.
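
To make that concrete, here's a minimal Python sketch (the class and names are invented just for the example):

class Greeter:
    def __init__(self, name):
        self.name = name
    def greet(self, punctuation):
        return "Hello, " + self.name + punctuation

g = Greeter("world")
say = g.greet                # a bound method - no special syntax needed
print say("!")               # called like any other function: Hello, world!
print map(say, [".", "?"])   # and passed around like any other callable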
 

Joe Mason

* Ruby makes heavy use of passing code blocks around as
parameters, to implement iteration constructs and so forth.
Ruby is very much like Smalltalk in this respect. Python
uses a different mechanism (the iteration protocol) to achieve
these things. Python's way is both more and less powerful
than Ruby's. Ruby makes it easy to define new control
structures which look just like the built-in ones, which
you can't do with Python. On the other hand, Python has
its amazingly powerful generators, for which there is no
direct equivalent in Ruby.

Not built in, but you can implement them in Ruby using continuations
pretty easily. See http://www.rubygarden.org/ruby?RubyFromPython for an
example. The only problem I can see is maybe performance issues, but
the performance characteristics of the languages are pretty different
apart from that, I'd assume.
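
For anyone who hasn't met them, the Python feature being emulated there is the ordinary yield-based generator - a trivial example:

def countdown(n):
    # Each yield hands a value to the caller and suspends here
    # until the next value is asked for.
    while n > 0:
        yield n
        n = n - 1

for i in countdown(3):
    print i       # prints 3, 2, 1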
* In Python, functions are first-class, and
methods are implemented in terms of functions. In Ruby,
methods are the fundamental concept, and there are no
first-class functions. The result is that Python lets
you obtain a bound method from an object and use it like
any other function. You can't do that in Ruby. You can
get a method object in Ruby, but you can't call it using
normal calling syntax.

I don't see the distinction. "normal calling syntax" in ruby involves
an object, so "unbound function" isn't a meaningful concept. I mean, if
you get a method that begins with the self parameter, you still need an
object to call it, right? Even if you're calling it as "foo(obj,
params)" instead of "obj.foo(params)". I don't see what the ability to
use the other syntax gets you, except the ability to pass functions
around independently of objects, which I'm pretty sure you can do with
methods in Ruby anyway.

Joe
 

MetalOne

Ruby is easy to learn.
I suggest downloading it.
The distribution comes with ProgrammingRuby.chm which is the online
version of the ProgrammingRuby book.
You can read most of what you need in a couple days.
Then decide for yourself.

Ruby is a fine language, but the community is smaller, the bindings to
external libraries are fewer and the number of extra packages is
smaller.
 

Dave Brueck

Joe said:
I don't see the distinction. "normal calling syntax" in ruby involves
an object, so "unbound function" isn't a meaningful concept. I mean, if
you get a method that begins with the self parameter, you still need an
object to call it, right?

No - that's the difference between a bound and unbound method (see below).
Even if you're calling it as "foo(obj,
params)" instead of "obj.foo(params)". I don't see what the ability to
use the other syntax gets you, except the ability to pass functions
around independently of objects, which I'm pretty sure you can do with
methods in Ruby anyway.

As for whether or not Ruby supports this, I'm in the don't-know-don't-care
camp, but to clarify: a bound method "knows" which object instance it belongs
to. Given:

def someFunc(callback):
    print callback(5,6)

def functionCallback(a, b):
    return a + b

class Foo:
    def methodCallback(self, a, b):
        return a * b

then both these work:

someFunc(functionCallback)
f = Foo()
someFunc(f.methodCallback)

This is pretty darn useful and IMO quite Pythonic: the creator of the function
and the creator of the callback have to agree on only the most minimal set of
details - just those relating to the calling interface - leaving completely
open any implementation details.

-Dave
 

Lothar Scholz

How does Ruby compare to Python? How good is the DESIGN of Ruby compared to Python?

Python's design is godly. I'm wondering if Ruby's is godly too.

I've heard it has solid OOP design but then I've also heard there are
lots of weird ways to do some things kinda like Perl which is bad for me.

At least the design of the Ruby implementation is very very bad.

But you should use Google to find more answers to your frequently asked question.
 

Joe Mason

def someFunc(callback):
    print callback(5,6)

def functionCallback(a, b):
    return a + b

class Foo:
    def methodCallback(self, a, b):
        return a * b

then both these work:

someFunc(functionCallback)
f = Foo()
someFunc(f.methodCallback)

This is pretty darn useful and IMO quite Pythonic: the creator of the function
and the creator of the callback have to agree on only the most minimal set of
details - just those relating to the calling interface - leaving completely
open any implementation details.

I still don't see how this is notable. Seems perfectly straightforward to
me - I'd just assume that's how it worked except in C++, about which I
never assume anything.

A better example of bound vs. unbound methods is this:

def boundFunc(callback):
    print callback(5, 6)

def unboundFunc(obj, callback):
    print callback(obj, 5, 6)

def functionCallback(a, b):
    return a + b

class Foo:
    def methodCallback(self, a, b):
        return a * b + self.c
    def setc(self, c):
        self.c = c

boundFunc(functionCallback)
11
f = Foo()
f.setc(3)
boundFunc(f.methodCallback)
33
unboundFunc(f, Foo.methodCallback)
33

For anyone who does care, the Ruby version is

def boundFunc(callback)
  puts callback.call(5, 6)
end

def unboundFunc(obj, callback)
  callback.bind(obj).call(5, 6)
end

def functionCallback(a, b)
  return a + b
end

class Foo
  def methodCallback(a, b)
    return a * b + @c
  end
  def setc(c)
    @c = c
  end
end

boundFunc(method(:functionCallback))
11
=> nil
f = Foo.new
=> #<Foo:0x...>
f.setc(3)
=> 3
boundFunc(f.method(:methodCallback))
33
=> nil
unboundFunc(f, Foo.instance_method(:methodCallback))
=> 33

It's a little more cumbersome to manipulate functions because of the
extra calls to "call" and "bind", which are needed because "f.methodCallback"
actually calls the method with no params instead of returning a reference to it.
This is one of the things I dislike about Ruby, but it's not like
unbound methods are missing from the language.

(I was wrong when I said "unbound method" was a concept that had no
meaning to Ruby - it even had a "bind" method to support them. Didn't
know about that until I looked it up just now.)

Joe
 

John Roth

Joe Mason said:
I don't see the distinction. "normal calling syntax" in ruby involves
an object, so "unbound function" isn't a meaningful concept. I mean, if
you get a method that begins with the self parameter, you still need an
object to call it, right? Even if you're calling it as "foo(obj,
params)" instead of "obj.foo(params)". I don't see what the ability to
use the other syntax gets you, except the ability to pass functions
around independently of objects, which I'm pretty sure you can do with
methods in Ruby anyway.

I think you've missed the point here. Python has a concept
of a "callable," that is, some object that can be called. Bound
methods are useful precisely because they carry their instance
around with them and also because they look exactly like any
other callable; there is no special syntax that is required either
to create one or to invoke it.

Unbound methods, on the other hand, require the caller to provide
the instance explicitly which limits their usefulness quite a bit.
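
For example, all of the following are callables and can be handed to the same piece of code (the names are invented for illustration):

def apply_twice(callback, x):
    return callback(callback(x))

def double(x):               # a plain function
    return x * 2

class Accumulator:
    def __init__(self):
        self.total = 0
    def add(self, n):        # a bound method carries its instance along
        self.total = self.total + n
        return self.total

class Tripler:
    def __call__(self, x):   # an instance with __call__ is a callable too
        return x * 3

print apply_twice(double, 5)             # 20
print apply_twice(Accumulator().add, 5)  # 10
print apply_twice(Tripler(), 5)          # 45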

John Roth
 

Cameron Laird

.
[much good counsel]
.
.
Ruby code is liberally sprinkled with @-signs, which tends to
make it look slightly Perl-ish. But again that's a surface
issue, and Ruby is really no more like Perl than Python is.
.
.
.
While I'm all in favor of distinguishing superficial from
fundamental characteristics, I think the last sentence
above is misleading. Ruby is a direct descendant from
Perl, I'm sure; I thought I had the word from Matz himself
that he deliberately modeled a great deal of Ruby on Perl
(and Smalltalk, of course). Although I can't find the
passage now, I'm confident enough to repeat it here. If
necessary, I expect we can confirm the language's parentage.

If you're saying that good Ruby style often differs from
good Perl 5 style, I heartily agree.
 

John Roth

Cameron Laird said:
.
[much good counsel]
.
.
Ruby code is liberally sprinkled with @-signs, which tends to
make it look slightly Perl-ish. But again that's a surface
issue, and Ruby is really no more like Perl than Python is.
.
.
.
While I'm all in favor of distinguishing superficial from
fundamental characteristics, I think the last sentence
above is misleading. Ruby is a direct descendant from
Perl, I'm sure; I thought I had the word from Matz himself
that he deliberately modeled a great deal of Ruby on Perl
(and Smalltalk, of course). Although I can't find the
passage now, I'm confident enough to repeat it here. If
necessary, I expect we can confirm the language's parentage.

To quote Matz's preface in the pickaxe book:

[begin quote]
I wanted a language more powerful than Perl, and more
object-oriented than Python.

Then, I remembered my old dream, and decided to design my
own language. At first I was just toying around with it at work.
But gradually it grew into a tool good enough to replace Perl.
[end quote]

To try to put it into the Perl lineage misses the point that,
for Matz, being object oriented was a primary goal, and while
Perl is a lot of things, object oriented isn't one of them.

The "funny characters" in Perl are type indicators, in
Ruby they are namespace controls. My personal opinion
(which I suspect isn't shared by very many Pythonistias)
is that Python would be improved by requiring explicit
access to the module and built-in namespaces, rather than
the default searches it uses now. To make that work, of
course, would require editor/ide support.

John Roth
 

Joe Mason

John Roth said:
I think you've missed the point here. Python has a concept
of a "callable," that is, some object that can be called. Bound
methods are useful precisely because they carry their instance
around with them and also because they look exactly like any
other callable; there is no special syntax that is required either
to create one or to invoke it.

Unbound methods, on the other hand, require the caller to provide
the instance explicitly which limits their usefulness quite a bit.

Ah. Finally I see - I just misread "bound" as "unbound" in the original
post. I wondered why Greg brought up unbound methods and then gave me
examples of bound ones...

Joe
 

Stephen Horne

I still don't see how this is notable. Seems perfectly straightforward to
me - I'd just assume that's how it worked except in C++, about which I
never assume anything.

I don't know how widespread it is among very high level languages, but
there is a reason (in the not-that-high-level sense) that C++, and
other 3GLs with classes hacked on, don't work that way. I'm assuming object Pascals
and Basics (esp Delphi and VB) count here, though don't sue me if I'm
wrong - I'm not familiar with object support in either.

A bound method needs more information than the address of the function
that implements it. It also needs the address of the object that it
applies to, stored along with the function address. So a bound method
may look like a function when it is called, but in low level
implementation detail, it is not the same thing. In the Python
internals, I imagine there is a bound method type which contains
pointers to both the object and the unbound method.

The thing is that a bound method amounts to a restricted version of
currying - in effect, you have curried the 'self' parameter.
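
In CPython 2.x you can see exactly that pairing on the bound method object itself (Point is an invented example class):

class Point:
    def __init__(self, x):
        self.x = x
    def shifted(self, dx):
        return self.x + dx

p = Point(10)
m = p.shifted             # a bound method object

print m.im_self is p      # True - the instance it is bound to
print m.im_func           # the plain underlying function
print m(5)                # 15  - 'self' has effectively been curried away
print m.im_func(p, 5)     # 15  - the equivalent explicit call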

With currying, you can define a more specialised function using a
more general one by specifying some subset of the parameters. A simple
Python-like syntax may be...

def original_fn (a, b, c) :
    pass

specialist_fn = original_fn (, 2, )

specialist_fn (1, 3)

The currying in the assignment statement supplies parameter b, with
the remaining parameters a and c not being provided until the call.

To implement a curry'd function, you need a closure in addition to the
original function address - a record recording the values of the
parameters that have been supplied so far, and which parameters have
yet to be supplied. Just as you need the object bound-to as well as
the method to implement a bound function.
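
In real Python that kind of specialisation can be spelt with a closure - a sketch, with curry_middle being an invented helper rather than anything built in:

def curry_middle(fn, b):
    # Return a version of fn with its middle parameter fixed to b;
    # the closure keeps hold of both fn and b.
    def specialised(a, c):
        return fn(a, b, c)
    return specialised

def original_fn(a, b, c):
    return (a, b, c)

specialist_fn = curry_middle(original_fn, 2)
print specialist_fn(1, 3)     # (1, 2, 3)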

Imperative 3GLs kept functionality and data pretty thoroughly
separated - hence a lot of the OO fuss. Currying is standard in
functional languages. But currying predates object orientation by a
long time. In fact, I'm pretty sure it was invented about a century
ago as math (lambda calculus) rather than as programming, though I
could easily be confused.

Anyway, from this perspective, the bound method is an interesting
in-between concept. I've sometimes wondered if full currying support
may have been a better choice, but then a bound method is much more
efficient simply because it's simpler - not to mention avoiding some
syntactic difficulties I ignored above.
 

Joe Mason

I don't know how widespread it is among very high level languages, but
there is a reason (in the not-that-high-level sense) that C++, and
other 3GLs with classes hacked on, don't work that way. I'm assuming object Pascals
and Basics (esp Delphi and VB) count here, though don't sue me if I'm
wrong - I'm not familiar with object support in either.

I didn't think Pascals and Basics supported function pointers at all,
but I haven't used them since high school, so maybe I just didn't
encounter them at the time.
Anyway, from this perspective, the bound method is an interesting
in-between concept. I've sometimes wondered if full currying support
may have been a better choice, but then a bound method is much more
efficient simply because it's simpler - not to mention avoiding some
syntactic difficulties I ignored above.

I've always found the performance differences between functional and imperative
languages fascinating (well, ever since I found out about it) - on the
one hand, pure functional languages can prove facts about the code
mathematically, so in theory the compiler can optimize much more away.
But on the other hand, supporting all the extra function state they need
is very costly.

Of course, hybrid languages like Python and Ruby have the worst of both
worlds - side effects everywhere AND extra function baggage to pass
around.

Joe
 

Cameron Laird

.
.
.
separated - hence a lot of the OO fuss. Currying is standard in
functional languages. But currying predates object orientation by a
long time. In fact, I'm pretty sure it was invented about a century
ago as math (lambda calculus) rather than as programming, though I
could easily be confused.
.
.
.
Ouch! I'm old enough to take some of this personally.
Unless you're making some extremely recondite point
(about Skolem's influences?), "a century" is too rough
an approximation for me. Church and Kleene introduced
the lambda calculus in the '30s; Schoenfinkel, then
Curry, invented currying the decade before.
 

Cameron Laird

.
.
.
.
.
.
Ouch! I'm old enough to take some of this personally.
Unless you're making some extremely recondite point
(about Skolem's influences?), "a century" is too rough
an approximation for me. Church and Kleene introduced
the lambda calculus in the '30s; Schoenfinkel, then
Curry, invented currying the decade before.
.
.
.
Wrong. Well, true, all of it, but I'm neglecting Frege.
When I first posted my follow-up, I thought there was no
need even to mention him, because his work on anonymous
functions was too far on the other side of a century ago.
However, I poked around a bit, and the earliest analysis
of anonymous functions I know to this point is his 1891
*Funktion und Begriff*. I agree that's "about a century
ago". He *must* have written about them earlier, though
....
 

Paul Prescod

Joe said:
...


I've always found the performance differences between functional and imperative
languages fascinating (well, ever since I found out about it) - on the
one hand, pure functional languages can prove facts about the code
mathematically, so in theory the compiler can optimize much more away.
But on the other hand, supporting all the extra function state they need
is very costly.

What do you mean by "extra function state?" Are you talking about LAZY
functional languages?
Of course, hybrid languages like Python and Ruby have the worst of both
worlds - side effects everywhere AND extra function baggage to pass
around.

I don't know what you mean by "extra function baggage" and I don't see
how (e.g.) Python has some and Java or C# don't. Maybe Haskell (but maybe
not). I don't know what you mean about Python.

Paul Prescod
 

Duncan Booth

A better example of bound vs. unbound methods is this:

def boundFunc(callback):
    print callback(5, 6)

def unboundFunc(obj, callback):
    print callback(obj, 5, 6)

def functionCallback(a, b):
    return a + b

class Foo:
    def methodCallback(self, a, b):
        return a * b + self.c
    def setc(self, c):
        self.c = c

Say I modify your example so that we only have one Func which accepts
either a bound or an unbound callback (functionCallback and Foo
definitions are unchanged); its body boils down to:

    print callback(*(extra_args+(5,6)))
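
A minimal sketch of one way such a function could look, reusing the functionCallback and Foo definitions quoted above (eitherFunc and extra_args are just illustrative names):

def eitherFunc(callback, *extra_args):
    # Forward any leading arguments (e.g. the instance for an unbound
    # method) followed by the fixed (5, 6) pair.
    print callback(*(extra_args + (5, 6)))

f = Foo()
f.setc(3)
eitherFunc(functionCallback)          # plain function: prints 11
eitherFunc(f.methodCallback)          # bound method: prints 33
eitherFunc(Foo.methodCallback, f)     # unbound method plus instance: prints 33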


Python doesn't care what type of callable it is passed, so long as it
somehow ends up with the right arguments to call it. Can Ruby handle this
case?
 

Stephen Horne

I didn't think Pascals and Basics supported function pointers at all,
but I haven't used them since high school, so maybe I just didn't
encounter them at the time.

I'd be surprised if Delphi and VB don't have function pointers.
Without them, it would be impossible to use Windows API calls that
require callback functions for instance. Whether they are in the
Pascal and Basic standards, though, is an entirely different thing.
I've always found the performance differences between functional and imperative
languages fascinating (well, ever since I found out about it) - on the
one hand, pure functional languages can prove facts about the code
mathematically, so in theory the compiler can optimize much more away.
But on the other hand, supporting all the extra function state they need
is very costly.

Hmmm - I think you may be confusing the functional vs imperative
distinction with something else. Supporting a functional style does
not, in itself, require any special baggage.

Python carries some extra baggage in each object. However, that
baggage has nothing to do with functional support - it is needed for
dynamic typing, run-time binding, reference counting etc. In the case
of dynamic typing and run-time binding, for instance, Python objects
use a system similar to that used in C++ for run-time binding (ie
virtual methods) and run-time type identification - each object has a
pointer to a block of data describing its type and how to use it (a
virtual table in C++, the type object in Python).
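
That per-object type pointer is even visible from ordinary Python code - a trivial illustration:

x = 42
print type(x)                  # <type 'int'> - the shared type object
print x.__class__ is type(x)   # True - reachable via the instance itself
print type(1) is type(2)       # True - all ints point at the same type object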

Haskell is a good pure-functional language to use as a comparison.
Haskell has dynamic typing, but you can restrict types in a static way
when you choose. Haskell therefore only includes dynamic typing
'baggage' when it is necessary.


In Python, we pay a bit in runtime efficiency for scripting-language
flexibility - for dynamic typing, garbage collection, run-time symbol
tables etc - and we get pretty good value, or else we'd be using some
other language. But I can't really think of a functional-style feature
of Python that results in extra baggage for Python objects.

Perhaps first class functions? No - the necessary 'overhead' in
comparison with, say, C function pointers is already present in order
to support dynamic typing.


Imperative languages can have run-time efficiency advantages, of
course. One is that a human programmer usually knows whether in-place
modification or replacement of data is appropriate, and tells an
imperative language which to use. In a functional language, the
compiler has to work this out for itself as an optimisation, and it
can't always be sure.

For instance, it can be quite hard for programmers with imperative
blinkers on to understand how Haskell can handle sorting. Many have
claimed that it can't - and immediately been proved wrong by someone
presenting the code that does it. But to imperative-biased eyes, that
code may look a lot like a joke.

To illustrate, a literal Python translation of a Haskell quicksort
would be something like...

def quicksort (p) :
    if len(p) > 1 :
        l_Pivot = p[0]

        return (quicksort ([i for i in p[1:] if i < l_Pivot])
                + [l_Pivot]
                + quicksort ([i for i in p[1:] if i >= l_Pivot]))

    else :
        return p

In Python, of course, this would be bad code - it creates huge numbers
of small sorted sequences only to discard them almost instantly. The
average performance is still the same old O(n long n) and the worst
case still O(n^2), but the problem is the scaling constants involved
in all the unnecessary work.

The key difference between Python and Haskell here is that a good
Haskell compiler should be able to optimise out all the throwaway
intermediate lists, and should even be able to determine for itself
whether an in-place modification of the original list is valid.
Python, of course, does not - it is the programmer's responsibility to
handle sorting in a more efficient way.
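
In everyday Python the efficient route is simply the built-in in-place sort:

p = [3, 1, 4, 1, 5]
p.sort()     # sorts in place - no throwaway intermediate lists
print p      # [1, 1, 3, 4, 5]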

My guess would be that Haskell's method might fall down when, for
instance, the quicksort function is held in a separately compiled
library. In that case, the compiler's analysis of the quicksort cannot
determine whether the initial unsorted list is used again after the
sort, so it can't know whether it is safe to overwrite the unsorted
list using an in-place method or not. Though even then, the library
could hold variants for both cases. I don't know enough about Haskell
to say, really.


One possibility where both Python and Haskell might have related
run-time issues is in cache friendliness. In Python's case, this
results from the fact that all variables are bound using references,
all values are passed using references, etc. If you access a C
array-of-simple-values/structs sequentially, the way that RAM caching
and virtual memory operate on modern machines tends to mean that most
accesses are to fast cache memory because the data has been brought in
already along with earlier items. In Python, that doesn't tend to work
out because the actual values within the list are not normally
adjacent in memory - only the references are adjacent. This happens in
Haskell for much the same reason, though a Haskell compiler should
have a much better chance of optimising.

But even with cache friendliness, this isn't 'baggage from functional
features' in any sense. In fact, it often happens in C++ too - if you
are not certain of the concrete type of an object (you only know that
it is a subtype of a particular base class) you have to pass and store
a pointer rather than the object itself because you can't know at
compile time how much space the object takes up in memory, so you
can't reserve the space for it statically. So if you need (perhaps in
a drawing program) an array of shape objects, where each shape may be
an arbitrary subtype of the main shape base class, you actually have
to implement that as an array of pointers to shape objects. And the
shape objects are not likely to be at adjacent locations in memory, so
the drawing loop will make a much higher proportion of accesses to
relatively slow memory.

BTW - the reason this came to mind is because I'm currently working on
a Python extension module which supports a dynamically constructed
C-like struct (which can hold both Python objects and simple C-like
values - various sizes of integers and floats, plus fixed-length
null-padded string buffers using either 8- or 16-bit characters) and
can use them as both keys and values in a range of containers. For
records with no Python objects at all, I may add some
fixed-record-size file access stuff - but I have concerns about
portability and things ending with 'endian' etc. I've covered some of
the ground using the Boost/Python libraries before, but am now
discovering that the Python/C APIs aren't so very hard after all ;-)


Anyway, the real costs of using functional languages don't occur at
run-time. They occur at compile-time, as deep optimisations are very
far from trivial. I can't imagine turning off optimisations in a C++
compiler just to improve build speed, but in the Haskell world there
are compilers considered appropriate for development and prototyping
(minimal optimisations) and compilers considered appropriate for the
final release. I don't have the Haskell experience to know, but when
programmers make the effort to use a completely different compiler
just to reduce build times, I figure the difference must be pretty
substantial for large programs.
Of course, hybrid languages like Python and Ruby have the worst of both
worlds - side effects everywhere AND extra function baggage to pass
around.

Not true. You do get quite high run-time costs in Python (and Perl,
and ...), but you get a lot in return - very good value IMO. And the
run-time costs in Python don't derive from functional features.
 

Stephen Horne

Oops - sorry
Wrong. Well, true, all of it, but I'm neglecting Frege.

That's interesting, but I was actually thinking of Church and Kleene.
I'm just so rusty on my history that I only had a *very* vague sense
of the dates and didn't even remember that anonymous functions and
currying predate lambda calculus. You can rest assured that my wrist
has been slapped.

I don't think I ever heard of Frege before. Nor Skolem. Which is quite
bad, really, if the impressions I've just got from some quick searches
are right.
 

Joe Mason

Following Cameron's lead, I think it's time to retitle this thread...

What do you mean by "extra function state?" Are you talking about LAZY
functional languages?

I was just being over-general. Lemme see how much of this I can remember
without walking across the room to get a textbook...

Down in the depths of the compiler, the simplest way to implement
(C-style) functions is just to total up all the parameters and local
space it will need and lay out a stack frame. Then the optimizer runs
through and tries to remove as much overhead as possible, inlines some
functions, etc. The basic C++ object method is just a C function with
an extra parameter, just like Python's "self".

This works fine if functions are just named subroutines that can't be
passed around as values. If you add the ability to pass functions
around - C can do this with function pointers - that's fine too.

If you add nested functions, but not function pointers, you still don't
need to change too much. The only difference is that the amount of data
you need to make accessible goes up: as well as the function local
variables and parameters, and the global scope, you need to add all the
enclosing scopes. You can either just append all these to the local
scope, and then optimize out any variables that aren't actually used, or
add a pointer to the enclosing stack frame. This works because, since
you can't pass functions around, you're guaranteed that a nested
function can only be entered while its parent function has a frame on
the stack.

As soon as you add both function pointers and nested functions, this
isn't good enough. Now you can call a function, creating a scope, and
then return a function that needs access to that scope. If you destroy
the scope on return like usual, the function you just returned will
break. So scope is no longer just a stack frame. In functional
languages, all functions are full closures (meaning they carry with them
all the environment they need to run), which is more expensive.
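
In Python terms, that last situation looks like this (a toy example):

def make_adder(n):
    # The returned function still refers to n, so n's scope has to
    # survive after make_adder itself has returned.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print add5(3)     # 8 - 'add' carried its defining environment with it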

The reason C is fast is that it's pretty close to the bare machine -
it's really just a thin shell over assembly language. C++ isn't much
more, for all it's bells and whistles. (Of course, the more the
machines mangle the code in complicated ways - branch prediction and
code morphing and whatnot - the less that's true. But of course CPU's
are designed to run mainly C/C++ code, so of course none of these
optimizations will hurt.) In pure functional languages, ironically,
functions hurt performance, but in theory they can be optimized much
more because the sections which can have side effects are clearly marked
off. I haven't actually looked at any benchmarks to see what this
translates to in the real world, but I thought it was interesting.
(Maybe "fascinating" was too strong.)

Now somebody will surely chime in about Lisp machines...
I don't know what you mean by "extra function baggage" and I don't see
how (e.g.) Python has some and Java or C# don't. Maybe Haskell (but maybe
not). I don't know what you mean about Python.

....hybrid languages like Python and Ruby and Java and C#, then. It's
the combination of first-class functions *and* side effects that kills
you. (I don't know enough Java/C# to know if they have nested functions
and "function pointers" or equivalent - it actually wouldn't surprise me
if Java doesn't.)

Dynamic scripting languages aren't trying to go toe to toe with C on
performance, of course, so this isn't a criticism.

Joe
 
