What's better about Ruby than Python?

  • Thread starter Brandon J. Van Every

Jacek Generowicz

Andrew Dalke said:
Ndiya kulala. -- I am going for the purpose of sleeping.

And here's an example of Swedish with a translation into
English, which lacks some of the genealogical terms

min mormor -- my maternal grandmother

I can combine those and say

Umormor uya kulala -- my maternal grandmother is going
for the purpose of sleeping.

See how much more precise that is because I can select
words from different Dialects of Speech?

You are absolutely right. "Umormor uya kulala" is less readable than
"My maternal grandmother is going for the purpose of sleeping", to
someone who is familiar with English, but unfamiliar with Xhosa and
Swedish.

Now explain the Mor/Far concept and the "going for a purpose"
concept to said English speaker, and present him with text in which
combinations of the concepts are used repeatedly.

_Now_ ask yourself which is more readable.

For this reason it is rarely a good idea to define a macro for a
single use. However, it becomes an excellent idea if the idea the
macro expresses must be expressed repeatedly. The same is true of
functions, classes, modules ...
 

Borcis

Anton said:
The ability to maintain internal consistency and the tendency of other
people to fill in the gaps so that the final product seems coherent is
IMO the main reason for this strange time-travel-like ability of
making the right decisions even before all the facts are available.

Wow :)
 

Kenny Tilton

Andrew said:
Kenny Tilton:



I have about no idea of what that means. Could you explain
without using syntax? My guess is that it caches function calls,
based only on the variable names. Why is a macro needed
for that?

C? does something similar to what you think, but with an order of
magnitude more power. Estimated. :) Here is how C? can be used:

(make-instance 'box
  :left (c? (+ 2 (right a)))
  :right (c? (+ 10 (left self))))

So I do not have to drop out of the work at hand to put /somewhere else/
a top-level function which also caches, and then come back to use it in
make-instance. That's a nuisance, and it scatters the semantics of this
particular box all over the source. Note, btw:

(defun test ()
  (let* ((a (make-instance 'box
              :left 0
              :right (c? (random 30))))
         (b (make-instance 'box
              :left (c? (+ 2 (right a)))
              :right (c? (+ 10 (left self))))))
    (print (list :a a (left a) (right a)))
    (print (list :b b (left b) (right b)))))

....that different instances of the same class can have different rules
for the same slot. Note also that other plumbing is necessary to make
slot access transparent:

(defun get-cell (self slotname) ;; this fn does not need duplicating
  (let ((sv (slot-value self slotname)))
    (typecase sv
      (function (funcall sv self))
      (otherwise sv))))

(defmethod right ((self box)) ;; this needs duplicating for each slot
  (get-cell self 'right))

But I just hide it all (and much more) in:

(defmodel box ()
  ((left :initarg :left :accessor left)
   (right :initarg :right :accessor right)))

....using another macro:

(defmacro defmodel (class superclasses (&rest slots))
  `(progn
     (defclass ,class ,superclasses
       ,slots)
     ,@(mapcar (lambda (slot)
                 (destructuring-bind
                     (slotname &key initarg accessor)
                     slot
                   (declare (ignore initarg))
                   `(defmethod ,accessor ((self ,class))
                      (get-cell self ',slotname))))
               slots)))

>>> def CachedCall(f):   # wrapper header lost in quoting; reconstructed
...     cache = {}
...     def cached_call(self, *args):
...         if args in cache:
...             return cache[args]
...         x = f(self, *args)
...         cache[args] = x
...         return x
...     return cached_call
...
>>> class Spam:   # original class header also lost in quoting
...     def compute(self, i):
...         time.sleep(i)
...         return i*2
...     compute = CachedCall(compute)
...
6
3.01400005817
6
0.00999999046326

Cool. But call cachedCall "memoize". :) Maybe the difference is that you
are caching a specific computation of 3, while my macro in a sense
caches the computation of arbitrary code by writing the necessary
plumbing at compile time, so I do not have to drop my train of thought
(and scatter my code all over the place).

That is where Lisp macros step up--they are just one way code is treated
as data, albeit at compile time instead of the usual runtime consideration.
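For readers following along in Python, the plumbing that C? and defmodel hide can be approximated without macros, at the cost of writing it as runtime machinery. A minimal sketch (illustrative names only; this is not the actual Cells package, and unlike C? it does not cache):

```python
# A rough Python analogue of get-cell plus the accessors defmodel
# writes: a slot holds either a plain value or a one-argument rule,
# and the property transparently calls the rule on access.

def cell_property(slotname):
    def getter(self):
        sv = self.__dict__.get(slotname)
        return sv(self) if callable(sv) else sv
    def setter(self, value):
        self.__dict__[slotname] = value
    return property(getter, setter)

class Box:
    left = cell_property('_left')
    right = cell_property('_right')
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

a = Box(left=0, right=lambda self: 30)
b = Box(left=lambda self: 2 + a.right,
        right=lambda self: 10 + self.left)
print(a.left, a.right)   # 0 30
print(b.left, b.right)   # 32 42
```

Note that every access re-runs the rule; the caching and dependency tracking are exactly the extra machinery the discussion is about.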



--

kenny tilton
clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"Career highlights? I had two. I got an intentional walk from
Sandy Koufax and I got out of a rundown against the Mets."
-- Bob Uecker
 

Borcis

Jacek said:
You criticize macros for not encouraging code sharing (they do, by
encouraging you to share the (vast) underlying language while reaching
out towards a specific domain), while your preferred solution seems to
be the ultimate code non-sharing, by throwing away the underlying
language, and re-doing it.

This criticism can't help looking frivolous, imho. You appear to be confusing
"language" with "speech". But I do believe there *must* exist a sane niche for
(perhaps mutated) macros (in some lisp-like sense).

Cheers, B.
 

Kenny Tilton

Andrew said:
> Olivier Drolet:
>
>
>
> Making new words doesn't cause Speech to fork any more than
> making new sentences does.

Hunh? This one doesn't work, and this is the one you have to answer.

Forget argument by analogy: How is a macro different than an API or
class, which hide details and do wonderful things but still have to be
mastered. Here's an analogy <g>: I could learn Java syntax in a week,
but does that mean I can keep up with someone who has been using the
class libraries for years? Nope.

And Java doesn't even have macros.

> In short, no one is denying that the ability to create new macros is
> a powerful tool, just like no one denies that creating new words is
> a powerful tool. But both require extra training and thought for
> proper use, and while they are easy to write, it puts more effort
> for others to understand you. If I stick to Python/English then
> more people can understand me than if I mixed in a bit of Erlang/
> Danish, *even* *if* the latter makes a more precise description
> of the solution.

One of the guys working under me had no respect for readability; he just
got code to work, which was nice. I once had to work on his code. In
about half an hour, with the help of a few macros, a great honking mass
of text which completely obfuscated the action had been distilled to its
essence. One could actually read it.

Of course every once in a while you would notice something like "Ndiya",
but if you went to the macrolet at the top of the function you would
just think "oh, right" and get back to the code.

Maybe sometimes these macros could be functions, but then I'd just
call the function Ndiya.

So what is the difference?

btw, I am on your side in one regard: the LOOP macro in Lisp has
phenomenally un-Lispy syntax, so I have never used it. I am slowly
coming around to it being more useful than irritating, but I did not
like having a new /syntax/ invented. (LOOP /can/ be used with
conventional syntax, but I have seen such code only once and I think I
know why they broke the syntax. <g>). So I get that point, but normally
macros do not deviate from standard Lisp syntax.

Ndiya kulala.


 

Jacek Generowicz

Borcis said:
This criticism can't help looking frivolous,

Only in so far as the original thesis is frivolous.
You appear to be confusing "language" with "speech".

I'm not sure what you mean by this.

Are you saying that macros are "language" because you've heard the
buzz-phrase that "macros allow you to modify the language", while
functions, classes and modules are "speech", because no such
buzz-phrases about them abound ?

If so, then you are erecting artificial boundaries between different
abstraction mechanisms. (All IMHO, of course.)
 

Kenny Tilton

Alex said:
Andrew Dalke wrote:




That's an oldstyle class -- use a newstyle one for smoothest
and most reliable behavior of descriptors




This equivalence holds today as well -- the getattr
builtin has identical semantics to direct member access.




Yes! So what is it that you say you don't get?





If obj is such that it can be used as a key into a dict
(weak or otherwise), sure. Many class instances of some
interest can't -- and if they can you may not like the
result. Consider e.g.

class Justanyclass:
    def __init__(self, x): self.x = x
    def compute(self, y): return self.x + y

pretty dangerous to cache THIS compute method -- because,
as a good instance method should!, it depends crucially
on the STATE of the specific instance you call it on.





An automatically cachable method on general objects is
quite tricky.

Lisp hashtables can key off any Lisp datum, but...
I don't think the Lisp code did anything
to deal with that trickiness,

No, it did not. That snippet was from a toy (and incomplete)
implementation of my more elaborate Cells package. The toy was developed
over the keyboard during a talk I gave on Sunday. The next step would
have been to determine when the closure had best re-execute the code
body to see if the world had changed in interesting ways, but that is a
big step and requires dependency tracking between cells.

Once a Cell is told an input has changed, it re-runs its body to see if
it comes up with a different result, in which case it caches that and
tells other dependents to rethink their caches.

So what is shown in my example is fun but half-baked. I was just trying
to show how a macro could hide the plumbing of an interesting mechanism
so the reader can focus on the essence.




 

Alex Martelli

Anton Vredegoor wrote:
...
tiny chance. Suppose you could make a bet for a dollar with an
expected reward of a thousand dollars? Statistically it doesn't matter
whether you get a .999 chance of getting a thousand dollars or a
.00999 chance of getting a million dollars.

This assertion is false and absurd. "Statistically", of course,
expected-value is NOT the ONLY thing about any experiment. And
obviously the utility of different sums need not be linear -- it
depends on the individual's target-function, typically influenced
by other non-random sources of income or wealth.

Case 1: with whatever sum you win you must buy food &c for a
month; if you have no money you die. The "million dollars chance"
sees you dead 99.9901 times out of 100, which to most individuals
means huge negative utility; the "thousand dollars chance" gives
you a 99.9% chance of surviving. Rational individuals in this
situation would always choose the 1000-dollars chance unless the
utility to them of the unlikely million was incredibly huge (which
generally means there is some goal enormously dear to their heart
which they could only possibly achieve with that million).

Case 2: the sum you win is in addition to your steady income of
100,000 $/month. Then, it may well be that $1000 is peanuts of
no discernible use to you, while a cool million would let you
take 6 months' vacation with no lifestyle reduction and thus has
good utility to you. In this case a rational individual would
prefer the million-dollars chance.

Therefore, the only thing pertinent to this question seems to be the
risk and gain assessments.

Your use of 'therefore' is inappropriate because it suggests the
following assertion (which _is_ mathematically speaking correct)
"follows" from the previous paragraph (which is bunkum). The
set of (probability, outcome) pairs DOES mathematically form "the
only thing pertinent" to a choice (together with a utility function
of course -- but you can finesse that by expressing outcome as
utility directly) -- the absurdity that multiplying probability
times outcome (giving an "expected value") is the ONLY relevant
consideration is not necessary to establish that.

Another relevant meme that is running around in this newsgroup is the
assumption that some people are naturally smarter than other people.
While I can certainly see the advantage for certain people for keeping
this illusion going (it's a great way to make money, the market
doesn't pay for what it gets but for what it thinks it gets) there is
not a lot of credibility in this argument.

*FOR A GIVEN TASK* there can be little doubt that different people
do show hugely different levels of ability. Mozart could write
far better music than I ever could -- I can write Python programs
far better than Silvio Berlusconi can. That does not translate into
"naturally smarter" because the "given tasks" are innumerable and
there's no way to measure them all into a single number: it's quite
possible that I'm far more effective than Mozart at the important
task of making and keeping true friends, and/or that Mr Berlusconi
is far more effective than me at the important tasks of embezzling
huge sums of money and avoiding going to jail in consequence (and
THAT is a great way to make money, if you have no scruples).

Note that for this purpose it does not matter whether the difference
in effectiveness at given tasks comes from nature or nurture, for
example -- just that it exists and that it's huge, and of that, only
a madman could doubt. If you have the choice whom to get music
from, whom to get Python programs from, whom to get as an accomplice
in a multi-billion scam, you should consider the potential candidates'
proven effectiveness at these widely different tasks.

In particular, effectiveness at design of programming languages can
be easily shown to vary all over the place by examining the results.

Of course there is a lot of variation between people in the way they
are educated and some of them have come to be experts at certain
fields. However, no one is an island, and one person's thinking process
is interconnected with a lot of other persons' thinking processes. The

Of course Mozart would have been a different person -- writing
different kinds of music, or perhaps doing some other job, maybe
mediocrely -- had he not been born when and where he was, the son
of a music teacher and semi-competent musician, and so on. And
yet huge numbers of other people were born in perfectly similar
circumstances... but only one of them wrote *HIS* "Requiem"...

there are those that first leap and then look. It's fascinating to see
"look before you leap" being deprecated in favor of "easier to ask
forgiveness than permission" by the same people that would think twice
to start programming before being sure to know all the syntax.

Since I'm the person who intensely used those two monikers to
describe different kinds of error-handling strategies, let me note
that they're NOT intended to generalize. When I court a girl I
make EXTREMELY sure that she's interested in my advances before I
push those advances beyond certain thresholds -- in other words in
such contexts I *DEFINITELY* "look before I leap" rather than choosing
to make inappropriate and unwelcome advances and then have to "ask
forgiveness" if/when rebuffed (and I despise the men who chose the
latter strategy -- a prime cause of "date rape", IMHO).

And there's nothing "fascinating" in this contrast. The amount of
damage you can inflict by putting your hands or mouth where they
SHOULDN'T be just doesn't compare to the (zero) amount of "damage"
which is produced by e.g. an attempted access to x.y raising an
AttributeError which you catch with a try/except.
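The two error-handling strategies Alex names are easy to show side by side in Python; a minimal illustration (the attribute name `y` and the helpers are just for this example):

```python
# "Look before you leap": check first, then act.
def get_y_lbyl(obj, default=None):
    if hasattr(obj, 'y'):
        return obj.y
    return default

# "Easier to ask forgiveness than permission": act, catch the error.
def get_y_eafp(obj, default=None):
    try:
        return obj.y
    except AttributeError:
        return default

class Point:
    y = 7

print(get_y_lbyl(Point()), get_y_eafp(Point()))     # 7 7
print(get_y_lbyl(object()), get_y_eafp(object()))   # None None
```

Both give the same answer here, which is Alex's point: for an attribute lookup the failed attempt costs nothing, so the styles differ in taste, not in damage done.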



Alex
 

Kenny Tilton

Anton said:
IMO there is a strong tendency towards unification and standardization
among the readers of this newsgroup and the need to conform and the
rewards this brings are well understood.

Your comment reminds me of a brouhaha over the legendary IF* macro
(search comp.lang.lisp via Google). A fellow cooked up (IF* [THEN] ...
ELSE or ELSE-IF ... END-IF), and then used it in a big package his
employer released (so you had to go find the macro!). He took a little
heat for that, the gist being, "if you want to use Basic, use Basic."

Unrelated to macros, on Google you'll also see yours truly getting
eviscerated for using camelCase. "Dude, we use hyphens".

So, yeah, yer technically opening up the floodgates, but the social
pressure is pretty effective at keeping Lisp Lispy and would be at
keeping Python...Pythonic?


 

Heiko Wundram

We will, probably 2.4 or 2.5. (Whenever 3.0 starts getting off the
ground.)

Hmm... I still use <> exclusively for my code, and I wouldn't really
like it getting deprecated. At least for me, != is more difficult to see
when browsing source than <> is, as != has a striking similarity to ==,
at least at the first glance...

I know <> has been deprecated for a long time, but I'd much rather have both
syntaxes allowed, also for the future...

But anyway, I guess the way until it will be deprecated is still a long
one, so I won't bother to protest more now... :)

Heiko.
 

Andrew Dalke

Alex Martelli:
That's an oldstyle class -- use a newstyle one for smoothest
and most reliable behavior of descriptors

Oops! Yeah, forgot that too. I do consider it a (necessary)
wart that different classes have different behaviours.

Yes! So what is it that you say you don't get?

Before this I didn't realize the process of __getattr__
had changed to allow the __get__ to work. I thought
properties were done at assignment time rather than
lookup time, and that the hooks were located in
something done with __getattribute__.

After stepping through it, with Raymond's descriptor
next to me, I think I now understand it.

If obj is such that it can be used as a key into a dict
(weak or otherwise), sure. Many class instances of some
interest can't -- and if they can you may not like the
result. Consider e.g.

class Justanyclass:
    def __init__(self, x): self.x = x
    def compute(self, y): return self.x + y

pretty dangerous to cache THIS compute method -- because,
as a good instance method should!, it depends crucially
on the STATE of the specific instance you call it on.

But if someone were to use a method call cache on it
then I would have expected that person to know if its
use was relevant.
Anyway, I just wanted to
show how the descriptor concept lets you use a class,
rather than a function, when you want to -- indeed any
function now has a __get__ method, replacing (while
keeping the semantics of) the old black magic.

Yep. Using a class is to be preferred over my
def-with-nested-scope trick.
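A concrete version of the class-based approach being preferred here: a non-data descriptor whose __get__ hands back a memoizing bound wrapper, using the same hook that turns plain functions into bound methods. This is an illustrative sketch, not the code from the thread; keying the cache on the instance sidesteps Alex's Justanyclass worry only for as long as the instance's state stays put:

```python
class CachedMethod:
    # Non-data descriptor: __get__ is invoked on attribute lookup.
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __get__(self, obj, objtype=None):
        if obj is None:            # accessed on the class itself
            return self.func
        def wrapper(*args):
            key = (id(obj), args)  # include the instance in the key
            if key not in self.cache:
                self.cache[key] = self.func(obj, *args)
            return self.cache[key]
        return wrapper

class Justanyclass:
    def __init__(self, x):
        self.x = x
    @CachedMethod
    def compute(self, y):
        return self.x + y

a, b = Justanyclass(10), Justanyclass(1)
print(a.compute(5), b.compute(5))   # 15 6 -- cached per instance
```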

Andrew
(e-mail address removed)
 

A. Lloyd Flanagan

Alex Martelli said:
I think it was: it allowed C++ to enter into MANY places that just
wouldn't have given it a thought otherwise, and to popularize OO
in this -- albeit indirect -- way.


Alex

You're right. The characterization of C++ as a "better C" got it into
a lot of places. It also, unfortunately, resulted in a huge amount of
C++ code full of C idioms and procedural thinking.

So management thinks they're doing object-oriented programming because
they are using an object-oriented language. But the problems of C
become even worse when you do C++ wrong. The result: people end up
thinking this whole 'object-oriented' thing is a bunch of hooey.

Don't get me wrong: you can do great things with C++ if you're an
expert. Problem is, if you're not, you can do tremendous damage.
 

Andrew Dalke

Kenny Tilton
C? does something similar to what you think, but with an order of
magnitude more power. Estimated. :) Here is how C? can be used:

[.. some lisp example ...]

You have got to stop assuming that a description in Lisp is
intuitive to me. I don't know anywhere near enough of that
language to know what's normal vs. what's novel.
So I do not have to drop out of the work at hand to put /somewhere else/
a top-level function which also caches,

Didn't you have to "drop out of the work at hand" to make the macro?

I tried to understand the Lisp but got lost with all the 'defun' vs
'function' vs 'funcall' vs 'defmodel' vs 'defclass' vs 'destructuring-bind'.
I know Lisp is a very nuanced language. I just don't understand
all the subtleties. (And since Python works for what I do, I
don't really see the need to understand those nuances.)
Cool. But call cachedCall "memoize". :) Maybe the difference is that you
are caching a specific computation of 3, while my macro in a sense
caches the computation of arbitrary code by writing the necessary
plumbing at compile time, so I do not have to drop my train of thought
(and scatter my code all over the place).

Sure, I'll call it memoize, but I don't see why that's to be preferred.
The code caches the result of calling a given function, which could
compute 3 or could compute bessel functions or could compute
anything else. I don't see how that's any different than what your
code does.

And I still don't see how your macro solution affects the train
of thought any less than my class-based one.
That is where Lisp macros step up--they are just one way code is treated
as data, albeit at compile time instead of the usual runtime
consideration.

The distinction between run-time and compile time use of code is
rarely important to me. Suppose the hardware was 'infinitely' fast
(that is, fast enough that whatever you coded could be done in
within your deadline). On that machine, there's little need for
the extra efficiencies of code transformation at compile time. But
there still is a need for human readability, maintainability, and code
sharing.

And for most tasks these days, computers are fast enough.

Andrew
(e-mail address removed)
 

Kenny Tilton

Andrew said:
Kenny Tilton
Lisp hashtables can key off any Lisp datum, but...


Bear with my non-existent Lisp knowledge.

Suppose the code is modified. How does the hash table
get modified to reflect the change? Written in Python,
if I have

a = (1, 2, (3, [4, 5]))

Lisp is sick. From the hyperspec on make-hash-table, the test for
lookups can be "eq, eql, equal, or equalp. The default is eql." EQUAL
would work for this case. EQL just looks at object identity.
I can't hash it because someone could come by later
and do

a[2][1].append(6)

so the hash computation and test for equality
will give different results.

The next step would
have been to determine when the closure had best re-execute the code
body to see if the world had changed in interesting ways, but that is a
big step and requires dependency tracking between cells.


Ahhh, so the Python code was comparable in power without
using macros?

No, it was completely different, as per my earlier post. What you did is
what is called MEMOIZE (not part of CL, but I saw some Perl refs pop up
when I googled that looking for Paul Graham's memoize code from On
Lisp). My code just calculates once ever! That is why it needs me to
give another talk in which I add dependency tracking and state change
propagation. And even then it does not memoize, tho you have me thinking
and I could certainly make that an option for Cells that need it. Kind
of rare, but it would be a shiny new bell/whistle for the package.

No, to match the power of my code you need to do:

(let ((b (make-instance 'box
           :left 10
           :right (c? (* 2 (left self))))))
  (print (list (left b) (right b))))

and see (10 20).

You need to then change the source above to say (c? (* 3 (left self)))
and see (10 30). It should be supported by no more than

(defmodel box ()
  (left ...)
  (right ...))

Macros are not functionality; they are about hiding the wiring/plumbing
behind neat new hacks, in a way functions cannot because they jump in at
compile time to expand user-supplied code into the necessary
implementing constructs.

Could the same code be written in Lisp using an approach
like I did for Python?

Download Paul Graham's On Lisp; he has a macro in there that hides all
the plumbing you used for cacheing. :) Look for "Memoize".
What would a non-macro solution look
like?

(let ((cache :unbound))
  (lambda (self)
    (if (eq cache :unbound)
        (setf cache (+ 10 (left self)))
        cache)))
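In Python the same no-macro shape would be a closure over a sentinel; a sketch with illustrative names, where the sentinel plays the role of :unbound:

```python
# Compute once on the first call, then always return the cached value.
_UNBOUND = object()

def make_cell(rule):
    cache = _UNBOUND
    def cell(self):
        nonlocal cache
        if cache is _UNBOUND:
            cache = rule(self)
        return cache
    return cell

class Box:
    def __init__(self, left):
        self.left = left

right = make_cell(lambda self: 10 + self.left)
box = Box(left=5)
print(right(box))   # 15, computed now
box.left = 99
print(right(box))   # still 15: the rule never re-runs
```

As in the Lisp version, this calculates once ever; noticing that the world changed is the dependency-tracking step the thread keeps deferring.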
What's the advantage of the macro one over the non-macro
one? Just performance?

Again, arranging it so necessary wiring is not cut and pasted all over,
cluttering up the code to no good end, and forcing all the code to be
revisited when the implementation changes. That is, they are just like
functions, except they operate at compile time on source code. The
bestest I could do without macros would be:

(make-instance 'box
  :left (c? (lambda (self)
              (+ 2 (right a))))
  ...)

Where C? becomes a function which returns an instance which has a slot
for the cache. But then, without macros, I have to hand-code the setters
and getters on every such slot. My defmodel macro writes those accessors
silently.

You know, with all due respect, the question really is not if macros are
useful/powerful. They are, and I think even you conceded that. Forgive
me if I am mistaken. At their most powerful they expand into multiple
top-level definitions and even stash info globally to assist
development, such as being able to inspect the source of a closure at
debug-time. They run at compile time and get a look at the source, and
can do interesting things functions cannot.

The anti-macronistas are better off with the argument, hey, if you want
Lisp, use Lisp. Let's keep it simple here. The question for you all is
how far you want to take Python.

Since there already is a Lisp -- stable, compiled, ANSI-standardized,
generational GC, etc, etc -- it would be easier to generate FFI bindings
for needed C libs than to play catch up against a forty year lead.


 

Andrew Dalke

Kenny Tilton:
Forget argument by analogy: How is a macro different than an API or
class, which hide details and do wonderful things but still have to be
mastered. Here's an analogy <g>: I could learn Java syntax in a week,
but does that mean I can keep up with someone who has been using the
class libraries for years? Nope.

And Java doesn't even have macros.

So adding macros to Java would make it easier to master?

Andrew
(e-mail address removed)
 

Andrew Dalke

Olivier Drolet:
Really cute intuition pump you've got there, Alex! :)

Err, it was me, no?
Macros don't require that you change variable names, just the syntax.
Macros in Python would then not be equivalent to new words, but new
constructs. And yes, you need to know their meaning, but is this
effort greater than managing without them? Have you or any opponents
of macros on this news group never seen a context in which the
advantages of macros outweigh their overhead?

Sure. Having macros would have let people experiment with
generators without tweaking the core language. Would have let
people implement their own metaclass mechanisms instead of
waiting until they were added to the C code.

The complaint has never been that there is no context in which
macros provide the better solution.

I think this track of the discussion will go no further. Let's
assume that I am convinced that not only are macros the best
thing since chocolate but that they should be added to Python.

Following Aahz's suggestion -- what would a Python-with-macros
look like?

Andrew
(e-mail address removed)
 

Jeffrey P Shell

Alex Martelli said:
As for me, I have no special issue with "having to specify self" for
functions I intend to use as bound or unbound methods; otherwise I
would no doubt have to specify what functions are meant as methods
in other ways, such as e.g.

def [method] test(what, ever):
    ...

and since that is actually more verbose than the current:

def test(self, what, ever):
    ...

I see absolutely no good reason to have special ad hoc rules, make
"self" a reserved word, etc, etc. Python's choices are very simple
and work well together (for the common case of defining methods
that DO have a 'self' -- classmethod and staticmethod are currently
a bit unwieldy syntactically, but I do hope that some variation on
the often-proposed "def with modifiers" syntax, such as

It's especially nice that if you don't want to use 'self', you don't
have to. Comfortable with 'this' from Java and friends?

def test(this, what, ever):
    this.goKaboom(what*ever-what)

will work just fine. 'self' is just a ferociously common idiom. But
it's only an idiom. I like that. So even if we used [method] as a
function decorator, I'd still prefer to see 'self' as part of the
signature. I just like knowing where my names come from.

I do like the concept of function/method decorators though. I'll have
to revisit that PEP.
 

Anton Vredegoor

Alex Martelli said:
Anton Vredegoor wrote:
...

This assertion is false and absurd. "Statistically", of course,
expected-value is NOT the ONLY thing about any experiment. And
obviously the utility of different sums need not be linear -- it
depends on the individual's target-function, typically influenced
by other non-random sources of income or wealth.

Non-linear evaluation functions? Other random sources? Seems you're
trying to trick me. I did write statistically, which implies a large
number of observations. Of course people seldom get to experiment with
those kinds of money, but a simple experiment in Python using a random
number generator should suffice to prove the concept.
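The "simple experiment in Python" can be sketched directly. (Note the thread's figures don't quite match: a .00999 shot at a million has expectation $9,990, not $999; .000999 is assumed below so the two bets really are expectation-equal, which is the claim under test.)

```python
import random

# Monte Carlo comparison of two expectation-equal bets:
# a .999 chance of $1,000 vs a .000999 chance of $1,000,000.
random.seed(1)
N = 1_000_000

def play(p, prize):
    return prize if random.random() < p else 0

avg_sure = sum(play(0.999, 1_000) for _ in range(N)) / N
avg_long = sum(play(0.000999, 1_000_000) for _ in range(N)) / N

# Both averages hover near $999, but the spread of single outcomes
# differs enormously -- which is exactly Alex's utility point.
print(round(avg_sure), round(avg_long))
```

Over enough plays the averages converge, as Anton says; Alex's cases 1 and 2 are about what happens when you only get to play once.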

[snip tricky example cases]
*FOR A GIVEN TASK* there can be little doubt that different people
do show hugely different levels of ability. Mozart could write
far better music than I ever could -- I can write Python programs
far better than Silvio Berlusconi can. That does not translate into
"naturally smarter" because the "given tasks" are innumerable and
there's no way to measure them all into a single number: it's quite
possible that I'm far more effective than Mozart at the important
task of making and keeping true friends, and/or that Mr Berlusconi
is far more effective than me at the important tasks of embezzling
huge sums of money and avoiding going to jail in consequence (and
THAT is a great way to make money, if you have no scruples).

And you're leaving Mr. Berlusconi's friends out of the equation?
Seems like trick play again to me. Why are there so few famous classic
female philosophers or musicians? Surely you're not going to tell me
that's just because only the very gifted succeed in becoming famous?
Since I'm the person who intensely used those two monikers to
describe different kinds of error-handling strategies, let me note
that they're NOT intended to generalize. When I court a girl I
make EXTREMELY sure that she's interested in my advances before I
push those advances beyond certain thresholds -- in other words in
such contexts I *DEFINITELY* "look before I leap" rather than choosing
to make inappropriate and unwelcome advances and then have to "ask
forgiveness" if/when rebuffed (and I despise the men who chose the
latter strategy -- a prime cause of "date rape", IMHO).

And there's nothing "fascinating" in this contrast. The amount of
damage you can inflict by putting your hands or mouth where they
SHOULDN'T be just doesn't compare to the (zero) amount of "damage"
which is produced by e.g. an attempted access to x.y raising an
AttributeError which you catch with a try/except.

Somehow one has to establish that a certain protocol is supported.
However trying to establish a protocol implies supporting the protocol
oneself. Perhaps not initiating protocols that one doesn't want to see
supported is the best way to go here.

Anton
 

David Abrahams

Roy Smith said:
One of the few things I like about C++ is that between const, templates,
and inline, the need for the macro preprocessor has been almost
eliminated.

Har! If anything it has been increased! Boost, a haven for template
experts, has a whole library which formalizes a programming system for
the preprocessor (http://www.boost.org/libs/preprocessor) just so we
can eliminate the nasty boilerplate that arises in our template code.
Still, you see a lot of code which goes out of its way to
do fancy things with macros, almost always with bad effect.

I guess it's a question of how badly you hate maintaining 25 different
copies of similar code. And, BTW, I tried to "just use Python to
generate C++" first and using the preprocessor turns out to be
significantly better.

BTW, the C++ preprocessor is a fairly weak macro system. A higher
level metaprogramming facility that knows more about the underlying
language could be a lot cleaner, clearer, safer, and more expressive.
 

Kenny Tilton

Andrew said:
Maybe. But look for previous posts of mine on c.l.py to see that
my previous attempts at learning Lisp have met dead ends. I know
how it's supposed to work, but haven't been able to convert what
I see on a page into something in my head.

It's funny you say that. Someone is dropping by shortly to help me get
up to speed on Linux, and I am helping him with Lisp. But the last email
from him said he had been reading Winston&Horn since my talk on Sunday
(which apparently got him off the same "don't-get-it" square you're on)
and he now says he has no idea why he did not get it before.

I wonder if the (funny (syntax)) makes people think there is more there
than there is. Even in C I code functionally:

this( that( x), then (y))

so I do not think it is the functional thang.

BTW, I'm afraid I'm about at the end of my limits for this thread.
I'll only be able to do small followups.

OK. I might be tossing off a fun demo of my C? jobbies for my talk at
the upcoming Lisp conference in New York:

http://www.international-lisp-conference.org/

Maybe I'll preview it here. That might help us understand each other
better.



 
