Comparing lists


Christian Stapfer

Ron Adam said:
An interesting parallel can be made concerning management of production vs
management of creativity.

In general, production needs checks and feedback to ensure quality, but
will often come to a standstill if only incomplete resources are available.

Whereas creativity needs checks to ensure production, but in many cases
can still be productive even with incomplete or questionable resources.
The quality may vary quite a bit in both directions, but in creative
tasks, that is to be expected.

In many ways programmers are a mixture of these two. I think I and Steven
use a style that is closer to the creative approach. I get the feeling
your background may be closer to the production style.

Both are good and needed for different types of tasks. And I think most
programmers can switch styles to some degree if they need to.

This brings to mind an experience I shared
with a student who was one of those highly
creative experimentalists you seem to have
in mind. He had just bought a new PC and
wanted to check how fast its floating point
unit was as compared to our VAX. After
having done his wonderfully creative
experimenting, he was utterly dejected: "Our (old)
VAX is over 10'000 times faster than my new PC",
he told me, almost in despair. Whereupon I,
always the uncreative, dogmatic theoretician,
who does not believe that much in the decisiveness
of the outcome of mere experiments, told him
that this was *impossible*, that he *must* have
made a mistake...

It turned out that the VAX compiler had been
clever enough to hoist his simple-minded test
code out of the driving loop. In fact, our VAX
calculated the body of the loop only *once*
and thus *immediately* announced that it had finished
the whole test - the compiler on this student's
PC, on the other hand, had not been clever enough
for this type of optimization: hence the difference...

I think this is really a cautionary tale for
experimentalists: don't *believe* in the decisiveness
of the outcomes of your experiments, but try to *understand*
them instead (i.e. relate them to your theoretical grasp
of the situation)...

Regards,
Christian
 

Fredrik Lundh

Christian said:
As to the value of complexity theory for creativity
in programming (even though you seem to believe that
a theoretical bent of mind can only serve to stifle
creativity), the story of the discovery of an efficient
string searching algorithm by D.E.Knuth provides an
interesting case in point. Knuth based himself on
seemingly quite "uncreatively theoretical work" (from
*your* point of view) that gave a *better* value for
the computational complexity of string searching
than any of the then known algorithms could provide.

are you talking about KMP? I'm not sure that's really a good example of
how useful "theoretical work" really is in practice:

- Morris had already implemented the algorithm (in 1968) when Knuth "dis-
covered" it (1971 or later), so the "then known" part of your argument is
obviously bogus. "then known by theoretical computer scientists" might
be correct, though.

- (iirc, Knuth's first version wasn't practical to use; this was fixed by Pratt)

- (and iirc, Boyer-Moore had already been invented when Knuth published the
first paper on KMP (in 1977))

- for use cases where the setup overhead is irrelevant, Boyer-Moore is almost
always faster than KMP. for many such cases, BM is a lot faster.

- for use cases such as Python's "find" method where the setup overhead cannot
be ignored, a brute-force search is almost always faster than KMP.

- for use cases such as Python's "find" method, a hybrid approach is almost
always faster than a brute-force search.

in other words, the "better" computational complexity of KMP has turned out
to be mostly useless, in practice.

</F>
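
To make the setup-overhead point concrete, here is a rough sketch in pure Python (this is not CPython's actual find implementation, nor Knuth's original formulation; the text, pattern, and timing harness are invented for illustration): a textbook KMP search pays to build its failure table on every call, while a brute-force scan pays nothing up front. For the short haystacks and needles a "find" method typically sees, that setup cost tends to cancel out KMP's asymptotic advantage; the timing loop at the end is only illustrative.

import timeit

def kmp_find(haystack, needle):
    """Index of the first occurrence of needle in haystack, or -1 (textbook KMP)."""
    if not needle:
        return 0
    # Build the failure table -- this is the per-call setup cost.
    fail = [0] * len(needle)
    k = 0
    for i in range(1, len(needle)):
        while k and needle[i] != needle[k]:
            k = fail[k - 1]
        if needle[i] == needle[k]:
            k += 1
        fail[i] = k
    # Scan the haystack, never backing up in it.
    k = 0
    for i, ch in enumerate(haystack):
        while k and ch != needle[k]:
            k = fail[k - 1]
        if ch == needle[k]:
            k += 1
            if k == len(needle):
                return i - k + 1
    return -1

def naive_find(haystack, needle):
    """Brute-force scan: worst case O(n*m), but no setup cost at all."""
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:
            return i
    return -1

if __name__ == "__main__":
    text = "a moderately short line of text, like a log record " * 10
    pattern = "log record"
    assert kmp_find(text, pattern) == naive_find(text, pattern) == text.find(pattern)
    for fn in (kmp_find, naive_find):
        t = timeit.timeit(lambda: fn(text, pattern), number=10000)
        print(f"{fn.__name__}: {t:.3f}s for 10,000 calls")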
 

Steven D'Aprano

This brings to mind an experience I shared
with a student who was one of those highly
creative experimentalists you seem to have
in mind. He had just bought a new PC and
wanted to check how fast its floating point
unit was as compared to our VAX. After
having done his wonderfully creative
experimenting, he was utterly dejected: "Our (old)
VAX is over 10'000 times faster than my new PC",
he told me, almost in despair.

Which it was. It finished executing his code in almost 1/10,000th of the
time his PC took.
Whereupon I,
always the uncreative, dogmatic theoretician,
who does not believe that much in the decisiveness
of the outcome of mere experiments, told him
that this was *impossible*, that he *must* have
made a mistake...

It wasn't a mistake and it did happen. The VAX finished the calculation
10,000 times faster than his PC. You have a strange concept of "impossible".

It turned out that the VAX compiler had been
clever enough to hoist his simple-minded test
code out of the driving loop.

Optimizations have a tendency to make a complete mess of Big O
calculations, usually for the better. How does this support your
theory that Big O is a reliable predictor of program speed?

For the record, the VAX 9000 can have up to four vector processors, each
running at up to 125 MFLOPS, or 500 in total. A Pentium III runs at
about 850 MFLOPS. Comparing MIPS or FLOPS from one system to another is
very risky, for many reasons, but as a very rough and ready measure
of comparison, a four-processor VAX 9000 is somewhere around the
performance of a P-II or P-III, give or take some fudge factor.

So, depending on when your student did this experiment, it is entirely
conceivable that the VAX might have been faster even without the
optimization you describe. Of course, you haven't told us what model VAX,
or how many processors, or what PC your student had, so this comparison
might not be relevant.


In fact, our VAX
calculated the body of the loop only *once*
and thus *immediately* announced that it had finished
the whole test - the compiler on this student's
PC, on the other hand, had not been clever enough
for this type of optimization: hence the difference...

Precisely. And all the Big O notation in the world will not tell you that.
Only an experiment will. Now, perhaps in the simple case of a bare loop
doing the same calculation over and over again, you might be able to
predict ahead of time what optimisations the compiler will do. But for
more complex algorithms, forget it.

This is a clear case of experimentation leading to the discovery
of practical results which could not be predicted from Big O calculations.
I find it quite mind-boggling that you would use this as if it were a triumph
of abstract theoretical calculation when it was nothing of the sort.

I think this is really a cautionary tale for
experimentalists: don't *believe* in the decisiveness
of the outcomes of your experiments, but try to *understand*
them instead (i.e. relate them to your theoretical grasp
of the situation)...

Or, to put it another way: your student discovered something by running an
experimental test of his code that he would never have learnt in a million
years of analysis of his algorithm: the VAX compiler was very cleverly
optimized.

The fact that your student didn't understand the problem well enough to
craft a good test of it is neither here nor there.
 

Ron Adam

Christian said:
This diagnosis reminds me of C.G. Jung, the psychologist,
who, after having introduced the concepts of extra- and
introversion, came to the conclusion that Freud was
an extravert whereas Adler an introvert. The point is
that he got it exactly wrong...

As to the value of complexity theory for creativity
in programming (even though you seem to believe that
a theoretical bent of mind can only serve to stifle
creativity), the story of the discovery of an efficient
string searching algorithm by D.E.Knuth provides an
interesting case in point. Knuth based himself on
seemingly quite "uncreatively theoretical work" (from
*your* point of view) that gave a *better* value for
the computational complexity of string searching
than any of the then known algorithms could provide.

Regards,
Christian

(even though you seem to believe that

No, that is not at all what I believe. What I believe is, "The
insistence of strict conditions can limit creative outcomes."

The lack of those limits does not prevent one from using any resources
(including theoretical ones) if they are available.

You seem to be rejecting experimental results in your views. And the
level of insistence you keep in that view leads me to believe you favor
a more productive environment rather than a more creative one. Both are
good, and I may be entirely wrong about you, as many people are capable of
wearing different hats depending on the situation.

I think the gist of this thread may come down to...

In cases where it is not clear which direction to go because the
choices are similar enough to make choosing difficult, it is almost
always better to just pick one and see what happens than to do nothing.

Cheers,
Ron
 

Christian Stapfer

Fredrik Lundh said:
are you talking about KMP?

Yes. I cannot give you the source of the story,
unfortunately, because I only have the *memory* of
it but don't know exactly *where* I happened to read
it. There, Knuth was said to have first analyzed the
theoretical argument very, very carefully to figure
out *why* it was that the theoretical bound was so
much better than all "practically known" algorithms.
It was by studying the theoretical work on computational
complexity *only* that the light dawned upon him.
(But of course, Knuth is "an uncreative dumbo fit
only for production work" - I am speaking ironically
here, which should be obvious.)
I'm not sure that's really a good example of
how useful "theoretical work" really is in practice:

Oh sure, yes, yes, it is. But my problem is to find
a good source of the original story. Maybe one
of the readers of this thread can provide it?
the "better" computational complexity of KMP has
turned out to be mostly useless, in practice.

Well, that's how things might turn out in the long run.
Still, at the time, to all appearances, it *was* a
case of practical creativity *triggered* by apparently
purely theoretical work in complexity theory.

More interesting than your trying to shoot down
one special case of the more general phenomenon of
theory engendering creativity would be to know
your position on the more general question...

It happens *often* in physics, you know. Einstein
is only one example of many. Pauli's prediction of
the existence of the neutrino is another. It took
experimentalists a great deal of time and patience
(about 20 years, I am told) until they could finally
muster something amounting to "experimental proof"
of Pauli's conjecture.

Regards,
Christian
--
"Experience without theory is blind,
but theory without experience is mere
intellectual play."
- Immanuel Kant

»Experience remains, of course, the sole criterion
of the *utility* of a mathematical construction.
But *the*creative*principle* resides in mathematics.«
- Albert Einstein: ‘The World As I See It’

»The astronomer Walter Baade told me that, when he
was dining with Pauli one day, Pauli exclaimed,
"Today I have done the worst thing for a theoretical
physicist. I have invented something which can never
be detected experimentally." Baade immediately offered
to bet a crate of champagne that the elusive neutrino
would one day prove amenable to experimental discovery.
Pauli accepted, unwisely failing to specify any time
limit, which made it impossible for him ever to win
the bet. Baade collected his crate of champagne (as
I can testify, having helped Baade consume a bottle of it)
when, just over twenty years later, in 1953, Cowan and
Reines did indeed succeed in detecting Pauli’s particle.«
- Fred Hoyle: ‘Astronomy and Cosmology’
 

Ron Adam

Christian said:
It turned out that the VAX compiler had been
clever enough to hoist his simple-minded test
code out of the driving loop. In fact, our VAX
calculated the body of the loop only *once*
and thus *immediately* announced that it had finished
the whole test - the compiler on this student's
PC, on the other hand, had not been clever enough
for this type of optimization: hence the difference...

I think this is really a cautionary tale for
experimentalists: don't *believe* in the decisiveness
of the outcomes of your experiments, but try to *understand*
them instead (i.e. relate them to your theoretical grasp
of the situation)...

Regards,
Christian

True understanding is of course the ideal, but as complexity increases
even theoretical information on a complex system becomes incomplete as
there are often other influences that will affect the outcome.

So you could say: don't *depend* on the completeness of your
theoretical information, try to *verify* the validity of your results
with experiments.

Cheers,
Ron
 

Christian Stapfer

Ron Adam said:
No, that is not at all what I believe. What I believe is, "The insistence
of strict conditions can limit creative outcomes."

That's agreed. But going off *blindly*experimenting*
without trying to relate the outcome of that experimenting
back to one's theoretical grasp of the work one is doing
is *not* a good idea. Certainly not in the long run.
In fact, muddling-through and avoiding the question
of suitable theoretical support for one's work is
perhaps more typical of production environments.
The lack of those limits does not prevent one from using any resources
(including theoretical ones) if they are available.

You seem to be rejecting experimental results in your views.

Not at all. You must have mis-read (or simply not-read)
my posts in this thread and are simply projecting wildly,
as psychoanalysts would call it, that is all.
And the level of insistence you keep in that view,

A view that I do not really have: you are really projecting
indeed.
leads me to believe you favor a more productive environment
rather than a more creative one.

You are mistaken. Although I have some "practical background"
(originally working as a "self-taught" programmer - although,
ironically, for a "development and research department"),
I went on to study mathematics at the Federal Institute
of Technology here in Switzerland. Do you want to say that
having been trained as a mathematician makes one uncreative?
- But it is true that mathematicians are socialized in such
a way that they tend to take over rather high standards of
precision and theoretical grounding of their work.
Both are good, and I may be entirely wrong about you,

... you are at least *somewhat* wrong about me,
that I am quite sure of...
as many people are capable of wearing different hats depending on the
situation.

I think the gist of this thread may come down to...

In cases where it is not clear which direction to go because the choices
are similar enough to make choosing difficult, it is almost always
better to just pick one and see what happens than to do nothing.

As it appears, not even my most recent post has had
*any* recognizable effect on your thoroughly
misapprehending my position.

Regards,
Christian
 

Christian Stapfer

Steven D'Aprano said:
Which it was. It finished executing his code in almost 1/10,000th of the
time his PC took.


It wasn't a mistake and it did happen.

Yes, yes, of course, it was a mistake, since
the conclusion that he wanted to draw from
this experiment was completely *wrong*.
Similarly, blind experimentalism *without*
supporting theory is mostly useless.
The VAX finished the calculation
10,000 times faster than his PC.
You have a strange concept of "impossible".

What about trying, for a change, to suppress
your polemical temperament? It will only lead
to quite unnecessarily long exchanges in this
NG.

But, mind you, his test was meant to determine,
*not* the cleverness of the VAX compiler *but*
the speed of the floating-point unit. So his
experiment was a complete *failure* in this regard.
Optimizations have a tendency to make a complete mess of Big O
calculations, usually for the better. How does this support your
theory that Big O is a reliable predictor of program speed?

My example was meant to point out that experimental
outcomes (without carefully relating them
back to supporting theory) are quite *worthless*.
This story was not about Big-Oh notation but
a cautionary tale about the relation between
experiment and theory more generally.
- Got it now?
For the record, the VAX 9000 can have up to four vector processors, each
running at up to 125 MFLOPS, or 500 in total. A Pentium III runs at
about 850 MFLOPS. Comparing MIPS or FLOPS from one system to another is
very risky, for many reasons, but as a very rough and ready measure
of comparison, a four-processor VAX 9000 is somewhere around the
performance of a P-II or P-III, give or take some fudge factor.

Well, that was in the late 1980s and our VAX
most definitely did *not* have a
vector processor: we were doing work in
industrial automation at the time, not much
number-crunching in sight there.
So, depending on when your student did this experiment, it is entirely
conceivable that the VAX might have been faster even without the
optimization you describe.

Rubbish. Why do you want to go off on a tangent like
this? Forget it! I just do not have the time to
start quibbling again.
Of course, you haven't told us what model VAX,

That's right. And it was *not* important. Since the
tale has a simple moral: Experimental outcomes
*without* supporting theory (be it of the Big-Oh
variety or something else, depending on context)
are mostly worthless.
or how many processors, or what PC your student had,
so this comparison might not be relevant.

Your going off on another tangent like this is
certainly not relevant to the basic insight
that experiments without supporting theory
are mostly worthless, I'd say...
Precisely. And all the Big O notation in the world will not tell you that.
Only an experiment will. Now, perhaps in the simple case of a bare loop
doing the same calculation over and over again, you might be able to
predict ahead of time what optimisations the compiler will do. But for
more complex algorithms, forget it.

This is a clear case of experimentation leading to the discovery
of practical results which could not be predicted from Big O calculations.

The only problem being: it was *me*, basing
myself on "theory", who rejected the "experimental
result" that the student had accepted *as*is*.
(The student was actually an engineer, I myself
had been trained as a mathematician. Maybe that
rings a bell?)
I find it quite mind-boggling that you would use this as if it were a triumph
of abstract theoretical calculation when it was nothing of the sort.

This example was not at all meant to be any
such thing. It was only about: "experimenting
*without* relating experimental outcomes to
theory is mostly worthless". What's more:
constructing an experiment without adequate
supporting theory is also mostly worthless.
Or, to put it another way: your student discovered

No. You didn't read the story correctly.
The student had accepted the result of
his experiments at face value. It was only
because I had "theoretical" grounds to reject
that experimental outcome that he did learn
something in the process.
Why not, for a change, be a good loser?
something by running an experimental test of his code
that he would never have learnt in a million
years of analysis of his algorithm: the VAX compiler
was very cleverly optimized.

Ok, he did learn *that*, in the end. But he
did *also* learn to thoroughly mistrust the
outcome of a mere experiment. Experiments
(not just in computer science) are quite
frequently botched. How do you discover
botched experiments? - By trying to relate
experimental outcomes to theory.

Regards,
Christian
 

Christian Stapfer

Steven D'Aprano said:
Pauli's conjecture was the result of experimental evidence that was
completely inexplicable according to the theory of the day:

So was it mere experiment or was it the relation
between experiment and theory that provided
the spur for creative advancement? My position
is the latter. Mere experiment does not tell you
anything at all. Only experiment on the background
of suitable theory does that.
energy and
spin were disappearing from certain nuclear reactions. This was an
experimental result that needed to be explained, and Pauli's solution was
to invent an invisible particle that carried that energy and spin away.

Pauli's creativity lay in proposing *this*
particular solution to the puzzle. And, surely,
if it had not been for Pauli's characterization
of that hypothetical particle, experimentalists
like Cowan and Reines would not have *anything*
to aim for in the first place.

But I'm not going to argue Pauli's case any further
in this NG, because this is, in the end,
not a physics NG...
(When I put it like that, it sounds stupid, but in fact it was an elegant
and powerful answer to the problem.)

The neutrino wasn't something that Pauli invented from theoretical first
principles. It came out of hard experimental results.

Physics of the last half century is littered with the half-forgotten
corpses of theoretical particles that never eventuated: gravitinos,
photinos, tachyons, rishons, flavons, hypercolor pre-quarks, axions,
squarks, shadow matter, white holes, and so on ad nauseam.

Neutrinos and quarks are exceptional in that experimental predictions of
their existence were correct, and I maintain that is because (unlike all
of the above) they were postulated to explain solid experimental results,
not just to satisfy some theoretical itch.

So yet again, your triumph of theory is actually a victory for experiment.

Well, I might tell now the story of Maxwell,
sitting in his garden - and deducing, from
his equations (which, admittedly, were inspired
by earlier experimental work by Faraday),
something really quite shockingly *new*:
the existence of electromagnetic waves.

Regards,
Christian
--
»From a long view of the history of mankind -
seen from, say, ten thousand years from now
- there can be little doubt that the most
significant event of the nineteenth century
will be judged as Maxwell's discovery of the
laws of electrodynamics. The American Civil
War will pale into provincial insignificance
in comparison with this important scientific
event of the same decade.«
- Richard P. Feynman: "The Feynman Lectures"
 

Steven D'Aprano

Pauli's prediction of
the existence of the neutrino is another. It took
experimentalists a great deal of time and patience
(about 20 years, I am told) until they could finally
muster something amounting to "experimental proof"
of Pauli's conjecture.

Pauli's conjecture was the result of experimental evidence that was
completely inexplicable according to the theory of the day: energy and
spin were disappearing from certain nuclear reactions. This was an
experimental result that needed to be explained, and Pauli's solution was
to invent an invisible particle that carried that energy and spin away.

(When I put it like that, it sounds stupid, but in fact it was an elegant
and powerful answer to the problem.)

The neutrino wasn't something that Pauli invented from theoretical first
principles. It came out of hard experimental results.

Physics of the last half century is littered with the half-forgotten
corpses of theoretical particles that never eventuated: gravitinos,
photinos, tachyons, rishons, flavons, hypercolor pre-quarks, axions,
squarks, shadow matter, white holes, and so on ad nauseam.

Neutrinos and quarks are exceptional in that experimental predictions of
their existence were correct, and I maintain that is because (unlike all
of the above) they were postulated to explain solid experimental results,
not just to satisfy some theoretical itch.

So yet again, your triumph of theory is actually a victory for experiment.
 

Ron Adam

Christian said:
That's agreed. But going off *blindly*experimenting*
without trying to relate the outcome of that experimenting
back to one's theoretical grasp of the work one is doing
is *not* a good idea. Certainly not in the long run.
In fact, muddling-through and avoiding the question
of suitable theoretical support for one's work is
perhaps more typical of production environments.

You seem to be rejecting experimental results in your views.

Not at all. You must have mis-read (or simply not-read)
my posts in this thread and are simply projecting wildly,
as psychoanalysts would call it, that is all.

The term 'rejecting' was the wrong word in this case. But I still get
the impression you don't trust experimental methods.

As it appears, not even my most recent post has had
*any* recognizable effect on your thoroughly
misapprehending my position.

Regards,
Christian

In most cases being able to see things from different view points is
good. So I was offering an additional view point, not trying to
imply yours is less correct.

On a more practical level, Python as a language is a dynamic development
process. So the level of completeness of the documentation, and the
language itself, will vary a bit in some areas compared to others. So
as a programmer, it is often much more productive for me to try
something first and then change it later if it needs it. Of course I
would test it with a suitable range of data that represents the expected
range at some point.

In any case, this view point has already been expressed I think. <shrug>

Cheers,
Ron
 

Ognen Duzlevski

Optimizations have a tendency to make a complete mess of Big O
calculations, usually for the better. How does this support your
theory that Big O is a reliable predictor of program speed?

There are many things that you cannot predict, however if the compiler was sufficiently documented and you had the
knowledge of the abovementioned peculiarity/optimization, you could take it into account. Bear in mind that the example
was given to show a problem with a purely experimental approach - it tends to show a tree and ignore the forest.
Sometimes this tree can be representative of a forest but many times it might not be.

The way I understood these notations was in terms of algorithmic behavior and data input sizes. It is generally
expected that certain basic operations will have a certain complexity. If this is not the case on a
particular platform (language, interpreter, cpu etc.) then there are several questions to ask: a) is such a deviation
documented?, b) why is there such a deviation in the first place? For example, if something is generally known to be
O(1) and your particular platform makes it O(n) then you have to ask why that is so. There might be a
perfectly good reason but this should still either be obvious or documented.
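
One way to ask question (a) in practice is to time a basic operation at a few sizes and see whether its cost grows the way the documentation leads you to expect. The sketch below is only an illustration of that kind of check, with arbitrarily chosen operations and sizes (list.insert(0, x), documented as O(n), next to collections.deque.appendleft, documented as O(1)); it is not a benchmark of anything discussed above.

from collections import deque
from timeit import timeit

def seconds_per_prepend(factory, prepend, size, repeats=1000):
    """Average seconds for one prepend onto a container built with `size` items.
    (The container grows a little while being timed; close enough for a rough check.)"""
    container = factory(range(size))
    return timeit(lambda: prepend(container), number=repeats) / repeats

for size in (1000, 10000, 100000):
    t_list = seconds_per_prepend(list, lambda c: c.insert(0, None), size)
    t_deque = seconds_per_prepend(deque, lambda c: c.appendleft(None), size)
    print(f"n={size:>7}: list.insert(0, x) {t_list:.2e}s   deque.appendleft {t_deque:.2e}s")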

IMHO, I would rather first explore the theoretical boundaries to a certain approach before wasting time on coding up
stuff. If it is immediately obvious that such an approach will not yield anything acceptable for my own purposes then
what is the point of squeezing performance out of what will be dead-beat code anyways?

Knowing the tool/language/os in depth is a formidable strength and it is always a pleasure to see someone squeeze time
out of a piece of code solely based on knowing the internals of the compiler and/or runtime environment. However, it is
most usually the case that this person will be squeezing time out of a certain order of performance - no amount of
this kind of optimization will move the code to the next order.

Ognen
 

Paul Rubin

Ognen Duzlevski said:
There are many things that you cannot predict, however if the
compiler was sufficiently documented and you had the knowledge of
the abovementioned peculiarity/optimization, you could take it into
account. Bear in mind that the example was given to show a problem
with a purely experimental approach - it tends to show a tree and
ignore the forest. Sometimes this tree can be representative of a
forest but many times it might not be.

Consider the claim made earlier in the thread that adding to a hash
table is approximately O(1):

[Steven D'Aprano]
And knowing that hash tables are O(1) will not tell you that, will it?

There is only one practical way of telling: do the experiment. Keep
loading up that hash table until you start getting lots of collisions.

The complexity of hashing depends intricately on the data and if
the data is carefully constructed by someone with detailed knowledge
of the hash implementation, it may be as bad as O(n) rather than O(1)
or O(sqrt(n)) or anything like that. Experimentation in the normal
way will not discover something like that. You have to actually
understand what's going on. See for example:

http://www.cs.rice.edu/~scrosby/hash/
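
For anyone who wants to see that failure mode without constructing real attack data, here is a small hedged illustration (the key class and sizes are invented for this sketch; the page linked above discusses the genuine attack setting against specific hash functions): keys that all hash to the same value turn each dict insertion from roughly constant time into a walk over everything already stored, so building the table becomes quadratic.

import time

class CollidingKey:
    """Every instance hashes to the same value, forcing worst-case collisions."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42                     # pathological on purpose
    def __eq__(self, other):
        return isinstance(other, CollidingKey) and self.n == other.n

def build_time(keys):
    start = time.perf_counter()
    d = {k: None for k in keys}
    elapsed = time.perf_counter() - start
    assert len(d) == len(keys)
    return elapsed

for n in (1000, 2000, 4000):
    good = build_time(list(range(n)))                    # well-spread hashes
    bad = build_time([CollidingKey(i) for i in range(n)])
    print(f"n={n}: normal keys {good:.4f}s, colliding keys {bad:.4f}s")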
 

Steven D'Aprano

Experiments
(not just in computer science) are quite
frequently botched. How do you discover
botched experiments?

Normally by comparing them to the results of other experiments, and being
unable to reconcile the results. You may have heard the term "the
experiment was/was not replicable".

How do you discover whether your theory is correct? By comparing it to
itself, or by comparing it to experiment?

Of course you need some theoretical background in order to design your
experiment in the first place, otherwise you have no idea what you are
even looking for. And it is easy to mis-design an experiment, as your
student did, so that it is actually measuring X when you think it is
measuring Y. If you are about to argue against a naive view of the
scientific method where the scientist generates data in a mental vacuum,
don't bother, I understand that.

But fundamentally, calculated Big O values of algorithms on their own are
of virtually zero use in choosing between actual functions. If I tell you
that algorithm X is O(f(N)), what have I told you about it?

Have I told if it is unacceptably slow for some particular data size? No.

Have I told you how much work it will take to implement it? No.

Have I told you how fast my implementation is? No.

Have I told you that it is faster than some other algorithm Y? No.

The only thing I have told you is that in some rough and ready fashion, if
I increase the size of my data N, the amount of work done will increase
very approximately like f(N).

This disconnect between what Big O *actually* means and how you are
recommending we use it makes a REAL PRACTICAL DIFFERENCE, and not in a good
way.

Here is a real problem I had to solve some time ago. Without using
regular expressions, I needed to find the position of a target
string in a larger string, but there were multiple targets. I
needed to find the first one.

I ended up using something like:

(untested, and from memory)

def findmany(s, *targets):
    minoffset = (len(s) + 1, "")
    for t in targets:
        p = s.find(t)
        if p != -1 and p < minoffset[0]:
            minoffset = (p, t)
    return minoffset[0]

You would look at that, realise that s.find() is O(N) or even O(N**2) -- I
forget which -- and dismiss this as Shlemiel the Painter's algorithm which
is O(N**2) or worse.

http://www.joelonsoftware.com/articles/fog0000000319.html

And you would be right in your theory -- but wrong in your conclusion.
Because then you would do what I did, which is spend hours or days writing
a "more efficient" O(N) searching function in pure Python, which ended up
being 100 times slower searching for a SINGLE target string than my
Shlemiel algorithm was searching for twenty targets. The speed benefit I
got from pushing the character matching from Python to C was so immense
that I couldn't beat it no matter how "efficient" the algorithm was.

If I had bothered to actually profile my original code, I would have
discovered that for any data I cared about, it worked not just acceptably
fast, but blindingly fast. Why would I care that if I had a terabyte of
data to search for a million targets, it would scale badly? I was
searching text strings of less than a megabyte, for five or six targets.
The Big O analysis I did was completely, 100% correct, and completely,
100% useless. Not just useless in that it didn't help me, but it actually
hindered me, leading me to waste a day's work needlessly looking for a
"better algorithm".
 

Steven D'Aprano

The complexity of hashing depends intricately on the data and if
the data is carefully constructed by someone with detailed knowledge
of the hash implementation, it may be as bad as O(n) rather than O(1)
or O(sqrt(n)) or anything like that. Experimentation in the normal
way will not discover something like that. You have to actually
understand what's going on. See for example:


Yes, that is a very good point, and I suppose if a hostile user wanted to
deliberately construct a data set that showed off your algorithm to its
worst behaviour, they might do so. But if you are unlikely to discover
this worst case behaviour by experimentation, you are equally unlikely to
discover it in day to day usage. Most algorithms have "worst case"
behaviour significantly slower than their best case or average case, and
are still perfectly useful.
 

James Dennett

Steven said:
So that they know how to correctly interpret what Big O notation means,
instead of misinterpreting it. Big O notation doesn't tell you everything
you need to know to predict the behaviour of an algorithm. It doesn't even
tell you most of what you need to know about its behaviour. Only actual
*measurement* will tell you what you need to know.

In my experience, I need both knowledge of algorithmic
complexity (in some pragmatic sense) and measurements.
Neither alone is sufficient.

The proponents of algorithmic complexity measures don't
make the mistake of thinking that constants don't matter
for real-world performance, but they also don't make the
mistake of thinking that you can always measure enough
to tell you how your code will perform in all situations
in which it might be used.

Measurement is complicated -- very often, it just shows
you that tuning to match cache sizes is greatly important
to keep the constant factors down. And yes, for small
data sizes often a linear-time algorithm can beat one
whose execution time grows only logarithmically, while
often a logarithmic time is close enough to constant over
the range of interest. How an operation runs on a heavily
loaded system where it shares resources with other tasks
can also be greatly different from what microbenchmarks
might suggest.

If we don't oversimplify, we'll measure some appropriate
performance numbers and combine that with some knowledge
of the effects of caches, algorithmic complexity and other
factors that might matter in given situations. And of
course there will be many situations where programmer time
and simplicity are more important than saving a millisecond,
or even a second, and we won't waste excessive resources in
optimising runtime at the expense of other factors.

-- James
 

Paul Rubin

Steven D'Aprano said:
But if you are unlikely to discover this worst case behaviour by
experimentation, you are equally unlikely to discover it in day to
day usage.

Yes, that's the whole point. Since you won't discover it by
experimentation and you won't discover it by day to day usage, you may
very well only find out about it when an attacker clobbers you. If
you want to prevent that, you HAVE to discover it by analysis, or at
least do enough analysis to determine that a successful attack won't
cause you a big catastrophe (ok, this is probably the case for most of
the stuff that most of us do).

Sure, there are some applications that are never exposed to hostile
users. That excludes pretty much anything that connects to the
internet or handles data that came from the internet. Any general
purpose development strategy that doesn't take hostile users into
account is of limited usefulness.
Most algorithms have "worst case" behaviour significantly slower
than their best case or average case, and are still perfectly useful.

Definitely true. However, a lot more of the time than many
implementers seem to think, you have to take the worst case into
account. There's no magic bullet like "experiments" or "unit tests"
that results in reliable software. You have to stay acutely aware of
what you're doing at every level.
 

Alex Martelli

Christian Stapfer said:
This is why we would like to have a way of (roughly)
estimating the reasonableness of the outlines of a
program's design in "armchair fashion" - i.e. without
having to write any code and/or test harness.

And we would also like to consume vast amounts of chocolate, while
similarly reclining in comfortable armchairs, without getting all fat
and flabby. Unfortunately, what we would like and what reality affords
are often pretty uncorrelated. No matter how much theoreticians may
love big-O because it's (relatively) easy to compute, it still has two
failings which are often sufficient to rule out its sufficiency for any
"estimate [of] the reasonableness" of anything: [a] as we operate on
finite machines with finite wordsize, we may never be able reach
anywhere even remotely close to the "asymptotic" region where big-O has
some relationship to reality; in many important cases, the
theoretical worst-case is almost impossible to characterize and hardly
ever reached in real life, so big-O is of no earthly use (and much
harder to compute measures such as big-Theta should be used for just
about any practical purpose).

Consider, for example, point [b]. Quicksort's big-O is N squared,
suggesting that quicksort's no better than bubblesort or the like. But
such a characterization is absurd. A very naive Quicksort, picking its
pivot very systematically (e.g., always the first item), may hit its
worst case just as systematically and in cases of practical importance
(e.g., already-sorted data); but it takes just a little extra care (in
the pivot picking and a few side issues) to make the worst-case
occurrences into ones that will not occur in practice except when the
input data has been deliberately designed to damage by a clever and
determined adversary.

Designing based on worst-case occurrences hardly ever makes sense in any
field of engineering, and blind adherence to worst-case assessments can
be an unmitigated disaster, promoting inferior technology just because,
in the WORST imaginable case, the best available technology would fare
no better than the inferior one (even though in 99.99999% of cases the
best technology would perform better, if you're designing based on
worst-case analyses you may not even NOTICE that -- and NEVER, *NEVER*
forget that big-O is nothing BUT "extreme-worst-case" analysis!). Why
bother using prestressed concrete, when, should a large asteroid score a
direct hit, the costly concrete will stand up no better than cheap
bricks, or, for that matter, slightly-damp straw? Why bother doing
(e.g.) random pivot selection in quicksort, when its big-O (i.e.,
worst-case) behavior will remain N-squared, just like naive quicksort,
or, for that matter, bubblesort?


Alex
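
The quicksort point is easy to see by counting comparisons rather than timing. Below is a hedged sketch (the pivot rules, the Lomuto-style partition, and the input size are chosen for illustration only, and an explicit stack is used so already-sorted input does not overflow the recursion limit): with the first element as pivot, sorted input hits the quadratic worst case, while a random pivot on the same input behaves like the average case.

import random

def quicksort_comparisons(data, pick_pivot):
    """Sort a copy of data with an explicit stack, returning the comparison count."""
    a = list(data)
    comparisons = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        p = pick_pivot(lo, hi)
        a[lo], a[p] = a[p], a[lo]          # move the chosen pivot to the front
        pivot, store = a[lo], lo
        for i in range(lo + 1, hi + 1):    # Lomuto-style partition
            comparisons += 1
            if a[i] < pivot:
                store += 1
                a[i], a[store] = a[store], a[i]
        a[lo], a[store] = a[store], a[lo]
        stack.append((lo, store - 1))
        stack.append((store + 1, hi))
    assert a == sorted(data)
    return comparisons

def first_element(lo, hi):
    return lo

def random_element(lo, hi):
    return random.randint(lo, hi)

already_sorted = list(range(2000))
print("first-element pivot:", quicksort_comparisons(already_sorted, first_element))
print("random pivot:       ", quicksort_comparisons(already_sorted, random_element))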
 

Christian Stapfer

Alex Martelli said:
And we would also like to consume vast amounts of chocolate, while
similarly reclining in comfortable armchairs,

Maybe some of my inclination towards design
based on suitable *theories* (instead of
self-conditioning through testing) goes back
to the fact that I tend to think about the
design of my programs when no computer happens
to be near at hand to do some such experimenting,
or self-conditioning...
without getting all fat and flabby.

Well, thinking can be hard work. There is no need
to suggest an image of laziness. Thought experiments
are also quite often successful. Hardware engineers
can very often design entire gadgets without doing
a great deal of testing. They usually need to resort
to testing only if they know (or feel?) that they lack
a sufficiently clear *theoretical* grasp of the
behavior of some part of their design.
Unfortunately, what we would like and what reality affords
are often pretty uncorrelated. No matter how much theoreticians may
love big-O because it's (relatively) easy to compute, it still has two
failings which are often sufficient to rule out its sufficiency for any
"estimate [of] the reasonableness" of anything: [a] as we operate on
finite machines with finite wordsize, we may never be able to reach
anywhere even remotely close to the "asymptotic" region where big-O has
some relationship to reality; [b] in many important cases, the
theoretical worst-case is almost impossible to characterize and hardly
ever reached in real life, so big-O is of no earthly use (and much
harder to compute measures such as big-Theta should be used for just
about any practical purpose).


But the fact remains that programmers, somewhat
experienced with the interface a module offers,
have a *rough*idea* of what computational complexity
attaches to which operations of that interface.
And having such a *rough*idea* helps them to
design reasonably performing programs much more
quickly.
Big-Oh and other asymptotic complexity measures
really do have *this* advantage over having
acquired, by way of conditioning experiences,
some such *rough*idea* of computational complexity:
they capture at least some of that "rough idea"
in a somewhat more easily communicable and much
more precise fashion.

Maybe you and Steven prefer to be conditioned,
Pavlov style, by the wonderful experiences that
you get while testing? - This is perhaps really
one of my *worst* psychological handicaps, I must
admit: that I don't *like* to get conditioned
like that, no matter how good it feels, no matter
how effective it might be for "practical" work that
one has to do.
I want to be able to really think *about* what
I am doing. And in order to be able to think about
it one usually needs some information about the
implementation, performance wise, of the language
features and the system modules that one might
want to use. If you happen to know of any *better*
way of offering the requisite information than
asymptotic complexity measures then, of course,
I am very grateful to hear more about it.
Consider, for example, point [b]. Quicksort's big-O is N squared,
suggesting that quicksort's no better than bubblesort or the like. But
such a characterization is absurd. A very naive Quicksort, picking its
pivot very systematically (e.g., always the first item), may hit its
worst case just as systematically and in cases of practical importance
(e.g., already-sorted data); but it takes just a little extra care (in
the pivot picking and a few side issues) to make the worst-case
occurrences into ones that will not occur in practice except when the
input data has been deliberately designed to damage by a clever and
determined adversary.

Designing based on worst-case occurrences hardly ever makes
sense in any field of engineering,


What's wrong with wanting to have a rough idea
of what might happen in the worst case? I believe
many engineers are actually expected to think
about at least some "worst-case" scenarios.
Think of nuclear reactors, airplanes, or
telephone exchanges (and don't think of Google
for a change). Don't you expect engineers
and scientists designing, for example, a nuclear
reactor, to think hard about what the worst-case
scenario might be? And how likely it might happen?
(And *no* testing whatsoever in that direction,
please!) Not thinking is, admittedly, a lot easier.

Why bother doing
(e.g.) random pivot selection in quicksort, when its big-O (i.e.,
worst-case) behavior will remain N-squared, just like naive quicksort,
or, for that matter, bubblesort?

Because worst-case is not the only measure of
computational complexity that one might be
interested in. For some applications one may
be able to accept relatively bad worst-case
behavior, if it doesn't happen too often.
This is why for these people we might provide
information about average case behavior (and
then the difference between quicksort and
bubblesort clearly shows).
For others, such worst-case behavior may not
be acceptable. For those other applications,
a good worst-case may be what is required.
This is why this second category of programmers
needs to know about the worst-case. - But I am
certainly belaboring the obvious...

Of course, the programs that have been designed
on the basis of such information can (and ought
to be) tested. Then some surprises (*not* predicted
by theory) might happen: but if they do not happen
too often, theory has done a good job - and so has
the programmer...

Regards,
Christian
 

Alex Martelli

Christian Stapfer said:
Maybe some of my inclination towards design
based on suitable *theories* (instead of
self-conditioning through testing) goes back
to the fact that I tend to think about the
design of my programs when no computer happens
to be near at hand to do some such experimenting,
or self-conditioning...

Oh, I am as prone as anybody I know to do SW architecture and design in
bed when the lights are off and I'm sliding into sleep -- just about the
only case in which no computer is handy, or, rather, in which it's
generally unwise to turn the computer on (since it would interfere with
the sleep thing;-). Back before laptops were really affordable and
usable, I used to have a long bus commute, and did a lot of design with
pen and paper; and whiteboards are a popular group-design tool at
Google, no matter how many laptops or desktops happen to be around --
whiteboards are simply more suitable for "socialization" around a draft
design's sketch, than any computer-based tool I've ever seen.

But that's *design*, and most often in pretty early stages, too -- quite
a ways from *coding*. At that stage, one doesn't even generally commit
to a specific programming language or other for the eventual
implementation of the components one's considering! Rough ideas of
*EXPECTED* run-times (big-Theta) for various subcomponents one is
sketching are *MUCH* more interesting and important than "asymptotic
worst-case for amounts of input tending to infinity" (big-O) -- for
example, where I sketch-in (mentally, on paper, or on whiteboard) a
"hash table" subcomponent, I consider the *expected* (Theta) performance
(constant-time lookups), definitely NOT the big-O "linear time" lookups
which just MIGHT occur (if, say, all inputs just happened to hash to the
same value)... otherwise, I'd never use hash tables, right?-)
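
A back-of-the-envelope version of that "expected, not worst-case" view can be measured directly. This is only a hedged sketch (the table sizes, probe counts, and key spreading are arbitrary illustration, and cache effects will add noise): the average cost of a dict lookup stays roughly flat as the table grows, which is the constant-time expectation one actually designs around.

from timeit import timeit

def avg_lookup_seconds(size, probes=100000, rounds=10):
    """Rough average seconds per successful dict lookup in a table of `size` keys."""
    d = {i: i for i in range(size)}
    keys = [(i * 7919) % size for i in range(probes)]   # spread probes across the table
    total = timeit(lambda: [d[k] for k in keys], number=rounds)
    return total / (rounds * probes)

for size in (1000, 100000, 1000000):
    print(f"{size:>8} keys: ~{avg_lookup_seconds(size):.1e}s per lookup")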

Well, thinking can be hard work. There is no need
to suggest an image of laziness. Thought experiments
are also quite often successful. Hardware engineers
can very often design entire gadgets without doing
a great deal of testing. They usually need to resort
to testing only if they know (or feel?) that they lack
a sufficiently clear *theoretical* grasp of the
behavior of some part of their design.

Having been a hardware designer (of integrated circuits, for Texas
Instruments, and later briefly for IBM), before switching to software, I
can resolutely deny this assertion: only an utter madman would approve a
large production run of an IC that has not been EXTENSIVELY tested, in
simulations and quite possibly in breadboards and later in limited
pre-production runs. And any larger "gadget" USING ICs would be
similarly crazy to skimp on prototyping, simulation, and other testing
-- because, as every HW engineer KNOWS (SW ones often have to learn the
hard way), the distance between theory and practice, in practice, is
much larger than the distance between practice and theory should be in
theory;-).

Unfortunately, what we would like and what reality affords
are often pretty uncorrelated. No matter how much theoreticians may
love big-O because it's (relatively) easy to compute, it still has two
failings which are often sufficient to rule out its sufficiency for any
"estimate [of] the reasonableness" of anything: [a] as we operate on
finite machines with finite wordsize, we may never be able to reach
anywhere even remotely close to the "asymptotic" region where big-O has
some relationship to reality; [b] in many important cases, the
theoretical worst-case is almost impossible to characterize and hardly
ever reached in real life, so big-O is of no earthly use (and much
harder to compute measures such as big-Theta should be used for just
about any practical purpose).


But the fact remains that programmers, somewhat
experienced with the interface a module offers,
have a *rough*idea* of what computational complexity
attaches to which operations of that interface.
And having such a *rough*idea* helps them to
design reasonably performing programs much more
quickly.


A rough idea helps, particularly a rough idea of EXPECTED performance.
Big-Oh and other asymptotic complexity measures
really do have *this* advantage over having
acquired, by way of conditioning experiences,
some such *rough*idea* of computational complexity:

No: big-O is NOT AT ALL a measure, rough or otherwise, of EXPECTED
performance. It's *WORST-CASE*, by definition; and includes no
indications whatsoever of how close to the asymptote one can actually
get on a given finite-size machine. By both issues, it can be totally
misleading -- and, "it's not what you don't know, that hurts... it's
what you know WHICH IS NOT SO". By its MISLEADING characteristics,
big-O can be SERIOUSLY DAMAGING to the ability of "designing on the back
of an envelope" (or in your head, or at a whiteboard).
they capture at least some of that "rough idea"
in a somewhat more easily communicable and much
more precise fashion.

Entirely precise, and therefore, in such cases as quicksort and hash
tables (hardly "obscure corner cases" -- CENTRAL PILLARS of many
designs!!!) all that more misleading.

Maybe you and Steven prefer to be conditioned,
Pavlov style, by the wonderful experiences that
you get while testing? - This is perhaps really
one of my *worst* psychological handicaps, I must
admit: that I don't *like* to get conditioned
like that, no matter how good it feels, no matter
how effective it might be for "practical" work that
one has to do.

I like to be able to reason FIRSTLY on the basis of EXPECTED, NORMAL
behavior, corrected in the first order by reasonable prudence and
caution, in the second order by accurate experimentation, and only in
the third order by considerations of worst-case scenarios -- the latter
normally tempered by estimates of their likelihood... which also
requires a rough understanding of CORRELATION between the causes of such
scenarios, which may be present, both directly or indirectly (assuming
that different components interacting in a system "may just happen" to
hit worst-case scenarios for each of them at once may be demonstrably
wrong in either direction -- and NO big-O analysis will ever help with
this crucial kind of task!).
I want to be able to really think *about* what
I am doing. And in order to be able to think about
it one usually needs some information about the
implementation, performance wise, of the language
features and the system modules that one might
want to use. If you happen to know of any *better*
way of offering the requisite information than
asymptotic complexity measures then, of course,
I am very grateful to hear more about it.

If you can't think about a design without knowing such details about one
specific implementation of one specific programming language in which
that design might get implemented, I think this strongly suggests you're
giving insufficient attention to the one crucial skill in a designer's
mental armory: *ABSTRACTION*. There are many ways to cultivate one's
abstraction abilities. One of my favorite pastimes is collecting
different renditions of Bach's "Art of the Fugue", for example: Bach
carefully abstracted away the information about which instrument(s) were
supposed to be playing which voice(s), as well as many other details --
because of this, the Art of the Fugue has the highest abstraction level
among all well-known musical compositions, and each rendition may take a
different concrete reading of it. Learn to listen to them and be able
to tell what they have in common, and where they differ: it's a great
lesson in the understanding of abstraction. But it's probably only
appropriate if you like music, and Baroque in particular; other arts and
disciplines may no doubt afford similarly good learning experiences, if
you know where to look for them.

Once you're confident about your abilities of abstraction, I suggest you
continue by the exercise of *characterization* (of A FEW
implementations, ideally) which Jon Bentley (the programmer, not the
jazz player) suggests pretty close to the start of his masterpiece
"Writing Efficient Programs" (that great book may be hard to come by
these days, but the "Programming Pearls" books are also excellent and
communicate mostly the same messages, quite effectively). Get a sense
for the *EXPECTED* (NOT "asymptotic", for your inputs will NOT tend to
infinity; NOT "worst-case only", don't be THAT pessimistic) behavior of
some primitive operations - the couple of pages I devote to the subject
in the Nutshell chapter on optimization, profiling and testing may
suffice. Then refine your instinct by taking ONE complicated case, such
as the natural mergesort variant known as the timsort, and delving as
deep into its analysis as you dare -- characterize it as best you can
manage WITHOUT testing, simply by reasoning on its code (Tim's essay,
part of the Python source distribution, will be a helpful addition to a
thorough grounding in Knuth's chapter on sorting, for this purpose)...
then see what a difference it makes, to BE able to experiment!

You'll personally traverse, through such exercises, a curve not too
dissimilar from what the whole of Western thought went through in the
discovery and refinement of the experimental method (to some extent it
can be considered in full bloom in the thoughts and works of Galilei):
not the blind flailing around of pure trial and error (which HAD,
however, proved extremely fruitful in eliciting just about all technical
progress up to that age, and later), much less the ungrounded
elucubration of pure theoreticism (which HAD, mind you, given great
results once in a while, e.g. in Euclid, Nagarjuna, Archimedes...) --
but the powerful, fruitful merging of both strands into the incredibly
productive golden braid which has pulled progress up during the last few
centuries.

Consider, for example, point [b]. Quicksort's big-O is N squared,
suggesting that quicksort's no better than bubblesort or the like. But
such a characterization is absurd. A very naive Quicksort, picking its
pivot very systematically (e.g., always the first item), may hit its
worst case just as systematically and in cases of practical importance
(e.g., already-sorted data); but it takes just a little extra care (in
the pivot picking and a few side issues) to make the worst-case
occurrences into ones that will not occur in practice except when the
input data has been deliberately designed to damage by a clever and
determined adversary.

Designing based on worst-case occurrences hardly ever makes
sense in any field of engineering,


What's wrong with wanting to have a rough idea
of what might happen in the worst case? I believe
many engineers are actually expected to think
about at least some "worst-case" scenarios.


Not extrapolating to infinity.
Think of nuclear reactors, airplanes, or
telephone exchanges (and don't think of Google
for a change). Don't you expect engineers
and scientists designing, for example, a nuclear
reactor, to think hard about what the worst-case
scenario might be? And how likely it might happen?

A square hit by an asteroid of mass tending to infinity? No, I don't
expect nuclear reactors (nor anything else of human conception) to be
designed in consideration of what such an asteroid hit would do. And
yet, that's *EXACTLY* what would be indicated by your theory of big-O as
a guide to design: consider the absolute worst that could conceivably
happen, with *NO* indications WHATSOEVER of how unlikely it might be
(because for simplicity of computation you take limits for misfortune
tending to infinity!!!), and design for THAT.

If our collective ancestors had taken this attitude, we'd still all be
huddling in deep caves (possibly a better protection against "dinosaurs'
killer" levels of asteroid hits!), shivering in the cold (fire is FAR
too dangerous to survive a worst-case analysis, particularly with
damaging elements all tending to infinity, as your love affair with
big-O based designs would certainly indicate!!!). Animal skins? Forget
it!!! Do a perfectly pessimistic worst-case analysis with suitable
extrapolations to infinity and such skins would no doubt carry germs
enough to exterminate the budding human race (not that extinction might
not be preferable to the utterly miserable "lives" the few humans might
lead if "big-O" had guided their design concerns, mind you!-).

(And *no* testing whatsoever in that direction,
please!) Not thinking is, admittedly, a lot easier.

I would consider ANYBODY who built a nuclear reactor without AMPLE
testing dangerous enough for all of mankind to shoot on sight. I would
expect HUGE amounts of simulation-based testing, followed and
interspersed by prototypes (smaller and simplified) to validate that the
simulations' tests are indeed perfectly respondent to actual reality.

I haven't seen anybody on this thread advocating "not thinking"; if
you're somehow implying that I'm trying to discourage people from
thinking IN USEFUL AND PRODUCTIVE WAYS, I challenge you to point to
anything I wrote here that could be construed that way.
Because worst-case is not the only measure of
computational complexity that one might be
interested in. For some applications one may
be able to accept relatively bad worst-case
behavior, if it doesn't happen too often.

But big-O, which is what you advocate, gives *NO* indication of how
likely or unlikely it might be, in particular -- a TERRIBLE failing.
This is why for these people we might provide
information about average case behavior (and
then the difference between quicksort and
bubblesort clearly shows).

Of COURSE you might! Who, pray, is stopping you from so doing, except
perhaps your own laziness and the fact that you prefer to pontificate
about the work that OTHERS should (you believe) do for you FOR FREE, IN
ADDITION to the other work they're already so doing, rather than
constructively participating in the collective effort yourself?

You argued for big-O information, and now you're arguing for more
information (that's much harder to provide in mathematically rigorous
form) regarding "averages" (over WHAT distribution of input
permutations? Why would you think that all permutations are equally
likely? In the real world, they're not, but they ARE incredibly hard to
characterize -- you'd better pick a very specific, tiny subfield of
application for sorting, to be able to supply AMPLE experimental support
for your theories... theory without any experimentation to support it
can be worse than worthless, it can be truly DIS-informing!). Very
well, if you're so convinced this information will be precious (worth
more than, say, the optimizations and new components which people might
alternatively produce, investing as they prefer their own time and
efforts), LEAD BY EXAMPLE. Pick ONE relatively simple issue of
performance, and explore it to the level of breadth and depth you
believe "analytical" (as opposed to *experimental*) performance
characterization should go.

If you're willing to DO SOME WORK rather than SPEND YOUR TIME WHINING,
you will either produce some beautiful results whose practical
importance will stun others into following your lead, or find out that,
in the real world, there are more things in heaven and earth etc --
e.g., that behavior of virtual memory implementations SWAMPS any subtle
theoretical effects you thought would matter, for containers big enough
to matter, and well before getting anywhere close to the "asymptote" of
big-O and perhaps even big-Theta.

One way or another, something useful will be achieved, which surely
cannot be said about the present thread.

For others, such worst-case behavior may not
be acceptable. For those other applications,
a good worst-case may be what is required.
This is why this second category of programmers
needs to know about the worst-case. - But I am
certainly belaboring the obvious...

You are (perhaps without realizing it) pointing out that the big-O
characterization which you originally demanded is basically useless to
everybody (considering that a system has several subcomponents, and the
conditions under which each reaches worst-case are NOT necessarily
uncorrelated but might be positively or negatively correlated!). Except
perhaps theoreticians needing to publish theoretical papers, of
course;-).

Of course, the programs that have been designed
on the basis of such information can (and ought
to be) tested. Then some surprises (*not* predicted
by theory) might happen: but if they do not happen
too often, theory has done a good job - and so has
the programmer...

That would follow only if the total amount of work (properly accounting
for the huge amounts needed to collect "theoretically" sound
characterizations) was somehow reduced compared to alternative
approaches, based more on sound engineering practice and less on
reminiscences of mathematics.

I believe that, out of all the iron suspension bridges designed and
built in the 19th century, ONE is standing -- the Brooklyn Bridge.
Others were designed with "sounder theory" and (although NOT real
"worst case analysis" -- none considered direct asteroid impacts, nor
did any designer "extrapolate to infinity"!!!-) tried to cover
"reasonable" worst cases as the best theory available to them afforded.
And they all went down in the following decades.

The designer of the Brooklyn Bridge followed sound engineering practice:
he designed based on TYPICAL (NOT worst-case) behavior, supported by
small-scale prototypes, THEN, knowing perfectly well that he couldn't be
sure he knew exactly WHAT worst-case he needed to ward about... he
doubled the thickness of all steel ropes and load-bearing beams. So,
HIS bridge is still standing (as is, say, Renzo Piano's airport terminal
building in Japan, where ALL Japanese buildings all around crumbled to
dust in a terrible earthquake... Piano may be an architect rather than
an engineer, but my respect for his craft knows no bounds).

I gather you want nuclear reactors, and programs, designed by the praxis
of all those OTHER suspension bridge designers of the 19th century; I
want them designed by the praxis by which the Brooklyn Bridge was
designed. But in this case, you have an excellent chance to prove me
wrong: just put some of your work where your mouth is! And I'll be
quite happy to help, because, even if (as I surmise) the attempt at
theoretical characterization proves pragmatically unsuccessful (in terms
of actual usefulness, and specifically of "bang for the buck"), even
then, if serious work has been put towards it, an empirically important
result is obtained.

I suspect you'll just find half-assed excuses to shirk the work which
your suggestions imply, but for once I'd be happy to be proved wrong...


Alex
 
