On 07/19/07 16:38, Malcolm McLean wrote:
(... still ducking the question about how strstr()
and its caller "talk to each other" with integers ...)
> Because you've got a 32 bit machine, or one that is just
> transitioning. 64 bit machines can have more than 2GB of memory,
> which costs only about $100 a gigabyte, a price that has been
> falling for the past thirty years.
Not an issue, because even within the 32-bit scheme
there's still room to add 167% more memory to the machine
I've already got. Of course, it would cost me: the DIMM
slots are fully populated (2x512 and 2x128), so to get to
4GB I'd actually need to buy 4GB, not just 2.5GB. That'll
cost me -- checks an on-line store -- less than you thought,
only about $240!
Except that there are things I'd rather do with that
$240 than sacrifice it to your notions of purity. It takes
me -- well, "more than ten minutes" to earn $240, and I'd
prefer to spend it on something more worthwhile.
> There is always a marginal case where getting a factor of two
> improvement actually makes the program twice as good, or allows it
> to work where otherwise it would run out of resources.
Since you're saying factors of two can be ignored, I
guess you're justified (in your own mind) in saying "factor
of two" for a factor of four. Applied recursively, this
strategy allows us to ignore *all* inefficiencies:
"It's a thousand times slower!"
"Well, since factors of two are small enough to ignore,
we may as well call it five hundred times slower. And we
may as well call *that* two hundred fifty times, and that's
really indistinguishable from one twenty-five, which in turn
is the same as sixty (rounding down a little), and sixty is
really equivalent to thirty, which isn't noticeably different
from twelve, which is virtually the same as six, which might
as well be three, which is pretty close to two, which we've
already agreed is negligible. So there's really no speed
difference at all!"
> However it is not a good programming strategy to stress a general
> purpose computer like that, if you can possibly avoid it.
My computer is *not* stressed as things stand now, without
your beloved fourfold bloat. I can browse the Net while I'm
recording from an LP, I can read and write E-mail, I can even
search for nonsense on Usenet. (The search is a short one.)
> If the user wants to run two copies of your program at once, he's
> stuck. Ditto if he wants to run a video telling him how to use the
> system. Then the program won't remain marginal for long; soon he'll
> upgrade and all your micro-optimisation will then be so much wasted
> effort.
A fourfold speedup is a "micro"-optimization? I guess it
follows that your fourfold slowdown is a micro-pessimization.
If I'd known that factors of four don't bother you, I'd have
sent my old 300MHz machine to you instead of to the recyclers.
> Sometimes of course it really can't be avoided, as with a games
> console where you must fill memory and run the polygon engine to
> within a few percentage points of its capacity. If we did ban all
> other integer types you'd have to fake up 16 bit integers with
> logical ops, but that isn't the proposal for now. They will remain
> for exceptional use, such as storing raw audio samples.
And what language do you propose to use to manipulate
those samples? Not 64-bit-only C, that's for sure. Here's
my way of reducing the volume of one sample:
uint16_t *sample = ...;
*sample -= *sample / 10;
... and here's what you want me to do instead:
/* 8-bit (?) */ unsigned char *samplebytes = ...;
/* 64-bit */ unsigned int sample;
sample = samplebytes[0] + (samplebytes[1] << 8);
sample -= sample / 10;
samplebytes[0] = sample;
samplebytes[1] = sample >> 8;
It won't wash, Malcolm. It wouldn't wash even if Herakles
ran a couple rivers over it.