That may be true, but keep in mind also that you can run g++ with
maximum optimization today, and it will get the benefit of today's
optimizer. ...
I'm not sure I understood you correctly. Your argument seems to be:
if I compile the C program with g++ and full optimization today,
then in a few years it will look like an unoptimized binary
compiled at that time; thus an unoptimized compile now
is like a binary compiled a few years ago with optimization.
If you had compiled the C program with, say, version 2.9.1 (the current
one is 4.x) and -O6, and compared that binary both with the unoptimized
contemporary g++-compiled one and with Java, then there would be some
substance to the comparison, especially if the gcc-2.9.1 -O6 compiled
binary were indeed slower. But without this extra contestant the
comparison is rather worthless, IMHO.
Now, I no longer have gcc-2.9.1 available either, but I did the
experiment anyway: I copied the source samples from your page,
compiled them, and ran them.
Funnily, for this particular code sample:

gcc-3.3: without -O: 210 ms; with any -O: 275 ms
gcc-3.4: without -O: 210 ms; with any -O: 270 ms
gcc-4.1: without -O: 270 ms; with any -O: 280 ms
java:    275 ms

(Each timing is the average over 10 runs.)
So it seems as if, for your example, gcc's performance
is dropping over time, and its optimization is counter-productive.
Preserving a binary built with the old gcc-3.3 therefore appears
to be a good choice.
This of course depends on the particular benchmark,
because I've run C programs where the difference
between no optimization and -O2 / -O6 was impressively
in favour of optimization. (That was back when
gcc-3.3 was brand new.)
The real point of the post is that the micro-benchmark can prove Java is
fast enough.
The point surely stands nowadays (it didn't a few Java versions ago),
but it is in no way proven or shown by your comparison.
PS: gcc and g++ were actually equivalent, but for gcc I had to
declare the loop variables at the start of the block, since the
C dialect gcc uses by default (C89) doesn't allow the
"for(long i=..." syntax.
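To illustrate the PS: in C89 the loop counter must be declared at the start of a block, while C99 (gcc's -std=c99) and C++ accept the declaration inside the for statement itself. A minimal sketch of both forms:

```c
/* C89 style: loop counter declared at the start of the block. */
long sum_c89(void)
{
    long i;
    long total = 0;
    for (i = 0; i < 10; i++)
        total += i;
    return total;
}

/* C99 / C++ style: counter declared inside the for statement.
 * Rejected by gcc in its default C89 mode; fine with -std=c99 or g++. */
long sum_c99(void)
{
    long total = 0;
    for (long i = 0; i < 10; i++)
        total += i;
    return total;
}
```

Both functions compute the same result; only the placement of the declaration differs.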