Roedy said:
How about "considerably"?
I doubt it in theory, and there is plenty of counter-evidence in fact.
Consider adding two numbers. The numbers must have come from somewhere
(2 logical machine operations). The result must go somewhere (1
logical machine op). The numbers must be added together (1 logical
machine op). So that's 4 logical operations. Adding a
branch-on-overflow would add one more logical machine operation. So,
at this very abstract level, we see a 25% increase.
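To put source code to that arithmetic, here's a Java sketch (the use
of Math.addExact, a Java 8 library method, is my assumption about how
the checked add would be spelled, not anything Roedy specified):

    long a = 2_000_000_000L;
    long b = 2_000_000_000L;

    // Unchecked: roughly load, load, add, store -- the four logical
    // operations counted above.
    long plain = a + b;

    // Checked: the same four operations plus one branch-on-overflow.
    // Math.addExact throws ArithmeticException on overflow, and
    // HotSpot can compile it down to an add followed by a
    // jump-on-overflow.
    long guarded = Math.addExact(a, b);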
Next, remember that no application spends all its time adding numbers
together, so you should divide the above marginal cost by an
application-dependent scaling factor. I have real difficulty believing
that the appropriate factor would ever be less than about 2, and it
would nearly always be higher -- a /lot/ higher.
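For a concrete (and entirely invented) illustration: suppose the add
sits in a loop whose body also performs an array load, a compare, a
branch, and an index increment -- call it eight logical ops per
iteration instead of four. That's a scaling factor of 2, so the one
extra check op costs 1/8, i.e. 12.5% rather than 25%:

    // Hypothetical inner loop; 'data' and 'threshold' are made-up
    // names. Only the single add pays for the overflow check.
    long[] data = {3, 14, 15, 92, 65};
    long threshold = 10;
    long sum = 0;
    for (long d : data) {
        if (d > threshold) {
            sum = Math.addExact(sum, d);
        }
    }

And any realistic loop body does far more work than that, which is why
I'd expect the factor to be much larger than 2.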
Already we are looking at small numbers. But now consider that we are
also working with real machines, with instruction pipelining,
speculative execution, branch prediction (and/or hinting), caching, and
so on. I haven't tried to work through the details for any particular
machine architecture (that's beyond my competence), but it seems highly
unlikely that the /real/ underlying cost is anything like the 25% I
derived above.
In fact, to me it seems plausible that the marginal cost could be
exactly zero in many cases (i.e. for specific sequences of JIT-emitted
code), at least unless the numbers actually /did/ overflow (which would
stall the pipeline).
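That claim is cheap to put numbers on. A minimal benchmark sketch
using JMH (JMH and Math.addExact are my assumed tooling here, nothing
Roedy specified):

    import org.openjdk.jmh.annotations.*;

    @State(Scope.Thread)
    public class AddOverheadBench {
        long a = 123_456_789L;
        long b = 987_654_321L;

        // Returning the results keeps the JIT from eliminating the
        // adds as dead code.
        @Benchmark
        public long unchecked() {
            return a + b;                // no overflow check
        }

        @Benchmark
        public long checked() {
            return Math.addExact(a, b);  // add plus branch-on-overflow
        }
    }

If the two scores come out indistinguishable on a given machine, that's
the zero-marginal-cost case; if not, the gap puts an actual number on
"considerably".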
I won't buy "considerably" without considerable evidence. I won't buy
"measurably" without measurable evidence. I won't buy "enormously" at
all ;-)
-- chris