Kelsey Bjarnason
[snips]
> The limits comparison is an integer comparison and a jump.
And thus, two more instructions than need to be there.
> In most processors this is 2-3 instructions. Then, there is a call to
> C99 printf, that is surely at least 3000 - 4000 instructions, several
> function calls, and a quite BIG overhead.
Whether you call printf or not is irrelevant.
> Your attitude towards errors and error checking is the same as the
> committee. "Error checking is stupid. C doesn't do error checking".
Actually, C does do some error checking, but you miss the point. The
point is it is not the language's job to detect every possible error
condition, particularly when such error checking imposes performance
issues on the code.
My preferred example is string handling. Sure, the standard could
mandate that every string-related call check for NULLs and overflows and
the like, but to what end?
It is only on input that we get strings of unknown, unpredictable sizes.
Once they're "in the app", so to speak - once they've been input and
parsed, scanned, length-checked, whatever - then we know how big they
are, what we can do with them: a length-checking strcat, at that point,
buys us *nothing* but inefficiency.
If I'm going to be doing a lot of string processing in my code, the
*last* thing I want is some library routine pointlessly wasting time to
tell me what I already know: that I have (or don't) space to store the
string. I figured that out at input time; checking again and again and
again doesn't buy me anything, it just slows the code down.
Error checking is good. *Pointless* error checking is not good. Richard
and others are attempting to get you to see that you can have both error
checking *and* efficiency, where you seem to be focused solely on error
checking, to the detriment of efficiency.
> You defend them because you and they have the same basic philosophy
> towards error checking:
> "Error checking is unnecessary overhead"
Nope. *Pointless* error checking is unnecessary overhead. Stop ignoring
the key term in there: "pointless".