Mark McIntyre wrote:
This is a serious overstatement.
In cases where the string length is checked often, recomputing it at
each usage is of course inefficient.
If on the other hand your programme /never/ needs to know the lengths
of strings, then being forced to use an object which includes the
overhead of stringsize is also inefficient.
Knowing the length is necessary in so many situations that in MOST
cases you will end up calling strlen explicitly or implicitly
(as in strcat).
In many cases you need to pass the string to another function.
In most cases you then need to pass the length too, or the called
function must call strlen. To avoid an extra argument, most programmers
will just pass the string, forcing again a call to strlen. This is a
horrible waste with C strings, since in text-handling programs strings
are passed from function to function, calling strlen for the same string
over and over.
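A minimal sketch of the pattern being criticized (the helper names here are illustrative, not from any real program): the first version re-derives the length on every loop iteration, while the second receives it once from the caller.

```c
#include <string.h>

/* Hypothetical helper: strlen is re-evaluated in the loop condition,
 * so the string is re-scanned on every iteration (O(n^2) overall). */
static size_t count_spaces(const char *s)
{
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++)
        if (s[i] == ' ')
            n++;
    return n;
}

/* Passing the length once avoids the repeated scans entirely. */
static size_t count_spaces_len(const char *s, size_t len)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if (s[i] == ' ')
            n++;
    return n;
}
```

Most compilers will hoist the strlen call in simple cases like this, but once the string is passed down a call chain, each callee ends up paying for its own scan.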
String concatenation needs strlen implicitly.
Functions like strrchr can be coded MUCH more efficiently if the
length is known, since you start at the end of the string and stop
at the first match, instead of starting at the beginning, remembering
each match and returning the last one...
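A sketch of that backward scan (not the standard library's strrchr; in particular, unlike strrchr it does not match the terminating '\0'):

```c
#include <stddef.h>

/* With the length already in hand, scan backwards and stop at the
 * first match, instead of scanning forward and remembering the
 * last match seen. */
static const char *rfind_char(const char *s, size_t len, char c)
{
    for (size_t i = len; i > 0; i--)
        if (s[i - 1] == c)
            return &s[i - 1];
    return NULL;    /* not found */
}
```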
To know if a string fits in a buffer you must know its length.
It is obvious that you can keep the length to avoid these
problems, as Heathfield says. But... if you need to keep the
length, why not use a counted-strings package anyway?
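A minimal counted-string sketch (the struct and function names are illustrative, not any particular package): the length is computed once at construction and travels with the data.

```c
#include <string.h>
#include <stdlib.h>

typedef struct {
    size_t len;   /* length kept alongside the data: no strlen needed */
    char  *data;  /* still NUL-terminated, for interop with C APIs */
} String;

/* strlen is called exactly once, here; every later operation can
 * read s.len instead of re-scanning the bytes. */
static String string_from(const char *s)
{
    String r;
    r.len  = strlen(s);
    r.data = malloc(r.len + 1);
    memcpy(r.data, s, r.len + 1);
    return r;
}
```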
I recall working on some code that assiduously checked for
divide-by-zero conditions before performing any division, even for
divisors such as 1+e^x.
??? Maybe a typo? You probably mean 1e+0.
> The check caused significant performance loss.
Maybe; anything can be exaggerated. But did you ever find that code
crashing with a division by zero?
It depends on the way it was done. Yes, with an actual constant
as divisor you know it can't be a division by zero, but the check
was maybe written BEFORE the constant was introduced. When the
variable was replaced by a constant the check was kept, maybe
because the author did not know whether it was OK to drop it for
that constant...
Had we been using an object that included "for free" a div/0 check, we
would have been faced with a major rewrite.
Maybe, but it doesn't hurt so much to be careful in
high-reliability code...
Elsewhere in the same code, there were numerous uses of strncpy
instead of strcpy. This gained nothing, not even performance, as the
source strings had been carefully loaded into a buffer of known width
and padded with blanks. The writer clearly thought it was a good idea
however, as they'd cleverly commented every single instance with a
warning not to change it.
It was a matter of principle, and I do not find that bad. Why should
strncpy be less efficient than strcpy?
Yes, it may be 0.0001% less efficient, but maybe he did not see
"efficiency" as a higher goal than SECURITY!
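For reference, this is the common defensive pattern such code follows (the wrapper name is made up for illustration). The small extra cost of strncpy comes from it padding the remainder of the destination with '\0' bytes; the caveat is that strncpy alone does not NUL-terminate when the source is too long, so termination must be forced:

```c
#include <string.h>

/* Copy with an explicit bound, then force termination.
 * strncpy pads any remaining space in dst with '\0', but does NOT
 * terminate dst if src has dstsize-1 or more characters, hence the
 * explicit write of the final NUL. */
static void safe_copy(char *dst, size_t dstsize, const char *src)
{
    strncpy(dst, src, dstsize - 1);
    dst[dstsize - 1] = '\0';
}
```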
The lesson to take from this is that efficiency is not a
one-size-fits-all solution, and one should not assume .
I never said otherwise, but really, you are showing us cases that
are more or less exceptional. The repeated use of strlen over and over
on the same string is obviously MUCH MORE COMMON than the cases you
mention.
[snip]
Really?
the second involves loading two values into a register and calling ADD
or somesuch. Zero function calls.
You misunderstand. I was comparing the usage of normal C functions
with overloading the '+' operator.
If you have some new type of number (rationals, say), then whether
you use
RationalAdd(Rat a, Rat b);
or a + b
it is the same. That was my point.
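In standard C the point reads like this (the Rat type and RationalAdd are illustrative sketches, not lcc-win32's actual library): the function does exactly what an overloaded '+' would do, so the generated call is the same either way.

```c
/* A toy rational type; no normalization or overflow handling,
 * just enough to show the function-vs-operator equivalence. */
typedef struct { long num, den; } Rat;

/* Plain function doing what an overloaded '+' would compile to. */
static Rat RationalAdd(Rat a, Rat b)
{
    Rat r;
    r.num = a.num * b.den + b.num * a.den;
    r.den = a.den * b.den;
    return r;
}
```

With overloading, `a + b` on two Rat values would simply be sugar for this call; no extra work is introduced or removed.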
I've seen this done - I've seen novices who thought a clever way to be
overflow safe was to write functions to replace the operators, and
invariably their code ran like a dog.
Maybe.
1) lcc-win32 allows you to test for overflow, and the speed difference
is almost zero when the compiler does it.
2) I am improving the operator overloading part, allowing inline and
assembly modules, so basically you will be able to do the overflow
test with almost no overhead.
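Outside lcc-win32's compiler support, the same check can be written portably (GCC and Clang also offer __builtin_add_overflow). A portable sketch for signed int, assuming nothing beyond standard C:

```c
#include <limits.h>
#include <stdbool.h>

/* Signed overflow is undefined behaviour in C, so it must be
 * detected BEFORE the addition is performed. */
static bool add_overflows(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return true;  /* would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return true;  /* would go below INT_MIN */
    return false;
}
```

The two comparisons are cheap and branch-predictable, which is why compiler-generated checks can be nearly free.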