We hear very often in this discussion group that
bounds checking and other safety tests are too expensive
to be used in C.
Researchers at UCSD have published an interesting
paper about this problem:
http://www.jilp.org/vol9/v9paper10.pdf
Specifically, they measured the overhead of a bounds
checking implementation compared to a normal one, and
found that in some cases the overhead can be reduced
to a mere 8.3%...
I quote from that paper
< quote >
To summarize, our meta-data layout coupled with meta-check instruction
reduce the average overhead of bounds checking to 21% slowdown which is
a significant reduction when compared to 81% incurred by current
software implementations when providing complete bounds checking.
< end quote >
This 21% slowdown is the overhead of checking EACH pointer
access and each (possible) dangling-pointer dereference.
If we extrapolate from this to the alleged overhead of passing
some extra arguments to strcpy to allow for safer functions
(the "evil empire" proposal), that overhead should be
practically ZERO.
Somehow, we are not realizing that with the extreme power of the
CPUs now at our disposal, it is a very good idea to try to
minimize the time we spend behind the debugger when developing
software. A balance should be sought that improves the safety
of the language without overly compromising the speed of the
generated code.
I quote again from that paper:
< quote >
As high GHZ processors become prevalent, adding hardware support to
ensure the correctness and security of programs will be just as
important, for the average user, as further increases in processor
performance. The goal of our research is to focus on developing
compiler and hardware support for efficiently performing software checks
that can be left on all of the time, even in production code releases,
to provide a significant increase in the correctness and security of
software.
< end quote >
The C language, as it is perceived by many people here, seems
frozen in the past without any desire to incorporate the changing
hardware/software relationship into the language itself.
When these issues are raised, the "argument" most often presented is
"efficiency", or just "it has always been like that".
This has led to the language being perceived as backward and error
prone, good only for outdated software or "legacy" systems.
This again pleases the C++ people, who insist on seeing their language
as the "better C". And obviously, C++ is much better than C in some
ways, especially string handling, the common algorithms in the STL, and
many other advances.
What strikes me is that this need not be, since C could with minimal
improvements be a much safer and general purpose language than it is
now.
Discussion of this possibility is nearly impossible, since a widely
read forum about C (besides this newsgroup) does not exist.
Hence this message.
To summarize:
o Bounds checking and safer, language-supported constructs are NOT
ruled out by excessive overhead.
o A better run-time library could be implemented in a much safer
manner, at practically no run-time cost, if we redesigned the
library from scratch.
jacob
P.S. If you think this article is off topic, please just ignore it.
I am tired of these stupid polemics.