Dave Vandervies wrote:
If you are connecting a modern PC at 4800 baud...
the main bottleneck is elsewhere anyway, outside string handling.
You're changing the rules after the fact, excuse me.
You asked:
< quote >
If knowing the length of strings is that important, can you explain
how counted strings would have made this code easier to write, clearer,
or less error-prone?
< end quote>
I don't see "faster" in my list. Is code that calls a library routine
that does a fixed-length memory scan easier to write, clearer, or less
error-prone than code that calls a library routine that stops at the
'\0' that terminates a string?
If you're going to complain about changing the rules after the fact,
a better place to start complaining might be with the comment about
replacing a terminating-character check with a size-bounded memory scan
(both wrapped inside a library routine) for, as far as I can tell,
no reason except that some processors might be able to run it a little
bit faster.
Again, you change the rules. You did NOT say that in the first
post!
Now it is obvious that you should do:
if (*sentence != '$')
    ptr = strchr(sentence, '$');
else
    ptr = sentence;
Why should I do that? It's neither easier to write, nor clearer,
nor less error-prone than letting strchr check the first character
for me. It's also quite unlikely to save a noticeable (or probably even
measurable) amount of time.
If the processor belongs to the x86 family, definitely yes.
How many handheld devices that I would be able to get my hands on at
some point in the not-too-distant future are likely to be built around
an x86 processor that's slow enough that it might have trouble keeping
up with a 4800 bps input stream?
I don't pay a whole lot of attention to what goes in embedded systems,
but it seems to me that the only way it would be short on CPU cycles
for this kind of operation is if it's been optimized for insanely low
cost and power consumption, and I'm not sure the x86 family is a major
player in that market.
Well, counted strings are more secure by design. It is not so
much their intrinsic efficiency as the fact that they allow for
programs that do not start unbounded memory scans...
Secure by design, or just with a different set of potential
security-related bugs to watch out for? (Do your counted strings keep
track of the available space and make sure it's not exceeded? Does a
set of bytes with a random length field make a valid counted string?
Can a programmer write safe code without both knowledge of how to use them
and a fanatical (or at least not-nonexistent) dedication to correctness?
Why should the answers be any different for code that uses null-terminated
strings?)
I've never written string code that starts unbounded memory scans;
they're just bounded by a condition other than "have I gotten to N
characters past the beginning?". If my sets of bytes don't have a '\0'
at the end, I don't call them strings and don't treat them as strings -
it's that easy.
(It's worth noting that even the end-of-string check bug in the code I
posted wouldn't've resulted in walking off the end of the string; giving
it carefully crafted bad data (rather than putting it downstream of an
input handler that only passed on complete lines) would have caused the
assert farther down to fail (immediately giving a clue that Something
Isn't Right, after which a brief examination of the execution on that
data would have turned up the error), or with NDEBUG defined would have
resulted in a deterministic failure to correctly report some types of
badly-formatted data - not exactly a terrible security flaw.)
No, I haven't optimized it yet. And I am not trying to sell you
something. The main thrust of my work was to demonstrate the
far-reaching implications of a small change to the language itself,
and to encourage other people and implementers to do the same.
It seems to me that you've yet to convince anybody that these "far
reaching implications" are all that far-reaching, or for that matter
relevant at all.
dave