Douglas said:
No, I'm saying that intuition is a function of experience,
and thus it may vary among individuals. You have heard in
this thread from people who deny that their intuition about
the expected behavior of strcat(a,a) matches yours.
I have only heard from people who disagree with this *after* they have been
indoctrinated by the dictates of the C standard. I have not heard of
people (in this thread or otherwise) *unindoctrinated* whose intuition
is different. In this thread, these are just people mischaracterizing
what is meant by the word intuition.
But of course there is no shortage of people in this thread who are
honest. Remember I never *defined* what I thought strcat(p,p) should
do -- the honest people knew what I meant without prompting.
On some platforms, compilers implement strcat, strcpy, and
similar functions using string-op microcoded instructions,
This has nothing to do with it ...
which can malfunction pretty badly when the source and
destination objects overlap. Thus to perform according to
your preferred specification, additional testing would be
necessary to detect that possibility and use alternate,
generally slower code when there would be a problem.
And that is incorrect. Here:
char * safestrcat (char * dst, const char * src) {
    if (*src) {
        char * dend = dst + strlen (dst);
        strcpy (dend + 1, src + 1);
        *dend = *src;
    }
    return dst;
}
Now tell me this cannot be translated to an optimized solution on any
platform.
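To be concrete about why no overlap test is needed: the strcpy call in the function above never sees overlapping regions, even for strcat(a,a). Here is a self-contained version with the reasoning spelled out in comments (the comments are my reading of the code, not part of the original post):

```c
#include <string.h>

/* The safestrcat from above, unchanged except for comments.  For
   strcat(a,a) with strlen(a) == n: the strcpy reads a[1..n] and writes
   a[n+1..2n], disjoint ranges, so plain strcpy is safe.  The final
   assignment then splices the first source character over dst's old NUL. */
char * safestrcat (char * dst, const char * src) {
    if (*src) {
        char * dend = dst + strlen (dst);   /* dst's terminating NUL */
        strcpy (dend + 1, src + 1);         /* never overlaps, even if src == dst */
        *dend = *src;                       /* overwrite the old NUL last */
    }
    return dst;
}
```

For example, with `char a[32] = "abc";`, calling `safestrcat (a, a)` leaves `a` holding "abcabc", and the ordinary non-aliased case behaves like strcat.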
No, obviously I was talking about the effect on C programming.
So was I. It has *reduced* the number of C programmers. The problem
of memcpy() versus memmove() is minor by comparison, and programmers
left before they had to worry about such things. Some people in this
world carry things to their logical conclusion -- obviously memmove vs
memcpy is merely a single straw on the camel's back. And like any
classic public relations person, you argue for the straw.
If memmove's "superior" semantics were so attractive, it would
have supplanted memcpy *among C programmers*, but it hasn't.
First of all, it's not necessarily superior if 99% of the time you know
ahead of time that the memory is not going to overlap. The C textbooks
do an incredibly shoddy job on this point, and it's much like people
using p++ instead of p+=1. People just do what they are used to.
I didn't claim that memmove was necessarily superior. But
understanding the difference and using it as an analogy for people's
disingenuous claims about how their intuition works served my purposes
(it's obvious from the pathetic responses in this thread that barely
anyone has a clue of how memmove is properly implemented, meaning that
implementation-based intuition is pretty ridiculous in real life.)
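Since the claim is that barely anyone knows how memmove is properly implemented, here is the textbook version (a sketch, not any particular libc's code; real implementations copy words at a time and use platform tricks). The entire idea is choosing a copy direction that cannot clobber unread source bytes:

```c
#include <stddef.h>

/* Naive memmove sketch.  When the destination starts below the source,
   a forward copy is safe; when it starts above, copy backward so each
   byte is read before it can be overwritten.  (Ordering comparisons on
   pointers into unrelated objects are formally undefined in portable C;
   a libc, being part of the implementation, is allowed this liberty.) */
void * my_memmove (void * dst, const void * src, size_t n) {
    unsigned char * d = dst;
    const unsigned char * s = src;
    if (d < s) {
        while (n--) *d++ = *s++;        /* forward copy */
    } else if (d > s) {
        d += n;
        s += n;
        while (n--) *--d = *--s;        /* backward copy */
    }
    return dst;
}
```

Note that the function body itself contains no per-call branching beyond the one direction test, which is why "memmove must be slow" is a weaker argument than it first appears.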
Indeed, we hear frequent demands that the C standard
ought to specify things so that programmers don't need
to know the specifications or think about what they're
doing.
[...] That approach to programming doesn't get one
very far before getting into trouble.
You, of course, have never attempted to program in the language
"Python".
Actually I have, but I stand by my statement that catering
to ignorance and laziness is not good for software quality.
So you are aware of how to program in a more serious programming
language, and yet you appreciate nothing of it. Then clearly this gulf
is ideological.
This isn't about this false dichotomy you cling to with so much
ferocious desperation. If a programmer is lazy there is nothing you
can do about it. But if a programmer has a finite amount of energy it
might be worthwhile to meet them in the middle -- especially when it
doesn't actually cost you anything (outside of your own delusions and
paranoia I mean).
Or, you could align the intuition and expectation with reality
and get that same effect.
That is why you fail.
You can't align people's intuition and expectation to anything. You
just can't do it. You can only get people to lie about it (obvious
reference to 1984); they "learn" that their intuition is wrong. You
will not get the same effect because you can't align people's
intuition. Worse yet, the attempt to do so is ridiculously expensive.
[...] strcat(a,a) is not something that
a reasonable C programmer would think of doing.
But it is something every reasonable programmer might think of doing.
Notice that the only real distinction is the word "C".
I have my own criticism of that TR, largely on the basis that
it misses the real problem, which is quality control.
Did you know that Microsoft has the largest quality control
organization of any software institution by far? They also produce the
most bugs of any software institution I have ever heard of.
There have been real studies of this problem (I'm thinking of studies
cited by a luminary from Lucent who gave a talk on this from a few
years ago.) Post-development testing and QA tend to capture some
percentage of bugs, but is hardly the answer; there are just too many
corner cases, and people just don't think about them from the outside.
According to these studies, the best solutions were always the ones
that lived closest to the programmer while they were developing. The
best they found at the time was direct source code peer review (which
explains the rise of the recent "paired programming" paradigm in
"extreme programming".) But this is clearly too expensive and is going
to have a high degree of variability depending on the skill of the
reviewers.
I can attest to this, as the last serious bug I dealt with in Bstrlib
was based on a "memory overflow attack". An ordinary tester just would
not even be able to set up an appropriate test easily (I went back to
*16* bit compilers to set up my testing framework for this.) This is
one of those problems that would have lain dormant waiting to spring
its head as people started transitioning towards 64 bit systems for
standard development. The point, of course, is that nobody ever
reported this bug to me as nobody saw it fail in any test. It required
an insight by me, the developer, while reviewing the code. It was not
technically an independent review of course, except that I came back to
looking at it after a long time away from it (so it was an
approximation of "independent review".)
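I don't know the exact Bstrlib fix being described, but the class of bug is worth spelling out: a length sum like len1 + len2 can wrap around in size_t, so a naive bounds check passes and a later copy overruns the buffer. On a 16-bit size_t the wrap point is only 65535, which is presumably why a 16-bit compiler made it practical to reproduce. A sketch of the checked form (the function name is mine for illustration, not Bstrlib's):

```c
#include <stddef.h>
#include <stdint.h>

/* Checked length addition: returns 0 and stores the sum in *out, or -1
   if len1 + len2 would wrap past SIZE_MAX.  A naive test of the shape
   `if (len1 + len2 > capacity)` silently passes after wraparound,
   because the wrapped sum is small. */
int checked_add (size_t len1, size_t len2, size_t * out) {
    if (len1 > SIZE_MAX - len2) return -1;   /* sum would overflow */
    *out = len1 + len2;
    return 0;
}
```

The point matches the anecdote: no black-box test on a 32- or 64-bit machine is likely to stumble into the wrap, so the defect only falls to someone reading the arithmetic.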
This leads us to obvious alternatives: 1) changing the programming
language; 2) pervasive use of lint or defect-detection tools; 3)
modifying the language itself through libraries.
The vast majority of programmers have clearly chosen option #1.
Probably because #2 costs money and offers no guarantees, and there
hasn't been much culture for #3. Changing the standard could
address #1 and #3 simultaneously -- if only there were some motivation
to do so.
[...] Any
attempt at a technological solution to thoughtless programming
practice is doomed to fail, or at best to be an incomplete
solution.
That's why, of course, they don't make cars with automatic
transmissions. And of course, that's why they don't put seatbelts in
cars either. After all, they are just doomed technologies.
[...] To the extent that attention is diverted from the
real causes of erroneous programs, it's a bad thing.
If you make the real issue disappear through avoiding it at the point
of design, then it isn't a diversion. That is to say, it's possible to
solve some of these problems completely at the level of the programming
language itself. Once you consider this in the light of the
realization that programmers have a finite amount of energy with which
to produce programs, this should make the motivation for making the
programming language less error prone pretty compelling.
I haven't heard anybody saying that. I have said that these
kinds of technical solutions try to solve the wrong problem.
Yeah, well, it's easy to say things. Especially when you are not being
called into account for the things you say.