P.J. Plauger wrote:
.....
What is different here compared to std::copy and all other functions that
take two iterators and expect [first, last) to be valid? Am I missing
something?
Yes and no. One philosophy of C++, inherited from C, is that you should
be given enough rope to hang yourself, and speedily at that. Another,
inherited from OOP, is that a class has the right and obligation to
protect itself from misuse.
The second philosophy may have been documented in design papers for C++;
if so, however, it does not show in the standard.
Nonsense. Look at all the required "protected" members, and all the
internals documented as "exposition only" (so you can't reliably
muck with them). This is the very stuff of OOP.
The standard defines C++ as a set of mechanisms that one can use to
implement all sorts of mischief.
Or not.
C++ in no way enforces or even promotes OOP; designing an OO-approved
class with guarded invariants is no easier in C++ than designing an
OO nightmare.
You're overstating the case to the point of incorrectness.
To propose an amendment to the standard based on a philosophical
inclination that so far has not shown up in the standard seems a bit of
a stretch.
Huh? All I was doing was supporting an oft-expressed desire to
guarantee that container size() be O(1). My observation is that
there are even hygienic reasons for disallowing the one bit of
latitude that requires size() be O(N). I'm personally quite
content with the state of the C++ Standard (in this regard).
Having said that, I still dispute your claim that there is no
flavor of OOP in Standard C++. And I can report that *many*
proposed changes to the C++ Standard have philosophical
rationale.
Often? With regard to range checking I wonder: if every iterator carries a
pointer to its container, one can check the validity of a range in linear
time (and in constant time for random-access iterators). Any algorithm
that takes a range allows for linear time on non-random-access iterators
anyway.
Could you give an example where a checking implementation is ruled out by
the complexity requirements of the standard?
Many of the complexity requirements are expressed as specific
maxima -- you just can't do more comparisons, or assignments,
or whatever. In fact, complexity requirements that merely
dictate big-O time complexity are essentially untestable and
de facto toothless, except as a QOI issue.
True! Now, opinions may vary as to whether that is a good thing or not.
It's one thing to permit it; it's quite another to *mandate* it.
That was my point.
By and large, I get by with putting a safety layer between my code and the
language: e.g., a pointer_to<T> template that wraps raw pointers and
allows for checking 0-dereferencing and tracing allocations and
deallocations, or a safe<int> type that checks for arithmetic overflows.
I am not sure whether I want the standard to change in this regard. After
all, if you want safety, you can have it. It just comes at a price.
Agreed. The issue is whether you can simultaneously pay the price
and conform. There are compelling reasons to permit both at once,
wherever possible.
That would be the only place in the standard then. Maybe you are onto
something in setting a precedent.
Indeed this *is* a rare circumstance, which is why I've pointed
it out repeatedly.
I am by and large happy with the undefined-behavior solution that the
current standard employs, although I would love to have a debug version of
the standard library that defines all those unsound ranges and
out-of-bounds accesses to trip an assert().
I know where you can license one...
What bothers me, though, is the latitude within the complexity
requirements.
I'd rather have the standard declare unambiguously that size() is constant
time. That would at least be predictable. (Similarly, I think the standard
could require sort() to be O(n log(n)) and nth_element() to be linear.)
I agree that big-O time complexity should favor the programmer,
wherever practically possible.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com