Necessity of multi-level error propagation


James Kanze

To the level of favoring a particular branch of an if/else
statement? Could you please provide some links? That sounds
useful, but I've never seen that kind of automated feedback
from the profiler to the compiler.

It should be documented in the compiler documentation. For Sun
CC, see the -xprofile option, for example, see the section
"Advanced compiler options: Profile feedback" in
http://developers.sun.com/solaris/articles/options.html, for a
quick survey. The linker for VC++ has a /PROFILE option,
although I'm not very familiar with it.
No, I'm saying that every error should have a corresponding
exception, and every exception should represent some error.

In other words, you're defining exception as error, and error as
exception.
The reason we call it an error is that it's an error, and
that's the same reason we (well, I) throw an exception.

But that really doesn't advance us much. What is an error, and
what isn't? And surely you won't dispute that there are
different types of errors: dereferencing a null pointer is not
the same thing as a format error in an input file; for that
matter, a format error in an input file that the program itself
wrote earlier is not the same thing as a format error in a
configuration file, neither of which are the same thing as a
format error in interactive input.

If all of these are to be called "errors", then we definitely
have a case where one size does NOT fit all.
I'm not trying to change any definitions; I'm trying to stick
with definitions that are consistent and meaningful.
Yes, or that exceptions aren't always the best response to
errors. It seems to me that you started with the latter
point, but have drifted to the former.

In the terminology I use, many different things are considered
"errors". If my compiler core dumps, it's an error, and if my
C++ code which it's compiling is missing a semicolon, it's an
error, but I certainly don't expect the compiler to treat the
two in the same fashion.
But you call elseDefaultTo (or whatever other interface
functions Fallible supports) at your leisure. That's in sharp
contrast to exceptions; they demand attention, unless
explicitly squashed.

OK. I understand. You mean that I can keep the error status
around, and test it later. That's true to a point: what I can't
do with Fallible is use the return value if the error status
wasn't OK. The usual use of Fallible (and return codes in
general) is to check the error status immediately (or very soon)
after calling the function. When I read "check at your
leisure", I imagined something more like ostream or IEEE
floating point, where you can continue processing as if the
error hadn't occurred, only checking it at some later stage.
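A minimal sketch of what such a Fallible-like wrapper might look
like; elseDefaultTo is the function named above, everything else
here is invented for illustration, not the actual interface:

    #include <cassert>

    // Sketch of a Fallible-like return type, for functions which
    // return a value (but sometimes can't).
    template <typename T>
    class Fallible
    {
    public:
        Fallible() : ok_(false), value_() {}                    // failure
        explicit Fallible(T const& v) : ok_(true), value_(v) {} // success

        bool isValid() const { return ok_; }

        T const& value() const      // usable only if the call succeeded
        {
            assert(ok_);
            return value_;
        }

        T elseDefaultTo(T const& dflt) const   // check "at your leisure"
        {
            return ok_ ? value_ : dflt;
        }

    private:
        bool ok_;
        T    value_;
    };

    // The usual pattern: test the status immediately after the call.
    Fallible<int> parseInt(char const* s)
    {
        if (s == 0 || *s < '0' || *s > '9')
            return Fallible<int>();
        int n = 0;
        for (; *s >= '0' && *s <= '9'; ++s)
            n = 10 * n + (*s - '0');
        return Fallible<int>(n);
    }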
Here's an example of a conceptually similar case, combining
exceptions and a Fallible-like approach. I recently wrote
some code to handle exceptions that occur in asynchronous
worker threads. The exception is caught in the thread where
it occurs, and info about it is stuffed into a variable. If
the main thread ever tries to retrieve the product of the
worker thread, an exception is thrown in the main thread. If,
on the other hand, the main thread never attempts to retrieve
the worker thread's product, then the main thread will never
even know the exception occurred. The meaning, given a
consistent relationship between exceptions and errors, is that
errors in worker threads do not necessarily translate to
errors in the main thread; i.e., the "erroneousness" of a
given piece of code depends on the point of view. This is
important for thread-level concurrency, because it allows
speculative execution of code that might turn out to be
meaningless, or even self-contradictory.
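In current C++ (C++11 and later), std::exception_ptr makes this
scheme almost mechanical; a rough sketch, with all of the names
invented (std::promise and std::future, standardized since this
thread, package exactly the same behavior):

    #include <exception>
    #include <stdexcept>
    #include <thread>

    // The worker catches; the exception resurfaces only if the
    // product of the worker thread is actually retrieved.
    template <typename T>
    class WorkerResult
    {
    public:
        WorkerResult() : value_() {}
        void setValue(T const& v) { value_ = v; }
        void setError(std::exception_ptr e) { error_ = e; }

        T get() const           // throws here, in the calling thread
        {
            if (error_)
                std::rethrow_exception(error_);
            return value_;
        }

    private:
        T value_;
        std::exception_ptr error_;
    };

    int riskyWork() { throw std::runtime_error("boom"); }

    int main()
    {
        WorkerResult<int> result;
        std::thread worker([&result] {
            try { result.setValue(riskyWork()); }
            catch (...) { result.setError(std::current_exception()); }
        });
        worker.join();
        // If get() is never called, this thread never sees the error.
        try {
            int n = result.get();
            (void)n;
        } catch (std::exception const&) {
            // the worker's error, rethrown in the main thread
        }
    }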

Sounds reasonable to me.
Modern processors use similar tricks, e.g. non-faulting loads.
Interestingly, I just noticed that IA64 associates an extra
bit (called the "NaT" bit, for "Not a Thing") with each datum,
indicating whether a non-faulting load "would have" faulted.
The bit is automatically propagated through arithmetic
calculations, so you can just let the whole sequence proceed,
to be checked if and when you so require. This seems
analogous to Fallible, to your preferred ostream use, and to
my opinion that erroneousness is in the eye of the beholder.
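For floating point, IEEE NaNs already behave this way in software;
for other types, a small value class can carry the bit along. A
sketch, with the names invented:

    // The "invalid" bit rides silently through the arithmetic, to
    // be inspected only if and when the result is actually wanted.
    class Checked
    {
    public:
        Checked(double v) : value_(v), valid_(true) {}
        static Checked invalid()
        {
            Checked c(0.0);
            c.valid_ = false;
            return c;
        }

        bool   isValid() const { return valid_; }
        double value() const { return value_; } // caller tests isValid()

        friend Checked operator+(Checked const& a, Checked const& b)
        {
            return a.valid_ && b.valid_
                       ? Checked(a.value_ + b.value_)
                       : invalid();
        }

    private:
        double value_;
        bool   valid_;
    };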

Fallible and my preferred ostream use *are* distinct. Fallible
is designed to be checked immediately, and normally is---it is
for functions which return a value (but sometimes can't). And
you normally stop trying on the first error. The ostream use
(and IEEE floating point) is just to continue on, as if nothing
had happened, until the end, and then check once and for all.

Note that the typical ostream usage is a case where you *cannot*
use exceptions. Because of internal buffering, the error may
not show up until close, and if another error occurs, triggering
an exception, close will be called from a destructor, leading to
termination. My usual idiom (wrapped in an OutputFile class
which derives from ofstream) is to have a commit function which
does the close. If this fails, or if the destructor is called
before commit, the class deletes the file it was writing, to
prevent users from accidentally using the incomplete file.
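A rough sketch of that idiom; the commit/remove behavior is as
described, but the details here are guesswork, not the actual class:

    #include <cstdio>       // std::remove
    #include <fstream>
    #include <string>

    class OutputFile : public std::ofstream
    {
    public:
        explicit OutputFile(std::string const& name)
            : std::ofstream(name.c_str())
            , name_(name)
            , committed_(false)
        {
        }

        bool commit()       // close, and report success or failure
        {
            close();
            committed_ = !fail();
            return committed_;
        }

        ~OutputFile()       // uncommitted: don't leave a partial file
        {
            if (!committed_) {
                if (is_open())
                    close();
                std::remove(name_.c_str());
            }
        }

    private:
        std::string name_;
        bool        committed_;
    };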
That certainly suggests that the intent was to support your
favored use model of checking once, at the end, rather than
after each operation. I had not previously heard of that.

Do you really check the status after each output? I.e., you
never do anything like:
std::cout << "label = " << someData << std::endl ;
without having activated exceptions for errors?

Note that output errors *are* generally sufficiently serious to
warrant exceptions. The idiom I use was born before exceptions
were available, but since it works, I haven't bothered "fixing"
it. And the fact that output is often desirable in a destructor
makes me hesitant about switching to exceptions. The advantage
to the "check once at the end" model is that if you encounter
some other error, which means that what you're writing is
irrelevant anyway, you can (and will) skip the error checking
completely---in those cases, you can simply let the close in the
destructor do its job, and not worry about it.
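The two styles under discussion, side by side; which one to prefer
is exactly the judgement call being debated here:

    #include <iostream>

    void checkedStyle(double someData)
    {
        // Style 1: test the stream state explicitly after writing.
        std::cout << "label = " << someData << std::endl;
        if (!std::cout) {
            // handle the output error here
        }
    }

    void throwingStyle(double someData)
    {
        // Style 2: make the stream throw instead of setting state.
        std::cout.exceptions(std::ios::badbit | std::ios::failbit);
        std::cout << "label = " << someData << std::endl; // may throw
    }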
Uh oh, we're getting dangerously close to sanity. If we're
not careful, people might get the idea that it's OK for
reasonable people to disagree. :)

Only if they're not disagreeing with me:).

Seriously, I think that there are a number of different
solutions for error reporting. I use at least four in my
applications: return codes, deferred checking (the iostream/IEEE
floating point model), exceptions, and assertion failures. For
any given error, which one is most appropriate is a judgement
call, and the fact that you choose a different one than I would
isn't necessarily wrong. Refusing to consider all of the
possibilities, and insisting that everything from dereferencing a
null pointer to a trivial and recoverable input error should be
handled the same way, is wrong. Refusing to consider any one of the
possibilities (e.g. exceptions, which started this thread) is
also wrong, although I can easily imagine that there are certain
applications where one or more of the techniques never actually
finds concrete use.
 

Alf P. Steinbach

* James Kanze:
My take on this: a long, long time ago, the model of all IO
being a sequential stream was probably valid. At that time,
however, most OS's didn't handle them that way: outputting to
the printer was a different system request than outputting to a
mag tape or a file, for example. After a while (but still
before Unix came along), it was realized that the OS should
support device independent sequential stream output; this
evolution took place at different times, depending on the
system, and even as late as 1981, MS-DOS 1.0 had separate
requests for outputting to a printer or to a disk file.

Regretfully, about the time the OS's finally started adopting
the sequential stream model, it became a bit obsolete, as disk
files (with random access) and interactive terminals
(particularly screens---even non-graphic screens support a full
screen mode). So we have all of the OS's rushing to be "up to
date" by implementing a model which is no longer current.

The problem isn't hardware. At the lowest level, there are two
possible modes: the OS receives an interrupt for each key
stroke, at which time it can read the scan code of the key and
the state of the "modifier" keys (shift, alt, etc.). Or the OS
receives an interrupt for each key down and key up event
(including those of the modifier keys), and manages everything
itself. In the second case, autorepeat is handled by the OS.

No, autorepeat is independent of undecoded versus decoded.

On the PC, the keyboard is undecoded but with autorepeat in the physical keyboard.

The former (undecoded) is good, the latter and independent feature (autorepeat
in physical keyboard, at front of data flow chain) is ungood to the extreme.

I'm not sure (it's been a very long time since I worked at this
low level), but I think X usually uses the second possibility.
At any rate, it's certainly possible to e.g. map the tab key to
behave like control (which obviously doesn't auto-repeat,
because that doesn't have any meaning for it).

I'm also pretty sure that you've simplified things greatly.
There are several buffers: the one the application sees is after
the auto repeat, but the lower level ones aren't.

Yes, yes, I've simplified. :)

Note that this could easily be handled in Firefox itself. All
it has to do is purge its input each time it finishes a screen
update. Whether this is a good idea, however, depends,
because...

No, an application, being at the end of the chain, can only do some limited
things. As you say it can try to empty the buffer always -- when it has the
chance to read from the buffer. That can alleviate some problems to some degree
(and should therefore be done as a matter of course, but few apps do), but it
can't remove the problems entirely, being at the end of the chain.

We old dinosaurs understand that just because we haven't seen
the echo doesn't mean that the command hasn't been taken into
account. So we occasionally "type ahead" a sequence of
commands, even if the system doesn't seem to be responding to
them.

Yes. Typing ahead is problematic with the current universal backward design. It
wouldn't be problematic with a more rational design.

I'm not sure what difference this would make.

I'm not that bad at explaining things, am I?

Anyway, it makes a huge difference for all aspects of keyboard handling.

With auto-repeats being synthesized as necessary on demand, instead of being
buffered, they're not being buffered. All problems associated with that (like
"delayed delete all") therefore gone. And the application then has a different
default model where e.g. it doesn't react to arrow key characters but instead to
arrow key state, whether an arrow key is currently down or not.

The reason Xerox
PARC could do things differently was that it intervened at a
lower level.

No, they were exploring the fundamentals, on what would be needed for a
reasonable personal workstation. In addition to timestamped events and keyboard
handling they were focusing on things such as blitter chips, as of 2009 known as
graphics cards. They were arguing (yes, arguing) that any personal workstation
should have an undecoded keyboard and a blitter chip, that those were essential.
Today we have the graphics cards. And we have managed to get the undecoded
keyboards, *but* with the dataflow all messed up, *wrong order* of processing...

Except that you don't use the sequential stream interface for
GUI I/O. You use specific functions in the X or MS Windows
libraries.

No and yes. No, that backwards model is not only with sequential stream
interface, instead, it's embodied in the hardware and OS but it seems to be
associated with the stream i/o point of view: to the degree that it makes any
sense at all, it wouldn't make sense without the stream i/o view. And yes, this
is how it is via the OS API, although e.g. Windows "on the side" provides a not
quite reliable current keyboard key state map (which is possible because a PC's
keyboard differentiates between the first actual key-down event and later
synthesized key-down events for auto-repeat; it's unreliable because the
generating logic is in the physical keyboard, at the wrong end, and most
keyboards aren't able to handle the situation with 4 or more keys pressed).

I don't think there's a problem with the hardware.

Well, perhaps amend that conclusion after my clarifying comments above? :)

It's the hardware.

And it's the OS interface to that hardware.

But the OS's
have been designed to use it in a way that isn't necessarily
appropriate in todays world.

Yes, it's all backwards... :-(


Cheers,

- Alf
 

James Kanze

* James Kanze:

[...]
No, autorepeat is independent of undecoded versus decoded.
On the PC, the keyboard is undecoded but with autorepeat in
the physical keyboard.

I didn't say anything about encoded or not. I don't think that
there's a machine around today where the hardware does the
encoding. The two possibilities here concern the generation of
events (interrupts): in the one case, the system gets an
interrupt (and only one) each key press (except for the mode
keys like shift), and auto-repeat is handled by the hardware
(since the software doesn't know how long the key was pressed);
in the other, there is absolutely no difference between the mode
keys and any other keys, and the system gets an interrupt both
for key down and key up---in this case, auto-repeat is handled
by the OS.

At least when I was working at that level (a very long time
ago---CP/M 86 was still relevant:)), the actual hardware was
capable of being programmed for either mode, depending on what
the OS wanted. The BIOS that I knew for CP/M86 and MS-DOS
programmed it for a single interrupt per key press, but the
xmodmap features of X would probably be easier to implement with
the key up/key down model (which does require more handling in
the OS).

[...]
I'm not that bad at explaining things, am I?
Anyway, it makes a huge difference for all aspects of keyboard handling.
With auto-repeats being synthesized as necessary on demand,
instead of being buffered, they're not being buffered. All
problems associated with that (like "delayed delete all")
are therefore gone. And the application then has a different
default model where e.g. it doesn't react to arrow key
characters but instead to arrow key state, whether an arrow
key is currently down or not.

I'm still not too sure where the interface hard/soft is in this
model. If auto-repeats are "synthesized" (by the OS), then
you'd have to adopt the two event model, so that the OS could
know whether the key was still being pressed or not.

And you'd still doubtlessly want a timer, to ensure that the
auto-repeat didn't act too fast.

And finally, I'm not sure what the relationship here is with
regards to decoding; I don't see any problem with decoding at
the very last point in the chain, just before returning the
character to the application, and only if the application
requests it.

I'm also very unsure with regards to how modern Windows handles
this. The Java Swing KeyListener interface provides
notifications for keyPressed and keyReleased, as well as
keyTyped, so presumably, you can get this information from
Windows as well.

[...]
No and yes. No, that backwards model is not only with
sequential stream interface, instead, it's embodied in the
hardware and OS but it seems to be associated with the stream
i/o point of view: to the degree that it makes any sense at
all, it wouldn't make sense without the stream i/o view.

The only really intensive stuff I've done with GUIs was in Java,
and for most of the stuff, we handled the low level keyboard
events ourselves---the auto-repeat, if there had been any, would
have been in the application. So the model isn't cast in stone
in the OS or hardware; you can do things however you want in the
application, if you want to go to the effort.
And yes, this is how it is via the OS API, although e.g.
Windows "on the side" provides a not quite reliable current
keyboard key state map (which is possible because a PC's
keyboard differentiates between the first actual key-down
event and later synthesized key-down events for auto-repeat;
it's unreliable because the generating logic is in the
physical keyboard, at the wrong end, and most keyboards aren't
able to handle the situation with 4 or more keys pressed).

In other words, you're saying that it is the keyboard hardware
which simulates a key released/key pressed event pair to
implement auto-repeat. That sounds a bit weird. In the two
event model, I would expect auto-repeat to be implemented in the
OS. And I'd be very surprised if you couldn't turn it off (but
I'm often surprised by things on PC's).
Well, perhaps amend that conclusion after my clarifying
comments above? :)
It's the hardware.

Then they've gone a step backwards. When I worked on keyboards,
this sort of thing was programmable.
And it's the OS interface to that hardware.

And how the OS programs the hardware?
 

Alf P. Steinbach

* James Kanze:
* James Kanze:
* Alf P. Steinbach:
[...]
Here's the way Things Work on a typical PC or workstation:
PHYSICAL EVENTS -> AUTO-REPEATER -> BUFFER -> DECODING -> PROGRAM
1. Under the keys there's a matrix of some sort. A microcontroller in the
keyboard scans this matrix, detects key down and key up. On key down or
key up a series of bytes denoting the EVENT is sent to the computer.
2. When microcontroller detects that a key is held down for a while it
initiates a kind of REPEAT action, sending the same key down sequence
repeatedly to the computer.
3. In the computer, receiving hardware+software logic BUFFERS it all.
The problem isn't hardware. At the lowest level, there are
two possible modes: the OS receives an interrupt for each
key stroke, at which time it can read the scan code of the
key and the state of the "modifier" keys (shift, alt, etc.).
Or the OS receives an interrupt for each key down and key up
event (including those of the modifier keys), and manages
everything itself. In the second case, autorepeat is
handled by the OS.
No, autorepeat is independent of undecoded versus decoded.
On the PC, the keyboard is undecoded but with autorepeat in
the physical keyboard.

I didn't say anything about encoded or not. I don't think that
there's a machine around today where the hardware does the
encoding. The two possibilities here concern the generation of
events (interrupts): in the one case, the system gets an
interrupt (and only one) each key press (except for the mode
keys like shift), and auto-repeat is handled by the hardware
(since the software doesn't know how long the key was pressed);
in the other, there is absolutely no difference between the mode
keys and any other keys, and the system gets an interrupt both
for key down and key up---in this case, auto-repeat is handled
by the OS.

Description doesn't match reality. I'm not sure exactly what you mean, but it
isn't the way things work. Perhaps it's this "decoded" that's problematic. A
non-decoded keyboard produces event data identifying keys. A decoded one
produces characters, except of course for arrow keys etc (I'm sure you have at
one time been familiar with e.g. VT52 terminals; the VT52 as a whole implements
a decoded keyboard, producing normal characters and escape sequences for e.g.
arrow keys). That is, in the context of keyboard i/o "decoding" refers to the
mapping from keys to characters.

At least when I was working at that level (a very long time
ago---CP/M 86 was still relevant:)), the actual hardware was
capable of being programmed for either mode, depending on what
the OS wanted.

The PC keyboard is in a sense configurable, yes, e.g. wrt. repeat rate (the
Atari ST keyboard could be actually programmed! :) ), but I cannot recall any
distinct modes.

The BIOS that I knew for CP/M86 and MS-DOS
programmed it for a single interrupt per key press, but the
xmodmap features of X would probably be easier to implement with
the key up/key down model (which does require more handling in
the OS).

Anyways, in the paragraph above you're treating "single interrupt per key press"
and "key up / key down model" as mutually exclusive.

On the contrary, modern keyboards produce one event per keypress (plus,
unfortunately, a lot more!), and the events they produce are key up / key down
events.

It's not either/or, it's not two different kinds of keyboard modes or something.

[...]
I'm not that bad at explaining things, am I?
Anyway, it makes a huge difference for all aspects of keyboard handling.
With auto-repeats being synthesized as necessary on demand,
instead of being buffered, they're not being buffered. All
problems associated with that (like "delayed delete all")
are therefore gone. And the application then has a different
default model where e.g. it doesn't react to arrow key
characters but instead to arrow key state, whether an arrow
key is currently down or not.

I'm still not too sure where the interface hard/soft is in this
model. If auto-repeats are "synthesized" (by the OS), then
you'd have to adopt the two event model, so that the OS could
know whether the key was still being pressed or not.

And you'd still doubtlessly want a timer, to ensure that the
auto-repeat didn't act too fast.

All the time information needed for auto-repeat is the time-stamp of last
retrieval and current time.
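A hypothetical sketch of that model (every name here is invented;
no existing OS exposes exactly this interface): given the key-state
map, the repeat count falls out of those two timestamps.

    #include <ctime>

    // Key-state record kept by the OS; nothing is ever buffered.
    struct KeyState
    {
        bool        isDown;
        std::time_t downSince;  // when the key last went down
    };

    // Number of repeat "events" to synthesize at retrieval time,
    // from nothing more than the previous retrieval time and the
    // current time.
    int synthesizedRepeats(KeyState const& key,
                           std::time_t     lastRetrieval,
                           std::time_t     now,
                           double          repeatsPerSecond)
    {
        if (!key.isDown)
            return 0;
        std::time_t from = lastRetrieval > key.downSince
                               ? lastRetrieval
                               : key.downSince;
        double elapsed = std::difftime(now, from);
        return elapsed > 0.0
                   ? static_cast<int>(elapsed * repeatsPerSecond)
                   : 0;
    }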


Cheers & hth.,

- Alf
 

James Kanze

Thanks, but none of that is remotely portable. Obviously,
it's not part of standard C++, or the de facto standard tool
chains.

By definition, nothing concerning optimization is "portable".
The standard says nothing about how you invoke the compiler, or
request specific options. As for the de facto standard tool
chain... I'd say that Sun CC was the de facto standard tool
chain under Solaris, and VC++ under Windows, so at least on
those platforms, it's part of the de facto standard tool chain.
Those certainly are "other words," and they don't mean the
same thing at all. I think you know better.

I do, but you seem to be saying that one implies the other.
All caps? Really?
As you know, exceptions are not "one size." Exceptions fit
into hierarchies, with different types of exceptions
representing different kinds of errors. Something that could,
in principle, have been detected at compile time, should be
represented by a std::logic_error. Something that goes wrong
at runtime should be reported by a std::runtime_error, and so
on.

But they're still really one size---it's always the same
mechanism that comes into play. Generally, it's best to avoid
it---in some ways, exceptions are no more than a glorified goto.
But in particular cases, the alternatives have even worse
disadvantages, so they become the lesser evil.
The use (or lack) of exceptions says nothing of how a program
will respond to an error. Exceptions are an internal
implementation detail.

They certainly affect the syntax of catching the error for
processing.
If a program receives bad input, and internally throws an
exception to signal this runtime error, it should still catch
the exception, and respond in whatever way it deems
appropriate.

If a program receives bad user input, using an exception is just
a way of making life more difficult for the client code.
OK. I did not recognize that distinction, but it's clear now.
I see this as a serious shortcoming of the
language. http://groups.google.com/group/comp.lang.c++.moderated/msg/b7eddc4f0d...

So propose something that would make it work, in a reasonable
way. The reason it isn't supported is because no one really
knows how to handle it (for several reasons). To be quite
frank, it doesn't really bother me---destructors and
constructors, in C++, have a very definite role: once you enter
the destructor, the object ceases to exist. Anything which
requires error handling should not be part of a destructor.
You have a decent point, but this is something that can be
worked around by making sure that the stream is owned at a
higher level of abstraction than the "working" code that might
throw exceptions. The higher level of abstraction can catch,
close, and rethrow. In general, the code that uses the stream
ought to accept a (semantic) reference to it, and should not
be the same code that owns the stream.
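Sketched out, that ownership split might look like this (the names
are invented):

    #include <fstream>
    #include <ostream>

    // The "working" code: takes a reference, doesn't own the stream.
    void doWork(std::ostream& out)
    {
        out << "data\n";    // ... and anything else that may throw
    }

    // The owner: catch, close, rethrow, so that close never runs
    // inside a destructor during stack unwinding.
    void writeFile(char const* name)
    {
        std::ofstream out(name);
        try {
            doWork(out);
            out.close();    // normal path: close before returning
        } catch (...) {
            out.close();    // explicit close; its errors are moot here
            throw;          // the original exception continues
        }
    }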
std::ofstream is really not meant to be derived from. Is
this, at least, private inheritance?

Are you kidding? The type is an std::ofstream, in all shape,
form and fashion. With just an additional feature.
How does the class delete the file without closing the stream?

Well, I don't set up the streams to throw, so there's no
problem. It just ignores any error. Which is reasonable,
because the file concerned with the error will be deleted
anyway.
I don't have to, because I use exceptions.

That's actually a reasonable possibility for output streams; an
output error is usually pretty bad. On the other hand, another
error which results in a throw may cause the stream to be
destructed, calling close, and bang.

For whatever reasons (and history obviously plays a role),
streams aren't really designed with exceptions in mind. I'd
avoid them with streams.
That's right. I may do it in a twenty-line toy program, but
certainly not in production code. I always activate
exceptions for errors, and would advocate that other
developers do the same.

It's dangerous, because it means that the destructor may throw.
And for input, it's probably not appropriate.
You've confused "refusing to consider" with "having considered
and rejected, in favor of a superior tool."

No. You've considered once; maybe exceptions were the
appropriate tool for that case, but that doesn't mean that
they're appropriate for everything. Each case is different, and
you have to consider each case on its own merits.
Every time I start a new project, I don't have to do a special
study of whether Forth is the right language for the job. I
have a pretty good idea that it isn't, except under very
particular circumstances. Similarly, every time I consider
the best way to signal an error condition, I don't have to
make a "pros and cons" list of the various possible
mechanisms.

Interesting. Every time I start a new project, there is a
consideration of what language to do it in. Maybe not Forth,
because we don't happen to have it available, but the last two
green fields projects I implemented were in AWK, and not C++,
and most large projects use a mixture of languages, with some
parts in shell script, some in C++, some in Java, and some in
who knows what else. In one or two cases, I've even created a
domain specific language especially for the project.
 

Tony

Sure it does. If you use that pattern, especially if it's the
only one in your toolbox, then your constructors or something
called in those constructors is probably going to have the
possibility of raising an exception.

"Apparently, you don't understand RAII."

Prove that. (I'll play, I'm open to learning what I don't know).

" Despite the name, it has
nothing to do with constructors"

I don't believe that. I don't care what it got contrived to (or in your
parlance, "evolved to"?). Teach me RAII JK and what I apparently don't know
about it. I'm of course warring against it, so be able.

"---std::auto_ptr, for example, is
a prime example of RAII, and none of its constructors can fail."

And your point was with that what? (??).

"And drop the "only tool in your toolbox" business. You're the
one who's arguing to limit the tools in the toolbox."

Two thoughts in the above you had, that are incongruous, btw. You do
everything from within the C++ box, and I try to do everything from outside
of it. That would explain my ref to "if you only have a hammer".

(note to self: you program because of it).
"construction" is a subjective term. As I said above, your
need for exceptions to handle errors from constructors depends
on if you define "construction" to be "resource acquisition"
(large potential to raise error) or "data member
initialization" (small/no potential to raise error).

"Construction is initializing the object so that it can be used.
There's nothing "subjective" about it."

Yes there is. You are attempting to subordinate the term 'initialization',
or overloading it. I'm "simple minded" (curb all the jokes!), initialization
is AKIN to zeroing integers.

" If a constructor returns
without throwing an exception, you have an instance of the
object."

"Exceptions"? What are those? Are you trying to impose upon me a paradigm
that elevates exceptions to high status? I can't construct an object without
.... a mark on my forehead?! :p
"Throwing an exception from the constructor is the simplest way
to achieve that."
Avoiding the scenario seems much simpler.

"Obviously, you've never written any non-trivial code, to make a
statement like that."

Obviously you are either being defensive or trying to perturb me into cluing
you in to what is not available to you. And I may have wanted to tell (not
you, but someone) 20 or 30 years ago, but it ain't happenin now. You find
yourself another boy, "AIG"/"GM". :p (And you thought it was just
"programming").
Avoiding the scenario means expanding the
class invariants, and handling so-called zombie states. It
involves extra code in *every* member function.

Sounds theoretical or propagandical. Don't waste time on me JK, I don't
So is a bulldozer, but I won't use one of those to shovel snow
off of my sidewalk, even if I did own one and even if it was
parked in the driveway.

"And? How is that relevant to anything?"

I guess I can't tell you. Maybe I should stop doing drugs? ;)
I'm not rejecting them. I'm just developing a better way to
handle errors.

"Fine. Show it to us."

Fine, give me a million dollars. :p (I didn't really say that did I? That
was when I started, it's worth surely billions now, but discounted to you,
one billion dollars. OK?).

"The more possibilities we have to choose
from, the better it is."

Said the AIG executive!
It looks doable at this juncture but I have not encountered
all the scenarios yet. I don't need something as capable or
general as exceptions (yet? Time will tell.).

"Now you give the impression that you don't understand
exceptions."

Contraire, you are trying to suggest that (?).

" Exceptions aren't more "general" or more "capable"
than other methods of reporting errors, per se. "

Oh, indeed they are marketed as such. Somewhere there is a "suggestion" that
C++ exceptions are a/THE general way to handle errors.

" They are
different; in some cases, those differences represent an
advantage, and in others, a disadvantage."

C'mon now JK, you don't do large scale or permeating EH other than via C++
exceptions. Admit it.

Tony
 

Tony

Note to James Kanze: my ISP timed out your responses to my other posts and
your responses. (I'll make some effort to retrieve those IF (and ONLY IF) I
don't wake up and Halle Berry is indeed now with children and my hope of <>
with the most beautiful woman ever is ... that's a loooong sentence.... what
is the problem?).

Tony
(I hope this is just a .. NIGHTMARE~!)
 

James Kanze

Prove that. (I'll play, I'm open to learning what I don't
know).

You prove it later in your post.
I don't believe that.

Then you disagree with the inventor of the concept, and the
generally accepted meaning.
I don't care what it got contrived to (or in your parlance,
"evolved to"?). Teach me RAII JK and what I apparently don't
know about it. I'm of course warring against it, so be able.

In sum, you've decided to reject a concept out of hand, without
even knowing what it is.
And your point was with that what? (??).

That RAII has nothing to do with constructors.
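The auto_ptr example, spelled out: the acquisition (the
new-expression) happens before the constructor runs, the
constructor itself merely stores a pointer and cannot fail, and
what RAII actually buys is the guaranteed release:

    #include <memory>

    void f()
    {
        std::auto_ptr<int> p(new int(42)); // ctor just stores a pointer
        // ... code that may return early or throw ...
    }                                      // delete happens here, always

(std::auto_ptr was the standard smart pointer when this thread was
written; today one would write std::unique_ptr.)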
Yes there is. You are attempting to subordinate the term
'initialization', or overloading it. I'm "simple minded" (curb
all the jokes!), initialization is AKIN to zeroing integers.

It can be. It can also be other things.
"Exceptions"? What are those? Are you trying to impose upon me
a paradigm that elevates exceptions to high status? I can't
construct an object without ... a mark on my forehead?! :p

No. I'm trying to explain basic C++ to you, since you
apparently don't know it either.

Except that you can't. If you had any experience with
non-trivial applications, you'd know that.

[...]
Oh, indeed they are marketed as such. Somewhere there is a
"suggestion" that C++ exceptions are a/THE general way to
handle errors.

Not by me. I'm rather one who argues against overuse of
exceptions. I don't like the way they break control flow. But
sometimes the alternatives are worse.
C'mon now JK, you don't do large scale or permeating EH other
than via C++ exceptions. Admit it.

Could you translate that into English, please. And check out
some of my postings here: exceptions are for a very limited set
of circumstances. But when those circumstances are present,
they are better (or at least less bad) than the alternatives.
 

James Kanze

I'm saying that in my preferred design style, one does imply
the other. You keep reducing that to a tautology, which it is
not.

Maybe I just can't believe such a claim. If you use the general
meaning of error which I use (and find wide spread), then there
are errors that you cannot or don't want to handle with an
exception. As a radical example: the loss of power on the
mother board. If you say that all errors should be reported by
exceptions, either you're using the concept of "should be
reported by an exception" to define error, or you're introducing
some other constraint on the meaning, that you've not clarified.
Do you really mean, for example, that any hardware error (e.g.
parity error in memory) should be mapped to an exception? Do
you really mean that a blatant software error in the code (e.g.
a segment violation, under Unix) should be mapped to an
exception? Do you really mean *any* user error (e.g. under
Unix, the user accidentally gives the process id of your
program to kill, rather than the one of the program he wanted to
kill) should be treated as an exception?

My statement is provocative. But it was meant to be provocative
in a positive sense: to provoke you to define exactly what you
meant by error in your statement. Because I'm quite sure you
don't really expect a kill -9 to your program to be converted
into an exception, even if it is the result of user error.
They're not "really" one size.

They don't really have a "size" that can be measured:).
They're "one size" in the sense that they all cause the same
mechanism to come into play, and require the same syntax to
catch. And have the same results if one doesn't catch them.
C++ exceptions are not like goto.

They do introduce additional paths of control flow, which makes
reasoning about the correctness of the code more difficult. If
the error is such that you're going to abandon a whole block of
functionality anyway, it may not matter---who cares if the
result is correct if you're not going to use it. And if the
error is in constructing an object, having to deal with an
invalid object is likely to introduce even more additional paths
of control flow than the exception did. In a very real sense,
exceptions are evil. In some cases, they are less evil than the
alternatives, but that's certainly far from true everywhere.
They are not an immediate transfer of control. They are a
special-purpose return path that unwinds the stack in a
predictable, orderly manner. Furthermore, a given
throw-statement does not always transfer control to the same
catch-block; catch-blocks are not labels.

All of which are arguments *against* exceptions, not in favor of
them. It's a goto where you don't know where you're going, and
when you get there, you don't know where you've come from. A
total disaster when it comes to any reasoning which depends on
control flow.
If you're claiming that
libraries shouldn't throw exceptions, well, that ship has
sailed, and good riddance.

Really? The only exception I might get from any of the
libraries I use is std::bad_alloc, and in a lot of cases, I've
replaced the new_handler with one which aborts, so I don't have
to worry about that one either.

(I know, the standard basically says that unless otherwise
documented, any function in the standard library can throw
anything the implementation wants. In practice, of course,
quality of implementation issues ensure that you won't get
unexpected exceptions. Which you couldn't handle anyway,
because you don't know what they are.)
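The new_handler replacement mentioned above is a one-liner:

    #include <cstdlib>
    #include <new>

    void outOfMemory()
    {
        std::abort();       // die here rather than throw bad_alloc
    }

    int main()
    {
        std::set_new_handler(&outOfMemory);
        // ... from here on, operator new never throws: on exhaustion
        // the handler is called and the program aborts ...
    }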
Did you follow the link? There's a snippet of code in the
middle that was a first-order suggestion for how to handle
simultaneous exceptions within a single thread.

I didn't look at it in detail, because I'm not the person who
would have to decide. If you wrote up a detailed proposal, the
committee would consider it. I've my doubts as to its
workability. And the problem isn't so much implementation, but
specification of what happens when you have several exceptions
propagating in parallel.
By that logic, ~fstream shouldn't call close.

And maybe result in an assertion failure if the file isn't
closed before the destructor is called. It's a compromise (and
I've seriously asked myself if the assertion failure isn't a
better solution): the close in ~fstream is a "backup" solution.
In correct code, at least in ofstream, it shouldn't be used
except when unwinding the stack as a result of an
exception---when you've encountered another error serious enough
to imply that the file you're writing won't be usable anyway.
No, are you?

std::ofstream is part of a hierarchy, which is definitely
designed with inheritance in mind. In the end, they all derive
from std::basic_ios.
Are you aware that ofstream hasn't got any virtual member
functions, including the destructor?

std::basic_ios has a virtual destructor. Thus, every class
which derives from it has a virtual destructor.
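In current C++ this is even checkable mechanically:

    #include <fstream>
    #include <type_traits>

    // ios_base's destructor is virtual, so the whole iostream
    // hierarchy, ofstream included, inherits one (C++11 trait check).
    static_assert(std::has_virtual_destructor<std::ofstream>::value,
                  "deleting through an ofstream* is well-defined");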
It's obviously not a metaprogramming support class like
binary_function, either. The intended use for ostreams is
that you derive from streambuf, not from the ostreams
themselves.

One doesn't preclude the other. Almost every time I've derived
from streambuf, I've also derived from istream or ostream (or
both), in the same way ifstream derives from istream and
ofstream derives from ostream.
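A minimal sketch of that pairing (illustrative only): the streambuf
does the real work, and an ostream-derived class bundles it, the
same way ofstream bundles a filebuf.

    #include <cstdio>
    #include <ostream>
    #include <streambuf>

    // The streambuf does the actual output (here, simply to stderr).
    class StderrBuf : public std::streambuf
    {
    protected:
        virtual int_type overflow(int_type c)
        {
            if (c != traits_type::eof())
                std::fputc(c, stderr);
            return traits_type::not_eof(c);
        }
    };

    // The stream class owns its buf, like ofstream owns its filebuf.
    // Passing &buf_ before buf_ is constructed is fine: basic_ios
    // only stores the pointer, it doesn't use the buffer.
    class StderrStream : public std::ostream
    {
    public:
        StderrStream() : std::ostream(&buf_) {}
    private:
        StderrBuf buf_;
    };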
If somebody tries to delete your type through a
std::ofstream*, they'll get undefined behavior. You've just
lucked out of type-slicing, since ofstream isn't copyable. At
the very least, you could make the inheritance private, and
add using-declarations for the relevant members of ofstream.

Actually, what I'm really interested in is the members of
ofstream's base, ostream. But I'm also interested in the
non-member operator <<. (Also, my class wasn't really designed
to be allocated dynamically---it's a classical RAII, which
counts on the implicit destruction when the object goes out of
scope. It can be allocated dynamically, of course, but there's
no real point in using it in such cases.)

Note that while saying that ofstream is not meant to be derived
from is simply wrong, I am sensitive to arguments saying that it
wasn't designed for the extensions I've provided. And I'll
admit that it is a compromise. I'm used to wrapping ostreams
in a lot of contexts, and I definitely considered that
possibility. In the end, however, it seemed to impose too much
extra code in the client, for purely theoretical
considerations.
OK, I see what you mean, but not how exceptions have anything
to do with it. You're saying that close failed, so you delete
the ofstream; what I was saying is that ~ofstream will call
close again, but you've already stated that after the first
error, the subsequent operations are no-ops. This should
prevent any further error, regardless of whether we're using
exceptions.

No. Basically, the class contains a bool, isCommitted,
initialized to false. The function commit() closes the file,
and if there is no error, sets isCommitted to true. In the
destructor, if isCommitted is false, I call remove on the file.
The destructor also calls close on the ofstream as well, before
this, and without checking the error status of close (because if
the file wasn't already closed, we're going to delete it
anyway). (Or if it doesn't call close now, it will---to date,
I've only used the class under Unix, where remove() on an open
file works, so I may have forgotten the manual close before
remove()).
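From the client's side (using the OutputFile sketched earlier in
this thread), the whole protocol is:

    // Any early exit, whether an error return or an exception, leaves
    // the file uncommitted, so the destructor removes the partial file.
    void writeTotals(std::string const& name, double total)
    {
        OutputFile out(name);
        out << "total = " << total << '\n';
        if (!out.commit()) {
            // close (or an earlier write) failed; the file is gone
        }
    }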
What "bang?"

The result of a second exception being raised. (You're right,
it's not automatic. Most of the time, the close will probably
succeed, and everything will work fine. Except, of course, when
you do the critical demo in front of your most important
client.)
So you've made clear. Nevertheless, streams have explicit
support for exceptions, and I have found it beneficial.

The support was added as an afterthought. I'll admit that I've
never used it, and I'm very suspicious of it. A function that
might report its errors with an exception, or might report them
using other means, is worse than one which always reports them
with an exception. I'd be very sceptical, for example, of
passing a stream with exceptions activated to a third party
library---theoretically, the possibility exists, so the authors
of the library should have taken it into account, but
practically...

[...]
Well, that's characteristically vague and unjustified.

OK, let's be clearer. There are three possible "error" bits in
ios_base, eofbit, failbit and badbit. You can activate an
exception for each of them. For eofbit, of course, there's no
possible correct use for doing so (and the fact that you can is
indicative that the possibility for exceptions was just added
on, without thought). For badbit, except for the problems with
close in the destructor, I suspect that if the streams had been
initially designed to use exceptions here, it would have been
more appropriate---I can't really imagine any case where you'd
be able to handle the error in the calling function. For
failbit, on the other hand, you almost always have to process
the error in the immediate calling function, so exceptions are
not really appropriate.
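In code, the selective activation that follows from that analysis:

    #include <istream>

    void configure(std::istream& in)
    {
        in.exceptions(std::ios::badbit);  // eofbit, failbit stay silent

        int n;
        if (!(in >> n)) {
            // failbit: a format error, handled right at the call site
        }
    }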
That's an unfounded accusation, and a personal insult.

You're being oversensitive. You've been arguing that every
error should be handled by an exception. That's definitely a
categorical statement.
 

Tony

Prove that. (I'll play, I'm open to learning what I don't
know).

"You prove it later in your post."

OK, Mr. Black-n-White only in-the-C++-box thinker.
I don't believe that.

"Then you disagree with the inventor of the concept, and the
generally accepted meaning."

No skin off of my nose.
I don't care what it got contrived to (or in your parlance,
"evolved to"?). Teach me RAII JK and what I apparently don't
know about it. I'm of course warring against it, so be able.

"In sum, you've decided to reject a concept out of hand, without
even knowing what it is."

I know very well what RAII is. It's of limited design value unless you're
married to C++.
And your point was with that what? (??).

"That RAII has nothing to do with constructors."

Your example doesn't prove your above statement.
Yes there is. You are attempting to subordinate the term
'initialization', or overloading it. I'm "simple minded" (curb
all the jokes!), initialization is AKIN to zeroing integers.

"It can be. It can also be other things."

No argument there. You start doing RA in constructors though and the
exception "requirement" rears it's ugly head.
"Exceptions"? What are those? Are you trying to impose upon me
a paradigm that elevates exceptions to high status? I can't
construct an object without ... a mark on my forehead?! :p

"No. I'm trying to explain basic C++ to you, since you
apparently don't know it either."

Uh, someone is on the defamation defensive.

"Except that you can't. If you had any experience with
non-trivial applications, you'd know that."

I can, I have and I do. If YOU can't, that's your problem. Take a look
outside of the C++ paradigm box sometime if you are capable.
Oh, indeed they are marketed as such. Somewhere there is a
"suggestion" that C++ exceptions are a/THE general way to
handle errors.

"Not by me. I'm rather one who argues against overuse of
exceptions."

I wouldn't have expected that.

"I don't like the way they break control flow."

Me neither, and that's why I developed an alternative.

"But sometimes the alternatives are worse."

You say that as if you are privy to all of the alternatives and have
evaluated them. (I assure you that you have not).
C'mon now JK, you don't do large scale or permeating EH other
than via C++ exceptions. Admit it.

"Could you translate that into English, please. And check out
some of my postings here: exceptions are for a very limited set
of circumstances."

Well, maybe there is hope for you after all then.

"But when those circumstances are present,
they are better (or at least less bad) than the alternatives."

You don't know all the alternatives.

Tony
 

Paul Hilbert

"You prove it later in your post."
OK, Mr. Black-n-White only in-the-C++-box thinker.

Ad hominem.
No skin off of my nose.

Did you really want to ask a question or just release some hot air?
"In sum, you've decided to reject a concept out of hand, without
even knowing what it is."

I know very well what RAII is. It's of limited design value unless you're
married to C++.

Texas sharpshooter / Ad hominem.
"That RAII has nothing to do with constructors."

Your example doesn't prove your above statement.

It should in fact be "That RAII is not restricted to the subject of
constructors."
"It can be. It can also be other things."

No argument there. You start doing RA in constructors though and the
exception "requirement" rears it's ugly head.

Well, he actually revealed your dicto simpliciter, which /is/ a
(counter-)argument.
"No. I'm trying to explain basic C++ to you, since you
apparently don't know it either."

Uh, someone is on the defamation defensive.

Uh, someone does not have anything to spread but a lousy (and btw.
totally association-free) ad hominem.
"I don't like the way they break control flow."

Me neither, and that's why I developed an alternative.

An alternative you are not willing to share unconditionally (oh, it's really a
PITA to call your silly billion-dollar humbug a "condition").
If you are not willing to even share thoughts here, then why did you ask
us to do the same in the first place?
"But sometimes the alternatives are worse."

You say that as if you are privy to all of the alternatives and have
evaluated them. (I assure you that you have not).

Theoretically it is impossible /not/ to do. It is a matter of state of mind,
and since you threw nothing into the pot, you're really not in a position
to demand impossible perfection.
"Could you translate that into English, please. And check out
some of my postings here: exceptions are for a very limited set
of circumstances."

Well, maybe there is hope for you after all then.

Despite the fact that you were not able to explain what you wanted to
say, you're definitely showing /again/ your incapability of communicating
in a serious way.
"But when those circumstances are present,
they are better (or at least less bad) than the alternatives."

You don't know all the alternatives.

Then tell us your mysterious, glancing perfection of an alternative
concept or find some other way to waste the time of others.

Paul Hilbert
 

Tony

Paul Hilbert said:
Then tell us your mysterious, glancing perfection of an alternative
concept or find some other way to waste the time of others.

So you are interested in getting something for free (ain't gonna happen) or
offering some money? Hmm?

Tony

P.S. I'm not serious: it's not for sale at any price. I was just offering
the situation as one other in the mix: all SW dev is not this (largely
disastrous) "consultant" project stuff that gets "done". (Been there, done
that).
 
