C++ exception handling is defective


Zorro

Ian said:
The C standard only describes the behaviour of signals generated by
raise(); anything else is left to the implementation.

As I've stated many times on this thread, how an underlying hardware
error is communicated to an application is determined by the operating
environment. If the environment uses asynchronous signals, these can't
be communicated to the application's running context without that context
continuously checking for them. It is that requirement which adds an
unacceptable overhead.

OK, it may be able to control it, but at a huge cost.

I am asking you the following questions because you mentioned "at a
huge cost".

1. Does the C++ run-time library include that of C, or is it an entirely
different library?
2. In the example that you nicely pointed out that "Arithmetic
Exception" was thrown (much appreciated, indeed), who do you think
threw the exception? Linux OS or language run-time lib?

Regards.
Zorro.
 

Ian Collins

Zorro said:
Please trim signatures!

I am asking you the following questions because you mentioned "at a
huge cost".

1. Does the C++ run-time library include that of C, or is it an entirely
different library?

Considering the (modified) standard C library is a subset of the C++
standard library....
2. In the example that you nicely pointed out that "Arithmetic
Exception" was thrown (much appreciated, indeed), who do you think
threw the exception? Linux OS or language run-time lib?
It wasn't thrown, it was an asynchronous event delivered to the
application by the OS (in the case of Solaris, SIGFPE was delivered).
The same message will be emitted for a C program.
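Ian's point about the C standard can be illustrated: only signals generated by raise() have delivery defined by the language, and for those a handler may portably do little more than record the event. A minimal sketch (the SIGFPE here is raised explicitly; a SIGFPE from a real hardware divide fault is implementation territory):

```cpp
#include <csignal>

// The only program state a portable handler may write is a
// volatile std::sig_atomic_t flag.
volatile std::sig_atomic_t fpe_seen = 0;

void on_fpe(int) { fpe_seen = 1; }

// Raising SIGFPE via std::raise is defined by the C standard; a SIGFPE
// generated by an actual hardware fault is not, which is exactly the
// point being made above.
bool raise_and_observe_sigfpe() {
    std::signal(SIGFPE, on_fpe);
    std::raise(SIGFPE);
    return fpe_seen == 1;
}
```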
 

Grizlyk

Gavin Deane wrote:

1.
Correct, as long as you restrict yourself to the only applicable
definition of "exception". That is, the object thrown by a throw
statement in the C++ program.
I am speaking specifically about C++ exceptions - "the goal of the C++
exception mechanism" - not about all the other kinds of "exceptions" in a system.

2.
From the point of view of pure C++, where the definition of exception
that relates to your first sentence is applicable, there is no way of
knowing what code that tries to divide by zero will do. There is
certainly no reason to assume that a C++ exception will be thrown.

">there is no way of knowing what code that tries to divide by zero
will do"

a) that statement does not contradict my conclusion - "error code often
must post the error to an upper level".
If you think it does contradict it (that "error code must _not_ post the
error to an upper level"), then you simply disagree with the goal of
exceptions, due to "overhead" or something else.

b) any C++ exception condition can be raised manually by the programmer
like this:
if (error_condition) throw error_class();
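A minimal sketch of that manual pattern applied to division (the error type name is invented for illustration):

```cpp
#include <stdexcept>

// Hypothetical error type, standing in for the error_class above.
struct divide_by_zero_error : std::runtime_error {
    divide_by_zero_error() : std::runtime_error("divide by zero") {}
};

// Test the condition manually before the operation, and throw a C++
// exception instead of running into system-dependent behaviour.
int checked_div(int lhs, int rhs) {
    if (rhs == 0) throw divide_by_zero_error();
    return lhs / rhs;
}
```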

c) in the case of divide_by_zero, the expression "if (error_condition)" is
_hardware_ supported,
c-1) you can not test the condition before the operation in the program manually
c-2) you can not deny the _hardware_ test
c-2) the _hardware_ test gives no "overhead" if no error condition
occurred

But C++ does not guarantee that, after the _hardware_ "if (error_condition)"
has passed, the programmer can write "throw error_class()" instead of "exit()".


c-3) In order to provide the "guarantee", C++ must convert the system-dependent
error into a system-independent exception, with the help of the std lib for
example.

c-4) Ian Collins wrote before that even C does not guarantee signals
that were not raised by "raise()" in the program. He thinks that on some
abstract systems that could not support some properties, the
implementation of the std behaviour could be hard to do.
It is not true.
c-4-1) Remember CPUs without DIV and MUL opcodes. To implement the
operations "/" and "*", C/C++ compilers must call internal functions to
do the work. No one says that this is "overhead". It is a problem of the
concrete CPU, which can not do a std operation directly.

c-4-2) When we perform an ordinary arithmetic operation such as "divide"
and the program falls into "undefined behaviour" due to C++ fighting
against "overhead", it is wrong arithmetic operation behaviour. Wrong
because
a) the programmer can not test the error condition before the operation (due to
the system-dependent _hardware_ test)
b) the programmer can not post the error to an upper level if a _hardware_ error
occurred (due to the system-dependent _hardware_ form of the error message)

Trivial operations, such as arithmetic operations, must be _reliable_,
and C++ must assume that the CPU and OS will support recovery without
"overhead" after such trivial, reliable operations as arithmetic
operations. Otherwise all the "overhead" is a concrete CPU problem, like
the absent DIV and MUL opcodes.

To support trivial operations C++ must either
a) automatically throw std::div_by_zero_error and likewise for all other
overflows (+,-,*),
b) silently ignore the overflow, as for the "+" operation, or
c) introduce a std error flag: a/=b; if( std::divide_by_zero() ) throw
user_class();

To halt the program due to overflow on division is wrong C++ behaviour.

">There is certainly no reason to assume that a C++ exception will be
thrown."

You can not prove your opinion with the help of the standard while it is
the behaviour of the standard itself that is under discussion.
What is evident?

Because such trivial operations as arithmetic operations must be
_reliable_.
 

Zorro

Ian said:
Please trim signatures!

What are you referring to? I am trimming it, as you pointed out much
earlier!
Considering the (modified) standard C library is a subset of the C++
standard library....

So, I believe you are saying "yes". This should be clear because we can
handle "signals" in C++ just as we do in C. So something is checking
for such "signals" and looking for handlers. If none is found, the
program is aborted. Thus, are we going to incur a huge overhead, or is
the overhead already there?
Actually, by now I do not know the answer. I am asking a
question about what used to be the case a long time ago, and probably
is not so anymore.
It wasn't thrown, it was an asynchronous event delivered to the
application by the OS (in the case of Solaris, SIGFPE was delivered).
The same message will be emitted for a C program.

Since we can set a handler for SIGFPE, I used the term "throw". You are
using "deliver" and someone might use "raise". These all mean an
exceptional event was reported to the interested party (the run-time lib in
this case) to see if it would handle it. I will try to avoid using
"throw" except in a C++ context. My mistake (due to old age).
 

Ian Collins

Zorro said:
What are you referring to? I am trimming it, as you pointed out much
earlier!


So, I believe you are saying "yes". This should be clear because we can
handle "signals" in C++ just as we do in C. So something is checking
for such "signals" and looking for handlers. If none is found, the
program is aborted. Thus, are we going to incur a huge overhead, or is
the overhead already there?

Again, the overhead is coupling an asynchronous event to the
application's context. At least in the Unix world, signal handlers are
like interrupt service routines in that they run in a different context
from the running application.

On a single CPU system, the application is interrupted, the signal
handler runs, then the application continues. On a multi-core system,
the signal handler may run concurrently with the application. There
isn't any synchronisation between the two. The only way to achieve
synchronisation is to set a flag in the signal handler and read it in
the application code. This is the huge overhead I am referring to.
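A minimal sketch of that flag-based coupling (std::raise stands in for an asynchronous delivery, since that is the only portably defined way to generate the signal):

```cpp
#include <csignal>

// The handler can only set a flag; the application must keep checking
// that flag in its own context - the overhead described above.
volatile std::sig_atomic_t event_flag = 0;

void handler(int) { event_flag = 1; }

int steps_until_noticed() {
    std::signal(SIGINT, handler);
    int steps = 0;
    // The application's loop pays a check on every iteration; this
    // polling is the coupling cost.
    for (int i = 0; i < 1000; ++i) {
        if (i == 3) std::raise(SIGINT);   // stands in for async delivery
        ++steps;
        if (event_flag) break;            // the per-iteration check
    }
    return steps;
}
```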
Since we can set a handler for SIGFPE, I used the term "throw". You are
using "deliver" and someone might use "raise". These all mean an
exceptional event was reported to the interested party (the run-time lib in
this case) to see if it would handle it. I will try to avoid using
"throw" except in a C++ context. My mistake (due to old age).
I can relate to that!
 

Gavin Deane

Grizlyk said:
Gavin Deane wrote:

1.

I am speaking specifically about C++ exceptions - "the goal of the C++
exception mechanism" - not about all the other kinds of "exceptions" in a system.

Good. So why do you continue to bring up the concept of divide by zero,
which has no relation to C++ exceptions?
2.

">there is no way of knowing what code that tries to divide by zero
will do"

a) that statement does not contradict my conclusion - "error code often
must post the error to an upper level".
If you think it does contradict it (that "error code must _not_ post the
error to an upper level"), then you simply disagree with the goal of
exceptions, due to "overhead" or something else.

I agree that errors *detected by the program* often want to be handled
at a higher level than where they are detected, and that C++ exceptions
are a mechanism for doing that. However, that has nothing whatsoever to
do with divide by zero.
b) any C++ exception condition can be raised manually by the programmer
like this:
if (error_condition) throw error_class();
Agreed.

c) in the case of divide_by_zero, the expression "if (error_condition)" is
_hardware_ supported,

It might be or it might not. That is up to the hardware designer and is
most certainly outside the scope of the C++ language.
c-1) you can not test the condition before the operation in the program manually

What??? You might not be able to think of a way, but I can.

if (rhs != 0) { // test the condition before the operation manually
    lhs / rhs;
} else {
    // do something else
}
c-2) you can not deny the _hardware_ test

Assuming the hardware test exists, I agree there won't be a standard
C++ means to deny it.
c-2) the _hardware_ test gives no "overhead" if no error condition
occurred

That's up to the CPU designer and outside the scope of C++.
But C++ does not guarantee that, after the _hardware_ "if (error_condition)"
has passed, the programmer can write "throw error_class()" instead of "exit()".

Absolutely not, because C++ has no way of knowing what might happen.
The possibilities are infinite.
c-3) In order to provide the "guarantee", C++ must convert the system-dependent
error into a system-independent exception, with the help of the std lib for
example.

I contend that that is impossible, because the set of system dependent
behaviours that might occur is, as far as C++ is concerned, infinite.
Not because an infinite number of different CPU behaviours actually
exists in the real world, but because for C++ to assume only a finite
set of CPU behaviours it will support would mean, by definition, that
some of the infinite set is left out, and C++ has no right to do that.
c-4) Ian Collins wrote before that even C does not guarantee signals
that were not raised by "raise()" in the program. He thinks that on some
abstract systems that could not support some properties, the
implementation of the std behaviour could be hard to do.
It is not true.

If you disagree with Ian Collins, take it up with him if you haven't
done so already.
c-4-1) Remember CPUs without DIV and MUL opcodes. To implement the
operations "/" and "*", C/C++ compilers must call internal functions to
do the work. No one says that this is "overhead". It is a problem of the
concrete CPU, which can not do a std operation directly.

Absolutely. And that provides a nice example for an analogy.

Here is what appears to be going on:
Grizlyk: I want defined behaviour if my C++ program divides by zero
with built-in arithmetic types.
Me (and others): You can't have that because the behaviour of the
system and operating environment in the event of divide by zero is not
known. Every system and environment could be different, and new ones
could come along with their own new behaviour, and one of the goals of
C++ is that it can be used to program all of them.
Grizlyk: But some systems behave in a particular way so if C++ required
that behaviour, we could have defined behaviour of the program on
divide by zero.
Me (and others): That may be true, but then you could no longer use C++
to program any system that behaved differently and that would go
against the goals of C++.

Which is analogous to:
Grizlyk (hypothetically): I want division of built-in arithmetic types
to be a single machine instruction.
Me (hypothetically): You can't have that because the means used by the
system to carry out division is not known. Every system and environment
could be different, and new ones could come along with their own new
behaviour, and one of the goals of C++ is that it can be used to
program all of them.
Grizlyk (hypothetically): But some systems can do division in a single
instruction so if C++ required that behaviour, we could have what I
want.
Me (hypothetically): That may be true, but then you could no longer use
C++ to program any system that behaved differently and that would go
against the goals of C++.
c-4-2) When we perform an ordinary arithmetic operation such as "divide"
and the program falls into "undefined behaviour" due to C++ fighting
against "overhead", it is wrong arithmetic operation behaviour. Wrong
because
a) the programmer can not test the error condition before the operation (due to
the system-dependent _hardware_ test)

Here you go again!!! How is the following not testing the error
condition before the operation?

if (rhs != 0) { // test the condition before the operation manually
    lhs / rhs;
} else {
    // do something else
}
b) the programmer can not post the error to an upper level if a _hardware_ error
occurred (due to the system-dependent _hardware_ form of the error message)

True. But irrelevant, because, despite your assertions to the contrary,
the programmer can very easily detect the error condition *before* the
operation and handle it as required at that time. No need to carry out
the operation which means no need to encounter system dependent
behaviour.
Trivial operations, such as arithmetic operations, must be _reliable_,

Why? C++ and C (C++ inherited this issue from C) do not define
behaviour on divide by zero, which you seem to think is "unreliable".
And yet there is a world of C and C++ programmers who have been writing
successful programs with this "unreliability" for decades.
and C++ must assume that the CPU and OS will support recovery without
"overhead" after such trivial, reliable operations as arithmetic
operations. Otherwise all the "overhead" is a concrete CPU problem, like
the absent DIV and MUL opcodes.

By what authority may C++ assume such behaviour of the CPU and OS?
Every time C++ assumes some underlying system behaviour, C++ becomes
unusable on all systems that behave differently. That entirely
contradicts the goal of C++ to be implementable on all systems.
To support trivial operations C++ must either
a) automatically throw std::div_by_zero_error and likewise for all other
overflows (+,-,*),
b) silently ignore the overflow, as for the "+" operation, or
c) introduce a std error flag: a/=b; if( std::divide_by_zero() ) throw
user_class();

To halt the program due to overflow on division is wrong C++ behaviour.

What would you rather have happen? An attempt to divide by zero is
almost certainly a bug. I am very keen on programs halting and falling
over very obviously when a bug is encountered.
">There is certainly no reason to assume that a C++ exception will be
thrown."

You can not prove your opinion with the help of the standard while it is
the behaviour of the standard itself that is under discussion.

In the abstract, which is the relevant context when, as in this case,
the particular platform has not been specified, the language standard
is the only tool available with which to reason about the behaviour of
a program. In this case the language standard is quite clear. The
behaviour on divide by zero is undefined, and the standard tells you
what "undefined" means. It means you have stepped outside the scope of
the standard. So, still in the abstract, it is no longer possible to
infer *anything* about the behaviour of the program, which means that
both the following statements are not opinion, they are fact:

There is certainly no reason to assume that a C++ exception will be
thrown.
There is certainly no reason to assume that a C++ exception will not be
thrown.
Because such trivial operations as arithmetic operations must be
_reliable_.

*WHY* must they be "reliable"? Just because you want it that way? You
snipped the part of my previous post where I pointed out that you are
free to propose changes to C++ to the standards committee. I'll remind
you again of the two points I made:

1. Your proposal will change the way arithmetic operations on built-in
types work. The change must imply zero overhead. Code that does
arithmetic on built-in types must run just as quickly and use no more
memory on a compiler that implements your proposal than such code does
on today's compilers. If you don't like that, you need a different
language. C++ is probably not for you.

2. You have already seen in this thread the basics of a wrapper
template that can give *you* all the reliability you desire, while
leaving *me* perfectly free to use the "unreliable" built in types when
I don't need the protection of the wrapper. You could have that class
written, unit-tested and documented in a day and you would be able to
use it for ever after, and no change to the C++ language is required.
So if you are going to propose a change to the committee, I would
suggest you prepare an argument as to why your change is needed. That's
just my opinion as I have no connection to the committee. But if I were
on the committee, one of the first questions I would ask you would be
"You can implement arithmetic types that provide all the reliability
you want very simply as C++ stands today. Given that, why do you think
the language should change?".
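A minimal sketch of such a wrapper (the names here are invented for illustration and may differ from the class shown earlier in the thread):

```cpp
#include <stdexcept>

// Sketch of a checked wrapper: the divide-by-zero test is paid only by
// code that opts in; everything else keeps the raw built-in type.
template <typename T>
class checked {
    T value_;
public:
    explicit checked(T v) : value_(v) {}
    T value() const { return value_; }

    checked operator/(const checked& rhs) const {
        if (rhs.value_ == T(0))
            throw std::domain_error("divide by zero");
        return checked(value_ / rhs.value_);
    }
};
```

Code that wants the guarantee writes checked<int>(a) / checked<int>(b); code that does not keeps plain int and pays nothing.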

Gavin Deane
 

Grizlyk

Gavin Deane wrote:

too many words.

There is only one conclusion: you do not want any trivial operation to be
_reliable_, because
a) you think it is overhead
b) the standard does not require it
c) you do not need it

I take the point into account.
 

Lionel B

Lionel said:
What if x is zero? What if x is negative? What
if x is NaN what if x is infinite? What if ...?
Any number divided by itself equals 1. (In the case of x=0 you have the
inconsistency of 0 being equal to 1.)
If x=infinity, it's also equal to 1, on the basis of being divided by
itself. IOW, infinity = 0, 1 and itself simultaneously, which is a BIG
inconsistency.
???

And all this time we've thought mathematics were LOGICAL...[[gg]]

It is... you're not ;)
 

Gavin Deane

Grizlyk said:
Gavin Deane wrote:

too many words.

There is only one conclusion: you do not want any trivial operation to be
_reliable_, because
a) you think it is overhead

The only way I know of at the moment to get the reliability we are
talking about (defined behaviour on divide by zero) has overhead.
That's not my opinion. That's fact. Of course, there may be a way to
get your reliability with zero overhead, but I don't know it. If you
know it, please do tell. I and many others would be very interested.

Put it another way around: *if* I am prepared to sacrifice the
reliability, the operations can become faster. I want the faster option
available so I can choose, judging each case on its individual merits,
whether the overhead is worth paying in that case.
b) the standard does not require it

You've got that backwards. The standard allowing "unreliability" (i.e.
not defining the behaviour) is not a reason I have used in coming to
my conclusion. The standard allowing unreliability is a consequence of
the fact that many other C++ programmers, along with the creator of
C++ (I believe), share my preference for not forcing programmers to
accept overhead if they don't need the protection that comes with it.
c) you do not need it

Nearly. "I don't need it" should be "I don't *always* need it". That's
a crucial difference.

Sometimes I might well need the reliability you describe. And I know
how to get it when I want it. What I don't know is how to get it at
zero cost in terms of run time and/or memory use. And I don't want to
be forced to pay that cost on the occasions where I don't need the
benefit.

If someone comes up with a way of ensuring your reliability at zero run
time/memory cost, I'd be very pleased. However, what I don't want, and
what goes fundamentally against the philosophy of C++, is to be forced
to accept that reliability if it does come at a price.

Gavin Deane
 

Greymaus

Zorro wrote:
Yes, termination is in order as this is not going anywhere.

Oh, were you expecting some sort of Grand Conclusion from all this?
Sorry to disappoint you, Son... just a little stroll through some of the
inconsistencies in mathematics, if its rules aren't applied strictly...
 

Greymaus

David said:
Not even an infinite number of zeroes will give you anything greater than zero.
Yes, of course. NOW do you see why I called the situation inconsistent?
 

Marcus Kwok

Greymaus said:
x/0 = infinity, where x equals any number. Why infinity? Because it
takes an infinite number of zeroes to equal any value.
[[gg]]

Infinity in mathematics is not really a number, but a concept.

You cannot divide by 0, ever. However, the limit of x/y (as y
approaches 0 from the right, assuming that x > 0) is infinity.
Conversely, the reciprocal of infinity is zero.

Conversely, the limit of 1/x (as x approaches infinity) is 0.
 

Lionel B

Zorro wrote:
Yes, termination is in order as this is not going anywhere.

Oh, were you expecting some sort of Grand Conclusion from all this?
Sorry to disappoint you, Son... just a little stroll through some of the
inconsistencies in mathematics, if its rules aren't applied strictly...

So there's your Grand Conclusion, then: doing mathematics badly produces
nonsensical results. I think we can probably extend this principle beyond
mathematics.
 
