C++ Exceptions Cause Performance Hit?


David Abrahams

David Rasmussen said:
Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?

If you just "compile it without exceptions" it isn't the same program,
because you've just dropped all of the handling for those exceptional
conditions. If, however, you rewrite the exceptional condition
handling functionality using a different mechanism (e.g. error return
codes) it is quite possible -- even likely -- that it will run slower.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Gene Bushuyev

Francis Glassborow said:
But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised. For many applications
that is exactly what is wanted, however in time constrained systems it
can push the cost of a raised exception beyond what can be accepted.

If we had compilers that had a switch to minimise exception handling
costs when an exception was raised, my guess is that that would make
exceptions useful in hard RT code.

That is only a factor if:
a) resource clean-up takes relatively little time compared to the
compiler-generated code for stack unwinding, and
b) exceptions are thrown relatively frequently.

While a) is possible in some situations, b) indicates that the exceptional
paths are not that exceptional at all, but rather common. In that case an
application designer should probably reconsider the definition of what
constitutes an error.

- gene



Gene Bushuyev

George Neuner said:
You've entirely missed the point.

I don't think so.
First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.

No, it doesn't follow from what I have said. It doesn't make any sense to
try/catch if you can't do anything with the error. There are only certain
places where error recovery is possible. Either you get there forwarding
error codes from function to function and cleaning up resources along the
way manually, or an exception brings you there. There is no need to have
try/catch in every function, because it would mean that every function can
recover from error, and therefore doesn't need to throw at all.
Processing error codes leads to the same amount of manual stack unwinding as
if exceptions were thrown. The same amount of resource cleaning and
recovery still needs to be done. Compiler-generated code for stack
unwinding may add some overhead, which depends on the individual compiler.
Whether that overhead is significant or not depends on the amount of
resource clean-up that needs to be done and the frequency with which
exceptions are thrown. Maybe embedded applications are completely
different, but in the server data-crunching applications that I'm familiar
with, exceptions add nothing measurable to the program run-time.

- gene



Francis Glassborow

David said:
Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?

Yes, faster than the equivalent program written using other error
handling mechanisms. Of course, not doing any error checking will beat
both - until something goes wrong. :)

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects



Andreas Huber

George Neuner wrote:
[snip]
You've entirely missed the point.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.

Have you ever actually seen code ...
1. establish whether a called function returned an error code
2. establish whether the error code must be propagated out of the
calling function
3. establish whether error code propagation might take longer than the
remaining time until the next deadline
4. yield control to a different thread/function, which could then ensure
that the looming deadline is not missed

?

I'm asking because what you describe above only seems to make sense if
you go through steps 1-4 after *every* function call that can fail?

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.



Hyman Rosen

David said:
Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?

Yes. The reason is that without exceptions programs must check
for errors using return values. So even if there are no errors,
the program spends extra time verifying that there are no errors.

With exceptions, the normal execution path runs as if no error
checking is required. Meanwhile, the error handling code just
sits apart from the normal path, and if there is an exception
that error handling code is invoked.


George Neuner

George Neuner wrote:
[snip]
You've entirely missed the point.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.

Have you ever actually seen code ...
1. establish whether a called function returned an error code
Yes.

2. establish whether the error code must be propagated out of the
calling function
Yes.

3. establish whether error code propagation might take longer than the
remaining time until the next deadline

Not directly. But not every error needs to be propagated immediately
... the intervening function may have something of its own to finish
first.
4. yield control to a different thread/function, which could then ensure
that the looming deadline is not missed
Yes.


I'm asking because what you describe above only seems to make sense if
you go through steps 1-4 after *every* function call that can fail?

And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.

George

George Neuner

I don't think so.


No, it doesn't follow from what I have said.

I think it does.
It doesn't make any sense to
try/catch if you can't do anything with the error.

Tell that to Java fans. Please!
There are only certain
places where error recovery is possible. Either you get there forwarding
error codes from function to function and cleaning resources on the way
manually, or an exception brings you there. There is no need to have
try/catch in every function, because it would mean that every function can
recover from error, and therefore doesn't need to throw at all.

The manual option and the automatic option are not functionally
equivalent for reasons I've already articulated.

Processing error codes leads to the same amount of manual stack unwinding as
if exceptions were thrown. There is still the same amount of resource
cleaning and recovery needs to be done. Compiler generated code for stack
unwinding may add some overhead, which depends on the individual compiler.
Whether that overhead is significant or not depends on the amount of
resource clean up that needs to be done and the frequency with which
exceptions are thrown.
Maybe embedded applications are completely different, but in server
data-crunching applications that I'm familiar with, exceptions add nothing
measurable to the program run-time.

Again, it's not about the cumulative time - it's about having control.

Real time operations frequently have microsecond range tolerances.
Such things are usually handled directly by interrupt handlers.
However higher level code which monitors or sequences the operations
may still have millisecond or even sub-millisecond tolerances.
Despite this the program might be expected to accomplish several high
level operations simultaneously.

The total time to pop, say 3 frames, from the stack may be roughly the
same whether the functions return normally or an exception is thrown.
But in the exception case the time to return to the top frame is all
spent in a single indivisible action. In the return case the total
time is spread over 3 actions between which the programmer regains
control.


George

Andreas Huber

George Neuner wrote:
[snip]
And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.

I ask a bit more obviously: Is there such a decision point after *every*
function call? If yes, then your original statement that "the C++
exception mechanism doesn't work very well for real time coding" is
correct. If no, then I don't see why real-time code would not gain
something from using exceptions. The programmer would still have full
control: Whenever he wants to introduce a decision point he simply puts
a call (or multiple calls) into a try block. In the catch block he then
does the same as he would do when he gets back an error code from a
failing function.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.



George Neuner

George Neuner wrote:
[snip]
And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.

I ask a bit more obviously: Is there such a decision point after *every*
function call?

The strict answer is "no" - decision points are app specific -
function call sites are just obvious and convenient places to evaluate
the need for a decision. RT isn't a set of rules to follow - it's a
discipline of always being aware of the time consequences of code.

If yes, then your original statement that "the C++
exception mechanism doesn't work very well for real time coding" is
correct. If no, then I don't see why real-time code would not gain
something from using exceptions.

As I said in a previous post, I have no issue with appropriately tamed
exceptions. I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
... something I see not infrequently in conventional applications.


The basic RT coding skill that needs to be acquired is to *always* be
aware of potential timing issues in your code - while first writing
it. Once you've gotten yourself into a timing problem it can be very
difficult to get out of it without a lot of refactoring. RT code
requires careful upfront planning and continual awareness of the time
consequences of the code you are working on - the conventional
desktop/server technique of sketching code and then tweaking it by
profiling usually won't work.

C was designed to do system programming - it has a slight abstraction
penalty relative to assembler which is far more than made up for by
the gain in expressiveness. The more important point is that
virtually nothing is hidden from the programmer.

C++, OTOH, was designed for more conventional application programming
while still *permitting* system programming. Its additional
expressiveness [compared to C] is achieved largely through layered
abstractions which hide the implementation mechanisms from the
programmer. This leads to the vast majority of programmers not having
any clue about the implementation of language constructs or the
abstraction penalties paid for using them.

Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it - particularly in situations where no one is
around to teach techniques and explain why things can't or shouldn't
be done in the conventional way the programmer is accustomed to.

A decent C programmer who is new to RT can usually figure out what the
problem is and devise a way around it. My experience has been that
[even experienced] C++ coders attempting to do RT have much more
trouble discovering the cause of their problems and when faced with a
significant problem, the C++ programmer frequently has much more
difficulty resolving it because there are more hidden interactions to
consider. A stroll through comp.arch.embedded will show that I'm far
from alone in this observation.


George

Andreas Huber

George Neuner wrote:
[snip]
The strict answer is "no" - decision points are app specific -
function call sites are just obvious and convenient places to evaluate
the need for a decision. RT isn't a set of rules to follow - it's a
discipline of always being aware of the time consequences of code.

I suspected as much.
As I said in a previous post, I have no issue with appropriately tamed
exceptions.
Ok.

I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
... something I see not infrequently in conventional applications.

Why only two frames, not more? It seems that certain coding styles
(many, many functions doing very little each, or recursive functions)
combined with inlining could easily lead to C++ code where an exception
is propagated over say 20 frames but in the optimized machine code the
resulting stack-unwind doesn't do much more than call the exception's
ctor, reset the stack-pointer and call the exception handler. Using
error return codes in such a scenario could thwart inlining up to the
point of noticeably slowing your code, even for the case when an error
is propagated.

[snip]
Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it

I don't think the OP's question was naive. If I were told to follow such a
coding standard I would probably ask a very similar question. Call me
naive too, but I really have a problem when people talk about
performance/timing problems before having profiled/measured actual or at
least typical code. In the absence of a proper rationale for the
"no-exceptions" rule, such a coding standard pushes premature
optimization, which - as we all know - is the root of all evil.
Don't get me wrong, I don't have any problem following such a standard
if it contains conclusive evidence that using exceptions in a particular
environment will indeed cause an unacceptable performance hit. Not being
an RT programmer, I have yet to see such evidence.
- particularly in situations where no one is
around to teach techniques and explain why things can't or shouldn't
be done in the conventional way the programmer is accustomed to.

A decent C programmer who is new to RT can usually figure out what the
problem is and devise a way around it. My experience has been that
[even experienced] C++ coders attempting to do RT have much more
trouble discovering the cause of their problems and when faced with a
significant problem, the C++ programmer frequently has much more
difficulty resolving it because there are more hidden interactions to
consider.

Right. But that could easily be corrected with better education.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.



George Neuner

George Neuner wrote:
[snip]
I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
... something I see not infrequently in conventional applications.

Why only two frames, not more? It seems that certain coding styles
(many, many functions doing very little each, or recursive functions)
combined with inlining could easily lead to C++ code where an exception
is propagated over say 20 frames but in the optimized machine code the
resulting stack-unwind doesn't do much more than call the exception's
ctor, reset the stack-pointer and call the exception handler.

As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.

2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.

Using
error return codes in such a scenario could thwart inlining up to the
point of noticeably slowing your code, even for the case when an error
is propagated.

Sigh!

RT is *not* about "fast" code - it is about *predictable* code whose
time related behavior is known under all circumstances. Sometimes
code will be deliberately written to run slower than it could because
the faster version will not play well with other code in the system.

Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it

I don't think the OP's question was naive. If I was told to follow such a
coding standard I would probably ask a very similar question.

I have no issue with the question ... it is perfectly understandable
to me why it should be asked. I also wonder why the question was
asked *here* in Usenet rather than in the office, where, presumably,
the OP's RT code writing colleagues would know the technical reasons
for the company's practices and be able to explain them to newcomers.


George

Andreas Huber

George said:
As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.

Right, you did say that, but this is not apparent in your statement
quoted above.
2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.

What you say here only applies if the code that uses exception handling
does not contain as many decision points as the equivalent code using
error codes. Isn't that comparing apples to oranges?
Sigh!

RT is *not* about "fast" code - it is about *predictable* code whose
time related behavior is known under all circumstances. Sometimes

Your sighing is unwarranted. I know that RT is all about
predictability, i.e. being able to calculate absolute upper limits for
runtimes. I don't see how exception handling per se prevents that in
any way. Sure, some EH implementations may push such an upper limit
well beyond what you aim to guarantee. However, I'd expect that a
compiler for an RT system employs an implementation that does not favor
the non-exceptional paths so much that the exceptional ones become
painfully slow (as some desktop compilers do).
code will be deliberately written to run slower than it could because
the faster version will not play well with other code in the system.

You mean that the faster variant is non-predictable, right?

[snip]
I have no issue with the question ... it is perfectly understandable
to me why it should be asked. I also wonder why the question was
asked *here* in Usenet rather than in the office, where, presumably,
the OP's RT code writing colleagues would know the technical reasons
for the company's practices and be able to explain them to newcomers.

Presumably, the answer the OP got from the internal staff was
unsatisfactory. If the only reason is the alleged "performance hit" -
as the OP implies - then I wouldn't be satisfied either. This smells of
FUD...

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.



George Neuner

Right, you did say that, but this is not apparent in your statement
quoted above.


What you say here only applies if the code that uses exception handling
does not contain as many decision points as the equivalent code using
error codes. Isn't that comparing apples to oranges?

An exception has exactly 2 decision points - the throw point and the
catch point - control transfer between them is atomic.

Obviously you can add as many intermediate catch/rethrow points as
necessary, but IMO that defeats the purpose. Having to catch and
rethrow an exception multiple times adds back the code complexity that
non-local exceptions were intended to remove. Apart from structuring
my code a bit differently, what have I really gained by using them?

[Before you answer that, keep in mind that the question's context is
limited to RT coding. I don't question the utility of exceptions in
other areas of programming.]

Your sighing is unwarranted. I know that RT is all about
predictability, i.e. being able to calculate absolute upper limits for
runtimes.

Predictability in RT means an accurate presentation of a series of
time related events as viewed by an observer outside of the system.
Determining upper limits for execution time is only a part of it.

Timing constraints are defined by windows within which the code must
deliver some result - i.e. produce some value or take some action.
Windows can have both upper and lower boundaries and may be soft or
hard. For a hard window the result must be delivered within the
window - the result for a hard window is useless if delivered too
early or too late. A soft window makes allowances for the result to
be delivered late but specifies the preferred delivery situation.


It has been said that an RT program is the very hardest type of
program to write. RT programming, in general, has all the problems of
reliable concurrent programming and adds to them the requirement to
ensure predictable timed execution under all circumstances. On top of
that, many RT systems are used in safety critical applications, which
adds yet another dimension of complexity.


George

Thomas A. Horsley

As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.

But that makes no sense. The non-trivial destructor will get called
if I return as well. There are lots of ways to go out of scope
and make non-trivial destructors execute, exceptions are just one.
If that is your problem, you shouldn't object to exceptions, you
should object to destructors :).
-- email: (e-mail address removed) icbm: Delray Beach, FL |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+


Andreas Huber

George said:
An exception has exactly 2 decision points - the throw point and the
catch point - control transfer between them is atomic.

Obviously you can add as many intermediate catch/rethrow points as
necessary, but IMO that defeats the purpose. Having to catch and
rethrow exception multiple times adds back the code complexity that
non-local exceptions were intended to remove. Apart from structuring
my code a bit differently, what have I really gained by using them?

Since you earlier confirmed that you do not normally have a decision point
before/after every function call, EH would allow you to automate error
propagation *between* decision points. That is, in a program using EH
you would have a lot fewer try-catch blocks than if-then blocks in an
equivalent program using error codes (note that I assume that both
programs contain an equal number of decision points). IOW, there is
less code that solely propagates errors. Of course, not being an
RT programmer I can't judge whether the resulting code/complexity
reduction is significant for typical RT programs.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.



George Neuner

But that makes no sense. The non-trivial destructor will get called
if I return as well. There are lots of ways to go out of scope
and make non-trivial destructors execute, exceptions are just one.

Exceptions are the only way to simultaneously cause multiple call
frames to go out of scope.

If that is your problem, you shouldn't object to exceptions, you
should object to destructors :).

The context of this sub-discussion is real time programming ...
nothing being said here is relevent outside that context. Within that
context, I object to exceptions being an atomic operation whose
duration is only weakly and indirectly controllable.

The only way to control that duration is to manipulate the number of
intervening call frames between the throw and catch points and the
number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.


George

Francis Glassborow

George Neuner said:
The only way to control that duration is to manipulate the number of
intervening call frames between the throw and catch points and the
number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.

Agreed, but at a minimum RT programs should be tested with exceptions
being thrown. It isn't that hard to make small modifications to a
program or its data so as to force an exception condition. That should
be part of the test harness of any critical RT program. Note that,
because of the overhead of calling a dtor, it is easy to make an
apparently insignificant change that breaks the timing constraints of
any program, however it does its error handling.


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects



peter.koch.larsen

George Neuner skrev:
Exceptions are the only way to simultaneously cause multiple call
frames to go out of scope.



The context of this sub-discussion is real time programming ...
nothing being said here is relevent outside that context. Within that
context, I object to exceptions being an atomic operation whose
duration is only weakly and indirectly controllable.

An exception is not an atomic operation. Lots of stuff takes place from
the time an exception is thrown to the time it gets caught.
The only way to control that duration is to manipulate the number of
intervening call frames between the throw and catch points and the
number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.

One way to get control back while an exception is unwinding is to
insert a class whos destructor can take care of any decisions as to
what to do.

class decisionmaker
{
public:
    decisionmaker() : begun(now()) {}
    ~decisionmaker()
    {
        if (now() - begun > delta) { appropriate_action(); }
    }
private:
    time begun;
};

Notice that this will work in both the exceptional and normal case.

/Peter



Gene Bushuyev

Andreas Huber said:
Since you earlier confirmed that you do not normally have a decision point
before/after every function call, EH would allow you to automate error
propagation *between* decision points. That is, in a program using EH
you would have a lot fewer try-catch blocks than if-then blocks in an
equivalent program using error codes (note that I assume that both
programs contain an equal number of decision points). IOW, there is
less code that solely propagates errors. Of course, not being an
RT-programmer I can't judge whether the resulting code/complexity
reduction is significant for typical RT programs.

That's what I was also wondering. Whatever the reason for having a "decision
point" may be, it's easier done with exceptions than with error codes. It
should be even easier for RT code to guarantee measured execution with
exceptions than by calculating all the different conditional branches that
error codes create. For example,
// error codes are messy and error-prone
RetCode foo()
{
    // ...
    RetCode ret_code = bar1();
    sleep(10); // yield to another thread
    if (ret_code != success)
    {
        // ...
        return another_ret_code;
    }
    // ...
    ret_code = bar2();
    sleep(10); // yield to another thread
    if (ret_code != success)
    {
        // ...
        return yet_another_ret_code;
    }
    // ...
    return success;
}

// exceptions are much easier to handle
void foo()
{
    try
    {
        // ...
        bar1();
        sleep(10); // yield to another thread
        // ...
        bar2();
        sleep(10); // yield to another thread
        // ...
    }
    catch (...)
    {
        sleep(10); // yield to another thread
        throw; // rethrow
    }
}

Additionally, one cannot return error codes from operators, so I guess RT
programmers do not use those either. And if a function already has something
to return, error codes would either create a mess by being bundled with the
return value, or the function would have to return its result through one of
its parameters. All this creates messy, error-prone code, which can also be
slower. I wish the no-exception proponents provided a code example to
illustrate their point; otherwise it makes no sense to me.

- gene

