Assertions in principle


Kirit Sælensminde

Isn't the ability to do just that what we strive to do? To write
reliable software even in the face of the unexpected?

There may be a confusion over the meaning of "expected" here. I was
responding to Roland Pibinger's phrase "expected runtime scenario"
believing he meant that a contract violation by a third party
component was a normal ("expected") event and that your program is
inherently flawed if correct behaviour relies on third party
components not violating their contracts.

By definition, the only way [*] it is possible for a component to
violate its interface specification is if it has a bug. And you can't
build a reliable product if one of the components has a bug. If the
bug is in a component you produce, you need to fix the bug. If the bug
is in a third party component, you need to change supplier (or get the
current supplier to fix their product).

[*] Assuming the documentation fully describes the interface. But
then, if it doesn't, you don't have the information needed to
integrate that component into your product in the first place.

OK. I think I see where you're coming from. It seems you're talking
about a systematic fault due to a buggy design or implementation,
rather than a transient fault caused by hardware failure or data
corruption.

Personally I find that I never use assert, but I suspect this has much
to do with the sorts of system that I build and the historic un-
availability of any debugger. I do use exceptions for the purposes
that others use asserts though.


K
 

jeffc226

What do you mean by this statement? The C++ standard has no notion of
"debug mode."

I meant that it's implied, or that it's true in a pragmatic sense.
It is true that by defining the preprocessor macro NDEBUG, you make
assertions go away. But it's up to you whether to do that. If you don't
think it's a good idea, don't do it.

But NDEBUG is not necessarily just for assert. Assert uses NDEBUG,
but NDEBUG could be used for other things as well. It makes little
sense to me to essentially tell the preprocessor that we are in fact
compiling in debug mode when we are not, just to make assert start
working. This seems to me to be a form of coupling, to borrow the
software design term.
 

jeffc226

The ANSI/ISO C assert, inherited into C++, is disabled by the presence
of the NDEBUG macro. This is not the same thing as being ``debug-mode
only''.

Debug mode is whatever your compiler, build environment and local
conventions dictate debug mode is.

I don't think I can buy that. Wouldn't it be much more
straightforward to simply have a preprocessor flag that turns
assertions on and off? Then this can be set for debug builds, release
builds, both or neither. However having a flag that tells you you're
in release mode, and then turning off that flag in release mode, seems
silly to me. By linking assert to NDEBUG, I would say that for
practical purposes, being disabled by the presence of the NDEBUG macro
is actually the same thing as being "debug mode only".

This would be similar to a programmer defining a constant called ONE
and setting it to 0, because his program then starts working the
way he wants.
 

Kai-Uwe Bux

I meant that it's implied, or that it's true in a pragmatic sense.


But NDEBUG is not necessarily just for assert. Assert uses NDEBUG,
but NDEBUG could be used for other things as well. It makes little
sense to me to essentially tell the preprocessor that we are in fact
compiling in debug mode when we are not, just to make assert start
working. This seems to me to be a form of coupling, to borrow the
software design term.

But that coupling is entirely your doing. The standard introduces NDEBUG for
precisely one purpose: as far as the standard is concerned, NDEBUG only
turns on/off assert. It is not used for anything else in standard C++.

If you decide to use NDEBUG for something else, it is entirely your fault
when you find you painted yourself into a corner.


Best

Kai-Uwe Bux
 

Greg Herlihy

I feel this is going around in circles. As for a concrete example, I find
that g++ sometimes crashes on me. Usually, it dies with a friendly
invitation to send in a bug report. It even gives some file and line number
info. I am pretty certain that the developers left some sanity check
assertions in g++. I highly appreciate that for the following reasons:

a) I prefer g++ crashing over generating faulty code. If I had no indication
that there was a problem with the compiler and the generated program
did not behave as expected, I would start searching for a bug in my code. That
could be a tremendous waste of time.

The only problem apparent in this situation is that gcc has crashed.
Had the compiler not crashed, the compiler would have gone on to produce
either a correct or a faulty binary. Since the vast majority of the
time, gcc produces the former, it is much more likely that this crash
is preventing gcc from producing a correct build than it is somehow
avoiding an incorrect build.
b) I can use the file and line number info to check the bugzilla database
and see whether the bug has already been reported.

In other words, you can spend time debugging your C++ compiler instead
of your own C++ programs.
Do you think, the compiler would be a better program with those assertions
turned off?

I think that a better compiler would have the asserts turned off. The
point here is that a more thoroughly-tested compiler would see no
benefit from shipping with its asserts enabled. Shipping a program
with asserts enabled can only mean that the software has not been
adequately tested. So in this case, anyone who uses g++ is
effectively participating in the product's QA.

And while such an arrangement may be reasonable for a free product
like g++, it probably won't fly for those who program with a $500 C++
compiler. Customers who pay money for software do so with the
expectation that the software will be tested before it is shipped and
that it will run reliably after it has shipped. Leaving asserts
enabled in a shipping program has the completely opposite effect - the
asserts make the shipping program less reliable (that is, it is more
prone to fail), and all but eliminates the chances that the developers
will have tested the software before it shipped - and not just rely
on those who use the software to make up the difference.

Greg
 

Alf P. Steinbach

* Greg Herlihy:
I think that a better compiler would have the asserts turned off. The
point here is that a more thoroughly-tested compiler would see no
benefit from shipping with its asserts enabled. Shipping a program
with asserts enabled can only mean that the software has not been
adequately tested. So in this case, anyone who uses g++ is
effectively participating in the product's QA.

And while such an arrangement may be reasonable for a free product
like g++, it probably won't fly for those who program with a $500 C++
compiler.

As they say, that's not even wrong. $500-compilers do produce Internal
Compiler Errors. And g++'s free support is superior to e.g. Microsoft's
non-existent free support.

Customers who pay money for software do so with the
expectation that the software will be tested before it is shipped and
that it will run reliably after it has shipped. Leaving asserts
enabled in a shipping program has the completely opposite effect - the
asserts make the shipping program less reliable (that is, it is more
prone to fail), and all but eliminates the chances that the developers
will have tested the software before it shipped - and not just rely
on those who use the software to make up the difference.

Have you ever /used/ a commercial program?
 

Kai-Uwe Bux

Greg said:
The only problem apparent in this situation is that gcc has crashed.
Had the compiler not crashed, the compiler would have gone on to produce
either a correct or a faulty binary. Since the vast majority of the
time, gcc produces the former, it is much more likely that this crash
is preventing gcc from producing a correct build than it is somehow
avoiding an incorrect build.

I disagree with this estimation of likelihood: The failing assert shows that
some internal data structure is messed up or some precondition of a method
is not met, etc. In short, the program has run into a bug. I would be
rather surprised if ignoring this problem resulted in a correct build.

Keep in mind that your observation about gcc producing correct builds is a
conditional probability: you observe the likelihood that a non-crashing run
of gcc produces a correct build. That says next to nothing about the runs
of gcc that trigger an assert.

In other words, you can spend time debugging your C++ compiler instead
of your own C++ programs.

I don't debug gcc, I just file a bug report or not.

I think that a better compiler would have the asserts turned off. The
point here is that a more thoroughly-tested compiler would see no
benefit from shipping with its asserts enabled. Shipping a program
with asserts enabled can only mean that the software has not been
adequately tested. So in this case, anyone who uses g++ is
effectively participating in the product's QA.

So you are just saying that gcc is not tested thoroughly enough. Whether
that is the case or no, it appears to be quite immaterial: note that
turning off the asserts does not magically increase the amount of testing
that has been done before releasing the program.

And while such an arrangement may be reasonable for a free product
like g++, it probably won't fly for those who program with a $500 C++
compiler. Customers who pay money for software do so with the
expectation that the software will be tested before it is shipped and
that it will run reliably after it has shipped. Leaving asserts
enabled in a shipping program has the completely opposite effect - the
asserts make the shipping program less reliable (that is, it is more
prone to fail), and all but eliminates the chances that the developers
will have tested the software before it shipped - and not just rely
on those who use the software to make up the difference.

The asserts only prevent the program from completing a run whose result would be correct by coincidence.
Any run of a program that would have triggered asserts is to be considered
unreliable with regard to its result. The effect of turning off the asserts
in such a case is just that a bug is covered up. I agree, though, that
commercial vendors are more likely to have a financial interest in hiding
such problems.


Best

Kai-Uwe Bux
 

Gavin Deane

The only problem apparent in this situation is that gcc has crashed.
Had the compiler not crashed, the compiler would have gone on to produce
either a correct or a faulty binary. Since the vast majority of the
time, gcc produces the former, it is much more likely that this crash
is preventing gcc from producing a correct build than it is somehow
avoiding an incorrect build.

I don't buy that at all. Presumably the authors of g++ put those
assertions in to check for things that should not happen and that
therefore signify a bug when they do happen. I don't think anybody
writes asserts that fire when everything is going to plan do they? And
if you've hit a bug, all bets are off. The program state is something
other than its creator intended and therefore the behaviour of
subsequent code can not be predicted (without having the code
available to read). You're correct to say that the output could be a
correct binary, but incorrect in your assessment of the probability of
that happening. The likelihood of g++ producing a correct binary when
it does not encounter a bug in the process (very high) is irrelevant
to the likelihood of it producing a correct binary when it does
encounter a bug. To say the least, it would be courageous to rely on
the compiler's output.
In other words, you can spend time debugging your C++ compiler instead
of your own C++ programs.

No, you spend your time *reporting* a bug in your C++ compiler. Or you
can choose not to bother. Many software providers offer you a means to
report problems if you choose to. In the specific case of an open-
source project, you *can* also choose to debug it yourself if you want
to. But that's entirely up to you and entirely separate from choosing
whether to report the bug or not.
I think that a better compiler would have the asserts turned off. The
point here is that a more thoroughly-tested compiler would see no
benefit from shipping with its asserts enabled.

A *perfectly* tested compiler would see no benefit, but I don't think
such a thing exists or reasonably could exist.
Shipping a program
with asserts enabled can only mean that the software has not been
adequately tested.

Are you saying that you think there is a practicable amount of pre-
release testing that can guarantee a product like g++ will be 100% bug
free?
So in this case, anyone who uses g++ is
effectively participating in the product's QA.

And what do you think happens when any other piece of complex software
is deployed to a market that is orders of magnitude larger than the
pre-release testing team?
And while such an arrangement may be reasonable for a free product
like g++, it probably won't fly for those who program with a $500 C++
compiler. Customers who pay money for software do so with the
expectation that the software will be tested before it is shipped and
that it will run reliably after it has shipped.

As, I'm sure, do customers of purported stable releases of free
software. Nobody is suggesting g++ falls over every time it tries to
compile something, just that on the rare occasions it does fall over,
it's good that it does so in a way that lets you know you can't rely
on anything it just produced, and gives you the option to provide
useful feedback to the people who can fix it.
Leaving asserts
enabled in a shipping program has the completely opposite effect - the
asserts make the shipping program less reliable (that is, it is more
prone to fail),

Again, only if one writes an assert that can fire when nothing is
wrong. I don't think people do that.

Two assumptions:
1. Asserts will only fire if there is a bug in the program.
2. It is not possible to guarantee the program is 100% bug-free before
release.

If you agree with those two assumptions, the choice is between leaving
asserts in so something obvious and controlled happens if a bug is
encountered or taking asserts out and having something unpredictable
and (worse) possibly not immediately noticeable if a bug is
encountered. I don't see how the latter could be preferable. There's
no concept of being more likely to fail with asserts enabled. All that
is more likely is that the user knows about it. That's good.

If you don't agree with the assumptions above, please do explain why.
and all but eliminates the chances that the developers
will have tested the software before it shipped - and not just rely
on those who use the software to make up the difference.

If that sort of attitude is prevalent, the people managing the
development process are going to need a far more fundamental strategy
if they want to produce a well engineered product.

Gavin Deane
 

Ian Collins

I don't think I can buy that. Wouldn't it be much more
straightforward to simply have a preprocessor flag that turns
assertions on and off? Then this can be set for debug builds, release
builds, both or neither. However having a flag that tells you you're
in release mode, and then turning off that flag in release mode, seems
silly to me. By linking assert to NDEBUG, I would say that for
practical purposes, being disabled by the presence of the NDEBUG macro
is actually the same thing as being "debug mode only".
If you read the standard, NDEBUG is used to disable asserts, nothing
else. There is no mention of "debug mode". In most environments,
"debug mode" implies a lot more than enabling asserts.
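Concretely, the standard mechanism amounts to no more than this; whether the check survives compilation depends only on whether NDEBUG is defined when <cassert> is included (safe_divide is just an invented example function):

```cpp
#include <cassert>

// assert() expands to a real runtime check here; compiling with -DNDEBUG
// (or writing #define NDEBUG before including <cassert>) makes it expand
// to nothing at all. Nothing else in standard C++ reacts to NDEBUG.
int safe_divide(int a, int b) {
    assert(b != 0 && "divisor must be non-zero");
    return a / b;
}
```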
 

Greg Herlihy

* Greg Herlihy:

As they say, that's not even wrong. $500-compilers do produce Internal
Compiler Errors. And g++'s free support is superior to e.g. Microsoft's
non-existent free support.

I see you disagree with several points that are not in my post, but it would
be interesting to know whether you take issue with anything that I actually
wrote. After all, I offered no opinion whether a $500 compiler crashes any
less often than a free C++ compiler - only that someone who spends $500 on a
compiler (such as the Intel compiler) must be expecting something different
than they would expect from a free compiler.

The comparison between compilers illustrates a simple point: that gcc is one
of the few programs that can justify active asserts in a shipping program.
But the vast majority of programs - entire categories, such as computer
games - would see no benefit at all from such a practice.
Consider how many computer games ship with asserts enabled. Any estimate?
Any estimate greater than zero? Who would prefer to have a PlayStation II
game abort because of a failed assert? How likely is it that debugging the
assert failure would be more popular than playing the game itself?
Have you ever /used/ a commercial program?

I have used commercial programs - but I don't remember ever using a shipping
version of a commercial program that had its asserts enabled. And until
programmers start writing software that does not fail often enough, I doubt
I'll be seeing asserts in shipping commercial software anytime soon.

Greg
 

Alf P. Steinbach

* Greg Herlihy:
I see you disagree with several points that are not in my post, but it would
be interesting to know whether you take issue with anything that I actually
wrote. After all, I offered no opinion whether a $500 compiler crashes any
less often than a free C++ compiler - only that someone who spends $500 on a
compiler (such as the Intel compiler) must be expecting something different
than they would expect from a free compiler.

Insights into people's actual expectations are probably better discussed
in an ESP or psychology group.


[snip]
I have used commercial programs - but I don't remember ever using a shipping
version of a commercial program that had its asserts enabled. And until
programmers start writing software that does not fail often enough, I doubt
I'll be seeing asserts in shipping commercial software anytime soon.

The commercial version of MSVC is one commercial compiler (in the price
range you mention, the 7.1 version with documentation was about 13.000
NKr) that ships with asserts enabled -- it ICEs regularly.

For that matter, even Windows Explorer, the Windows GUI, ships with
asserts, or an equivalent mechanism, enabled, doing a report-and-restart
when it crashes.

For an honest and competent developer it's lunacy to turn off asserts.
 

Gavin Deane

Who would prefer to have a PlayStation II
game abort because of a failed assert? How likely is it that debugging the
assert failure would be more popular than playing the game itself?

You're still missing the point of assert. assert is used to identify
bugs. You do not reduce the number of bugs or the probability that a
user will encounter one just by removing your bug identification tool.
What you change is the outcome *if* a bug is encountered. You seem to
be suggesting:

Option 1: Leave bug identification in and annoy users when it
identifies one.
Option 2: Take bug identification out so that users can continue to
use the software even in the presence of bugs.

That's nonsense. The choice is:

Option 1: Leave bug identification in.
1a. If no bug is encountered, the software continues to work.
1b. If a bug is encountered, the user is made aware that their
software does not work and its behaviour cannot be relied upon to
do what they were trying to do.

Option 2: Take bug identification out.
2a. If no bug is encountered, the software continues to work.
2b. If a bug is encountered, the behaviour of the software is not
reliable but the user may have no way of knowing that and may be
relying on it anyway. That's bad.

When no bug is encountered, whether bug identification is still in
*makes no difference* to the user. The only difference is what happens
when a bug is encountered. Continuing to use the software in the way
they were *is not an option* for the user but if they are not informed
of the bug that might be exactly what they try and do. And the problem
may not manifest itself until some way down the line in whatever
process the use of that software was part of.

That is the advantage of leaving bug identification in. If you decide
that that advantage is appropriate, you *might* also decide that, at
the same time as telling the user that their software is broken, you
will provide information that they could feed back to you to help
diagnose the problem (or you might decide that simply identifying that
a bug has been encountered is sufficient). Having made all those
design decisions, you might find that assert provides the
implementation of bug identification that you need. Or you might need
to implement something else.

Gavin Deane
 

Roland Pibinger

The commercial version of MSVC is one commercial compiler (in the price
range you mention, the 7.1 version with documentation was about 13.000
NKr) that ships with asserts enabled -- it ICEs regularly.

But an ICE doesn't mean that all asserts are turned on in production
code.
For that matter, even Windows Explorer, the Windows GUI, ships with
asserts, or an equivalent mechanism, enabled, doing a report-and-restart
when it crashes.

When it crashes! Again, that doesn't mean at all that asserts are
turned on in production code. Quite the contrary! Probably a signal
handler for SIGSEGV is invoked. BTW, people interested in Microsoft's
usage of assert should read Steve Maguire's classic book 'Writing
Solid Code':
www.amazon.com/Writing-Solid-Code-Microsofts-Programming/dp/1556155514

For an honest and competent developer it's lunacy to turn off asserts.

LoL!
 

Alf P. Steinbach

* Roland Pibinger:
But an ICE doesn't mean that all asserts are turned on in production
code.

Right, it doesn't mean that the moon is gouda-cheese, but so what?

An ICE (Internal Compiler Error) is an assert.

For optimization some small parts of the code will instead (typically)
be very intensively tested. However, that is impractical for the
complete application. The asserts that are left on also serve to
hopefully catch invalid state caused by any remaining errors in the
intensively tested code.

When it crashes! Again, that doesn't mean at all that asserts are
turned on in production code. Quite the contrary! Probably a signal
handler for SIGSEGV is invoked.

Probably not, unless you're talking of a *nix version of Windows Explorer.

BTW, people interested in Microsoft's
usage of assert should read Steve Maguire's classic book 'Writing
Solid Code':
www.amazon.com/Writing-Solid-Code-Microsofts-Programming/dp/1556155514



LoL!

Hm.
 

Old Wolf

For that matter, even Windows Explorer, the Windows GUI, ships with
asserts, or an equivalent mechanism, enabled, doing a report-and-restart
when it crashes.

That's a separate background task that detects segfaults and then
takes over.
 

Adrian Hawryluk

Chris said:
But how do you actually "assert" that the code is correct? Running your
own tests, extended beta-testing etc. etc. is certainly a mandatory
thing, but it does not at all guarantee correct code in the sense that
it will act correctly under each and every aspect that might come along.

You could always try and build a theorem prover. In university, I was
involved in building one for the Eiffel language. It is pretty
intriguing stuff. Too bad it is far from going mainstream.
IMO the sailing analogy hits the point. Staying with the analogies -
even those competing in the TDF wear helmets for safety, but not all of
them. Of course one would disable asserts in time critical parts in the
production code, but this is not necessarily useful throughout the whole
program. Especially when implementing complex parser/compiler systems
they come in extremely handy in code that is already shipped to customers.

This makes sense, but an assert in this case is not very useful when a
customer calls and says, "your programme (he's a Canadian, eh? :D) went
down and it is reporting some gibberish on the screen." Even if s/he
gives you the info on the screen, it may not be enough, and usually they
hit ok and don't write down anything.

A good production assert (which I would really like) would dump
the call stack to a file, in a form that I could actually read. And
no, I've not had a good experience with core dumps, as they are not
available on all the systems I compile for, and the ones I have had are
junk, as I cannot find any documentation on how to read them.

An even better one would describe the state of the system (really
difficult to implement, though if core dumps were a standard and worked
as they ought, I'm sure it would be suitable).
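One hedged sketch of such a "production assert": on failure, format a human-readable record and append it to a log file for postmortem reading. format_failure and log_failure are invented names; a real version would also want to capture the call stack, which requires platform-specific code.

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Build a readable diagnostic record for a failed check.
std::string format_failure(const char* expr, const char* file, int line) {
    std::ostringstream out;
    out << "ASSERTION FAILED: " << expr << " at " << file << ":" << line;
    return out.str();
}

// Append the record to a log file so it survives the crash and can be
// read postmortem, instead of flashing gibberish on the screen that the
// customer hits OK on and never writes down.
void log_failure(const std::string& record, const std::string& path) {
    std::ofstream log(path, std::ios::app);
    log << record << '\n';
}
```

The customer can then mail you the log file, which gives you a starting point even when no one wrote anything down at the time of the crash.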


I've even seen Windoze apps display stack dumps, but no docs on how to
use them.

What is really needed is a standard way of representing the state of a
failed system.


Adrian
 

Adrian Hawryluk

Roland said:
On 2 Mar 2007 08:08:55 -0800, "[email protected]" wrote:
This is a very good explanation (source?). assert is for finding bugs
in a program. Input validation, error handling, ... is not done with
asserts. Therefore asserts should never be used in production
(release) code.
Sure


Yes, that's their purpose. Actually, assert should be renamed to
debug_assert.
Sure


Well tested software has so few bugs that it can be smoothly used by
the client. And what have interface changes and mismatched code to do
with asserts?

Sure, if you put your foot down and tell your manager that "I don't care
if the deadline is tomorrow, it ain't ready!" ;)

Library interface changes, if done incorrectly could cause a whole whack
of problems. Input interfaces may only be narrowed, never widened.
Output interfaces may be widened, never narrowed. You tell that to some
programmers and they will stare at you with a blank look, or worse argue
with you. :)
The bug crashes the program, assert calls abort which crashes the
program. So?

So, you keep the programme from corrupting persistent data causing a
cascade of other problems afterwards.
Of course, they test the program that is shipped, not a debug version.

Of course.
To find bugs in the program. Nothing more, nothing less. Together with
unit tests, functional tests, integration tests, ... it is one step to
produce a deployable program. Actually, assert can be seen as
automated test built into code.
Definitely.


That just doesn't make sense to me. You write code that consists of
(public) interfaces to encapsulated code. You validate the input.
Since software is (should be) reusable it's no problem (even desired)
that it is used in ways 'the original developer might not even have
imagined'. You check the input as part of the contract between your
code and the caller. When the input is within the allowed range your
code must work.

Given the set of valid inputs, your system (since you defined the valid
inputs) should always work.
Using a debugger is the most inefficient activity in programming. It
is entirely resistant to automation. asserts help you avoid using the
debugger.

Yeah, well that goes to design and the human factor. If the computer
did everything for us, what would we do? ;)

The debugger, though resistant to automation, is still required, even
when you use asserts since that is the only way to get access to the
system state.

Just the other day I wrote a parser with many asserts in it, but trying
to find why an assert failed wouldn't have been possible without the
debugger (and a bunch of debugging classes and functions that I wrote).
Did I tell you I *hate* pointers? :(
BTW, have you ever seen a program/library that prints/displays asserts
in production/release mode? I would be very reluctant to use that
program because the code quality most probably is very poor.

If the programme or library displays the failed asserts in
"production/release mode", I would be concerned. If they could be
redirected to a file which I could look at postmortem, I would be happy
as I would be able to talk to that company and resolve the issue in a
more timely fashion.

Asserts are the good, the bad and the ugly of programming. I can't wait till a
theorem prover can be properly built.

Man, this thread spans 4 days. It's interesting, but I'm reading the
rest of it later.


Adrian
 

Adrian Hawryluk

Greg said:
...Leaving asserts
enabled in a shipping program has the completely opposite effect - the
asserts make the shipping program less reliable (that is, it is more
prone to fail), and all but eliminates the chances that the developers
will have tested the software before it shipped - and not just rely
on those who use the software to make up the difference.

If the asserts are correct then the programme will not fail. If they
are incorrect, then someone has forgotten something (we are only human
after all) and it should be checked. Having this information may prove
to be invaluable as there are times that something _could_ have been
forgotten. Yes, it shouldn't have happened, but it did. Now the
customer would want to know how long before it is fixed. If you have no
starting point, you may be a long while making the customer even more
frustrated. (Man I could tell you stories...)


Adrian

--
========================================================
Adrian Hawryluk BSc. Computer Science
--------------------------------------------------------
Specialising in: OOD Methodologies in UML
OOP Methodologies in C, C++ and more
RT Embedded Programming
__------------------------------------------------__
-----[blog: http://adrians-musings.blogspot.com/]-----
'------------------------------------------------------'
This content is licensed under the Creative Commons
Attribution-Noncommercial-Share Alike 3.0 License
http://creativecommons.org/licenses/by-nc-sa/3.0/
=========================================================
 

Adrian Hawryluk

Kirit said:
On 4 Mar, 10:29, (e-mail address removed) (Roland Pibinger) wrote:
Is a contract violation a bug or an expected runtime scenario? IMO,
the latter.
How do you engineer a reliable product if you expect third party
components (software or hardware) not to adhere to their interface
specifications? If you expect them to do that you need to change
supplier.
Isn't the ability to do just that what we strive to do? To write
reliable software even in the face of the unexpected?
There may be a confusion over the meaning of "expected" here. I was
responding to Roland Pibinger's phrase "expected runtime scenario"
believing he meant that a contract violation by a third party
component was a normal ("expected") event and that your program is
inherently flawed if correct behaviour relies on third party
components not violating their contracts.

By definition, the only way [*] it is possible for a component to
violate its interface specification is if it has a bug. And you can't
build a reliable product if one of the components has a bug. If the
bug is in a component you produce, you need to fix the bug. If the bug
is in a third party component, you need to change supplier (or get the
current supplier to fix their product).

[*] Assuming the documentation fully describes the interface. But
then, if it doesn't, you don't have the information needed to
integrate that component into your product in the first place.

OK. I think I see where you're coming from. It seems you're talking
about a systematic fault due to a buggy design or implementation,
rather than a transient fault caused by hardware failure or data
corruption.

If there is a transient fault, it is the same /unless/ there is a
recovery interface to the device /or/ you are willing to write a
mechanism to shut down further use of that device. Either one
could get quite complex and would require an engineering decision as to
the pros, cons, ability to meet time-lines, etc.

If there is no recovery mechanism in place for the device, then you
/probably/ have lost _reliable_ contact with that device /permanently/.
There are, of course, exceptions to what I have just said, and they would
have to be dealt with on a case-by-case basis.
Personally I find that I never use assert, but I suspect this has much
to do with the sorts of system that I build and the historic un-
availability of any debugger. I do use exceptions for the purposes
that others use asserts though.

Using exceptions in place of asserts is a more controlled way of
implementing runtime checks like asserts, but in some cases may not buy
you much when some hardware fails and may add a lot of complexity to the
system. However, that depends on many factors and should be dealt with
on a case-by-case basis.
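The exceptions-in-place-of-asserts approach Kirit mentions can be sketched concretely. require, lookup and lookup_throws are invented names for illustration: the check stays active in release builds, but a violation unwinds in a controlled way instead of calling abort().

```cpp
#include <stdexcept>
#include <string>

// Invented helper: an always-on precondition check that throws instead of
// aborting, so a violation can be caught, logged or reported at an outer
// layer rather than killing the process outright.
inline void require(bool cond, const std::string& what) {
    if (!cond) throw std::logic_error("contract violation: " + what);
}

static const int kTable[] = {10, 20, 30};

int lookup(const int* table, int size, int i) {
    require(table != nullptr, "table must not be null");
    require(i >= 0 && i < size, "index out of range");
    return table[i];
}

// Small probe: reports whether a given call violates the contract.
bool lookup_throws(const int* table, int size, int i) {
    try { lookup(table, size, i); return false; }
    catch (const std::logic_error&) { return true; }
}
```

As Adrian notes, the unwinding machinery this relies on may itself be compromised when the fault is in hardware rather than logic, which is why it remains a per-system engineering decision.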


Adrian

 
