Testing Program Question


Immortal Nephi

I am unable to find good documentation that talks about the debug
macro. The exception may be used for testing purposes.

/* C Code */

void foo( int x )
{
#if _DEBUG
if( x > 10 )
std::cerr << “x value must not be greater than 10.” << std::endl;
#endif // _DEBUG

// Run normal execution
}

/* C++ Code */
static const bool TESTING = true;

void foo( int x )
{
if( TESTING )
if( x > 10 )
std::cerr << “x value must not be greater than 10.” << std::endl;

// Run normal execution
}

C++ Compiler has the _DEBUG option. _DEBUG option is turned off if
release mode is active and any _DEBUG macro blocks are ignored. I
have no idea how C++ does not use _DEBUG macro. I wonder that TESTING
condition will be removed if optimization is turned on when release
mode is active.
Please tell me more about how cerr and clog are useful. I do not want to
see error messages in the console window. I want to see the error message
in a dialog window when Microsoft Visual C++ .NET 9.0 triggers an
assertion.
I may not need exceptions, because step-by-step debugging is
simpler than the throw, try, and catch keywords.

/* C++ Code */
static const bool TESTING = true;

void foo( int x )
{
if( TESTING )
if( x > 10 )
std::_DEBUG_ERROR("x value must not be greater than 10.\n");

// Run normal execution
}
 

Nick Keighley

        I am unable to find good documentation that talks about the debug
macro.

I think you're using Microsoft-specific stuff here. Try MSDN or a
Microsoft newsgroup. They seem a pretty helpful bunch.
The exception may be used for testing purposes.

What? Do you mean "exceptions can be used for test purposes"? Is
assert() closer to what you want? Would writing your own version of
assert() help?

/* C Code */

void foo( int x )
{
#if _DEBUG
        if( x > 10 )
                std::cerr << “x value must not be greater than 10.” << std::endl;

er... that's not C!

#endif // _DEBUG

        // Run normal execution

}

/* C++ Code */
static const bool TESTING = true;

void foo( int x )
{
        if( TESTING )
                if( x > 10 )
                        std::cerr << “x value must not be greater than 10.” << std::endl;

        // Run normal execution

}

        C++ Compiler has the _DEBUG option.

C++ doesn't have such an option. Microsoft may.

 _DEBUG option is turned off if
release mode is active and any _DEBUG macro blocks are ignored.  I
have no idea how C++ does not use _DEBUG macro.

"how C++ does not use..." did you mean to say that?

 I wonder that TESTING
condition will be removed if optimization is turned on when release
mode is active.

Maybe. Why not use the preprocessor if you care that much? Or an
ASSERT macro?
        Please tell me more about how cerr and clog are useful.

They are streams to write error and log messages to.

 I do not want to
see error messages in the console window.

Divert them elsewhere then, or use your own custom stream, or some
other mechanism.

 I want to see the error message
in a dialog window when Microsoft Visual C++ .NET 9.0 triggers an
assertion.

Write your own assert.
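In that vein, a minimal sketch of a hand-rolled assert that shows a
dialog box on Windows instead of writing to the console (MY_ASSERT is a
made-up name; this assumes a Windows build with <windows.h> available):

#include <windows.h>   // MessageBoxA
#include <cstdlib>     // std::abort

#ifdef NDEBUG
#define MY_ASSERT( cond ) ((void)0)
#else
#define MY_ASSERT( cond )                                        \
    do {                                                         \
        if ( !(cond) )                                           \
        {                                                        \
            ::MessageBoxA( NULL, "Assertion failed: " #cond,     \
                           "MY_ASSERT", MB_OK | MB_ICONERROR );  \
            std::abort();                                        \
        }                                                        \
    } while ( 0 )
#endif

// Usage: MY_ASSERT( x <= 10 );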

        I may not need exceptions, because step-by-step debugging is
simpler than the throw, try, and catch keywords.

If you say so. Exceptions aren't meant to be used for debugging.
/* C++ Code */
static const bool TESTING = true;

void foo( int x )
{
        if( TESTING )
                if( x > 10 )
                        std::_DEBUG_ERROR("x value must not be greater than 10.\n");

        // Run normal execution
}

Yes? And so?
 

James Kanze

I am unable to find good documentation that talks about the debug
macro.

You mean NDEBUG? At the language level, the only thing it controls
is assert (but there's nothing to stop you from using it yourself).
The exception may be used for testing purposes.
/* C Code */
void foo( int x )
{
#if _DEBUG
if( x > 10 )
std::cerr << "x value must not be greater than 10." << std::endl;

Just a nit, but in C++, the character for delimiting a string
is ". And nothing else, even if it looks like a ”.

And there is no _DEBUG macro defined in the language. What it
does is up to you and the implementation. (Mainly the
implementation, since it's in the implementation namespace.)
#endif // _DEBUG
// Run normal execution
}
/* C++ Code */
static const bool TESTING = true;
void foo( int x )
{
if( TESTING )
if( x > 10 )
std::cerr << "x value must not be greater than 10." << std::endl;

// Run normal execution
}

Excuse me, but there's nothing C vs. C++ here. Both of the
programs are C++. And the debug macro is generally one of the
rare things you want to be a macro, so you can set or reset it
from the command line.
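For illustration, the same check under a made-up macro name
(MYAPP_DEBUG) that is set only on the compiler command line:

// foo.cpp
#include <iostream>

void foo( int x )
{
#ifdef MYAPP_DEBUG                // defined on the command line, not in code
    if ( x > 10 )
        std::cerr << "x value must not be greater than 10.\n";
#endif
    // Run normal execution
}

// Debug build:    g++ -DMYAPP_DEBUG -c foo.cpp
// Release build:  g++ -c foo.cpp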
C++ Compiler has the _DEBUG option.

The C++ compiler will allow pretty much any macro you care to
define.
_DEBUG option is turned off if release mode is active and any
_DEBUG macro blocks are ignored.

It's not an "option". It's a macro, that you define or not, as
you like. And there's no such thing as "modes" with regards to
C++.
I have no idea how C++ does not use _DEBUG macro. I wonder
that TESTING condition will be removed if optimization is
turned on when release mode is active.

Probably, but the content still has to be compilable C++.
Please tell me more about how cerr and clog are useful.

Fundamentally, they're just different pre-defined streams.
Classically, cerr wouldn't get redirected, even when cout was,
and by default, the output of cerr is unit buffered, but that's
about it.
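A tiny illustration of the difference (both streams go to standard
error by default; cerr is unit buffered, clog is fully buffered):

#include <iostream>

int main()
{
    std::cerr << "error: appears immediately (unit buffered)\n";
    std::clog << "log: may sit in the buffer until flushed\n";
    std::clog << std::flush;      // force the buffered output out
    return 0;
}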
I do not want to see error messages in the console window. I
want to see the error message in a dialog window when Microsoft
Visual C++ .NET 9.0 triggers an assertion.

So you'll have to implement something else. If you're not using
cout, then cerr probably isn't much use either.
I may not need exceptions, because step-by-step debugging is
simpler than the throw, try, and catch keywords.

It's not clear what types of errors you have in mind.
Programming errors are generally best handled by immediately
aborting, at least in production code, at least in most
application domains. Depending on circumstances, other errors
are best handled with exceptions or return codes.
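A rough sketch of that split (percentOf and loadConfig are invented
names):

#include <cassert>
#include <fstream>
#include <stdexcept>
#include <string>

// Programming error: a zero total is a bug in the caller, so we
// abort immediately via assert.
int percentOf( int value, int total )
{
    assert( total != 0 && "caller must not pass total == 0" );
    return value * 100 / total;
}

// Genuine runtime error: a missing file is an environmental failure,
// so we report it with an exception the caller can handle.
std::string loadConfig( std::string const& path )
{
    std::ifstream in( path.c_str() );
    if ( !in )
        throw std::runtime_error( "cannot open " + path );
    std::string text, line;
    while ( std::getline( in, line ) )
        text += line + '\n';
    return text;
}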
 

Immortal Nephi

Is _DEBUG_ERROR from the Microsoft specification? It is written in
the vector header; I think Microsoft wrote that header themselves. Does
the C++ Standard Library use cerr or clog to display error messages
instead of _DEBUG_ERROR?

I plan to write error messages in the debug version. The check
validates an argument of the function and warns or alerts the
programmer. For example, the argument should be in the range
between 0 and 10; if it exceeds 10, then an error message is
displayed on the screen.

The error-message procedure is similar to the out-of-range checks
of vector and string.
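That is essentially what vector::at() and string::at() do: they throw
std::out_of_range. A sketch of foo() in that style (the 0..10 range
comes from the example above):

#include <sstream>
#include <stdexcept>

void foo( int x )
{
    if ( x < 0 || x > 10 )
    {
        std::ostringstream msg;
        msg << "foo: argument " << x << " must be in the range 0..10";
        throw std::out_of_range( msg.str() );
    }
    // Run normal execution
}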
 

James Kanze

Programming errors are best handled by fixing them,

Programming errors are best handled by preventing them in the
first place: carefully specifying each function, and reviewing
the code which uses the function against the specification.
assert can help in this endeavour but *usually* only in debug
builds: release build asserts are best suited only where a
higher level of defensiveness is required, not for your
typical desktop application.

Asserts are a sort of life jacket, to prevent things from
fucking up too much when you screwed up. Removing them from
production code is akin to wearing a life jacket when practicing
in the harbor, but taking it off when you go to sea.
Sometimes, it's necessary to remove asserts for performance
reasons, and for some particular applications (games?), it's
probably preferable to remove them completely (in which case,
you do have to study what happens when they are
removed---there's no point in removing an assert if you're just
going to get a crash three statements later).

With regards to "typical desktop applications", I'm not too sure
what you mean by that. As I said, games are a possible
exception, where removing asserts makes sense. But which is
preferable for an editor: that it crash, leaving you to recover
using its last check-point save (5 seconds or 5 keystrokes
before the crash), or that it stumble on to overwrite all of
your data, and save that at a check-point before finally
crashing?
Genuine runtime errors (e.g. out of memory, disk space or user
input error) are best handled by exceptions or return codes.

Agreed. Any program which has an assertion failure on user
input error is broken, and the same probably holds for out of
memory, although it's debatable for some applications. (What
happens if the "out of memory" occurs because the runtime is
trying to grow the stack? Or if the only possible cause of "out
of memory" is a memory leak? Or simply that your OS doesn't
handle "out of memory" safely?)
 

Alf P. Steinbach

* Leigh Johnston:
I disagree, why do you think assert was designed to do nothing for
NDEBUG? Asserts were designed to be used as a debugging tool so should
do nothing in released production code *unless* a higher degree of
defensiveness is required. I disagree that only games are an exception,
I would also argue that such defensiveness is not required for a typical
office application for example (e.g. a word processor). The amount of
software which requires less defensiveness probably outnumbers the
amount of software that requires increased defensiveness. If you are
worried about cosmic rays hitting your ram chips then perhaps you should
use assert more! :)

I think one main problem is that NDEBUG is binary.

Some asserts are very costly (e.g. think about asserting that a list is sorted,
or an assert at the very lowest level of code): you want to only optionally
enable them, have them off by default unless you suspect some problem.

Some asserts have mostly acceptable cost: you want to leave them in if possible,
unless profiling or simple measurement shows that some specific ones are too costly.

Some asserts have essentially zero cost because they're simple checks at mid or
high level of call chains, you want to keep them enabled anyway as belt &
suspenders defensive programming.

I'm not sure, but three general assertion levels sounds about right (haven't
thought about this earlier).
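One way such levels might look (ASSERT_LEVEL and the three macro names
are invented; the level would be set on the command line, e.g.
-DASSERT_LEVEL=2):

#include <cassert>

#ifndef ASSERT_LEVEL
#define ASSERT_LEVEL 1                     // default: cheap and moderate checks
#endif

#define ASSERT_CHEAP( c )  assert( c )     // essentially zero cost: always on

#if ASSERT_LEVEL >= 1                      // mostly acceptable cost
#define ASSERT_NORMAL( c ) assert( c )
#else
#define ASSERT_NORMAL( c ) ((void)0)
#endif

#if ASSERT_LEVEL >= 2                      // very costly, e.g. "list is sorted"
#define ASSERT_COSTLY( c ) assert( c )
#else
#define ASSERT_COSTLY( c ) ((void)0)
#endif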



Cheers,

- Alf
 

Öö Tiib

Not sure I agree: you either want to be extremely defensive or not defensive
at all; somewhere in the middle sounds a bit pointless, as it might not catch
data corruption due to breaking an invariant.  For this reason I am happy
with the binary nature of NDEBUG. :)

Yet in the field you see tons of code that both asserts and works
around the problem, or asserts and throws, to avoid simply screwing up
and crashing. That is the majority of the better software out there.
That is the non-binary assert.
Programmers should get into the habit of adequately testing their software
prior to release (assert helps with this) and users should get into the
habit of regularly backing up their important data.

Hey, but there is a gray real world out there. What you think
programmers should do is not always what is actually rational.
Everybody on the team agrees that what you say is sort of correct,
*BUT* this product should be released immediately, with a serious
amount of technical debt and testing debt in it; otherwise it fails to
compete. For example, a customer tells you that he needs feature A now,
and that B and C may be broken; he can live with that for a few months.
He has his reasons: if he does not get A now, he will lose money. And
next time he will consider your competitors, who are more flexible.
 

Öö Tiib

It is possible to create robust software even if some desired features are
missing, i.e. make what you have robust.  Adding tons of release asserts
doesn't make your software more robust; it just causes crashes to maybe
become more predictable.

Yes. Unpredictable crashes (or hangs) are even less welcome to
customers than predictable and avoidable crashes. Being robust means
checking and handling situations anyway, whether an assert is present
there or not. That leaves assert with no purpose other than being a
pre-set unconditional breakpoint in the debugger. Everybody has seen
unconditional asserts like this:

assert( 0 && "Uh-oh, we should not really be here" );

Writing tests with full source-code coverage (to say nothing of full
execution-path coverage), doing what all the TODO: comments say, and
getting rid of all the warnings from the compiler, from a static
analysis tool (if one is used), and from XXX: comments are the things
that are often postponed in the gray real world for lack of time or
budget. Fixing known and reproducible assertion failures is also
sometimes postponed.

As a result, the maturity levels of released features are like: (A)
the feature works only in limited cases, (B) the feature does not work
in some limited cases, (C) there are no (known) cases where the feature
does not work, and (D) it has been around for years without needing
maintenance. The distinction between features that you have and
features that are missing is therefore itself dim.
 

James Kanze

I disagree, why do you think assert was designed to do nothing
for NDEBUG?

So that you can turn it off when you have to. The designed use
would be to use some application-specific macro, defined (or
not) in the command line, and then to wrap the (few) critical
functions in something like:

#ifdef PRODUCTION
#undef NDEBUG    // Just in case.
#define NDEBUG
#include <assert.h>
#endif

void critical_function()
{
    // ...
}

#undef NDEBUG
#include <assert.h>

Why do you think you're allowed to include <assert.h> multiple
times, with its meaning depending each time on the current
definition of NDEBUG?
Asserts were designed to be used as a debugging tool so should
do nothing in released production code *unless* a higher
degree of defensiveness is required.

That's your opinion. It doesn't correspond to the design of the
feature, nor good programming practices.
I disagree that only games are an exception,

They're certainly not the only exception. But such exceptions
are just that, exceptions. Most software should ship with
asserts turned on, *if* they can afford the performance impact.
(An awful lot of software is IO bound, so the performance impact
can be ignored.)
I would also argue that such defensiveness is not required for
a typical office application for example (e.g. a word
processor).

It depends on whether you consider trashing the user's work
acceptable or not. And what the other costs are---since it
costs nothing to leave it, assuming no performance problems, and
requires extra work to remove it, it's more or less stupid to
choose the less robust solution.
The amount of software which requires less defensiveness
probably outnumbers the amount of software that requires
increased defensiveness. If you are worried about cosmic rays
hitting your ram chips then perhaps you should use assert
more! :)

You should always use assert liberally. We're talking about
whether you should turn it off in production code. In other
words, whether you should take an additional, explicit action
(defining NDEBUG) to render the software less robust.
This is the usual argument put forward in favour of more
defensive programming but in my opinion having an assert after
every other line of code is overkill for a typical office
application as I have already said.

I've never seen asserts used that heavily. And how often or
where you write an assert is a different question---in general,
a minimum would be to assert preconditions for a function, at
least when that function is called from other modules, and
post-conditions for a virtual function, when the derived classes
may be called from other modules. (This means, of course, that
your virtual functions shouldn't be public. But that's a pretty
well established rule anyway.)
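A small sketch of that discipline (Account and its members are
invented): the public, non-virtual function asserts the precondition
and the postcondition around a private virtual.

#include <cassert>

class Account
{
public:
    virtual ~Account() {}
    void withdraw( int amount )
    {
        assert( amount > 0 && amount <= balance() );   // precondition
        int const before = balance();
        doWithdraw( amount );
        assert( balance() == before - amount );        // postcondition
    }
    int balance() const { return myBalance; }
protected:
    explicit Account( int initial ) : myBalance( initial ) {}
    int myBalance;
private:
    virtual void doWithdraw( int amount ) = 0;         // derived classes hook in here
};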
Running out of stack space is more likely to be due to a
programming error rather than a valid runtime error condition.

Probably in most applications, although it also depends somewhat
on the OS. (Linux and Solaris don't have "pre-allocated" stacks
of a specific size, so using too much heap can cause a stack
overflow in some specific cases.) I've worked on applications
with embedded recursive descent parsers, parsing various forms
of user input. How deep the stack needs to be depends on the
complexity of the expression given to it by the user. But in
most cases, even then, you can set some arbitrary complexity
limit, test it, and ensure that the stack is large enough to
handle it.
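A sketch of such an arbitrary limit in a recursive descent parser (the
names and the bound are invented):

#include <stdexcept>

class ExprParser
{
public:
    ExprParser() : myDepth( 0 ) {}
    void parseExpr()
    {
        if ( ++myDepth > maxDepth )        // hard limit, tested in advance
            throw std::runtime_error( "expression too deeply nested" );
        // ... parse a term; recurse into parseExpr() on '(' ...
        --myDepth;
    }
private:
    static int const maxDepth = 256;       // chosen so the stack is known to suffice
    int myDepth;
};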
 

James Kanze

* Alf P. Steinbach:
I think one main problem is that NDEBUG is binary.

Partially, at least. You can define and undefine it as often as
you want in code, and include <assert.h> after each change, but
1) it's overly verbose, so not used as often as it should be,
Some asserts are very costly (e.g. think about asserting that
a list is sorted, or an assert at the very lowest level of
code): you want to only optionally enable them, have them off
by default unless you suspect some problem.

Agreed. There are potentially asserts that you'd like to put in
that are expensive enough that you'd like to activate them only
if you actually had a problem.
Some asserts have mostly acceptable cost: you want to leave
them in if possible, unless profiling or simple measurement
shows that some specific ones are too costly.
Some asserts have essentially zero cost because they're simple
checks at mid or high level of call chains, you want to keep
them enabled anyway as belt & suspenders defensive
programming.
I'm not sure, but three general assertion levels sounds about
right (haven't thought about this earlier).

If you offer three, someone's going to want four :). But yes,
your analysis sounds about right.
 

Öö Tiib

It is a nonsense to say that virtual functions shouldn't be public: a public
virtual destructor is fine if you want to delete through a base class
pointer.  Bjarne Stroustrup's first virtual function example in TC++PL is a
public Employee::print() method, I see no problem with this.  You are
probably thinking of virtual functions which are called as part of some
algorithm implemented in a base class, such virtual functions need not be
public as it makes no sense for them to be but it does not follow that this
is the case for all virtual functions.

Yes, virtual functions may be public. It is better when they are not,
since otherwise such a function has two jobs to do: it has to specify
the external interface of the class, and it has to specify an inner
hook for derived classes to modify the behavior of the base. The
missions of hook and interface are both critical, so burdening one
function with both goes against the good practice of separating
concerns.

Therefore policies against virtual functions in the public interface
are more and more common. There are even special prefixes for virtual
functions, like 'do', demanded by some style policies I have seen. This
does not concern destructors, since destructors all have the same
interface and get a separate paragraph in all style policies anyway.
 

Öö Tiib

It is not the end of the world to move from public virtual to
private/protected virtual if the situation changes to require it:

Iteration #1 (Initial design)

class foo
{
public:
    virtual void xyzzy();
};

class bar : public foo
{
public:
    virtual void xyzzy();
};

Iteration #2 (need to do new stuff before virtual function is called)

class foo
{
public:
    void xyzzy() { /* new stuff */ do_xyzzy(); }
protected:
    // forces clients to rewrite their code (provide a new override)
    virtual void do_xyzzy() = 0;
};

class bar : public foo
{
protected:
    virtual void do_xyzzy();
};

Yes, it is exactly that simple, and it gives a lot of benefits. The
base class is in complete control of its interface and contract: it can
check pre- and postconditions, inject instrumentation, and so on, in a
single place. The implementation is also free to take its natural form,
and no compromises are needed in the interface. If it needs to change,
then you change, add, or split the non-public virtuals; the public
interface stays stable and non-virtual.

All that yo-yoing with virtuals is one of the biggest sources of
issues in C++, so this 'non-virtual interface' idiom should make a lot
of sense if you think about it a bit. Also, as James Kanze suggested, it
reduces the need for defenses all over the place by moving some of them
to the front. ;)
 

Öö Tiib

You misunderstand me, I am advocating that virtual interfaces are fine in
themselves and it is not too difficult to mutate said interfaces as
required.  Obviously I am in favour of keeping public interfaces as simple
and minimalist as possible and quite often I have virtual methods which are
protected/private when they form part of a class's implementation rather
than its public interface.

The following are fine for example:

class interface
{
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class widget
{
public:
    virtual void draw() const;
};

It is wrong to say that virtual functions must not be public.

/Leigh

Yes, a virtual interface is not wrong. All I say is that having the
interface and the implementation separated helps a lot and brings no
downsides whatsoever. This is one good policy for drawing the line and
separating interface from implementation. Lots of people, experts of
the trade, establish that policy within their teams and follow it. It
is good from the start, and it is hard to find solid reasons not to in
most cases.

You can establish the interface and start to code. Others can start to
write code that uses that interface. If it happens that you need to
split the processing into several phases for more flexibility, then you
do it; it is an implementation detail:

class Widget
{
public:
    int draw( Gadget& ) const;
    bool isDone() const;
    // ...
private:
    virtual int doDrawPhase1( Gadget& ) const = 0;
    virtual int doDrawPhase2( Gadget& ) const = 0;
    virtual bool doIsDone() const = 0;
    // ...
};

It is kind of a win-win-win situation. The same number of instructions
is produced if you inline draw() and isDone(), but you have a
checkpoint within your base interface.
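For example, the non-virtual draw() above can check the contract once
for all derived classes (an illustrative fragment; the assertions are
invented):

#include <cassert>

int Widget::draw( Gadget& g ) const
{
    assert( !isDone() );                   // precondition, checked in one place
    int const phase1 = doDrawPhase1( g );
    int const phase2 = doDrawPhase2( g );
    return phase1 + phase2;                // combining the phases is illustrative
}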
 

Öö Tiib

I consider myself to be an expert of sorts (well competent at least) and I
am sure you do think the same of yourself.  I've been programming C++ since
1993 and have worked on large and small projects and never really had a
problem with virtual functions being public.  I will not stop this practice
now because a *subset* of the C++ community advocates against it.  If the
*entire* C++ community advocates against it then maybe I would think
different.

My position on the issue is certainly not so strong as to forbid
public virtuals whatsoever. I have experienced clear benefits from
it ... but where everybody expects a public virtual returning a
covariant type, like for example clone(), I implement it, and do not
implement some sort of cloning factory such as Andrei Alexandrescu
suggests somewhere, to avoid breaking the idiom. In clear corner cases
it is better not to be too pedantic.
 

Öö Tiib

Alexandrescu uses public virtual functions in MC++D also, look at his take
on Visitor Pattern for example. :)

/Leigh

Of course. It is easy to see why. The Visitor pattern is a way to
implement double dispatch using a special-purpose interface. The
purpose of accept() and visit() is defined and not likely to change,
so those concerns are already settled and there is no point in
separating them.

The only reason to use the private virtual idiom there might be to
simplify instrumenting that interface, for example to do deep
exception-safety unit tests on all the different visitors visiting.
That might be a good idea for mission-critical products.
 

James Kanze

Sorry but I disagree with your opinion.

For the most part, I'm not stating opinion. Look closely at the
design of assert, and the guarantees it gives you.
Different software has different requirements regarding how
defensive you should be. A typical application should not be
using assert to terminate in a released product.

A typical application in what domain? It's clear that any
critical software must terminate as soon as it is in doubt. And
I've pointed out why this is true for an editor (and the same
logic also holds for things like spreadsheets). I also
recognize that there are domains where it isn't true. I'm not
sure, however, what you consider "typical".
A released product should be using exceptions for errors which
are valid during runtime. Assert is used for catching
programming errors, not valid runtime errors or bad user
input.

There's no disagreement on that.
Programmers should get into the habit of adequately testing
their software prior to release (assert helps with this) and
users should get into the habit of regularly backing up their
important data.

It would help if you'd read what was written, before disagreeing
with it. No one is arguing against testing. And it's not the
user who's backing up his data, it's the editor---all of the
editors I know today regularly checkpoint their data in case
they crash. The whole point is that if there is a programming
error, and the editor continues, it's liable to overwrite the
checkpoint with corrupt data (or the user, not realizing that
the data is corrupt, is liable to overwrite his own data).
Using assert liberally is fine (I have no problem with this)
but this (in most cases) is an aid during development only,
rather than creating hundreds of crash points in a released product
(in most cases).

If you've tested correctly, leaving the asserts active creates
zero crash points in the released product. And if you've missed
a case, crashing is the best thing you can do, rather than
continuing, and possibly destroying more data.

[...]
I am sorry but you are wrong, you should either be extremely
defensive or not defensive at all, somewhere in-between is
pointless.

When someone starts issuing statements as ridiculous as that, I
give up. It's not humanly possible to be 100% defensive.
Extremely defensive means at least one assert at some point
after call a function which has side effects (which could be a
precondition check before a subsequent function call). This
is overkill for typical desktop applications for example.
It is a nonsense to say that virtual functions shouldn't be
public: a public virtual destructor is fine if you want to
delete through a base class pointer.

The destructor is an obvious exception. But most experts today
generally agree that virtual functions should usually be either
protected or private.

Again, there are exceptions, and I have classes with only public
virtual functions. (Callbacks are a frequent example.) But
they're just that: exceptions.
Bjarne Stroustrup's first virtual function example in TC++PL
is a public Employee::print() method, I see no problem with
this.

For teaching, neither do I. (For that matter, a print function
might be an exception. It's hard to imagine any reasonable pre-
or post-conditions.)
You are probably thinking of virtual functions which are
called as part of some algorithm implemented in a base class,
such virtual functions need not be public as it makes no sense
for them to be but it does not follow that this is the case
for all virtual functions.

No, I'm not thinking of the template method pattern. I'm
thinking of programming by contract.
Writing code without some bound on stack growth is incorrect
in my opinion.

Yes, but only because we can't catch the overflow in the same
way we can catch bad_alloc. Otherwise, the principle is the
same.
A compiler should not stack fault when parsing source code of
any complexity for example, it should either be non-recursive
(be heap bound) or have some hard limit. A stack fault is not
acceptable, running out of heap is acceptable and can be
signalled via an exception.

Most people would disagree with you in general, concerning a
compiler. Why should it have an artificial hard limit?

In fact, the fact that running out of stack cannot be gracefully
caught means that we do have to do something. But don't confuse
the cause and the effect.
 

James Kanze

[...]
Read GoF: even though it includes the Template Method most of
the examples for the other design patterns use public virtual
functions.

Would they do the same today? In 1995, most people hadn't even
heard of private virtual functions.
 

Alf P. Steinbach

* James Kanze:
In fact, the fact that running out of stack cannot be gracefully
caught means that we do have to do something. But don't confuse
the cause and the effect.

Another angle is to focus on prevention rather than detection.

A semi-portable way to check for available stack space is to use the de facto
standard 'alloca', which it seems has the same behavior on a wide range of
platforms, returning NULL if there isn't enough space.

Curiously, I've never used that solution; what it solves seems to not be that
big a problem in practice, i.e. it's probably a solution looking for a problem?

And I can imagine that on some systems the alloca that I mentioned here might
cause virtual allocated address space to be mapped to actual memory.

Which might slow down things.


Cheers,

- Alf
 

Alf P. Steinbach

* Alf P. Steinbach:
* James Kanze:

Another angle is to focus on prevention rather than detection.

A semi-portable way to check for available stack space is to use the de
facto standard 'alloca', which it seems has the same behavior on a wide
range of platforms, returning NULL if there isn't enough space.

Curiously, I've never used that solution; what it solves seems to not be
that big a problem in practice, i.e. it's probably a solution looking
for a problem?

And I can imagine that on some systems the alloca that I mentioned here
might cause virtual allocated address space to be mapped to actual memory.

Which might slow down things.

Oh, discovery:

The reason that I've never used the alloca technique that I mention above seems
to be that alloca *is not* consistently defined on different platforms.

<url: http://www.mkssoftware.com/docs/man3/alloca.3.asp>
guarantees 0 on error,

<url: http://msdn.microsoft.com/en-us/library/wb1s57t5(VS.71).aspx>
guarantees a "stack overflow exception" on error, and

<url: http://www.kernel.org/doc/man-pages/online/pages/man3/alloca.3.html>
says the error behavior is undefined.

Perhaps the group can benefit from this info.


Cheers,

- Alf
 
