Good way to write integer overflow checks?

  • Thread starter Alf P. Steinbach
  • Start date

Alf P. Steinbach

This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
    -> bool
{
    CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
    CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

    typedef unsigned long Unsigned_long;
    auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
    return
        (r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
        (r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
        (r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
        (r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.


Cheers,

- Alf
 

Victor Bazarov

This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.

What is the code supposed to do? Are you checking if the size of 'r' is
not going to under- or overflow if you add dx and dy to it? Isn't this
the usual way to check if (b+a) is not going to overflow:

if (INT_MAX - b > a) // a+b will NOT overflow
..
?

V
 

Alf P. Steinbach

This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.

What is the code supposed to do? Are you checking if the size of 'r' is
not going to under- or overflow if you add dx and dy to it?
Yes.


Isn't this the usual way to check if (b+a) is not going to overflow:

if (INT_MAX - b > a) // a+b will NOT overflow

Don't know, but that expression has formally Undefined Behavior when b
is a negative signed integer, since then the checking itself overflows.

So, at least if one's not going to rely on two's complement form
wrap-around (g++ can be made to trap on that), I /think/ it would yield
more verbose code, possibly also more complicated?
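
For comparison, the usual UB-free form of Victor's pre-check branches on the sign of the addend, so the subtraction in the test itself can never overflow. A minimal sketch (my code, not from the thread, and untested against the gdi::Rect usage):

```cpp
#include <limits>

// Pre-check whether a + b is representable in int, without invoking
// signed-overflow UB: when b >= 0, max() - b cannot overflow; when
// b < 0, min() - b cannot overflow.
inline bool can_add_int( int const a, int const b )
{
    return b >= 0
        ? a <= std::numeric_limits<int>::max() - b   // a + b <= INT_MAX
        : a >= std::numeric_limits<int>::min() - b;  // a + b >= INT_MIN
}
```
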


Cheers,

- Alf
 

Ian Collins

Alf said:
This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool

Alf,

Why do you insist on using this form rather than the shorter, more
conventional form?
 

Alf P. Steinbach

Alf said:
This code is in support of some API functionality:

Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool

Why do you insist on using this form rather than the shorter, more
conventional form?

Uhm, the word "insist" incorrectly indicates some kind of opposition to
the adoption of `auto`. ;-)

Anyway, there are many good reasons, but for me the most important is a
consistent visual layout:

* function name at fixed place.
* return type visually separated.

Which means that the human eye finds it much easier to just scan through
the code to find a name or note the names, or inspect return types, with
no need to read and analyze the text.

Second in importance for me, there is the matter of consistency. Instead
of using two different declaration syntaxes for the same kind of
function, with the choice depending on whether one needs some `decltype`
for the function result, I use just ...

* one single declaration syntax for all non-void functions.

This is the main reason for using the syntax also for `main`.

I use the old form for "void" functions, consistent with the above
points, and with much the same rationale as for the `procedure` versus
`function` distinction in Pascal. That is, a declaration syntax
distinction between routines that produce expression values, versus
those that don't. I'm not sure how USEFUL that distinction is, but it
feels right.


Cheers & hth.,

- Alf

PS: For readers who are unfamiliar with C++11-stuff, here's Scott Meyers
with a short blog posting about `decltype` and `auto`: <url:
http://scottmeyers.blogspot.no/2013/07/when-decltype-meets-auto.html>.
 

Victor Bazarov

This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.

What is the code supposed to do? Are you checking if the size of 'r' is
not going to under- or overflow if you add dx and dy to it?
Yes.


Isn't this the usual way to check if (b+a) is not going to overflow:

if (INT_MAX - b > a) // a+b will NOT overflow

Don't know, but that expression has formally Undefined Behavior when b
is a negative signed integer,

Yes, but that is easy to check itself, is it not?
since then the checking itself overflows.

It won't if you don't do *that particular checking* for negative b, now,
will it?
So, at least if one's not going to rely on two's complement form
wrap-around (g++ can be made to trap on that), I /think/ it would yield
more verbose code, possibly also more complicated?

You mean, more difficult to read and understand than your fiddling with
the bits? Wrap it into smaller functions, name them appropriately, and
you shouldn't have any problem... Or don't.<shrug>

If you wanted an argument, you should have said so. Although,
admittedly, upon re-reading your message, I ought to have guessed as much
from the use of "gooder" and "bestest"... You got me.

V
 

Alf P. Steinbach

Alf said:
This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.

Yes, it can be. A more generic algorithm, for any signed int type:

template<typename signed_int_t>
signed_int_t add_with_overflow_check(signed_int_t a, signed_int_t b)
{
signed_int_t c=a+b;

if (b > 0)
{
if (c < a)
do_whatever_you_want_when_you_overflow();
}
else
{
if (c > a)
do_whatever_you_want_when_you_overflow();
}

return c;
}

Your C++ homework assignment consists of three parts:

1. Explain why the above code works

Hm, that's a tough one!

I agree that something in this direction is how things ideally should
work, in part because I think that the optimizations that are enabled by
the formal signed overflow UB are marginal, often radically
unexpected, and generally easy to achieve by more explicit means.

However, with the C++ standard as it is (as of C++11) this code is not
formally portable, and worse, it /should not be/ practically portable to
environments where "-ftrapv" is used with the g++ compiler. So, I tried
your code with MinGW g++ 4.7.2, and the code appears to detect the
overflow even with the "-ftrapv" option specified, where the overflow
should be trapped. Apparently that is a COMPILER BUG. :-(


Code:
#include <stdexcept>

void do_whatever_you_want_when_you_overflow()
{
    throw std::runtime_error( "integer overflow" );
}

template<typename signed_int_t>
signed_int_t add_with_overflow_check(signed_int_t a, signed_int_t b)
{
    signed_int_t c = a + b;

    if (b > 0)
    {
        if (c < a)
            do_whatever_you_want_when_you_overflow();
    }
    else
    {
        if (c > a)
            do_whatever_you_want_when_you_overflow();
    }

    return c;
}

#include <iostream>
#include <stdlib.h>
using namespace std;

auto main() -> int
{
    try
    {
        int const x = add_with_overflow_check<int>( int(-1/2u), 1 );
        cout << x << endl;
        return EXIT_SUCCESS;
    }
    catch( exception const& x )
    {
        cerr << "!" << x.what() << endl;
    }
    catch( ... )
    {
        cerr << "!unknown exception." << endl;
    }
    return EXIT_FAILURE;
}

Code:
[D:\dev\test]
version g++
g++ (GCC) 4.7.2

[D:\dev\test]
g++ foo.cpp -fno-wrapv -ftrapv

[D:\dev\test]
a
!integer overflow

[D:\dev\test]


Apparently the lack of any trapping here means that MinGW g++ is
UNRELIABLE in this respect, i.e. that using "-ftrapv" with g++ is not a
reliable way to catch this kind of error (in the usual cases where the
overflow is an error, not intentional) in debug builds.

2. Implement a version for any unsigned int type

I'm sorry, but that question doesn't make much sense to me.

If you look at the code I posted originally, it shows one way to
leverage the well-defined'ness of unsigned arithmetic to detect overflow
reliably -- if not as elegant as I'd wished it to be, which is what I
asked about.

So, what's the problem there?

3. Use template specialization and std::numeric_limits to implement a
template that will work for any signed or unsigned int type

I think the apparent MinGW compiler bug for question (1) has to be dealt
with first, lest we get into compiler make and version sniffing. Then,
to make use of such a template practical, I think one would need
suitable integer wrapper types with no implicit narrowing, test suites
that demonstrate convincingly that this introduces no bugs and no added
inefficiency, and examples that demonstrate that one would not easily
use it incorrectly. This seems to be a pretty big effort?


Cheers,

- Alf
 

Öö Tiib

This code is in support of some API functionality:
Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}

Can this be written in an even gooder way, for bestest possible code?

Depends what is "goodness". I would write something like:

template<typename T>
inline
bool can_add(T const a, T const b)
{
    return a < 0 ? std::numeric_limits<T>::min() - a <= b
                 : std::numeric_limits<T>::max() - a >= b;
}

inline
bool can_inflate( gdi::Rect const& r, int const dx, int const dy )
{
    return can_add(r.left, -dx)
        && can_add(r.top, -dy)
        && can_add(r.right, dx)
        && can_add(r.bottom, dy);
}

Typed into post and did not test, sorry.
 

Alf P. Steinbach

This code is in support of some API functionality:
Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}

Can this be written in an even gooder way, for bestest possible code?

Depends what is "goodness". I would write something like:

template<typename T>
inline
bool can_add(T const a, T const b)
{
return a < 0 ? std::numeric_limits<T>::min() - a <= b
: std::numeric_limits<T>::max() - a >= b;
}

inline
bool can_inflate( gdi::Rect const& r, int const dx, int const dy )
{
return can_add(r.left, -dx)
&& can_add(r.top, -dy)
&& can_add(r.right, dx)
&& can_add(r.bottom, dy);
}

Typed into post and did not test, sorry.

It's more clear code. :)

But involves a branch for each test.

I wonder if the compiler can optimize that away, and if not, if one can
somehow reduce the whole thing to two branches (which appears minimum?).


Cheers, & thanks,

- Alf
 

Öö Tiib

This code is in support of some API functionality:
Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb)&&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}

Can this be written in an even gooder way, for bestest possible code?

Depends what is "goodness". I would write something like:

template<typename T>
inline
bool can_add(T const a, T const b)
{
return a < 0 ? std::numeric_limits<T>::min() - a <= b
: std::numeric_limits<T>::max() - a >= b;
}

inline
bool can_inflate( gdi::Rect const& r, int const dx, int const dy )
{
return can_add(r.left, -dx)
&& can_add(r.top, -dy)
&& can_add(r.right, dx)
&& can_add(r.bottom, dy);
}

Typed into post and did not test, sorry.



It's more clear code. :)

That is usually considered one aspect of goodness.
But involves a branch for each test.

I wonder if the compiler can optimize that away, and if not, if one can
somehow reduce the whole thing to two branches (which appears minimum?).

Maybe split the problem at another place then:

template<typename T>
inline
bool can_enwiden(T const low, T const high, T delta)
{
return delta < 0 ? (std::numeric_limits<T>::min() - delta <= high
&& std::numeric_limits<T>::max() + delta >= low)
: (std::numeric_limits<T>::max() - delta >= high
&& std::numeric_limits<T>::min() + delta <= low);
}

I feel more worried about whether the checks done are sufficient. For example,
does the invariant of that 'gdi::Rect' really allow turned-around rectangles,
I mean where 'r.left > r.right'? The current checks say that one 'can_inflate'
into such rectangles.
 

Alf P. Steinbach

Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}

Can this be written in an even gooder way, for bestest possible code?

Depends what is "goodness". I would write something like:

template<typename T>
inline
bool can_add(T const a, T const b)
{
return a < 0 ? std::numeric_limits<T>::min() - a <= b
: std::numeric_limits<T>::max() - a >= b;
}

inline
bool can_inflate( gdi::Rect const& r, int const dx, int const dy )
{
return can_add(r.left, -dx)
&& can_add(r.top, -dy)
&& can_add(r.right, dx)
&& can_add(r.bottom, dy);
}

Typed into post and did not test, sorry.



It's more clear code. :)

That is usually considered one aspect of goodness.
But involves a branch for each test.

I wonder if the compiler can optimize that away, and if not, if one can
somehow reduce the whole thing to two branches (which appears minimum?).

Maybe split the problem at another place then:

template<typename T>
inline
bool can_enwiden(T const low, T const high, T delta)
{
return delta < 0 ? (std::numeric_limits<T>::min() - delta <= high
&& std::numeric_limits<T>::max() + delta >= low)
: (std::numeric_limits<T>::max() - delta >= high
&& std::numeric_limits<T>::min() + delta <= low);
}

I didn't understand that at first glance, but it's pretty cool. ;-)

I feel more worried about whether the checks done are sufficient. For example,
does the invariant of that 'gdi::Rect' really allow turned-around rectangles,
I mean where 'r.left > r.right'? The current checks say that one 'can_inflate'
into such rectangles.

Yes, it's the Windows API's RECT, and testing this, at least as of
Windows 7 it allows rectangles to be turned around. It has to allow
"turned around" rectangles as such because they're used with various
coordinate systems. But the API could conceivably refuse to invert a
rectangle via the inflation function (called InflateRect), since
presumably a given rectangle should make sense for some given coordinate
system, and then turning it around (so to speak) could reasonably fail
as meaningless. It's not documented, AFAICS. But it just inflated with
negative delta, turning the rectangle inside out.


Cheers,

- Alf
 

Alf P. Steinbach

On 11/9/2013 10:28 AM, Alf P. Steinbach wrote:
This code is in support of some API functionality:


Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}


Can this be written in an even gooder way, for bestest possible code?

Disclaimer: the code has not been tested or even called.

What is the code supposed to do? Are you checking if the size of 'r' is
not going to under- or overflow if you add dx and dy to it?
Yes.


Isn't this the usual way to check if (b+a) is not going to overflow:

if (INT_MAX - b > a) // a+b will NOT overflow

Don't know, but that expression has formally Undefined Behavior when b
is a negative signed integer, since then the checking itself overflows.

If you know (or have previously checked) that dx and/or dy are
non-negative, then there is no problem.
So, at least if one's not going to rely on two's complement form
wrap-around (g++ can be made to trap on that), I /think/ it would yield
more verbose code, possibly also more complicated?

Didn't you learn /anything/ from the thread about undefined behaviour on
signed overflow? Because signed overflow is undefined, the compiler can
generate /better/ code than it could if it had to support wrap-around
behaviour. It will certainly /never/ generate worse code.

Stop guessing (you are not "/thinking/", you are guessing) and try it
out. Look at the generated assembly on the target in question. Profile
it and see if it is too slow. Otherwise any attempt to improve the code
is a waste because you don't know that the code is a problem, and can't
check or measure any improvements.

I'm sorry but what you write sounds like trolling. It's certainly
idiocy, it's certainly more about your guessing about people's thoughts
than the technical, and your assertions are extremely dishonest.

I.e. you're a liar.

Plink.

- Alf
 

Alf P. Steinbach

[snipped lots of idiocy, then:]
Oh, and "int(-1/2u)" is undefined behaviour.

No, unsigned arithmetic is well-defined.

I think I plinked you earlier, are you posting under a new e-mail address?

Plink.


- Alf
 

Alf P. Steinbach

Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool
{
CPPX_XASSERT( INT_MIN/2 < dx && dx < INT_MAX/2 );
CPPX_XASSERT( INT_MIN/2 < dy && dy < INT_MAX/2 );

typedef unsigned long Unsigned_long;
auto const msb = ULONG_MAX - (ULONG_MAX >> 1);
return
(r.left & msb) == ((Unsigned_long( r.left ) - dx) & msb) &&
(r.top & msb) == ((Unsigned_long( r.top ) - dy) & msb) &&
(r.right & msb) == ((Unsigned_long( r.right ) + dx) & msb) &&
(r.bottom & msb) == ((Unsigned_long( r.bottom ) + dy) & msb);
}

Can this be written in an even gooder way, for bestest possible code?

Depends what is "goodness". I would write something like:

template<typename T>
inline
bool can_add(T const a, T const b)
{
return a < 0 ? std::numeric_limits<T>::min() - a <= b
: std::numeric_limits<T>::max() - a >= b;
}

inline
bool can_inflate( gdi::Rect const& r, int const dx, int const dy )
{
return can_add(r.left, -dx)
&& can_add(r.top, -dy)
&& can_add(r.right, dx)
&& can_add(r.bottom, dy);
}

Typed into post and did not test, sorry.

It's more clear code. :)

But involves a branch for each test.

I wonder if the compiler can optimize that away, and if not, if one can
somehow reduce the whole thing to two branches (which appears minimum?).

If you want to know what code the compiler generates, try it and see.
Look at the generated assembly. If you can't read the assembly, then
you are not qualified to judge the quality of the generated code - trust
your compiler, because it knows more than you do.

gcc (and other compilers) are pretty good at eliminating branch
instructions, as long as you enable reasonably powerful optimisations
(-O is, I think, sufficient on gcc - but -O2 is the norm for code when
you are interested in performance).

Anyway, do you /know/ that branches are costly here? On some cpus they
are slow and lead to pipeline flushes or stalls. But on modern x86-64
cpus, branches like this can often be sorted out early in the execution
path or handled by speculative execution, and become free. What did
your profiling runs tell you about how much effect this code has on the
program's run times?

Also make sure you are using the appropriate "-mtune" and "-march"
parameters to get the best code for the particular target you are using.


This all sounds to me like premature optimisation, which as we know
(thanks to Knuth) is the root of all evil. So if you are asking about
the generated code for performance reasons, you are doing things badly
wrong - but if it is out of interest or perfectionism and for your own
enjoyment, then that's fine. Like many evils, premature optimisation is
fun :)

I've already replied to earlier postings of yours in this thread.

You have demonstrated general incompetence, reasoning disability,
dishonesty and trolling.

Plink.


- Alf
 

Victor Bazarov

[snipped lots of idiocy, then:]
Oh, and "int(-1/2u)" is undefined behaviour.

No, unsigned arithmetic is well-defined.

AFAIUI, int(-1/2u) means "take the signed int -1, promote it to an
unsigned to be compatible with 2u (this promotion is UB for a negative
number),

Where did you get the UB portion of that? And it's not "promoted", it's
"converted". Promotions defined in [conv.prom] and they don't involve
'int' as the *source* type.
divide it by 2 (fine in unsigned), then convert it to a signed
int (fine if there is no overflow)".

"Fine" is not a definition of what's going to happen. It's
implementation-defined if the value cannot be represented in the
destination type. See [conv.integral].
Does it mean something else to you? (It's quite possible that I've got
this wrong, but I'd prefer a better reference. Certainly "unsigned
arithmetic is well-defined" is /not/ the answer.)

Well, get a copy of the Standard and ask questions after reading the
relevant sections of it.

V
 

Victor Bazarov

On 11/11/13 12:35, Alf P. Steinbach wrote:
On 11.11.2013 10:27, David Brown wrote:

[snipped lots of idiocy, then:]

Oh, and "int(-1/2u)" is undefined behaviour.

No, unsigned arithmetic is well-defined.

AFAIUI, int(-1/2u) means "take the signed int -1, promote it to an
unsigned to be compatible with 2u (this promotion is UB for a negative
number),

Where did you get the UB portion of that? And it's not "promoted", it's
"converted". Promotions defined in [conv.prom] and they don't involve
'int' as the *source* type.

I've read the section of the standard document, and I stand corrected -
thanks.

The "-1" is converted, rather than promoted, to "unsigned int" - and
that conversion is done modulo 2^n. So the conversion of -1 to
"unsigned int" is well defined.
divide it by 2 (fine in unsigned), then convert it to a signed
int (fine if there is no overflow)".

"Fine" is not a definition of what's going to happen. It's
implementation-defined if the value cannot be represented in the
destination type. See [conv.integral].

By "fine", I mean there is clearly defined behaviour for converting an
"unsigned int" into an "int" as long as there is no overflow - i.e., the
value can be represented identically as an "int".

If there /is/ an overflow (i.e., the value cannot be represented), then

The term "overflow" as used by some other folks here is not the same as
"requires more storage than can be given". I believe the use of the
term 'overflow' relates to the situation that can be recognized by the
processor and appropriately flagged (or trapped). It is run-time
behavior (or situation), not hypothetical relationship between numbers
that exists at the compile time.

So, given that your use of the term "overflow" appears different from
Alf's, I would recommend reviewing your understanding and your previous
statements in this thread. This might help you understand the
objections Alf had to what you had said/written.

According to the standard, the behavior is implementation-defined, that
means that whatever the behavior is, it shall be documented. The result
of such conversion *may* cause overflow (IOW allow such condition to be
registered and programmatically recognized and acted upon), but it does
not *need to*, nor does it *usually* happen.
the behaviour is implementation defined. (It is a good job I didn't say
what I thought would happen in this case - before reading the standards
closely here, I would probably have said it was "undefined behaviour".)


The obvious question to ask is, did I get it right this time?

I'd say, you're getting closer.

V
 

James Kanze

Alf said:
This code is in support of some API functionality:
Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool

Why do you insist on using this form rather than the shorter, more
conventional form?

Uhm, the word "insist" incorrectly indicates some kind of opposition to
the adoption of `auto`. ;-)

Anyway, there are many good reasons, but for me the most important is a
consistent visual layout:

* function name at fixed place.
* return type visually separated.

You mean like the way I've always written function definitions:

bool
can_inflate( ... )
{
}

The function name is always at the start of the line.  Has been
since I learned C.  (Back then, about all we had for searching
was grep, and "grep ^can_inflate" would always get the
definition, and nothing else.)
Which means that the human eye finds it much easier to just scan through
the code to find a name or note the names, or inspect return types, with
no need to read and analyze the text.

Your style makes it harder to find the return type.  (I never
found this a problem in Modula-2, but C++ is not Modula-2.)
 

James Kanze

Alf P. Steinbach writes:
[...]
Yes, it can be. A more generic algorithm, for any signed int type:
template<typename signed_int_t>
signed_int_t add_with_overflow_check(signed_int_t a, signed_int_t b)
{
signed_int_t c=a+b;
if (b > 0)
{
if (c < a)
do_whatever_you_want_when_you_overflow();
}
else
{
if (c > a)
do_whatever_you_want_when_you_overflow();
}
return c;
}
Your C++ homework assignment consists of three parts:
1. Explain why the above code works

It doesn't. Unless the undefined behavior happens to give it
the appearance of working.

To correctly check for overflow of signed integral types, you
first have to check whether the signs are the same. (If they
aren't it won't overflow.) Then check either
std::numeric_limits<int_t>::max() - a or
std::numeric_limits<int_t>::min() + a with b, depending on
whether both are negative, or both are positive.
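
Kanze's prescription can be rendered roughly as follows (my rendering; the thread gives no implementation here):

```cpp
#include <limits>

// If a and b have opposite signs the sum cannot overflow; otherwise
// compare b against the remaining headroom toward max() or min(),
// neither of which can itself overflow.
template< typename Int >
bool addition_is_safe( Int const a, Int const b )
{
    if( (a < 0) != (b < 0) )
        return true;                                      // opposite signs
    if( a < 0 )
        return b >= std::numeric_limits<Int>::min() - a;  // both negative
    return b <= std::numeric_limits<Int>::max() - a;      // both non-negative
}
```
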
 

Alf P. Steinbach

Alf P. Steinbach wrote:
This code is in support of some API functionality:
Code:
inline
auto can_inflate( gdi::Rect const& r, int const dx, int const dy )
-> bool 
Why do you insist on using this form rather than the shorter, more
conventional form?

Uhm, the word "insist" incorrectly indicates some kind of opposition to
the adoption of `auto`. ;-)

Anyway, there are many good reasons, but for me the most important is a
consistent visual layout:

* function name at fixed place.
* return type visually separated.

You mean like the way I've always written function definitions:

bool
can_inflate( ... )
{
}

The function name is always at the start of the line.  Has been
since I learned C.  (Back then, about all we had for searching
was grep, and "grep ^can_inflate" would always get the
definition, and nothing else.)

No, with C++11 that style can no longer (in practice) yield a consistent
layout, since in cases where the return type depends on the argument
types one avoids a lot of complication and verbosity by using `auto`.
For details of how bad it can be see Andrei's "min max revisited"
article in DDJ (I leave it to the reader to google it). For example,
even in C++11 you can NOT simply write

template< class T >
decltype( a*b ) mul( T a, T b )
{ return a*b; }

But you can write, and since you're pragmatic you will eventually write,

template< class T >
auto mul( T a, T b )
-> decltype( a*b )
{ return a*b; }

So, your current favorite style was good for C, to the degree that a
syntax that both the C creators and the C++ creator have described as a
"failed experiment", can be good. It was good in that sense also for
C++03. With C++11 it's IMHO just ungood, since it no longer covers all
cases and thus yields an inconsistent mix of declaration styles.


Which means that the human eye finds it much easier to just scan
through ...

Your style makes it harder to find the return type.  (I never
found this a problem in Modula-2, but C++ is not Modula-2.)

Hm, I can't remember much about Modula-2 declarations.

I do remember that good old Niklaus, Praise be Upon Him, for some
inexplicable reason forced the Modula-2 programmer to use UPPERCASE
keywords. Or I think I remember that. Also it was nice with built-in
coroutine support.

Anyway... ;-)


Cheers & hth.,

- Alf
 

tomalak

I've already replied to earlier postings of yours in this thread.

You have demonstrated general incompetence, reasoning disability,
dishonesty and trolling.

Plink.

Alf,

Please refrain from this unnecessary vitriol. This is a technical forum for the discussion of technical issues, and childish insults and finger-in-ear "la la la I can't hear you" screaming is off-topic. If you wish to "plink" someone then fine, do it, but there is no need to advertise the fact to the entire group, other than to "have the last word". Shame it's not even a real word.

Let's all stick to the matter at hand.

Thanks.
 
