Slow -- VERY slow brain

  • Thread starter Alf P. Steinbach /Usenet

Alf P. Steinbach /Usenet

Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}

which is reportedly both more concise and more efficient than

void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}

But, the second one can always be written as

void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does -- which is the same as this.

In short, the efficiency argument is bogus.

Why did we not recognize that earlier (like, why is the bogus argument still in
the FAQ with no counter-argument), or, perhaps, why have I not recognized this
being mentioned in the postings about signed/unsigned?


Cheers,

- Alf (wondering)
 

Balog Pal

Alf P. Steinbach /Usenet said:
we're idiots.

I bet we are but for different reasons. :)
I'm thinking of the argument for `unsigned` type, that you can do

void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}

which is reportedly both more concise and more efficient than

void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}

Huh? Is there such a claim really? Can we say "efficiency" just from source
code? In my FAQ it is pretty clear: efficiency is stuff we *measure*. Using
profiler or equivalent tool. In some corner cases it's allowed to look at
generated assy, especially if we expect same code generated, thus further
work can be dismissed.

For most modern (and many obsolete) processors that have the usual ALU with
sign, carry and overflow flags, I'd predict a healthy optimizer generates
equivalent code for the expressions.
But, the second one can always be written as

void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the
first function's code really does -- which is the same as this.

I sure would not write it that way instead of the first. But we use compiler
and C instead of assy for the very reason to let it figure out the best
instructions, don't we? And keep the source abstract. If I want my number
be on [0..N] or [0..N) range, I shall say that instead of juggling with
casts that are on the blacklist for a good reason. :)
In short, the efficiency argument is bogus.

If measurements were not presented, it was bogus even if one form was
actually less efficient. (And just because you can make the above source
conversion, it does not mean you got it for free. IIRC for a 6502 processor
you need a different amount of code for a signed/unsigned char; I'm lazy to
figure out the int case...)
Why did we not recognize that earlier (like, why is the bogus argument
still in the FAQ with no counter-argument),

Are you serious? Who on Earth reads the f*** FAQ? ;-) Those are strictly
write-only documents.
 

Puppet_Sock

On Jun 15, 7:08 pm, "Alf P. Steinbach /Usenet" <alf.p.steinbach
(e-mail address removed)> wrote:
[snip]
   void foo( int n )
   {
       assert( 0 <= n && n < maxN );
       // blah
   }

But, the second one can always be written as

   void foo( int n )
   {
       assert( unsigned( n ) < maxN );
       // blah
   }

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does --  which is the same as this.

Um. Ok, maybe it's the same thing. But that second
one will send me off to find out what you get when
the value of n is -3. And if I found this during a
code review I'd flag it for the code writer to get
rid of in favor of the first. Putting a typecast
in an assert is more than my brain wants to deal
with on only one cup of coffee.
Socks
 

Keith H Duggar

Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

   void foo( unsigned n )
   {
       assert( n < maxN );
       // blah
   }

which is reportedly both more concise and more efficient than

   void foo( int n )
   {
       assert( 0 <= n && n < maxN );
       // blah
   }

But, the second one can always be written as

   void foo( int n )
   {
       assert( unsigned( n ) < maxN );
       // blah
   }

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does --  which is the same as this.

In short, the efficiency argument is bogus.

Indeed that puts the situation into the sharpest relief.
Why did we not recognize that earlier (like, why is the bogus argument still in
the FAQ with no counter-argument), or, perhaps, why have I not recognized this
being mentioned in the postings about signed/unsigned?

When I was in Silvio Micali's Cryptography and Cryptanalysis
course at MIT, he taught us a most valuable lesson. Here is a
recounting of the lesson (sorry, don't recall the date):

The first day in 6.875 Cryptography and Cryptanalysis, Prof.
Silvio Micali put a simple secure communication problem on
the board and asked if anyone had a solution.

After a minute, a student raised their hand and proposed a
solution. Micali quickly demonstrated in a few lines of chalk
how to crack the proposal; it was not secure.

A minute more and another hand shot up. The result? Again
Prof. Micali cracked the proposal. Again and again, proposal
after proposal was demonstrated insecure. No student could
find a solution.

After about 15 minutes, Prof. Micali said the following (as
best I can remember):

"Ok. This is a perfect place to begin this course. This room
is filled with what, 30 or 40 MIT grad and undergrad students?
And none of you can solve this problem.

Now, I'm going to show you the answer. And when I do it's
going to be so simple that you are going to say 'Hah! That's
easy.' No! Something that's 'easy' is easy from the beginning.
After somebody has to show you the answer, it's only simple."

In short

simple != easy/obvious/trivial

Many of the simplest ideas have taken humanity millennia to discover.

KHD
 

madamemandm

Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

   void foo( unsigned n )
   {
       assert( n < maxN );
       // blah
   }

which is reportedly both more concise and more efficient than

   void foo( int n )
   {
       assert( 0 <= n && n < maxN );
       // blah
   }

But, the second one can always be written as

   void foo( int n )
   {
       assert( unsigned( n ) < maxN );
       // blah
   }
[snip]
Really? On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not? (Assuming maxN >
0).

(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)

Martin Shobe
 

madamemandm

Well, I think, it's not just me.
Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.
I'm thinking of the argument for `unsigned` type, that you can do
   void foo( unsigned n )
   {
       assert( n < maxN );
       // blah
   }
which is reportedly both more concise and more efficient than
   void foo( int n )
   {
       assert( 0 <= n && n < maxN );
       // blah
   }
But, the second one can always be written as
   void foo( int n )
   {
       assert( unsigned( n ) < maxN );
       // blah
   }

[snip]
Really?  On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not?  (Assuming maxN >
0).

(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)

Oops! Should double check before I send. Let n = INT_MIN and assume
maxN > 1.

Martin Shobe
 

Alf P. Steinbach /Usenet

* (e-mail address removed), on 16.06.2011 16:31:
Well, I think, it's not just me.
Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.
I'm thinking of the argument for `unsigned` type, that you can do
void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}
which is reportedly both more concise and more efficient than
void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}
But, the second one can always be written as
void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

[snip]
Really? On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not? (Assuming maxN >
0).

(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)

Oops! Should double check before I send. Let n = INT_MIN and assume
maxN > 1.

The standard guarantees that when x is of integral type then unsigned(x) is
congruent to x modulo 2^n where n is the number of value representation bits in
`unsigned`. No, this isn't a quote. I just haven't had my coffee yet so not
quite able to express things more simply, sorry.

Anyway, you don't need the math to see that the two codes do the same.

In the first case the conversion happens at the call site as an implicit
promotion, in the second case the same conversion happens locally via a cast.


Cheers & hth.,

- Alf
 

madamemandm

* (e-mail address removed), on 16.06.2011 16:31:




On Jun 15, 6:08 pm, "Alf P. Steinbach /Usenet"<alf.p.steinbach
(e-mail address removed)>  wrote:
Well, I think, it's not just me.
Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.
I'm thinking of the argument for `unsigned` type, that you can do
    void foo( unsigned n )
    {
        assert( n < maxN );
        // blah
    }
which is reportedly both more concise and more efficient than
    void foo( int n )
    {
        assert( 0 <= n && n < maxN );
        // blah
    }
But, the second one can always be written as
    void foo( int n )
    {
        assert( unsigned( n ) < maxN );
        // blah
    }
[snip]
Really?  On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not?  (Assuming maxN >
0).
(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)
Oops!  Should double check before I send.  Let n = INT_MIN and assume
maxN > 1.

The standard guarantees that when x is of integral type then unsigned(x) is
congruent to x modulo 2^n where n is the number of value representation bits in
`unsigned`. No, this isn't a quote. I just haven't had my coffee yet so not
quite able to express things more simply, sorry.

That is correct, and if you do the math under the conditions I
described, you find that unsigned(INT_MIN) == 1.
Anyway, you don't need the math to see that the two codes do the same.

Not without some additional restrictions on the values of n.
In the first case the conversion happens at the call site as an implicit
promotion, in the second case the same conversion happens locally via a cast.

You need some guarantees not provided by the language to get any pair
of the three versions to behave the same. The second and third differ
as described above. The first differs from the second and third in
that for values of n (before conversion at the call site) such that
INT_MAX < n <= UINT_MAX, behavior is well-defined in the first case and
implementation-defined in the other two.

Martin Shobe
 

Alf P. Steinbach /Usenet

* (e-mail address removed), on 16.06.2011 22:10:
* (e-mail address removed), on 16.06.2011 16:31:




On Jun 16, 9:22 am, "(e-mail address removed)"<[email protected]>
wrote:
On Jun 15, 6:08 pm, "Alf P. Steinbach /Usenet"<alf.p.steinbach
(e-mail address removed)> wrote:
Well, I think, it's not just me.
Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.
I'm thinking of the argument for `unsigned` type, that you can do
void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}
which is reportedly both more concise and more efficient than
void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}
But, the second one can always be written as
void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}
[snip]
Really? On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not? (Assuming maxN >
0).
(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)
Oops! Should double check before I send. Let n = INT_MIN and assume
maxN > 1.

The standard guarantees that when x is of integral type then unsigned(x) is
congruent to x modulo 2^n where n is the number of value representation bits in
`unsigned`. No, this isn't a quote. I just haven't had my coffee yet so not
quite able to express things more simply, sorry.

That is correct, and if you do the math under the conditions I
described, you find that unsigned(INT_MIN) == 1.

$3.9.1/3 requires the same value representation for a signed type as for its
corresponding unsigned type.

$3.9/4 defines value representation as the set of bits that holds the value.

With the same number of bits for the value representation the standard does not
allow your condition UINT_MAX = INT_MAX; you are guaranteed that for every
signed T value, there is a unique unsigned T value.

Not without some additional restrictions on the values of n.

That's meaningless, sorry.

You need some guarantees not provided by the language to get any pair
of the three versions to behave the same. The second and third differ
as described above. The first differs from the second and third in
that for values of n (before conversion at the call site) such that
INT_MAX < n <= UINT_MAX, behavior is well-defined in the first case and
implementation-defined in the other two.

That's again meaningless, sorry.


Cheers & hth.
 

Werner

Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

   void foo( unsigned n )
   {
       assert( n < maxN );
       // blah
   }

which is reportedly both more concise and more efficient than

   void foo( int n )
   {
       assert( 0 <= n && n < maxN );
       // blah
   }

But, the second one can always be written as

   void foo( int n )
   {
       assert( unsigned( n ) < maxN );
       // blah
   }

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does --  which is the same as this.

I'm assuming maxN is of type unsigned.

I'm not sure they do exactly the same thing. This all depends
on the value of <n> and the value of <maxN>, doesn't it?

#include <cassert>
#include <climits>
#include <cstddef>
#include <iostream>

unsigned maxN = UINT_MAX - 5;

void foo( int n )
{
    std::cout << "foo( " << n << " ):" << std::endl;

    //Fails when (n == -7); comment out to test next case
    assert( 0 <= n );

    //Fails when (n == -6) only
    assert( unsigned( n ) < maxN );
}


int main()
{
    int m = int( maxN ) - 1,
        n = int( maxN );

    //Fails because negative
    foo( m );

    //Fails because converted value exceeds maxN
    // and because negative
    foo( n );
}

In the case here above...

assert( 0 <= n );

...clearly means something different, doesn't it?
Perhaps I'm a little dull to see your point.

Kind regards,

Werner Erasmus
 

madamemandm

* (e-mail address removed), on 16.06.2011 22:10: [snip]
That is correct, and if you do the math under the conditions I
described, you find that unsigned(INT_MIN) == 1.

$3.9.1/3 requires the same value representation for a signed type as for its
corresponding unsigned type.

n3092 says that they have the same object representation, not the same
value representation.
Which version are you getting yours from?

[snip]
That's meaningless, sorry.

What's meaningless about it? There is a range of values of n for
which both versions are guaranteed to do the same thing. There are
values of n outside that range where they might not do the same thing.
That's again meaningless, sorry.

What's meaningless about it? There is a range of values of n for
which both versions are guaranteed to do the same thing. There are
values of n outside that range (provided above) where they might not
do the same thing.

Martin Shobe
 

Öö Tiib

I trust that it is not beautiful anyway and such things should be left
to compiler to optimize. ;)
I'm assuming maxN is of type unsigned.

No. Alf suggests measuring all sizes and counts with signed positive
values, and using unsigned only for bit-wise arithmetic. That makes a lot
of sense *1).
The only problem with that philosophy is that standard library uses
unsigned values for all sizes *2).
I'm not sure they do exactly the same thing. This all depends
on the value of <n> and the value of <maxN>, doesn't it?

The case with that sign bit is a red herring. That was clear 20 years
ago when most PCs had 16 bit processors *3) so it should be same now
when most PC-s have 64 bit processors.

[snip examples that concentrated on UINT_MAX]

If you always use signed values for counting and sizes then using
UINT_MAX in the context is a bug.


*1) Because signed arithmetic is a lot easier to understand for the average
mind than modular arithmetic. What you actually need is almost never
modular arithmetic.

*2) It is not clear why unsigned won out as size_t and got into the standard
library as the count type. Possibly the flame warriors for using unsigned
argued better on the boards back then. I did not participate; my English
skill was near none. The workaround is that you can wrap access to the
standard library to keep things pure, or you can use alternative
libraries (like Qt) that count with signed values.

*3) Back then we often had 16 bit int for various things, since it was
quickest. We used that for various counts that "are always less than 20
000". It sometimes happened that the requirements changed and the
count became "up to 40 000". It was a mistake to change the count
variable type to 16 bit unsigned int, because it did not help. When the
requirement jumps from "up to 20 000" to "up to 40 000", the time when
the requirement changes again to "count is up to 70 000" is as near as
next year. Changing it into a 32 bit int immediately saved time; next
time it was only a matter of modifying "maxN".
 

Kai-Uwe Bux

* (e-mail address removed), on 16.06.2011 22:10: [snip]
On Jun 16, 11:51 am, "Alf P. Steinbach
The standard guarantees that when x is of integral type then
unsigned(x) is congruent to x modulo 2^n where n is the number of
value representation bits in `unsigned`. No, this isn't it a quote. I
just haven't had my coffee yet so not quite able to express things
more simply, sorry.
That is correct, and if you do the math under the conditions I
described, you find that unsigned(INT_MIN) == 1.

$3.9.1/3 requires the same value representation for a signed type as for
its corresponding unsigned type.

n3092 says that they have the same object representation, not the same
value representation.
Which version are you getting yours from?
[...]

Actual standard [3.9.1/3]:

For each of the signed integer types, there exists a corresponding (but
different) unsigned integer type: "unsigned char", "unsigned short int",
"unsigned int", and "unsigned long int," each of which occupies the same
amount of storage and has the same alignment requirements (3.9) as the
corresponding signed integer type40) ; that is, each signed integer type
has the same object representation as its corresponding unsigned integer
type.

That is the sentence, you read. But read further in the same clause:

The range of nonnegative values of a signed integer type is a subrange of
the corresponding unsigned integer type, and the value representation of
each corresponding signed/unsigned type shall be the same.

In particular, note the "and the value representation ..." part.


In draft n3291 [3.9.1/3] you find the same:

For each of the standard signed integer types, there exists a
corresponding (but different) standard unsigned integer type: "unsigned
char", "unsigned short int", "unsigned int", "unsigned long int", and
"unsigned long long int", each of which occupies the same amount of
storage and has the same alignment requirements (3.11) as the
corresponding signed integer type45 ; that is, each signed integer type
has the same object representation as its corresponding unsigned integer
type. Likewise, for each of the extended signed integer types there exists
a corresponding extended unsigned integer type with the same amount of
storage and alignment requirements. The standard and extended unsigned
integer types are collectively called unsigned integer types. The range of
non-negative values of a signed integer type is a subrange of the
corresponding unsigned integer type, and

the value representation of each corresponding signed/unsigned type
shall be the same. [highlighting added]

The standard signed integer types and standard unsigned integer types are
collectively called the standard integer types, and the extended signed
integer types and extended unsigned integer types are collectively called
the extended integer types.


You might want to check n3092 again. Chances are, it also contains the
provision about the value representation.


Best,

Kai-Uwe Bux
 

Werner

No. Alf suggests to measure all sizes and counts with signed positive
values. To use unsigned only for bit-wise arithmetic. That makes lot
of sense *1).
The only problem with that philosophy is that standard library uses
unsigned values for all sizes *2).

Pardon my naiveness, but where is the modular arithmetic in ...

void foo( unsigned n )
{
assert( n < maxN );
}

...assuming maxN is unsigned when n is unsigned?

Are you speaking of the possibility of modular arithmetic
at the call site due to possible integral conversion
(iaw. 4.7.2)?
The case with that sign bit is a red herring. That was clear 20 years
ago when most PCs had 16 bit processors *3) so it should be same now
when most PC-s have 64 bit processors.

[snip examples that concentrated on UINT_MAX]

If you always use signed values for counting and sizes then using
UINT_MAX in the context is a bug.

Yes, but what if you have the case where you need the weight of
the extra bit (positively)? Would you resort to unsigned again?

E.g:
What if maxN is greater than INT_MAX, but less than UINT_MAX.
*1) Because signed arithmetic is lot easier to understand for average
mind than modular arithmetic. What you actually need is almost never
modular arithmetic.
OK

*2) It is not sure why unsigned won to be size_t and got into standard
library as count. Possibly the flame warriors for using unsigned
argued better on boards back then. I did not participate, my English
skill was near none. The workaround is that you can wrap accessing
standard library to keep things pure or you can use alternative
libraries (like Qt) that do count with signed values.

I find this amusing. I also initially had the stance: if something
can't be negative (an index), don't allow it to be negative. Integral
conversions are likely to cause it to be VERY BIG and fail (having
containers in mind). What has changed? Wrap-around during subtraction
(inevitable) has burnt me... It has not prevented me from shunning
the std lib though.

*3) Back then we often had 16 bit int for various things, since it was
quickest. We used that for various counts that "is always less than 20
000". Now it sometimes happened that the requirements changed and the
count become "up to 40 000". It was mistake to change the count
variable type to 16 bit unsigned int because it did not help. When
requirement does jump from "up to 20 000" to "up to 40 000" then the
times when the requirement changes again to "count is up to 70 000"
are as near as next year. Changing it into 32 bit int immediately
saved time, next time it was only to modify "maxN".

Or a couple of years prior, changing it to <unsigned> as opposed
to <int> gave you that extra bit (when INT_MAX < maxN < UINT_MAX)?

Was that perhaps (at the time that processors were only 16 bits),
but more memory was addressable, the reason why they chose
std::size_t to be unsigned?

All said, I prefer signed types too, like you. I still think I see the
case where unsigned may be inevitable.

Kind regards,

Werner
 

Paul

Alf P. Steinbach /Usenet said:
* (e-mail address removed), on 16.06.2011 16:31:
On Jun 15, 6:08 pm, "Alf P. Steinbach /Usenet"<alf.p.steinbach



(e-mail address removed)> wrote:
Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it
must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}

which is reportedly both more concise and more efficient than

void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}

But, the second one can always be written as

void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

[snip]
Really? On a system where UINT_MAX = INT_MAX = -INT_MIN wouldn't the
second fire the assert while the third would not? (Assuming maxN >
0).

(Not that the efficiency argument isn't bogus anyway, but I don't
think this demonstrates it.)

Oops! Should double check before I send. Let n = INT_MIN and assume
maxN > 1.

The standard guarantees that when x is of integral type then unsigned(x)
is congruent to x modulo 2^n where n is the number of value representation
bits in `unsigned`. No, this isn't a quote. I just haven't had my
coffee yet so not quite able to express things more simply, sorry.

Anyway, you don't need the math to see that the two codes do the same.

That depends on what maxN is.
If it's the absolute value of the most negative value +1, they are not the same.

On my system the following input causes only condition 2 to be true, with
that specific value for maxN.

#include <iostream>

void foo( int n ){
    if( 0 <= n && n < 2147483649 ){
        std::cout << "Condition 1 is true\n";
    }
    if( unsigned( n ) < 2147483649 ){
        std::cout << "Condition 2 is true\n";
    }
}

int main(){
    int x = -2147483648;
    foo( x );
}
 

madamemandm

You might want to check n3092 again. Chances are, it also contains the
provision about the value representation.

Yep, you're right. My mistake.

Martin Shobe
 

Alf P. Steinbach /Usenet

* Werner, on 17.06.2011 10:33:
Well, I think, it's not just me.

Since it's in the FAQ, and since we have discussed this for years, it must be
nearly all of us: we're idiots.

I'm thinking of the argument for `unsigned` type, that you can do

void foo( unsigned n )
{
    assert( n < maxN );
    // blah
}

which is reportedly both more concise and more efficient than

void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}

But, the second one can always be written as

void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does -- which is the same as this.

I'm assuming maxN is of type unsigned.

I'm not sure they do exactly the same thing. This all depends
on the value of <n> and the value of <maxN>, doesn't it?
Yes.


#include <cassert>
#include <climits>
#include <cstddef>
#include <iostream>

unsigned maxN = UINT_MAX - 5;

void foo( int n )
{
    std::cout << "foo( " << n << " ):" << std::endl;

    //Fails when (n == -7); comment out to test next case
    assert( 0 <= n );

    //Fails when (n == -6) only
    assert( unsigned( n ) < maxN );
}


int main()
{
    int m = int( maxN ) - 1,
        n = int( maxN );

    //Fails because negative
    foo( m );

    //Fails because converted value exceeds maxN
    // and because negative
    foo( n );
}

In the case here above...

assert( 0 <= n );

...clearly means something different, doesn't it?
Yes.


Perhaps I'm a little [too] dull to see your point.

Yes. :)

---

The efficiency argument is about testing x < maxN with an unsigned operation in order
to efficiently catch both the case of a negative actual argument, and the case of
x too large.

The efficiency argument therefore does not apply when a negative actual argument
can be mapped to below maxN via implicit conversion, so it requires maxN < unsigned(-1)/2.

My argument showed that it's bogus in its original formulation. Your argument
shows that it's bogus when you allow larger maxN. In short, it's bogus.


Cheers & hth.,

- Alf
 

Noah Roberts

On Jun 15, 7:08 pm, "Alf P. Steinbach /Usenet"<alf.p.steinbach
(e-mail address removed)> wrote:
[snip]
void foo( int n )
{
    assert( 0 <= n && n < maxN );
    // blah
}

But, the second one can always be written as

void foo( int n )
{
    assert( unsigned( n ) < maxN );
    // blah
}

which is perhaps not quite as concise, but is exactly as efficient (doing
exactly the same), and moreover sort of forces you to think about what the first
function's code really does -- which is the same as this.

Um. Ok, maybe it's the same thing. But that second
one will send me off to find out what you get when
the value of n is -3. And if I found this during a
code review I'd flag it for the code writer to get
rid of in favor of the first. Putting a typecast
in an assert is more than my brain wants to deal
with on only one cup of coffee.
Socks

Agreed. Besides, the important part for me is not what is in the
assert, but what type the parameter is. An unsigned type tells me that
the function expects >=0 arguments. This tells me that if I've got a
signed number I need to do some checking before calling the function.

For me it's about expression and self-documentation. I don't think
there's likely to be much, if any measurable difference in the assert.
 
