How to elegantly get the enum code from its string type

Keith H Duggar

Generally speaking, if a value cannot be negative, it
shouldn't be signed (as Leigh said in another thread). You can introduce
bugs by using signed int if you forget to check whether the value is negative.

As opposed to that "negative" being silently converted to a
huge "positive" number?? Have you actually /thought/ about
this or bothered to read any of the lengthy discussions of
the topic (and I don't mean this thread)?
Of course not.

In this specific example, it is irrelevant whether the Size type is
signed or unsigned. I fail to see how you could possibly introduce bugs
by choosing unsigned int in this case.

Where can I read about problems of the unsigned types?

I've already posted a search link once in the thread. Go find
it and/or google search comp.lang.c++.moderated for "unsigned"
and begin your studies.

However, unlike Leigh, don't stop immediately as soon as you
find a post that agrees with your preconceptions. Instead study
the entirety of the arguments.

By the way, if one stops searching as soon as they find results
they agree with, then one is guilty of what is called "selection
bias". It allows one to remain ignorant for quite some time.

KHD
 
Keith H Duggar

If you care to look at my reply else-thread you will see that I did not

Do you realize that when someone starts writing a reply they
don't scan every second to see if Leigh has corrected/updated/
changed/etc his posts? You are just not that important and
nobody except you cares about your "face" or "reputation". It
seems very important to you that you are not shown to be wrong
or mistaken. Hence the large number of masturbation posts and
pointing out that you have "already" corrected/clarified/etc
yourself. Nobody except you cares, Leigh. Do you understand
that? Get over your ego and try to control yourself better.
I am extremely confident that my view of the benefits of using
unsigned integral types in abstractions is real.

Funny then that you have no substantive arguments on the
subject other than "it's perfectly fine" and "see the std
does it" and "C does it".
I did a quick look at that google groups link you posted and
the first post I looked at agreed with what I am saying so I
couldn't be arsed trawling through it any further.

Exactly, you stop as soon as your preconceptions are reinforced.
Such selection bias is the quickest route to "forever ignorant
and proud of it". And basically you are admitting you are too
lazy to educate yourself further.
A consensus of idiots is not a good thing.

LOL. Oh my oh my.

KHD
 
Öö Tiib

On Apr 14, 9:27 am, "Leigh Johnston" <[email protected]> wrote:

[Also Alf P. Steinbach wrote and so on.]
Do you have ANYTHING, anything NEW at all to offer us on this
topic? So far you have nothing except tired old rants, a broken
noisy recording.

People, can you really be so childish? It is difficult to say
something new on SUCH AN OLD TOPIC.

What is the subject of the argument? It seems to be storage. Storage has a size.
Storage elements can be indicated by an index that starts from 0 and is
less than the size. An index smaller than 0 does not make sense.
An index that isn't less than the storage's size also does not make sense.

Now someone tells us an index i and asks us to do something with the i-th
element in storage S.
We can tell them that this request does not make sense when i is less
than 0 or is not less than the storage's size.

A positive effect of an unsigned value is that we do not need to check
whether it is less than 0 (because it cannot be), and so we only have to
check that its value is less than the size of the storage when we want
to ensure that the request makes sense. That means less typing.

A positive effect of a signed value is that we can give different
diagnostics to indicate whether a given index was below the lower bound
or above the upper bound of its possible values.

There are lots of other positive and negative effects, but those are
more context-specific. For example, if i is indexing the floors of a
house and -1 means the topmost cellar, then that trick with unsigned
does not make sense.
 
Alf P. Steinbach

* Vladimir Jovic:
Where can I read about problems of the unsigned types?

Books and discussions on the net. I'd hoped the FAQ would be helpful but it's
not, just repeating the old idea of saving one comparison by using unsigned
types. Which some may think is a good idea (and at the time that FAQ item was
written many did think it was a good idea) but fails to consider any problems.

However, it's not more than can be easily summarized in a Usenet posting, so
I'll do that.

First, unsigned types are not problematic of themselves: they're eminently
suitable for doing bit-level things, and that's what they're intended for
(according to e.g. Bjarne who created the language). They can also be good for
some other things. E.g., in some contexts an unsigned type can be preferable for
representing character codes, although that can also be problematic; I give a
concrete example of one case where it's absolutely necessary, below.

Unsigned types are problematic in arithmetic and comparisons, because

* unsigned arithmetic is modulo 2^n, where n is the number of value
representation bits (i.e. you have wrap-around behavior, guaranteed), and

* with like size types U and I in the same expression you get promotion
to U (you also get promotion to U if U is of greater size than I).

These problems, discussed in detail below, are so common that e.g. James
Gosling, creator of Java, has used them as an argument in favor of Java:

"Quiz any C developer about unsigned, and pretty soon you discover that almost
no C developers actually understand what goes on with unsigned, what unsigned
arithmetic is. Things like that made C complex."

Of course I don't buy that argument, but what's interesting about it in the
context of answering your question is that the problems of unsigned arithmetic
are an uncontroversial notion. Gosling takes it for granted that the reader is
familiar with this. And Bjarne, present in the same interview, made no comment.
No expert would deny that the problems exist. It's only in a group where novices
seek enlightenment, or in an introductory programming book, that it's even
meaningful to discuss it; it's basic. It should have been in the FAQ. It isn't.


-- Modulo arithmetic.

With an unsigned type with M values and maximum value max = M-1, the expression
max+1 yields 0, and 0-1 yields max. The value range wraps around. If you'd
otherwise get a result outside the value range, then it's reduced to a number
in the value range as if a sufficient number of M's was added or subtracted.
This behavior is guaranteed by the standard, and it results from the simplest
way to implement arithmetic with binary numbers. It's called modular arithmetic
or clock arithmetic (since clock times also behave that way); see Wikipedia.

With most C++ implementations, and in particular on the PC, you also get wrap
around behavior -- modulo arithmetic -- with signed types, but the wrap
around for a signed type is from most negative value to most positive value, and
vice versa, and is therefore not a problem for usual "small" values. At the bit
level the wraparound for signed types is due to a very simple way of
representing negative integers called two's complement form, where a negative
value -x is represented by the same bit pattern as the unsigned -x+M = M-x. The
name "two's complement" stems from M-x = ((M-1)-x) + 1, where, since M is a
power of 2, (M-1) is an all-1's bitpattern, and (M-1)-x therefore corresponds to
inverting every bit in x, called the "one's complement" (in the binary system).

Let's say you want to call the C standard library's 'islower' function to check
whether Norwegian 'æ' is a lowercase letter -- it is, but, uh ...


<code>
#include <iostream>
#include <locale.h>     // setlocale
#include <ctype.h>      // islower

int main()
{
    using namespace std;

    setlocale( LC_ALL, "" );
    cout << boolalpha << !!islower( 'æ' ) << endl;
}
</code>

<result>
false
</result>


(Note: since it's UB the result may be 'true', as one could naïvely expect, but
what matters is the /possibility/ of getting 'false'). What went wrong?

islower takes an 'int' argument that needs to be EOF or a non-negative character
code, while in practice 'char' is a signed type, and Norwegian 'æ', a character
outside the positive 'char' range, is therefore necessarily a negative value,
which is not supported by 'islower'.

Assuming two's complement form you preserve the bitpattern by casting to
unsigned type. This is the same as adding (the type-dependent) M, which in the
case above is the same as adding UCHAR_MAX+1. But the cast is more idiomatic:


<code>
#include <iostream>
#include <locale.h>     // setlocale
#include <ctype.h>      // islower

bool isLower( char const c )
{
    typedef unsigned char UChar;
    return !!islower( UChar( c ) );
}

int main()
{
    using namespace std;

    setlocale( LC_ALL, "" );
    cout << boolalpha << !!isLower( 'æ' ) << endl;
}
</code>

<result>
true
</result>


The first version of the program is a very common novice error, failing to
understand the issues of unsigned representation and unsigned arithmetic: for
the default signed 'char' type the value of 'æ' has wrapped around and is
therefore outside the required range for 'islower'.

Even professional programmers tend to make such mistakes, for as Gosling
observed "almost no C developers actually understand what goes on with
unsigned", and that applies also to C++ -- it's the same.

This doesn't mean that using 'unsigned char' is a good solution. It means that
mixing unsigned and signed, as the 'islower' standard lib function does, is a
recipe for disaster. It's just too darned easy to get the /usage/ wrong.

Here's an expression example:

a < b + n

With signed integer types and common not-overly-large values for a and b, this
can also be expressed as

a - n < b

However, if the type involved is an unsigned one, then instead of a - n possibly
producing a negative value it will in that case wrap around, with the result
that when the former expression is 'true', the latter expression is 'false'...

I.e., with unsigned arithmetic the usual math rules don't apply for ordinary
not-overly-large values -- which are the values most often occurring.

Most programmers are aware of this when writing e.g. a loop that counts down,
but keeping it in mind for expressions like the above is much, much harder. IIRC
one almost grotesque example in recent years was when someone noticed that
Microsoft's code for rendering a picture had such a bug, which with a suitably
crafted picture made it possible to place arbitrary bytes in memory. This then
in turn allowed malware infection via Internet Explorer by simply presenting a
picture on your web site; uh oh, infected by a JPEG, oh dear.


-- Implicit promotion.

As a concrete example of implicit promotion, consider


<code>
#include <iostream>
#include <string>

std::string rightAdjusted( int n, std::string const& s )
{
    if( s.length() >= n )
    {
        return s;
    }
    else
    {
        return std::string( n - s.length(), ' ' ) + s;
    }
}

int main()
{
    using namespace std;
    for( int x = -5; x <= 5; ++x )
    {
        cout << rightAdjusted( x*x - 4, "o" ) << endl;
    }
}
</code>

<result>
o
o
o
o

This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
</result>


Evidently there's something wrong. And e.g. the g++ compiler warns about that,
"warning: comparison between signed and unsigned integer expressions".

In the comparison

s.length() >= n

the result of the 'length' method is unsigned, while 'n' is signed. The unsigned
type is at least as large as the signed type. And so the signed value, 'n', is
promoted to unsigned, which when it's negative effectively means adding M.

Which is very large...

And so the comparison produces 'false', and the second return path is taken,
evaluating

std::string( n - s.length(), ' ' )

where again there is a promotion to unsigned type (in the first argument),
yielding a very very large value when n is negative. Hence the crash.

I.e. this happens not only with comparisons but also in simple arithmetic
expressions, where the compiler will usually /not/ warn you, as g++ didn't for
the above expression. It is a real pitfall, a very common error. But it's very
simple to avoid, like just putting on a condom: simply don't introduce unsigned
values in the first place, e.g., define & use a signed type 'size' function.

Alternatively it can be avoided by always keeping in mind whether some value
might be unsigned and dealing with such values by special cases, by remembering
that expressions cannot be simply rewritten using the usual math rules, and by
whatnot. This is like the method of pulling out at the last instant, hoping
that that will not only prevent a possible pregnancy but also any venereal
disease. It's much harder to do, it's more work, and it's brittle.


Cheers & hth.,

- Alf
 
James Kanze

I've got a problem in practice, and I cannot find a very elegant
solution to it.
------------------------------------------------------------------------
enum Type {
    REPLY = 100,
    REP   = 1052,
    HAHA  = 9523,
};
------------------------------------------------------------------------
When I open the interface to users for configuration, I would
like the user to use "REPLY", "REP", "HAHA" in string format
because it's much easier to recognize and remember.
But in my program, I need to use the actual code (integral format).
My solution is to define a map<string, int>, and insert the
string and integral format pairs into the map for query. But
it's really annoying and difficult to use, especially when
initializing.

That should be map<string, Type>. But in practice, a map is
often overkill. I generally use a simple C style array of
struct { char const*; Type; }, and linear search. Which has the
advantage (not always important) that I can use it in
constructors of static objects, since the array is statically
initialized.

As for generating the array, it's fairly easy to parse the C++
code (ignoring everything but enums:)) to generate the static
tables.
 
James Kanze

* Leigh Johnston:
Translated: you don't understand the above and you're
wondering why.

Realistically, I'd just use int. An int is guaranteed to be
able to handle values up to at least 32767, and realistically, I don't
expect to see enums with more than 32000 values.

And int is the default type for integral values. Unless there
is a very strong reason for using something else, you should use
int.
The main reason is that dealing with unsigned types for
anything other than bit manipulation leads to lots of
possible bugs due to implicit promotion. For example, if v is
a std::vector, then v.size() > -1 yields false. However, with
the definition above, Size( v.size() ) > -1 yields true
(correct).

More generally, unsigned types, in C++, are not cardinal
numbers, but have very special (and somewhat curious) semantics.
You use them in two cases: when you're doing bit manipulations,
and unsigned char for raw memory. (Plus the very rare case
where you actually want the special semantics---calculating hash
codes is the only case which comes to mind.)
Even in the cases where you know that the code is safe, such
as the comparison in the loop below, the compiler doesn't
know and may issue some diagnostic. You don't want that
diagnostic. At least if you take pride in your work.
for( int i = 0; i < size( values ); ++i )
{
    if( values[i].first == name ) { return values[i].second; }
}
throw std::runtime_error( "someEnumFrom: no such name" );
}

This is fine if you only have a few items; however, it doesn't
scale (linear complexity), so for the more general case a map
might be superior.


It's always a tradeoff. In practice, linear search will beat
most of the alternatives up to 10 or 15 elements, and isn't too
far behind up to 25 or 30. If most of your enums contain more
than 30 values, it's worth considering alternatives, but in
practice (again), I've found linear search to be the best
variant, in a number of different applications.
 
James Kanze


[...]
Eschewing a language feature because you can create bugs with
it is a nonsense as you can create bugs with any language
feature. Bugs are inevitable. Bug fixes are inevitable.

But Alf wasn't recommending that. He was eschewing a language
feature because it didn't do what he wanted. In C++ (inherited
from C), "unsigned" is broken, if you interpret it as a cardinal
type. Unsigned types in C++ have somewhat special semantics,
and good programmers avoid them unless these special semantics
are needed.

[...]
Use a signed integral type when negative values make sense.

Use int for everything, unless there is some overriding reason
to do otherwise. (IMHO, Alf was wrong to use ptrdiff_t---he
should have simply used int.) And negative values have nothing
to do with the issue: C++ doesn't have ranged types, and
unsigned integral types have special semantics.
Use an unsigned integral type when negative values do not make
sense.

But are there cases where they don't, given C++'s rules for type
promotion? In other words, are there cases where -1 > 0 makes
sense? (Or in other words, never use C++'s unsigned types where
differences or comparisons for inequality might make sense.)
 
James Kanze

Consider this:
1) std::vector<T>::operator[](size_type) is widely used (in practice not
theory)
2) std::vector<T>::size_type is a typedef of std::allocator<T>::size_type
3) std::allocator<T>::size_type is a typedef of std::size_t
4) std::size_t is an unsigned integral type

unsigned integral types are widely used in practice (not theory).

Ergo: the person who designed the STL didn't really understand
unsigned types in C++. (Regretfully true.)

In general: mixing signed and unsigned should be avoided (since
it's broken in C++). So having to compare with e.g.
I don't attach much value to the debates that occur in this
newsgroup primarily because it is not moderated and the
debates themselves are more often than not simply pissing
contests. :)

You'll find the same arguments (and the same conclusions) in the
moderated groups. Or elsewhere. (Stroustrup uses plain int
almost everywhere; he was, I believe, strongly influenced by
Kernighan and Richie in this, having worked under Kernighan for
many years.)
 
James Kanze

Sorry James but you are spouting garbage again. One word: size_t.

So who uses size_t on a modern machine?

Back when C was being standardized, 16 bit machines were still
legion, and the extra bit was necessary. Today, realistically,
size_t is an anachronism, and not really necessary. (It's worth
noting that the STL was originally developed using Borland C++,
on a 16 bit machine. Which possibly accounts for its use of
unsigned size_type as well.)

Even today, size_t is designed to support the very lowest level
of programming, and there are contexts (e.g. writing a garbage
collector) where you really do need that extra bit. (A garbage
collector on a 32 bit machine has to handle memory blocks which
are larger than 2GB.) But you don't normally use it in
application level code, except to avoid signed/unsigned
comparison warnings when using the STL.
 
James Kanze

* Vladimir Jovic:

[...]
At a guess you'd need at least some thousand enum values in
order for a std::map to perform better. In that case you'd use
some system instead of mapping strings to individual values
(and you'd not use an enum). And even with the horror of
thousands of non-systematic enum values a std::map would be a
premature optimization.

Anything over a hundred, and std::map will begin to make a
significant difference. But... how many enums contain more
than 100 constants? And in how many programs will this
difference be significant?

I've written a program which generates such mappings
automatically. I've used it in many applications. Globally,
linear search outperforms std::map here, because most enums
have relatively few members (and linear search has less fixed
overhead). And done correctly, linear search allows static
initialization, which in turn means that the mapping can safely
be used in the constructors of static objects.
 
James Kanze

Yes, code should reflect intent, and my use of unsigned integral
types reflects the fact that I am using values that only make
sense when positive.

That's true for people who don't know the language. In fact,
the use of unsigned integral types says more than that: it says
that modulo arithmetic is desired, and that differences of the
values don't make sense.
Alf's assertion that unsigned types indicate "modular
arithmetic" is a nonsense

Nonsense or not, it's what the standard says.
as sizeof(T) does not indicate "modular arithmetic"

It indicates that, on certain, very old machines, you need that
extra bit.
and the type of the sizeof operator is std::size_t which is an
unsigned integral type.

Or that the STL was developed on 16 bit machines, where that
extra bit might have been relevant.
std::size_t is used extensively in the real world so unsigned
types are also used extensively in the real world and not just
in "modular arithmetic" contexts.

Once code has been tainted by an unsigned type, you more or less
have to go with the flow.
std::allocator<T>::size_type is a typedef of std::size_t which
is an unsigned integral type.
As I mentioned elsewhere in this thread the following is perfectly fine:
typedef std::vector<int> items;
...
items theItems;
...
for (items::size_type i = 0; i != theItems.size(); ++i)
{
    /* do stuff with theItems */
}

making "i" above an "int" or even more perversely a ptrdiff_t
would be just plain wrong.

Agreed. Since theItems.size() has tainted the code with
unsignedness, you more or less have to follow suit. That
doesn't mean that it's a good choice in general.
 
J

unsigned types in C++ /ARE/ modular arithmetic. This is
defined by the standard. As to whether they "indicate", well
who knows? Indicate to whom? In what context?

FWIW, and since no one seems to have mentioned it: the real
problem is that arithmetic on unsigned integral types is
modular, and so doesn't follow the rules of natural arithmetic.
If unsigned types are to be used as an abstraction for cardinal
numbers, then either subtraction is forbidden, or the result
of subtraction is a signed type. Neither is the case for
C++'s unsigned integral types.

I wouldn't limit their use to cases where modular arithmetic is
required---I find them quite useful in cases where *no*
arithmetic makes sense. Regretfully, the standard does use
unsigned types for sizes of objects, but this seems to be a
mistake, since it does make sense to ask the difference between
two sizes (e.g. abs(sizeof(a)-sizeof(b))).
"indicate" aside, it is fact that they are modular arithmetic
and as such "positive" and "negative" lose distinction because
of modular congruence. And that when combined with C++ implicit
conversion rules is the heart of the problem.

Along with the fact that subtraction doesn't give a "natural"
result.

[...]
What exactly do you think this "extensive use" demonstrates?
Did you miss Alf's claim that the "consensus" now is that the
standard got this wrong?

The issue concerning sizeof is somewhat complex. If you're
writing a garbage collector for a 32 bit machine, you need that
extra bit. But such cases are exceptional (and don't justify
the use of an unsigned size_type in the STL).
 
J

Keith H Duggar wrote:

[...]
Just a nit: unsigned types do not only support modular
arithmetic. They still have operator< and the like; and all
unsigned values are >= 0. In that very precise sense, they are
all non-negative.

But they don't support difference correctly. Unsigned or not,
4 - 6 is -2, not some very big value. Unless you're working in
the context of modular arithmetic. And one normally expects
that abs(a-b) == abs(b-a): using abs on the results of a
difference of positive values is the standard way of obtaining a
relevant positive value.
 
Kai-Uwe Bux

James said:
That's true for people who don't know the language. In fact,
the use of unsigned integral types says more than that: it says
that modulo arithmetic is desired, and that differences of the
values don't make sense.
[...]

I think, you are overstating your case: The use of unsigned types indicates
not that modulo arithmetic is desired, at least not more than the use of
signed types indicates that undefined behavior is desired.

If a and b are randomly chosen ints with uniform distribution over all
admissible values, the probability for a+b to be undefined is about 1/4 and
a*b is undefined almost always; that demonstrates that undefined behavior is
a "feature" of the signed types and not just some dark corner case. However,
it is not a feature of the _use_ of signed types. Similarly, modulo
arithmetic is a feature of unsigned types but not necessarily a feature of
their _use_.

Of course, this is _not_ to say that unsigned types are the right choice as
soon as only non-negative values arise. I just don't buy the reason from
"unsigned signals the intent of modulo arithmetic". What the use of unsigned
actually signals depends a lot on context, code base, project or the
companies coding guidelines.


Best

Kai-Uwe Bux
 
Alf P. Steinbach

* Leigh Johnston:
* James Kanze

James your outlook is very antiquated. Modern C++ software development
does (or at least should, embedded aside) involve extensive use of the
"STL"

James is a bit of an expert on the STL...

and therefore extensive use of std::allocator<T>::size_type.

... and this conclusion, if it is a conclusion, does not follow from your
premise. Perhaps it isn't a conclusion but just silly wordplay, thinking that
if size_type is involved then it's "use"d at the application level. But that's
idiotic, so I assume you didn't mean that.


Use unsigned integral types where they make sense such as for
representing a "size" which cannot be negative or an "index" into an
array which cannot be negative.

And this is your umpteenth re-assertion of that viewpoint without any argument
to back it up, and ignoring all counter-arguments.

Take a look at the new C++0x array
container, its size_type is an explicit typedef of std::size_t and it is
used for both size() and element indexing.

Yes, it must, by historical accident; this is unfortunately the convention of
the library, too late to be fixed now.

Yes you have to make sure you do not create any signed/unsigned bugs but
that is what you are being paid for.

No. You may be paid to rapidly produce software that to the casual user looks OK
but is really a mess, with both a lot of errors wrt. the spec and a spec that is
itself erroneous (especially when the customer is e.g. the Dutch army), or you
may be paid to somewhat more slowly produce correct software. But you're not paid
to knowingly use a technique that is more work in the first place and is likely
to introduce bugs, and then waste time identifying and fixing those bugs.


Cheers & hth.,

- Alf
 
Jonathan Lee

FWIW, and since no one seems to have mentioned it: the real
problem is that arithmetic on unsigned integral types is
modular,

Technically, so is arithmetic on signed types. Both are
complete residue classes.
and so doesn't follow the rules of natural arithmetic.

Ditto for signed. The boundaries of what's "natural" have
simply been moved.
If unsigned types are to be used as an abstraction for cardinal
numbers,

Er.. natural numbers.. unless you _really_ mean to express
aleph-0 in an unsigned type.

/end mathematician

My 2 cents: this is really a practice vs theory
discussion. Neither one of you is going to convince
the other.

--Jonathan
 
Kai-Uwe Bux

James said:
Keith H Duggar wrote:
[...]
"indicate" aside, it is fact that they are modular
arithmetic and as such "positive" and "negative" lose
distinction because of modular congruence.
Just a nit: unsigned types do not only support modular
arithmetic. They still have operator< and the like; and all
unsigned values are >= 0. In that very precise sense, they are
all non-negative.

But they don't support difference correctly. Unsigned or not,
4 - 6 is -2, not some very big value. Unless you're working in
the context of modular arithmetic. And one normally expects
that abs(a-b) == abs(b-a): using abs on the results of a
difference of positive values is the standard way of obtaining a
relevant positive value.

Now, that operator- is tricky for unsigned types is a completely different
statement from the one I have issues with. BTW: your use of the word "big"
indicates that, for you, the comparison operators retain their original
meaning.

As far as the rules of the language are concerned, unsigned types are Janus-
faced. On the one hand, the comparison operators indicate that they model
counting numbers. The rules for arithmetic, however, view them as congruence
classes. Which view dominates in a particular line of code depends on the
context.


Best

Kai-Uwe Bux
 
Andrew Poelstra

I wouldn't limit their use to cases where modular arithmetic is
required---I find them quite useful in cases where *no*
arithmetic makes sense. Regretfully, the standard does use
unsigned types for sizes of objects, but this seems to be a
mistake, since it does make sense to ask the difference between
two sizes (e.g. abs(sizeof(a)-sizeof(b))).

You can cast (sizeof(a) - sizeof(b)) to a signed type and you
will get a sane result, even though the actual result of the
subtraction is (SIZE_MAX + 1 + n) (mod (SIZE_MAX + 1)), with n
possibly negative.

Actually, this is not quite true, since neither the C nor C++
standards define a signed type corresponding to size_t. So in
theory if you tried casting to long, and LONG_MAX was greater
than SIZE_MAX, you would still get a giant nonsensical value.

I've never heard of such a system, but I believe the language
standards allow one to exist.

But for normal situations, with normal objects of reasonable
size, (int)(sizeof a - sizeof b) will give the right answer,
and the cast will shut up any spurious compiler warnings.
 
Alf P. Steinbach

* Jonathan Lee:
Technically, so is arithmetic on signed types. Both are
complete residue classes.

No.

For the theoretical perspective, overflow with signed type is Undefined
Behavior, not well defined.

For the practical perspective, with common, not overly large values signed
arithmetic works without problems, while unsigned works with wrap-around.

Ditto for signed. The boundaries of what's "natural" has
simply been moved.

There's a difference between a boundary 2 cm in front of your nose, which you're
likely to collide with, and one out in the Andromeda galaxy.

Er.. natural numbers.. unless you _really_ mean to express
aleph-0 in an unsigned type.

"Cardinal" is a common term for unsigned integers in programming, e.g. consider
the Modula-2 CARDINAL type (hoping I recall that correctly).

In summary, the particular mathist viewpoint expressed above brings nothing
useful, but does rather confuse things.


Cheers & hth.,

- Alf
 
Alf P. Steinbach

* Andrew Poelstra:
You can cast (sizeof(a) - sizeof(b)) to a signed type and you
will get a sane result, even though the actual result of the
subtraction is (SIZE_MAX + n) (mod SIZE_MAX), with n possibly
negative.

For the formal, only if you define "sane result" as "Undefined Behavior".

But in practice you're right.

However, it's more to write, and not the least, when using unsigned types one
must remember to do it (and many other such things, special cases everywhere).

Actually, this is not quite true, since neither the C nor C++
standards define a signed type corresponding to size_t.

ptrdiff_t must fit the bill for values that can occur as the result of
subtraction of actual sizes.


Cheers & hth.,

- Alf
 
