condition true or false? -> (-1 < sizeof("test"))


Nomen Nescio

Kaz Kylheku said:
You forgot the obvious: compile with "gcc -Wall -W".

I guess you're talking to bartc. I don't use gcc.

test.c: In function ‘main’:
test.c:7:33: warning: comparison between signed and unsigned integer expressions
test.c:9:34: warning: comparison between signed and unsigned integer expressions
test.c:11:1: warning: control reaches end of non-void function




This is not better. Splint is not a tool that you can simply download, compile
and then run in this manner. Proper use of Splint requires you to RTFM and then
fine-tune the tool to actually look for problems. Making the most of Splint
requires special annotations in the code.

Fine but I thought most of the people here are such experts in C that they
would have no problems doing that. Since they like outsmarting gcc or
whatever C compiler they use, they surely have extra time for RTFM ;-)
If you just invoke it this way, you get reams of spewage full of all kinds of
false positive identifications of situations, most of which are not
erroneous in any way.

No doubt. My point was simply that there are tools that can find stupid mistakes
and they should probably be used. C is not my thing, but if it's yours I'm
sure you know all the tricks already.
 

Terence

Unsigned integers can be simple signed integers with enough bits to never
exceed the maximum positive number desired. Philosophically there is no
difference. It's only the precision overflow that can cause a mathematical
error in simple arithmetic.

Postings like this (not the integer bit, but the surrounding miasma of
complex code) cause me to ask 'WHY'?

Fortran has become more and more complex as the committees approve new
extensions or features and then vendors comply, and then users are 'forced'
to buy a new version or pay for an update if they want continued support.
And finally coding slows down.

It really doesn't have to be this way.

You've all read my discourses on continuing to use F77. I have my reasons,
even if I do also own a Fortran 90/95 compiler.

The only negative that I have met (others might not have) with my 1983 F77 compiler
is the limitation on contiguous memory space. I don't need any of the
'advances' beyond that version.
All the file formats I could possibly use are there. I can re-express any
modern Fortran code in an equivalent F77 code except for the contiguous
memory requirements. (If I have to, I use work files).
I develop (very quickly indeed) in F77. If the client wants a native Windows
version I just recompile with the F95 one.

I don't have problems with a new setup on a newer computer (I'm now on my
gifted Mac Professional using MS Version 7; three earlier machines still work
fine). How many postings are about actually getting a compiler to work?

I have no problems compiling, linking or even running pretty much any of
my programs on any machine. How many postings are about how to do
this, or interpret the errors and warnings that a problematic
compiler-linker setup brings?

If I were to teach (again) any students in Fortran, I would still start with
F77. Only later would I point out what you get with more recent compilers;
the use of modules and intent definitions; the distinction between public
and private data that is useful for OOPs work (strange abbreviation!), the
shortening of code (and reliability) with matrix-formulated operations and
so on, but...

There are some English dialects where every third or fourth word is an
automatic obscenity (e.g. Geordie). I prefer the language versions without
the unnecessary stuff. There's more clarity.
 

glen herrmannsfeldt

In comp.lang.fortran pete said:
glen herrmannsfeldt wrote:
C89 does not specify the representation of negative integer values.

As I understand it, it requires the representation of positive
signed and unsigned having the same value to be the same.
And that unsigned types look like plain unsigned binary.
(Within the limits of the "as if" rule.)

The three I indicate are three that have that property.
Note that biased, or offset form does not work.
I suppose one could come up with a signed version that matched
unsigned for positive and was different than the three listed,
but it would have similar properties to those listed.

Specifically, it either does or does not have a negative zero.
In terms of bit operations, that can be pretty important!

-- glen
 

glen herrmannsfeldt

Note that in C this is applying the unary - operator to the
constant 1u.

In Fortran, except in rare cases (the DATA statement being one)
constants are not signed. (The implementation may apply the unary
minus operator before generating the in-memory constant.)

I would usually write ~0u instead of -1u, but the result is
the same in both cases.
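A quick check of that equivalence in C (a minimal sketch, assuming nothing
beyond <limits.h>):

#include <stdio.h>
#include <limits.h>

int
main(){
    /* -1u wraps modulo UINT_MAX+1; ~0u sets every value bit;
       both give UINT_MAX */
    printf( "-1u == %u\n", -1u );
    printf( "~0u == %u\n", ~0u );
    printf( "UINT_MAX == %u\n", UINT_MAX );
    printf( "-1u == ~0u is %d\n", -1u == ~0u );  /* prints 1 */
    return 0;
}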

In Fortran, one could write NOT(0), with the usual restriction
on the bitwise operators on values with the sign bit set.
(In addition to the fact that Fortran doesn't require a binary
representation for data.)

I don't know how ADA treats signed values, or how its bitwise
operators work. This is still posted to the ADA group, though.

-- glen
 

Nomen Nescio

glen herrmannsfeldt said:
Note that in C this is applying the unary - operator to the
constant 1u.

In Fortran, except in rare cases (the DATA statement being one)
constants are not signed. (The implementation may apply the unary
minus operator before generating the in-memory constant.)

I would usually write ~0u instead of -1u, but the result is
the same in both cases.

In Fortran, one could write NOT(0), with the usual restriction
on the bitwise operators on values with the sign bit set.
(In addition to the fact that Fortran doesn't require a binary
representation for data.)

I don't know how ADA treats signed values, or how its bitwise
operators work. This is still posted to the ADA group, though.

I'm not an expert on this but Ada is very strongly (statically) typed. There
are only two integer types in Ada to start with, signed integer and modular
integer. The compiler will flag an error if you try to compare variables of
those two types or indeed variables of any differing types. You can define
subtypes of existing types and completely new types and Ada ensures you
can't make mistakes by mixing apples and oranges. While this is a bit
confining at times (when you know mixing would be OK), it does preclude the
possibility of something like what Bartc wrote from ever happening. If you
know the conversion is ok there are ways (usually by providing an unchecked
conversion function) to assign or compare across types.
 

BartC

Nomen Nescio said:
I'm not an expert on this but Ada is very strongly (statically) typed. There
are only two integer types in Ada to start with, signed integer and modular
integer. The compiler will flag an error if you try to compare variables of
those two types or indeed variables of any differing types.

Signed and modular is a better way of explaining how C treats these types,
with a clear distinction between them.

Signed types clearly should be the primary way to represent actual integer
quantities as most of us think of them. A signed integer range can represent
any value, positive or negative (given enough bits); an unsigned range can't
represent the negative ones (with any number of bits).

That's why it's odd that whenever there is any hint of an unsigned operand,
C should switch to modular arithmetic for both.

But, given that C works at a lower level, and the choice of bit-widths is
limited, then perhaps there ought to be 3 integer types:

- Signed
- New-unsigned
- Modular

With the new-unsigned type behaving as I have suggested a few times [in the
original thread in comp.lang.c], just a subrange of a signed type (for
example, a hypothetical signed integer type that is one bit wider).

With the new-unsigned type, a negation operation must have a signed result
(because, other than -0, it will be negative). Subtraction would need to be
signed too, as results can be negative. And of course there is the likelihood
of overflow or underflow, which I will leave to other language proposals to
deal with..
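A rough sketch of the idea in today's C, using int64_t to stand in for the
hypothetical wider signed type (the helper names nu_neg/nu_sub are mine,
purely for illustration):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* "New-unsigned" operations: operands live in a subrange of a wider
   signed type, so negation and subtraction give signed results
   instead of wrapping. */
static int64_t nu_neg( uint32_t a ){ return -(int64_t)a; }
static int64_t nu_sub( uint32_t a, uint32_t b ){ return (int64_t)a - (int64_t)b; }

int
main(){
    printf( "nu_neg(1u) == %" PRId64 "\n", nu_neg( 1u ) );        /* -1 */
    printf( "nu_sub(2u,5u) == %" PRId64 "\n", nu_sub( 2u, 5u ) ); /* -3, not 4294967293 */
    return 0;
}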
 

Georg Bauhaus

I don't know how ADA treats signed values, or how its bitwise
operators work. This is still posted to the ADA group, though.

A feature of Ada (a lady's first name; some older books
have capitals in titles) is that you would not normally have
to worry; you still can if you like, or must.

First, one cannot store signed (unsigned) values in places whose
type is unsigned (signed) unless using force such as explicit
type conversion, or Unchecked_Conversion. Different types,
no implicit conversions, as mentioned in the thread.

As needed, one selects from the three kinds of types that
have logical operations, Boolean, modular, and packed array
of Boolean.

Some redundant assertions in the following.

with Interfaces; -- for unsigned modular "hardware" types
with Ada.Unchecked_Conversion;
procedure Bops is

   -- modular types with modular arithmetic and Boolean ops:

   type U8 is mod 2**8;

   pragma Assert (U8'Pred (0) = -1);
   pragma Assert (U8'Succ (U8'Last) = 0);

   X8 : U8 := 2#1111_1111#;
   --X9 : U8 := 2#1111_1111_1#; -- compiler rejects, value not in range
   X1 : U8 := -1;     -- modular arithmetic
   X0 : U8 := not 0;  -- has Boolean ops

   pragma Assert (X0 = X1);
   pragma Assert (X1 = 2#1111_1111#);

   -- convert from signed:

   type S8 is range -128 .. 127;
   for S8'Size use 8;

   function To_U8 is new Ada.Unchecked_Conversion (S8, U8);
   I8 : S8 := -1; -- a negative signed value
   --XI : U8 := U8 (I8); -- type conv will raise! value out of range
   XU : U8 := To_U8 (I8); -- OK, not checked

   pragma Assert (XU = 8#377#);

   -- Unsigned_N "hardware types" when supported by the compiler;
   -- includes shifting operations and such, is modular

   use type Interfaces.Unsigned_64; -- import "+" for literals
   U64 : Interfaces.Unsigned_64 := 16#FFFF_FFFF_FFFF_0000# + 2#101#;

   -- types for convenient access to individual bits

   type Bitvec is array (Natural range <>) of Boolean;
   pragma Pack (Bitvec); -- guaranteed
   subtype Bitvec_8 is Bitvec (0 .. 7);

   Y : Bitvec (0 .. 11) := (5 => True, others => False);

   pragma Assert (Y(5) and not Y(11));

   Z : Bitvec_8;
   Toggle : constant Bitvec_8 := (others => True);
begin
   Y (11) := True;
   Z := Y(8 - 6 + 1 .. 8) & Y(10 .. 11) xor Toggle;
   pragma Assert (Z = (True, True, False, True,
                       True, True, True, False));
end Bops;


-- Georg
 

Dmitry A. Kazakov

With the new-unsigned type behaving as I have suggested a few times [in the
original thread in comp.lang.c], just a subrange of a signed type (for
example, a hypothetical signed integer type that is one bit wider).

With the new-unsigned type, a negation operation must have a signed result
(because, other than -0, it will be negative). Subtraction would need to be
signed too, as results can be negative. And of course there is the likelihood
of overflow or underflow, which I will leave to other language proposals to
deal with..

Integer arithmetic is more than just + and -. Did you ask yourself what is
the motivation to have it one bit wider? The answer is to have + and -
closed. What about *? That would make it x2 bits wider. What about
exponentiation? Much more bits. Factorial?
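In C terms (a sketch only; the function names are mine, not anyone's
proposal):

#include <stdint.h>

/* Closing "+" over 32-bit operands needs 33 bits; closing "*" needs 64.
   The required width compounds per operation instead of stopping at
   "one bit more". */
int64_t  sum_closed( uint32_t a, uint32_t b ){ return (int64_t)a + b; }
uint64_t mul_closed( uint32_t a, uint32_t b ){ return (uint64_t)a * b; }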
 

gwowen

Note: "mathematically wrong" would require identification of specific
aspects of math that are supposed to correspond to C operators. Oddly
enough, mathematics is a sufficiently broad field of study that it does
encompass concepts that correspond very well with the behavior required
by the C standard; they're just not the concepts you think should be
relevant. The problem is not that the choices made by the C standard are
mathematically wrong, it's only that they're different from the ones you
think they should have made.

While there's a lot of truth to that, the problem is that the C
language defines operators on those abstractions that no mathematician
would.

For arithmetic (+,-,* and [maybe] /) the unsigned integers model Z_N
for some large value of N. That's all fine and dandy, but no sane
mathematician attempts to consider Z_N to be an ordered field, because
there aren't any total orderings that behave sanely w.r.t. addition or
multiplication.

So the unsigned integers aren't just "not the mathematical concept we
think", they're a solidly defined mathematical concept, with a load of
horribly inappropriate concepts bolted on, whose behaviour is far more
closely related to how silicon works than to how mathematics works.

So as soon as we use comparison operators, the "it's just a closed
group" argument falls utterly to pieces.
 

Fritz Wuehler

Thanks Terence. What you say goes against the never-ending drive by most
technology people to constantly change things that are mostly fine, and to
think new is good and old is bad. I agree with your sentiments.
I prefer the language versions without the unnecessary stuff. There's more
clarity.

Sadly, I find most of modern Fortran unreadable whereas I found FORTRAN (at
the time) perfectly readable.
 

BartC

Dmitry A. Kazakov said:
With the new-unsigned type behaving as I have suggested a few times [in the
original thread in comp.lang.c], just a subrange of a signed type (for
example, a hypothetical signed integer type that is one bit wider).

With the new-unsigned type, a negation operation must have a signed result
(because, other than -0, it will be negative). Subtraction would need to be
signed too, as results can be negative. And of course there is the likelihood
of overflow or underflow, which I will leave to other language proposals to
deal with..

Integer arithmetic is more than just + and -. Did you ask yourself what is
the motivation to have it one bit wider?

Yes. So that given 32 bits for example, we can count to approximately 4
billion instead of 2 billion.
The answer is to have + and -
closed. What about *? That would make it x2 bits wider. What about
exponentiation? Much more bits. Factorial?

We're talking about C where operations and their results usually have the
same predefined width. If it were necessary to worry about worst-cases on
every operation, that would make programming at this level pretty much
impossible.

Instead, that sort of auto-ranging, multi-precision utility is left to
higher-level languages, and languages such as C are used to implement them.

For that purpose, full-width unsigned arithmetic, preferably with carry
status, as available in most processors, is extremely useful.
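For instance, a minimal sketch of recovering the carry in portable C
(add_carry is an illustrative name, not a library function); multi-precision
arithmetic is built from exactly this:

#include <stdio.h>
#include <stdint.h>

/* Full-width unsigned add: the carry the hardware flag would report
   can be recovered from the wraparound itself. */
static uint32_t
add_carry( uint32_t a, uint32_t b, unsigned *carry ){
    uint32_t sum = a + b;   /* wraps modulo 2^32 */
    *carry = sum < a;       /* wrapped iff the sum is below an operand */
    return sum;
}

int
main(){
    unsigned carry;
    uint32_t sum = add_carry( 0xFFFFFFFFu, 2u, &carry );
    printf( "sum == %u, carry == %u\n", (unsigned)sum, carry );  /* sum == 1, carry == 1 */
    return 0;
}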
 

Dmitry A. Kazakov

Dmitry A. Kazakov said:
With the new-unsigned type behaving as I have suggested a few times [in the
original thread in comp.lang.c], just a subrange of a signed type (for
example, a hypothetical signed integer type that is one bit wider).

With the new-unsigned type, a negation operation must have a signed result
(because, other than -0, it will be negative). Subtraction would need to be
signed too, as results can be negative. And of course there is the likelihood
of overflow or underflow, which I will leave to other language proposals to
deal with..

Integer arithmetic is more than just + and -. Did you ask yourself what is
the motivation to have it one bit wider?

Yes. So that given 32 bits for example, we can count to approximately 4
billion instead of 2 billion.
The answer is to have + and -
closed. What about *? That would make it x2 bits wider. What about
exponentiation? Much more bits. Factorial?

We're talking about C where operations and their results usually have the
same predefined width.

Irrelevant. The concept either works or does not. In this case it does not.
 

BartC

Dmitry A. Kazakov said:
Irrelevant. The concept either works or does not. In this case it does not.

That's fine. But in that case, every calculator or computer ever made is
useless, because you can always think up some calculation just beyond its
capacity.
 

Martin Shobe

gwowen said:
Note: "mathematically wrong" would require identification of specific
aspects of math that are supposed to correspond to C operators. Oddly
enough, mathematics is a sufficiently broad field of study that it does
encompass concepts that correspond very well with the behavior required
by the C standard; they're just not the concepts you think should be
relevant. The problem is not that the choices made by the C standard are
mathematically wrong, it's only that they're different from the ones you
think they should have made.

While there's a lot of truth to that, the problem is that the C
language defines operators on those abstractions that no mathematician
would.

For arithmetic (+,-,* and [maybe] /) the unsigned integers model Z_N
for some large value of N. That's all fine and dandy, but no sane
mathematician attempts to consider Z_N to be an ordered field, because
there aren't any total orderings that behave sanely w.r.t. addition or
multiplication.

That, and for the Ns in question (powers of 2), Z_N isn't a field.
So the unsigned integers aren't just "not the mathematical concept we
think", they're a solidly defined mathematical concept, with a load of
horribly inappropriate concepts bolted on, whose behaviour is far more
closely related to how silicon works than to how mathematics works.

Except that there is a mathematical object that corresponds exactly to
the defined behaviour (including <), namely Z_N with the order induced
by the natural order on the representatives 0 .. N-1. Yes, it's not as
"nice" as an ordered field, but it's still there.
So as soon as we use comparison operators, the "it's just a closed
group" argument falls utterly to pieces.

Since the definition of group doesn't restrict the ordering in any way,
nothing at all happens to the "it's just a closed group" argument.

Martin Shobe
 

Fritz Wuehler

BartC said:
Signed and modular is a better way of explaining how C treats these types,
with a clear distinction between them.

That's only partially true from an Ada perspective. For one thing, Ada
allows you to specify the modulus for modular types (and a range for integer
types as well) so you (and the compiler) actually know what modulus you are
using. In C, converting a negative or out-of-range value to an unsigned int
is well-defined (reduction modulo UINT_MAX + 1), but the modulus itself
depends on the C implementation and execution platform. In Ada, the result
is consistent across all implementations and platforms.
Signed types clearly should be the primary way to represent actual integer
quantities as most of us think of them. A signed integer range can represent
any value, positive or negative (given enough bits); an unsigned range can't
represent the negative ones (with any number of bits).

I don't agree with this because in many practical cases unsigned integers
are more meaningful (counters, for example) and also extend the useful range
of machines with small word sizes. It really depends on your application.
That's why it's odd that whenever there is any hint of an unsigned operand,
C should switch to modular arithmetic for both.

I agree. It's the opposite of what you would expect. It seems to me unsigned
integers should probably be promoted to signed integers to be the most useful,
but of course you will also need some way to handle overflows.
 

Dmitry A. Kazakov

That's fine. But in that case, every calculator or computer ever made is
useless, because you can always think up some calculation just beyond its
capacity.

But calculators do not work with non-modular unsigned integers.

The idea of having a constrained subtype of a constrained or unconstrained
integer type is all OK. This is how such types are defined in Ada:

subtype Natural is Integer range 0 .. Integer'Last;

What is wrong is an attempt to define a new type and pretend it is
unconstrained because of a few extra bits.
 

Tim Rentsch

BartC said:
I didn't see anything obvious amongst all the options. And I made sure
it was the latest version.

I just downloaded a Digital Mars C compiler, and compiled
this program (held in file n.c):

#include <stdio.h>

int
main(){
    printf( "Hello, world!\n" );
    printf( "-1 < sizeof 1 == %d\n", -1 < sizeof 1 );
    printf( "-1 < 0l + sizeof 1 == %d\n", -1 < 0l + sizeof 1 );
    printf( "-1 < 0ll + sizeof 1 == %d\n", -1 < 0ll + sizeof 1 );
    return 0;
}

which, when run, produced

Hello, world!
-1 < sizeof 1 == 0
-1 < 0l + sizeof 1 == 0
-1 < 0ll + sizeof 1 == 1

for its output. This was dmc852, IIRC.
 

BartC

Tim Rentsch said:
I just downloaded a Digital Mars C compiler, and compiled
this program (held in file n.c):

#include <stdio.h>

int
main(){
    printf( "Hello, world!\n" );
    printf( "-1 < sizeof 1 == %d\n", -1 < sizeof 1 );
    printf( "-1 < 0l + sizeof 1 == %d\n", -1 < 0l + sizeof 1 );
    printf( "-1 < 0ll + sizeof 1 == %d\n", -1 < 0ll + sizeof 1 );
    return 0;
}

which, when run, produced

Hello, world!
-1 < sizeof 1 == 0
-1 < 0l + sizeof 1 == 0
-1 < 0ll + sizeof 1 == 1

for its output. This was dmc852, IIRC.

Mine said 8.42n. But if you used the -v2 verbose switch, it said 8.52.5n.

Your code also gave 0,0,1 on my DMC, and on two other C compilers. But 0,0,0
on lcc-win32.

I extended the program to do sizeof on strings again (and added some
parentheses):

printf( "-1 < sizeof 1 == %d\n", -1 < sizeof 1 );
printf( "-1 < (0l + sizeof 1) == %d\n", -1 < (0l + sizeof 1) );
printf( "-1 < (0ll + sizeof 1) == %d\n", -1 < (0ll + sizeof 1) );
printf( "-1 < sizeof \"\" == %d\n", -1 < sizeof "" );
printf( "-1 < (0l + sizeof \"\") == %d\n", -1 < (0l + sizeof "") );
printf( "-1 < (0ll + sizeof \"\") == %d\n", -1 < (0ll + sizeof "") );

This gave these results:

lccwin32: 0,0,0, 0,0,0
DMC: 0,0,1, 1,1,1
gcc: 0,0,1, 0,0,1
PellesC: 0,0,1, 0,0,1

So definitely something funny with DMC when using sizeof on string literals,
up to 32-bits anyway.

One thing I found puzzling was why the long-long version compared
differently. It seemed to give the wrong result (for what C purports to do)
on every compiler except lcc-win32.

But on the whole, I'm more confused now than before..
 

James Kuyper

On 05/22/2012 05:49 PM, BartC wrote:
....
I extended the program to do sizeof on strings again (and added some
parentheses):

printf( "-1 < sizeof 1 == %d\n", -1 < sizeof 1 );
printf( "-1 < (0l + sizeof 1) == %d\n", -1 < (0l + sizeof 1) );
printf( "-1 < (0ll + sizeof 1) == %d\n", -1 < (0ll + sizeof 1) );
printf( "-1 < sizeof \"\" == %d\n", -1 < sizeof "" );
printf( "-1 < (0l + sizeof \"\") == %d\n", -1 < (0l + sizeof "") );
printf( "-1 < (0ll + sizeof \"\") == %d\n", -1 < (0ll + sizeof "") );

This gave these results:

lccwin32: 0,0,0, 0,0,0
DMC: 0,0,1, 1,1,1
gcc: 0,0,1, 0,0,1
PellesC: 0,0,1, 0,0,1

So definitely something funny with DMC when using sizeof on string literals,
up to 32-bits anyway.

One thing I found puzzling was why the long-long version compared
differently. It seemed to give the wrong result (for what C purports to do)
on every compiler except lcc-win32.

Check the values of SIZE_MAX and LLONG_MAX. If SIZE_MAX < LLONG_MAX, the
usual arithmetic conversions result in the sizeof expression and -1 both
being converted to long long, so a result of 1 is to be expected.
Perhaps lccwin32 is the only one of those compilers for which SIZE_MAX
>= LLONG_MAX?

The results you're getting with DMC are non-conforming; sizeof 1 and
sizeof "" both have the type size_t, and a non-negative value, so you
should be getting the same result with either expression, whether that
value is 0 or 1.
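One way to check that diagnosis on any given compiler (a small sketch,
assuming C99's <stdint.h> for SIZE_MAX):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int
main(){
    printf( "SIZE_MAX == %ju\n", (uintmax_t)SIZE_MAX );
    printf( "LLONG_MAX == %ju\n", (uintmax_t)LLONG_MAX );
    /* the comparison is signed iff SIZE_MAX < LLONG_MAX */
    printf( "-1 < (0ll + sizeof 1) == %d\n", -1 < (0ll + sizeof 1) );
    return 0;
}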
 

Terence

Les:
See the UK TV program "Geordie Shore". I'm a northerner myself. The post-1945
period brought back voluble armed forces personnel and I first saw these
obscenities enter the spoken language (especially in the military; I
also met this as a cadet sergeant, 1948-53). It's there; I'm ashamed of its
usage on international programs for the impression it gives.
 
