union


Bun Head

Does anyone see a problem with..

#include <stdio.h>

typedef struct {
    float x;
    float y;
    float z;
} _XYZ_;

typedef union {
    float arr[3];
    _XYZ_ xyz;
} XYZ;

int main(int argc, char** argv)
{
    XYZ vec;

    printf("union test\n");

    printf("setting array to get _XYZ_\n");
    vec.arr[0] = 1.0F;
    vec.arr[1] = 2.0F;
    vec.arr[2] = 3.0F;
    printf("vec.xyz is %f, %f, %f\n", vec.xyz.x, vec.xyz.y, vec.xyz.z);

    printf("setting _XYZ_ to get arr\n");
    vec.xyz.x = 4.0F;
    vec.xyz.y = 5.0F;
    vec.xyz.z = 6.0F;
    printf("vec.arr[] is %f, %f, %f\n",
           vec.arr[0], vec.arr[1], vec.arr[2]);

    return 0;
}

... On a side note ..

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("%15s %s\n", "TYPE", "SIZE");
    printf("%15s %s\n", "---------------", "----");
    printf("%15s: %02d %02d\n", "char", sizeof(char), sizeof(char)*8);
    printf("%15s: %02d %02d\n", "short", sizeof(short), sizeof(short)*8);
    printf("%15s: %02d %02d\n", "int", sizeof(int), sizeof(int)*8);
    printf("%15s: %02d %02d\n", "long", sizeof(long), sizeof(long)*8);
    printf("%15s: %02d %02d\n", "float", sizeof(float), sizeof(float)*8);
    printf("%15s: %02d %02d\n", "double", sizeof(double), sizeof(double)*8);
    printf("%15s: %02d %02d\n", "long int", sizeof(long int), sizeof(long int)*8);
    printf("%15s: %02d %02d\n", "long double", sizeof(long double), sizeof(long double)*8);

    return 0;
}

... produces ..

char: 01 08
short: 02 16
int: 02 16
long: 04 32
float: 04 32
double: 08 64
long int: 04 32
long double: 10 80

(This was compiled and executed on WinXP)

I thought that the "int" type was 32-bit native
and "long int" was 64-bit native. On every
variety of UNIX that I try this on, my thought
is correct. But when going to a windoze
environment I get these results. Is there a
way to detect the native type sizes that are
available?

Anyway, if I do a union on UNIX such as..

typedef union {
    int val;
    char bytes[4];
}

... it will be invalid on WinXP.. sheesh!!!
 

Arthur J. O'Dwyer

Does anyone see a problem with..

[reformatted to save space in quotation]
#include <stdio.h>
typedef struct {
    float x, y, z;
} _XYZ_;

Invasion of implementation namespace with "_X..." This is
unlikely to be a cause of silent errors, though it *is* technically
a case of undefined behavior AFAIK.
typedef union {
    float arr[3];
    _XYZ_ xyz;
} XYZ;

int main(int argc, char** argv)
{
    XYZ vec;
    vec.arr[0] = 1.0F;
    vec.arr[1] = 2.0F;
    vec.arr[2] = 3.0F;
    printf("vec.xyz is %f, %f, %f\n", vec.xyz.x, vec.xyz.y, vec.xyz.z);

Undefined behavior, because you are trying to read the
object 'vec.xyz.y', which has never been initialized. ('vec.xyz.x'
shares the memory location of 'vec.arr[0]', but the same is not
necessarily true of 'y' and 'arr[1]', or 'z' and 'arr[2]'.)
In fact, I'm not even sure if the wording of the Standard permits
you to write to 'vec.arr' and then read from 'vec.xyz' without an
intermediate write to 'vec.xyz' in *any* case... but it certainly
happens often enough that it *ought* to be allowed. [As in, I think
I would have heard by now if it weren't.]
    vec.xyz.x = 4.0F;
    vec.xyz.y = 5.0F;
    vec.xyz.z = 6.0F;
    printf("vec.arr[] is %f, %f, %f\n",
           vec.arr[0], vec.arr[1], vec.arr[2]);

Same problem here; the value of 'vec.arr[1]' is indeterminate.
.. On a side note ..

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("%15s %s\n", "TYPE", "SIZE");
    printf("%15s %s\n", "---------------", "----");
    printf("%15s: %02d %02d\n", "char", sizeof(char), sizeof(char)*8);

'%d' format specifier, size_t argument. This is a recipe for
undefined behavior, and your compiler will warn you about it if
you turn the warning level high enough (e.g. -W -Wall on gcc).
The appropriate fix is

printf("%15s: %02lu %02lu\n", "char",
       (unsigned long) sizeof (char),
       (unsigned long) (8 * sizeof (char)));

And it might interest you to learn that macros can be used in C
to ease the pain of programs like this one:

#define PRINTSIZE(t) printf("%15s: %02lu %02lu\n", #t, \
        (unsigned long) sizeof (t), \
        (unsigned long) (8 * sizeof (t)))

    PRINTSIZE(char);
    PRINTSIZE(short);
    PRINTSIZE(int);
    [...]
    PRINTSIZE(long double);

#undef PRINTSIZE
char: 01 08
short: 02 16
int: 02 16
long: 04 32
float: 04 32
double: 08 64
long int: 04 32
long double: 10 80

(This was compiled and executed on WinXP)

I thought that "int" type was 32-bit native;
and, "long int" was 64-bit native. In every
variety of UNIX that I try this on my thought
is correct.

Then you are lucky. On all the systems to which I have access,
int is long is 32 bits. You should thank your lucky stars to have
access not only to a 64-bit-'long' system, but also a 16-bit-'int'
system. That's uncommon, AFAIK.
In C, the only guarantees you have are that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

and that the width of 'char' >= 8 bits,
the widths of 'int' and 'short' >= 16 bits,
and the width of 'long' >= 32 bits.
Is there a way to detect the native type sizes that are
available?

Sure. You just did.

if (sizeof(int) == 4) {
    puts("Hey, 'int' is a 4-byte type!");
}
else if (sizeof(long) == 4) {
    puts("Hey, 'long' is a 4-byte type!");
}

et cetera. In C99, I think there are macros that can tell you
whether the implementation defines types int32_t, int64_t, etc.,
or something similar. In general, though, you don't need to
know how big 'int' is, so why bother?
Anyway, if I do a union on UNIX such as..

typedef union {
    int val;
    char bytes[4];
}

.. it will be invalid on WinXP.. sheesh!!!

Not on any major WinXP compiler of which I'm aware.
You're not still trying to run Turbo C on WinXP, are you?

-Arthur
 

Malcolm

Bun Head said:
Does anyone see a problem with..

#include <stdio.h>

typedef struct {
    float x;
    float y;
    float z;
} _XYZ_;

typedef union {
    float arr[3];
    _XYZ_ xyz;
} XYZ;

int main(int argc, char** argv)
{
    XYZ vec;

    printf("union test\n");

    printf("setting array to get _XYZ_\n");
    vec.arr[0] = 1.0F;
    vec.arr[1] = 2.0F;
    vec.arr[2] = 3.0F;
printf("vec.xyz is %f, %f, %f\n", vec.xyz.x,vec.xyz.y,vec.xyz.z);
This is illegal. You have set the union using the "arr" member, then you try
to access it using the "x", "y" and "z" members.
On most platforms this will work OK but there is no guarantee.
 

Mark McIntyre

Does anyone see a problem with..

#include <stdio.h>

typedef struct {
    float x;
    float y;
    float z;
} _XYZ_;

This name possibly invades compiler namespace. Don't use leading underscores
in type or variable names.
typedef union {
    float arr[3];
    _XYZ_ xyz;
} XYZ;

While it might seem that arr and xyz are coincident in memory, they may not
be: xyz may contain packing between the structure members.

(snip example setting a union via one member and reading it via another)

Writing to one union member and reading from another is unspecified
behaviour, i.e. it's not guaranteed to be meaningful. That said, it's a common
extension to give meaning to this behaviour.
I thought that "int" type was 32-bit native;
and, "long int" was 64-bit native. In every

The size of both is implementation-specific and need not relate to any
hardware optimum.
variety of UNIX that I try this on my thought
is correct. But, when going to a windoze
environment I get these results. Is there a
way to detect the native type sizes that are
available?

You just did.
Anyway, if I do a union on UNIX such as..

typedef union {
    int val;
    char bytes[4];
}

.. it will be invalid on WinXP.. sheesh!!!

It's valid; you're just making nonportable assumptions. Don't do that.
 

Default User

Malcolm said:
This is illegal. You have set the union using the "arr" member, then you try
to access it using the "x", "y" and "z" members.
On most platforms this will work OK but there is no guarantee.

It's implementation-defined, not illegal. The implementation could of
course define that as a runtime fault or something else dire.




Brian Rodenborn
 

E. Robert Tisdale

Bun said:
Does anyone see a problem with..

#include <stdio.h>

typedef struct XYZ_t {
    float x;
    float y;
    float z;
} XYZ_t;

typedef union XYZ {
    float arr[3];
    XYZ_t xyz;
} XYZ;

int main(int argc, char* argv[]) {

    XYZ vec;

    printf("union test\n");

    printf("setting array to get XYZ_t\n");
    vec.arr[0] = 1.0F;
    vec.arr[1] = 2.0F;
    vec.arr[2] = 3.0F;
    printf("vec.xyz is %f, %f, %f\n",
           vec.xyz.x, vec.xyz.y, vec.xyz.z);

    printf("setting XYZ_t to get arr\n");
    vec.xyz.x = 4.0F;
    vec.xyz.y = 5.0F;
    vec.xyz.z = 6.0F;
    printf("vec.arr[] is %f, %f, %f\n",
           vec.arr[0], vec.arr[1], vec.arr[2]);

    return 0;
}

Not really. It will port anywhere.
.. On a side note ..

#include <stdio.h>

int main(int argc, char** argv) {

    printf("%15s %s\n", "TYPE", "SIZE");
    printf("%15s %s\n", "---------------", "----");
    printf("%15s: %02d %02d\n", "char", sizeof(char), sizeof(char)*8);
    printf("%15s: %02d %02d\n", "short", sizeof(short), sizeof(short)*8);
    printf("%15s: %02d %02d\n", "int", sizeof(int), sizeof(int)*8);
    printf("%15s: %02d %02d\n", "long", sizeof(long), sizeof(long)*8);
    printf("%15s: %02d %02d\n", "float", sizeof(float), sizeof(float)*8);
    printf("%15s: %02d %02d\n",
           "double", sizeof(double), sizeof(double)*8);
    printf("%15s: %02d %02d\n",
           "long int", sizeof(long int), sizeof(long int)*8);
    printf("%15s: %02d %02d\n",
           "long double", sizeof(long double), sizeof(long double)*8);

    return 0;
}

.. produces ..

char: 01 08
short: 02 16
int: 02 16
long: 04 32
float: 04 32
double: 08 64
long int: 04 32
long double: 10 80

(This was compiled and executed on WinXP)

I thought that "int" type was 32-bit native;
and, "long int" was 64-bit native.
In every variety of UNIX that I try this on,
my thought is correct.
But, when going to a windoze environment,
I get these results.
Is there a way to detect the native type sizes
that are available?

Anyway, if I do a union on UNIX such as...

typedef union Some_t {
    int val;
    char bytes[4];
} Some_t;

.. it will be invalid on WinXP.. sheesh!

Try:

#include <stdint.h>
typedef union Some_t {
    int32_t val;
    char bytes[4];
} Some_t;
 

josh

Bun said:
.. On a side note ..

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("%15s %s\n", "TYPE", "SIZE");
    printf("%15s %s\n", "---------------", "----");
    printf("%15s: %02d %02d\n", "char", sizeof(char), sizeof(char)*8);
    printf("%15s: %02d %02d\n", "short", sizeof(short), sizeof(short)*8);
    printf("%15s: %02d %02d\n", "int", sizeof(int), sizeof(int)*8);
    printf("%15s: %02d %02d\n", "long", sizeof(long), sizeof(long)*8);
    printf("%15s: %02d %02d\n", "float", sizeof(float), sizeof(float)*8);
    printf("%15s: %02d %02d\n", "double", sizeof(double), sizeof(double)*8);
    printf("%15s: %02d %02d\n", "long int", sizeof(long int), sizeof(long int)*8);
    printf("%15s: %02d %02d\n", "long double", sizeof(long double), sizeof(long double)*8);

    return 0;
}

.. produces ..

char: 01 08
short: 02 16
int: 02 16
long: 04 32
float: 04 32
double: 08 64
long int: 04 32
long double: 10 80

(This was compiled and executed on WinXP)

I thought that "int" type was 32-bit native;
and, "long int" was 64-bit native. In every

On Win32, "int" and "long int" are usually both 32 bits. Try a
different compiler or different compiler flags.

Also note that a char is not guaranteed to be exactly 8 bits wide.
(although it almost certainly will be on any x86 machine)

With MSVC Toolkit 2003 on Windows 2000, I get:
TYPE SIZE
--------------- ----
char: 01 08
short: 02 16
int: 04 32
long: 04 32
float: 04 32
double: 08 64
long int: 04 32
long double: 08 64

-josh
 

J. J. Farrell

Anyway, if I do a union on UNIX such as..

typedef union {
    int val;
    char bytes[4];
}

.. it will be invalid on WinXP.. sheesh!!!

It's unlikely to be invalid on WinXP, but why the "sheesh"?
Where did that magic number '4' come from? Why did you go
to the trouble of finding it out or dreaming it up? If you
want the char array to be the same size as an int, then
the obvious thing to code is

typedef union {
    int val;
    char bytes[sizeof(int)];
} u;
 

Emlyn Peter Corrin

E. Robert Tisdale said:
(code snipped)

Not really. It will port anywhere.

Not to the DS9000 it won't.
Anyway, if I do a union on UNIX such as...

typedef union Some_t {
    int val;
    char bytes[4];
} Some_t;

.. it will be invalid on WinXP.. sheesh!

Try:

#include <stdint.h>
typedef union Some_t {
    int32_t val;
    char bytes[4];
} Some_t;

Might work on WinXP, but what about platforms where sizeof (int32_t) == 1?
 

Malcolm

E. Robert Tisdale said:
what about platforms where sizeof (int32_t) == 1?

Which platforms are those?
Some super-computers can only read memory in 32-bit bytes, which creates
problems for people writing C compilers. One solution is to mess around with
pointer internals to create the illusion of 8 bit bytes, the other is to
have 32-bit chars.
 

Emlyn Corrin

Malcolm said:
Some super-computers can only read memory in 32-bit bytes, which
creates problems for people writing C compilers. One solution is to
mess around with pointer internals to create the illusion of 8 bit
bytes, the other is to have 32-bit chars.

As well as some embedded processors.
The point is, why not use portable code when it's just as easy to write as
the non-portable version, especially when it better expresses the intent?

i.e.
typedef union Some_t {
    some_type val;
    char bytes[sizeof (some_type)];
} Some_t;
 

Dan Pop

Some super-computers can only read memory in 32-bit bytes,

Concrete examples, please.
which creates
problems for people writing C compilers. One solution is to mess around with
pointer internals to create the illusion of 8 bit bytes, the other is to
have 32-bit chars.

Concrete examples of implementations with 32-bit chars for super-computers
please.

The only thing remotely resembling this description was the original
Alpha processor, whose memory system was 8-bit based, but that couldn't
read entities smaller than 32 bits. Yet, all the implementations for
the 21064 chip used 8-bit bytes and tried to optimise character
processing as much as possible.

The 32-bit bytes must be searched at the opposite end of the spectrum:
DSP chips for embedded control applications and their freestanding
implementations. Such things are not used for character processing, so
making char a 32-bit type doesn't waste any resources.

Dan
 

CBFalconer

Emlyn said:
.... snip ...

The point is, why not use portable code when it's just as easy
to write as the non-portable version, especially when it better
expresses the intent.

i.e.
typedef union Some_t {
    some_type val;
    char bytes[sizeof (some_type)];
} Some_t;

Because it isn't portable code if used as a mechanism to access
the representation of some_type. Worse, the usual implementations
will appear to work.
 

Dave Thompson

Every(? at least most) Unix variety *now common*, probably, but
certainly there have been Unices and Unix-alikes, including the
original, that were not I32L64.
Then you are lucky. On all the systems to which I have access,
int is long is 32 bits. You should thank your lucky stars to have
access not only to a 64-bit-'long' system, but also a 16-bit-'int'
system. That's uncommon, AFAIK.
In C, the only guarantees you have are that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
Nit: you aren't actually *guaranteed* the sizes are nondecreasing;
only that sizeof (all flavors of) char is 1 and sizeof any other type
is >= 1 (and integral). You are guaranteed that the _widths_, i.e.
number of significant bits, are nondecreasing, and in practice this
means the sizes also are.
and that the width of 'char' >= 8 bits,
the widths of 'int' and 'short' >= 16 bits,
and the width of 'long' >= 32 bits.
Right. And 'long long' >= 64 bits in C99; and nondecreasing from char
to short to int to long (to long long). And the same for unsigneds.
Sure. You just did.

if (sizeof(int) == 4) {
    puts("Hey, 'int' is a 4-byte type!");
}
else if (sizeof(long) == 4) {
    puts("Hey, 'long' is a 4-byte type!");
}
You can also check the value ranges, which are clumsier but may
actually be better at determining whether a type is suitable for some
purpose.
et cetera. In C99, I think there are macros that can tell you
whether the implementation defines types int32_t, int64_t, etc.,
or something similar. In general, though, you don't need to
know how big 'int' is, so why bother?

- David.Thompson1 at worldnet.att.net
 

Alan Balmer

I thought that "int" type was 32-bit native;
and, "long int" was 64-bit native. In every
variety of UNIX that I try this on my thought
is correct.

Apparently you haven't tried very many.
 

Dan Pop

Alan Balmer said:
Apparently you haven't tried very many.

All the Unices with native 64-bit support (which is not the same as all
the Unices running on 64-bit hardware) I'm familiar with behave as
described by Bun Head. For a Unix with no native 64-bit support, there
is no such thing as "64-bit native long int" in the first place.

Dan
 
