Pointing to high and low bytes of something

  • Thread starter Lorenzo J. Lucchini

Lorenzo J. Lucchini

My code contains this declaration:

: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;

The colons are not part of the declaration.

Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).

If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

Anyway, it all comes down to: assume that I am willing to sacrifice
portability by forcing the maintainer to exchange the positions of the
two members of Bytes depending on the implementation; do I then have a
guarantee that Var.Bytes.Low will always evaluate to the LSB of
Var.Word, and that Var.Bytes.High will always evaluate to the MSB of
Var.Word?

If not, then I would gladly accept suggestions on how to change my
code.
Keep in mind that I need to access:
1) Var.Word (or its equivalent after the change) by address
2) Var.Bytes.Low (or its equivalent) by address
3) Var.Bytes.High (or its equivalent) by address
to the effect that this code can be modified in a straightforward way
to work as intended:

: #include <stdlib.h>
: #include <stdio.h>
:
: int main() {
:     reg Var;
:     word *VarWordP;
:     byte *VarLSBP;
:     byte *VarMSBP;
:     VarWordP = &(Var.Word);
:     VarLSBP = &(Var.Bytes.Low);
:     VarMSBP = &(Var.Bytes.High);
:     *VarWordP = 0x1234;
:     printf("%x %x %x\n", *VarWordP, *VarLSBP, *VarMSBP);
:     return 0;
: }

Assume type reg has been defined as above. I should always get
1234 34 12
as the program's output, save any changes that could be needed in the
printf() format specifiers.


by LjL
(e-mail address removed)
 

Richard Bos

: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;
Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).
If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

They can, but I've never seen one that does; probably the scarcity of
implementations that do so is part of the reason for the belief that
they can't.

Richard
 

Derk Gwen

(e-mail address removed) (Lorenzo J. Lucchini) wrote:
# My code contains this declaration:
#
# : typedef union {
# : word Word;
# : struct {
# : byte Low;
# : byte High;
# : } Bytes;
# : } reg;

# Anyway, it all comes down to: assume that I am willing to sacrifice
# portability by forcing the maintainer to exchange the positions of the

If you're willing to sacrifice portability, then try this on each machine
you're interested in, and if it works there, you're done. You can also
have a dynamic test in main(), perhaps,
{
    reg x; x.Word = 0x1234;
    if (x.Bytes.Low==0x34 && x.Bytes.High==0x12)
        puts("reg okay");
    else if (x.Bytes.Low==0x12 && x.Bytes.High==0x34)
        puts("reg byte-swabbed");
    else
        puts("reg completely confused");
}
 

Dan Pop

Lorenzo J. Lucchini said:
My code contains this declaration:

: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;

The colons are not part of the declaration.

Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).

Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.

If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.

Anyway, it all comes down to: assume that I am willing to sacrifice
portability by forcing the maintainer to exchange the positions of the
two members of Bytes depending on the implementation; do I then have a
guarantee that Var.Bytes.Low will always evaluate to the LSB of
Var.Word, and that Var.Bytes.High will always evaluate to the MSB of
Var.Word?

In practice, yes, assuming that your initial assumptions still hold.

If not, then I would gladly accept suggestions on how to change my
code.
Keep in mind that I need to access:
1) Var.Word (or its equivalent after the change) by address
2) Var.Bytes.Low (or its equivalent) by address
3) Var.Bytes.High (or its equivalent) by address
to the effect that this code can be modified in a straightforward way
to work as intended:

: #include <stdlib.h>
: #include <stdio.h>
:
: int main() {
:     reg Var;
:     word *VarWordP;
:     byte *VarLSBP;
:     byte *VarMSBP;
:     VarWordP = &(Var.Word);
:     VarLSBP = &(Var.Bytes.Low);
:     VarMSBP = &(Var.Bytes.High);
:     *VarWordP = 0x1234;
:     printf("%x %x %x\n", *VarWordP, *VarLSBP, *VarMSBP);
:     return 0;
: }

Assume type reg has been defined as above. I should always get
1234 34 12
as the program's output, save any changes that could be needed in the
printf() format specifiers.

You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.

Also note the casts in the printf call: %x expects an unsigned value and
there is no guarantee that any of the three values will get promoted to
this type (signed int is far more probable). So, you must provide the
right type explicitly (again, the code will work without the casts as well
in practice, but you have nothing to gain by not doing the right thing).

Dan
 

Lorenzo J. Lucchini

Derk Gwen said:
[using unions to extract MSB and LSB from something]

If you're willing to sacrifice portability, then try this on each machine
you're interested in, and if it works there, you're done. You can also
have a dynamic test in main(), perhaps,
{
    reg x; x.Word = 0x1234;
    if (x.Bytes.Low==0x34 && x.Bytes.High==0x12)
        puts("reg okay");
    else if (x.Bytes.Low==0x12 && x.Bytes.High==0x34)
        puts("reg byte-swabbed");
    else
        puts("reg completely confused");
}

I am interested in every machine someone could decide to compile my
code on.
By "sacrificing portability" I simply mean that this hypothetical
person should go through the hassle of checking whether his or her
machine is little-endian or big-endian, and uncomment the relevant
#define.
What I would *not* like to get on any machine is the "reg completely
confused".
See the reply I'm just about to write to Dan Pop's article for further
questions about if and when the "confused" part can occur.

by LjL
(e-mail address removed)
 

Simon Biber

Dan Pop said:
You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);

ITYM pl ph

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.

You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.
 

Lorenzo J. Lucchini

Dan Pop said:
My code contains this declaration:

[unions to extract MSB and LSB from something]

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).

Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.

What do you mean by "not the other way round"? That accessing
Var.Word after initializing Var.Bytes.High and Var.Bytes.Low could
result in *undefined* behavior? If so, then I'm already invoking nasal
demons.

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.

Couldn't an architecture require that my 'byte's be aligned on word
boundaries?
Then two of my 'byte's could take two machine words, while one of
my 'word's would take only one machine word (assuming the machine word
is 16 bits).
Am I missing something the standard requires here?

(Note that, while I'm calling my types 'byte' and 'word', they don't
have to correspond to a machine byte and a machine word; they only
need to be 8 bits and 16 bits wide respectively. For the record,
'byte' and 'word' try to mimic machine bytes and machine words of a
Z80.)

[Am I guaranteed my approach will work?]

In practice, yes, assuming that your initial assumptions still hold.

My assumptions will hold as soon as someone has changed the #define's
the way I told them to.
I have no problem with people having to change even more #define's to
tell my program which byte is in Var.Bytes.High and which is in
Var.Bytes.Low... but I *would* have a problem if someone could have a
machine where the result won't be right no matter which #define is
uncommented.

You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.

This looks like an interesting solution - to be sure I'm on the safe
side, if nothing else.
But... assuming the scenario I outlined above (machine with
word-aligned bytes and such) isn't forbidden by the standard, couldn't
it cause problems (or invoke demons) with this formulation, too?
I can see a hint that it shouldn't in the ANSI rationale, but I can't
call it more than a hint (I'm clueless, I'll admit).

Also note the casts in the printf call: %x expects an unsigned value and
there is no guarantee that any of the three values will get promoted to
this type (signed int is far more probable). So, you must provide the
right type explicitly (again, the code will work without the casts as well
in practice, but you have nothing to gain by not doing the right thing).

Thank you for pointing this out; I should remember this more often
than I currently do, since while the code might work, it's nowhere
near nice to see stuff like "fffffff3" where an "f3" was expected.

by LjL
(e-mail address removed)
 

Dan Pop

Simon Biber said:
ITYM pl ph


You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.

The initial set of assumptions excludes the possibility of padding bits:
there is no place for them in a 16-bit unsigned integer type.

Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.

Dan
 

Lorenzo J. Lucchini

Simon Biber said:
ITYM pl ph


You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.

Add to this the fact that my initial assumptions ('byte' is an 8-bit
unsigned type, 'word' a 16-bit unsigned type) may not be met by *any*
type on a specific implementation.

I was pondering the implications of this fact last night before
sleeping: it seemed to me that the only viable way to be really
portable (but my definition of being 'portable' accepts having
#define's to tweak for each implementation) was to use an 'at least
8-bit wide unsigned type' (unsigned char, namely) instead of 'byte'
and an 'at least 16-bit wide unsigned type' (unsigned short int)
instead of 'word'; a consequence of this is that I'd have to
explicitly do every arithmetic operation in modulo-256 or
modulo-65536, while this was achieved quietly with 'exactly 8-bit' and
'exactly 16-bit' types.

Do you think many implementations will optimize my %256's and %65536's
away when they're compiling for a machine with 8-bit chars and 16-bit
shorts?
The knowledge that many do would help me undertake the change much
more light-heartedly.

The problem would remain, though, of how to access the MSB and LSB of
the at-least-16-bits-wide values (which would be guaranteed by my
modulo operations never to go past 65535).
Of course I can use shifts and masks... but those don't give lvalues,
while dereferenced pointers certainly can.

It would seem that I'll have to rework the logic of my program... but
I do hope I am mistaken here.


by LjL
(e-mail address removed)
 

Dan Pop

Lorenzo J. Lucchini said:
My code contains this declaration:

[unions to extract MSB and LSB from something]

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).

Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.

What do you mean by "not the other way round"? That accessing
Var.Word after initializing Var.Bytes.High and Var.Bytes.Low could
result in *undefined* behavior? If so, then I'm already invoking nasal
demons.

Yup. It cannot happen in your particular case, because there is no place
for padding bits, but you cannot count on it in the general case.

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.

Couldn't an architecture require that my 'byte's be aligned on word
boundaries?

Nope. Not as long as your 'byte' is defined as a character type.

Then two of my 'byte's could take two machine words, while one of
my 'word's would take only one machine word (assuming the machine word
is 16 bits).
Am I missing something the standard requires here?

Yup. Character types have no alignment requirements. And your 'byte'
must be defined as a character type if it has to be an 8-bit type.
Any other standard C89 type is at least 16 bits wide.

(Note that, while I'm calling my types 'byte' and 'word', they don't
have to correspond to a machine byte and a machine word; they only
need to be 8 bits and 16 bits wide respectively. For the record,
'byte' and 'word' try to mimic machine bytes and machine words of a
Z80)

It doesn't matter. In C89, an 8-bit type is either a character type or
nothing at all.

This looks like an interesting solution - to be sure I'm on the safe
side, if nothing else.
But... assuming the scenario I outlined above (machine with
word-aligned bytes and such) isn't forbidden by the standard, couldn't

There is no such thing. Character types have no alignment restrictions.

it cause problems (or invoke demons) with this formulation, too?
I can see a hint that it shouldn't in the ANSI rationale, but I can't
call it more than a hint (I'm clueless, I'll admit).

No, aliasing an object by an array of unsigned char is guaranteed to work.

Thank you for pointing this out; I should remember this more often
than I currently do, since while the code might work, it's nowhere
near nice to see stuff like "ffffff3" where an "f3" was expected.

This is not going to happen if the types you use are unsigned, even if
the type they are promoted to is signed.

Dan
 

Eric Sosman

Dan said:
Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.

<pet-peeve>

Unless the machine is able to address objects smaller
than a byte, "the order of bits inside a byte" is undetectable
and need not even be meaningful at all.

An example, from the ever-popular DeathStation 9000.
As everyone knows, some models of the DS9000 use an eleven-
bit byte, and the hardware manual says that those eleven
bits occupy all but one of the vertices of a regular
dodecahedron (the twelfth vertex is reserved for future
expansion).

Another DS9000 model also uses eleven-bit bytes, but
arranges them differently: the ten low-order bits are
stored in five four-state "fits" with the sign bit in a
single two-state device positioned between the second
and third fits:

512, 1 : leftmost fit
256, 2 : second fit
sign : bit
128, 4 : third fit
64, 8 : fourth fit
32, 16 : rightmost fit

The challenge is to devise a C program that can
determine which DS9000 model it is running on. I do not
believe the challenge can be met, and so I assert that
"the order of bits inside a byte" is a vacuous concept
on machines that don't support sub-byte addressing.

</pet-peeve>
 

Dan Pop

Eric Sosman said:
<pet-peeve>

Unless the machine is able to address objects smaller
than a byte, "the order of bits inside a byte" is undetectable
and need not even be meaningful at all.

When you map a wider object by an array of bytes, it helps a lot if the
order of bits inside the byte is consistent with the order of bits inside
the wider object. Both hardware designers and C implementors seem to
agree on this point.

Dan
 

Eric Sosman

Dan said:
When you map a wider object by an array of bytes, it helps a lot if the
order of bits inside the byte is consistent with the order of bits inside
the wider object. Both hardware designers and C implementors seem to
agree on this point.

<topicality degree="straying">

I think you've missed the thrust of the argument, or
perhaps the argument's thrust went wide of you. I'm saying
that (1) bit order is not detectable by any C construct I
can imagine, (2) bit order is not detectable by any CPU
instruction on a machine that lacks bit-level addressing,
(3) by Occam's Razor, that which is undetectable is better
omitted from discussion.

I offered two fanciful examples of situations where bit
order could not really be said to exist at all. For a real-
life example, consider the signals that travel between a pair
of modems. Early modems used two easily-distinguished tones
to transmit individual bits: BEEP for zero and BOOP for one,
as it were. Later, it was found that higher speeds could be
obtained by associating the bits with the transitions between
the tones rather than with the tones themselves. Modern
modems go even further: they use a whole palette of tones
(BEEP, BOOP, BRAAP, BZZZ, ...) and encode a whole bunch of
bits in each transition.

The question: What is the "bit order" of the N bits
encoded by one single BZZZ-to-BEEP transition in this scheme?
Note that all N bits leave the transmitter encoded in one
single event and arrive at the receiver the same way: they
are simultaneous and indivisible -- and I say the entire
idea of "bit order" in such a situation is meaningless.

</topicality>
 

Chris Torek

... I'm saying that (1) bit order is not detectable by any C
construct I can imagine, (2) bit order is not detectable by any CPU
instruction on a machine that lacks bit-level addressing,
(3) by Occam's Razor, that which is undetectable is better
omitted from discussion.

Indeed. One can, however, bring up C's bitfield-in-structure
construct:

struct bits { int a:1, b:1, c:1 /* fill in more */ ; };

which might *seem* to expose the hardware's bit order.

In fact, it does not -- the bitfields are allocated by the
compiler, and two different C compilers on the same hardware
will sometimes use different bit orders.

Even for case (2), sometimes the CPUs themselves exhibit a split
personality. The Motorola 680x0 series did this: the single-bit
instructions that operate on D registers (e.g., BIT #3, D0) use
the opposite order from the bitfield instructions (e.g., BFEXT).

[modern modem example, snipped]
The question: What is the "bit order" of the N bits
encoded by one single BZZZ-to-BEEP transition in this scheme?
Note that all N bits leave the transmitter encoded in one
single event and arrive at the receiver the same way: they
are simultaneous and indivisible -- and I say the entire
idea of "bit order" in such a situation is meaningless.

Correct. Sequencing cannot arise unless there is a sequence. If
there is no defined time or space division -- if all is an
indistinguishable, atomic lump -- then the notion of "the part on
the left" or "the part at the front of the queue" makes no sense.

Of course, in C code, we (programmers) can take apart any byte
(unsigned char) value in any way we like, using shifts and masks:
val & 0x80, val & 0x01, val & 0x04, val & 0x02, val & 0x10, ...
defines a bit order. But *we* have defined it, and thus we control
the horizontal and the vertical. Only if you allow someone else
to define the order -- say, by taking a two-byte object (one with
sizeof obj == 2) and addressing it as two separate "unsigned char"s
-- have you given up control; only then do you need to beg the one
to whom you gave up that control: "pretty please, tell me the order
*you* used, so that I may accommodate you". As Humpty Dumpty put
it, the question is who is to be the master. :)
 

Simon Biber

Dan Pop said:
When you map a wider object by an array of bytes, it helps a
lot if the order of bits inside the byte is consistent with
the order of bits inside the wider object. Both hardware
designers and C implementors seem to agree on this point.

I think Dan gets the point while Eric does not.

Even if we assume that the type unsigned short
(a) is 16 bits
(b) is two bytes
(c) has no padding bits

If I wrote:
unsigned short x = 0x1234;
unsigned char *a = (unsigned char *)&x;

a[0] need not be either 0x12 or 0x34, and a[1] need not be
either 0x12 or 0x34. This is because the value bits can be
stored in a DIFFERENT order for unsigned short compared to
unsigned char.

The value 0x1234 could be mapped into the two bytes as:
a[0] == 0x13
a[1] == 0x24
If we then replace:
a[0] = 0x26
a[1] = 0x48
And then see that
x == 0x2468
I believe that would be a conforming implementation.
 

mike gillmore

Chris Torek said:
[Chris Torek's article, quoted in full, snipped -- see upthread]

I have used this little program for many years to discover the machine
endian-ness. Use it in good health.



#include <stdio.h>

int main(void)
{
    /*
     * This short program will simply determine if this machine is
     * a little endian or a big endian byte ordering architecture.
     *
     * If this machine is little endian, this program returns zero. If
     * this is a big endian machine, it returns non-zero.
     */

    unsigned short testValue = 0xdead;
    unsigned char *firstBytePtr = (unsigned char *)&testValue;
    int isBigEndian;

    isBigEndian = (*firstBytePtr != (unsigned char)testValue);

    printf(" %x %s %x isBigEndian = %s(%d)\n",
           *firstBytePtr, isBigEndian ? "!=" : "==", (unsigned char)testValue,
           isBigEndian ? "TRUE" : "FALSE", isBigEndian);

    return isBigEndian;
} /* main() */
 

Lorenzo J. Lucchini

Simon Biber said:
Dan Pop said:
When you map a wider object by an array of bytes, it helps a
lot if the order of bits inside the byte is consistent with
the order of bits inside the wider object. Both hardware
designers and C implementors seem to agree on this point.

I think Dan gets the point while Eric does not.

Even if we assume that the type unsigned short
(a) is 16 bits
(b) is two bytes
(c) has no padding bits

If I wrote:
unsigned short x = 0x1234;
unsigned char *a = (unsigned char *)&x;

a[0] need not be either 0x12 or 0x34, and a[1] need not be
either 0x12 or 0x34. This is because the value bits can be
stored in a DIFFERENT order for unsigned short compared to
unsigned char.

The value 0x1234 could be mapped into the two bytes as:
a[0] == 0x13
a[1] == 0x24
If we then replace:
a[0] = 0x26
a[1] = 0x48
And then see that
x == 0x2468
I believe that would be a conforming implementation.

Sure, an implementation conforming to my desire to strangle it.
But I can see your point. Do you have any suggestion on how to solve
my puzzle (with bit-masks being the only viable solution I suppose,
given the above)? My problem can be basically summarized as:
1) I am given a 'pointer' to a 'byte' or a 'word' (I know in advance
whether it'll be 'byte' or 'word', so I can branch accordingly). While
my 'pointers' are real pointers ATM, feel free to extend the meaning
of 'pointer' as "anything that uniquely identifies the object it
refers to".
2) I should be able to use the dereferenced pointer both as an
expression value and as an lvalue; I need to assign to it.
3) Whenever I assign to a dereferenced pointer, the value of the
dereferenced pointer itself changes (obviously), but at least one
other dereferenced pointer among those I can get at point 1) changes
simultaneously. Specifically, if I assign to a deref. pointer to
'word', two deref. pointers to two 'byte's will change; if I assign
to a deref. pointer to 'byte', one deref. pointer to 'word' will
change.

In real life, in case it's easier to understand, this translates to:
I am simulating a processor that has some registers called B, C, D, E,
H and L. These are 8-bit. The processor, however, can also treat them
as the 16-bit pairs BC, DE and HL.
Given a simulated machine instruction (which by definition tells me
whether it wants to access a register or a register pair), I can call
a function(said instruction) that returns me a pointer to the operand
- that is, depending on the instruction, a pointer to an 8-bit or to a
16-bit value.
I then use the dereferenced pointer how I see fit.

With a solution that has (B, C, D, E, H, L) and (BC, DE, HL) as
separate variables, when one group gets modified, the other group does
not need to be synchronized immediately; it can wait until the next
loop iteration, if that helps.

Of course, I do have (more than) a solution: for example, I could
simply use an 'assign' function, instead of the normal C assignment
operator, that takes care of synchronizing the values.
But it's not a solution I like too much, and I was hoping someone here
could find a more elegant one (or, dare I say, a 'more efficient' one,
with the word 'efficiency' being defined vaguely enough - say, as few
dumb synchronize-thee function calls as possible).


On a side note, anyone who has the temper to tell me "keep using your
pointers, no implementation in the next 50 years will mess them up"?
:)


by LjL
(e-mail address removed)
 
A

Alan Balmer

Sure, an implementation conforming to my desire to strangle it.
But I can see your point. Do you have any suggestion on how to solve
my puzzle (with bit-masks being the only viable solution I suppose,
given the above)?

Sure, but it's OT here. Stop worrying about it, and include a comment
saying "If this should fail when ported to another implementation,
please call Simon."
 
D

Dan Pop

In said:
In real life, in case it's easier to understand, this translates to:
I am simulating a processor that has some registers called B, C, D, E,
H and L. These are 8-bit. The processor, however, can also treat them
as the 16-bit pairs BC, DE and HL.
Given a simulated machine instruction (which by definition tells me
whether it wants to access a register or a register pair), I can call
a function(said instruction) that returns me a pointer to the operand
- that is, depending on the instruction, a pointer to an 8-bit or to a
16-bit value.
I then use the dereferenced pointer how I see fit.

With a solution that has (B, C, D, E, H, L) and (BC, DE, HL) as
separate variables, when one group gets modified, the other group does
not need to be synchronized immediately; it can wait until the next
loop iteration, if that helps.

The "no assumptions" solution is to simply use an array of unsigned char
for storing the values of the individual registers, in the order
B, C, D, E, H, L, padding or F, A. This order is made obvious by the
Z80/8080 instruction encoding.

When you need a register pair, you compute it on the fly:

words[DE] = ((unsigned)regs[D] << 8) + regs[E];

When an instruction has modified a register pair (few Z80 and even fewer
8080 instructions can do that), you update the individual registers:

regs[D] = words[DE] >> 8;
regs[E] = words[DE] & 0xFF;

I also believe that this approach will actually simplify the overall
coding of your simulator, because it allows the register fields inside
the opcodes to be used directly as indices into the array: you never
have to figure out which variable corresponds to a value of 2 in the
register field, you simply use 2 as an index into the registers array.
Instruction decoding becomes a piece of cake this way.

The words array doesn't have to be kept in sync at all, except when
simulating a word instruction or indirect addressing via HL (and even
then, only the relevant elements have to be synchronised).

Mapping the registers by words doesn't work well for little endian
platforms, because the right way of doing it (in the framework of your
initial assumptions) would be:

unsigned short words[4];
unsigned char *regs = (unsigned char *)words;

But this would map B into the LSB of BC and C into the MSB of BC, which
is wrong. And you really want to store the registers in the order
defined above.

The union approach may look tempting, but it doesn't fit very well into
the scheme of a simple and efficient emulator. Using the right data
structure and format for the registers is essential for the rest of the
code of the simulator and I believe that my solution, apart from relying
on no assumptions, is also optimal for the rest of the program.

Dan
 
L

Lorenzo J. Lucchini

In said:
[Registers and register pairs on a Z80 and how to handle a
simulation of them in C]

The "no assumptions" solution is to simply use an array of unsigned char
for storing the values of the individual registers, in the order
B, C, D, E, H, L, padding or F, A. This order is made obvious by the
Z80/8080 instruction encoding.

When you need a register pair, you compute it on the fly:

words[DE] = ((unsigned)regs[D] << 8) + regs[E];

When an instruction has modified a register pair (few Z80 and even fewer
8080 instructions can do that), you update the individual registers:

regs[D] = words[DE] >> 8;
regs[E] = words[DE] & 0xFF;

I also believe that this approach will actually simplify the overall
coding of your simulator, because it allows the register fields inside
the opcodes to be used directly as indices into the array: you never
have to figure out which variable corresponds to a value of 2 in the
register field, you simply use 2 as an index into the registers array.
Instruction decoding becomes a piece of cake this way.

While it looks like this approach will require some fairly extensive
reworking of my code - which I had hoped to avoid - it does look
extremely interesting. I'll do it.

Clearly, I knew perfectly well that instructions have a register
field, which in turn means it's 'obvious' that there is intrinsically
a preferred order for the registers... Nevertheless, I don't think I
would ever have thought of putting them in an array. I cannot but
thank you for the suggestion.

I'll probably submit some code for review, when it's in a better
shape.

by LjL
(e-mail address removed)
 
