bit order in xx-endian system

  • Thread starter zhangsonglovexiaoniuniu

Kaz Kylheku

So you're saying that if I have CS5 set, the least significant bit
of an octet is sent first.

What I said is correct. In fact, the application will still write an
array of bytes holding octets to that tty file descriptor, even if CS5
is set. The least significant bit of each octet is sent first.

With CS5 set, however (and that being programmed down to the hardware
which supports such a ridiculous mode, such as the 16550 UART), the
transmission of the octet abruptly ends just at the point where the
pentet, recently evolved from a quartet, is on the verge of becoming a
sextet.

:)
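
For illustration, here is a rough sketch (POSIX termios; "/dev/ttyS0"
is just an example device path) of programming 5-bit characters on a
tty:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    struct termios tio;
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);

    if (fd < 0 || tcgetattr(fd, &tio) < 0) {
        perror("open/tcgetattr");
        return 1;
    }

    tio.c_cflag &= ~CSIZE;          /* clear the character-size field */
    tio.c_cflag |= CS5;             /* 5 data bits per character */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        perror("tcsetattr");
        return 1;
    }

    /* The application still writes whole bytes; with CS5 only the
     * low 5 bits of each byte go out, LSB first. */
    write(fd, "\x1f", 1);
    close(fd);
    return 0;
}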
 

Nick Keighley

SNA is an entire communication protocol suite, analogous to TCP/IP.
There is a data link protocol in the suite, SDLC, but that's layer 2.
It's defined in terms of octets, not bits on a wire. I can't easily
find any info on the IBM communication hardware for which SDLC was
originally targetted.


First a framing bit is sent, the start bit. Then the data bits, least
significant first. Then the parity bit, if any, followed by the stop
bit(s).

Note this only applies to asynchronous transmission. Synchronous
transmission dispenses with the start and stop bits. End of
transmission is indicated with a special bit sequence that
cannot occur in data (extra bits are "stuffed" into the data
to ensure this; see the sketch below). SDLC is a synchronous
protocol (that's the "S" in SDLC).
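
Roughly, the stuffing rule (a 0 bit inserted after five consecutive
1 bits, so the 01111110 flag can never appear inside data) looks like
this sketch; printing the bits as characters is purely for
illustration:

#include <stdio.h>
#include <stddef.h>

/* HDLC/SDLC-style bit stuffing: after five consecutive 1 bits a
 * 0 is inserted, so the 01111110 flag cannot occur in data. */
static void stuff_bits(const unsigned char *data, size_t len)
{
    int ones = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        int b;
        for (b = 0; b < 8; b++) {
            int bit = (data[i] >> b) & 1;   /* LSB first */
            putchar('0' + bit);
            if (bit && ++ones == 5) {
                putchar('0');               /* stuffed 0 bit */
                ones = 0;
            } else if (!bit) {
                ones = 0;
            }
        }
    }
    putchar('\n');
}

int main(void)
{
    const unsigned char msg[] = { 0xFF, 0x3F };  /* long runs of 1s */
    stuff_bits(msg, sizeof msg);
    return 0;
}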

SDLC is related to HDLC, which is the L2 for X.25.
 

Tor Rustad

Walter said:
Is that true for (e.g.,) SNA, that the least significant bit
is transmitted first?

SNA might run on top of SDLC, X.25, or Token Ring; these days IBM
would support more, e.g. Ethernet, FDDI, Frame Relay, etc.

I have programmed some HDLC, which SDLC was a forerunner to. SDLC and
HDLC are at OSI level 2 (link level), but I still have no idea which bit
ordering was used, since this is controlled by hardware (OSI level 1).
And how does the above transmission
scheme deal with cases where the serial port's character
size is not exactly an octet wide, such as if the termios
c_cflag CSIZE field is set to CS5, CS6, or CS7?

AFAIK, SDLC is a bit level protocol, so at least in this case, there
should be no worries.
If parity bits are in effect for the serial port, then at which
point are they transmitted?

From the idle state of serial, first comes the start bit, followed by
data bits, then the (optional) parity bit, and finally the stop bit(s).
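
As a rough sketch (assuming 8 data bits, even parity, and one stop
bit), the on-the-wire bit order of one frame can be printed like this:

#include <stdio.h>

/* Print the bit order of one async frame for byte c: start bit (0),
 * eight data bits LSB first, even parity bit, one stop bit (1).
 * The 8-E-1 format is assumed for illustration. */
static void show_frame(unsigned char c)
{
    int b, parity = 0;

    putchar('0');                   /* start bit */
    for (b = 0; b < 8; b++) {
        int bit = (c >> b) & 1;     /* least significant bit first */
        parity ^= bit;
        putchar('0' + bit);
    }
    putchar('0' + parity);          /* even parity bit */
    putchar('1');                   /* stop bit */
    putchar('\n');
}

int main(void)
{
    show_frame(0x41);               /* 'A' */
    return 0;
}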

BTW, for a long time after using 9-pin RS-232, I believed that all the
pins were connected to separate wires, and was quite surprised when a HW
engineer told me there were only three wires: one Tx, one Rx, and one
GND! :)
 

Nagrik


Dear Group,

Can someone explain the following syntax? What does it mean, and
why does it have to be inside a struct and not as a stand-alone
unsigned int/char?

struct {
    unsigned char a:1;
    unsigned char b:1;
    unsigned char c:1;
    unsigned char d:1;
    unsigned char e:1;
    unsigned char f:1;
    unsigned char g:1;
    unsigned char h:1;
} a;

Thanks.

nagrik
 

Chris Torek

Can someone explain the following syntax? What does it mean, and
why does it have to be inside a struct and not as a stand-alone
unsigned int/char?

It is tempting to ask "what does your book-on-C say", but a lot
of books-on-C are not so good. :)
struct {
    unsigned char a:1;
[snippage]

This is a struct (with no tag, in this case) followed by a
"struct-declarator-list", and as such, it declares a new type.
In general, one should usually include a tag, e.g.:

struct zorg {

because without a tag, there is no way to refer back to the
same struct-type later. (There are specific exceptions to
this rule, but "in general" it is best to use tags.)

Presumably, however, the syntax you are interested in is the
part that happens after the open-brace, i.e., the

unsigned char a:1;

part. Here we need to look at the grammar specified in the
C Standard, which reads in part:

struct-declaration-list:
        struct-declaration
        struct-declaration-list struct-declaration

struct-declaration:
        specifier-qualifier-list struct-declarator-list ;

specifier-qualifier-list:
        type-specifier specifier-qualifier-list-opt
        type-qualifier specifier-qualifier-list-opt

struct-declarator-list:
        struct-declarator
        struct-declarator-list , struct-declarator

struct-declarator:
        declarator
        declarator-opt : constant-expression

In this case, the grammar production involved is the very last one
-- an optional declarator, followed by a colon (':') character,
followed by an integral constant expression. Here the declarator
-- the "a" of "unsigned char a" -- is present. The Standard then
goes on to say:

[#7] A member of a structure or union may have any object
type other than a variably modified type. In addition, a
member may be declared to consist of a specified number of
bits (including a sign bit, if any). Such a member is
called a bit-field; its width is preceded by a colon.

[#8] A bit-field shall have a type that is a qualified or
unqualified version of signed int or unsigned int. A bit-
field is interpreted as a signed or unsigned integer type
consisting of the specified number of bits.

Thus, this structure-member (the one named "a") is a "bit-field",
consisting of one (1) bit. The type -- "unsigned char" -- is not
among the permitted bit-field types, so a diagnostic is required
(though many compilers accept it as an extension).

Finally:

[#9] An implementation may allocate any addressable storage
unit large enough to hold a bit-field. If enough space
remains, a bit-field that immediately follows another bit-
field in a structure shall be packed into adjacent bits of
the same unit. If insufficient space remains, whether a
bit-field that does not fit is put into the next unit or
overlaps adjacent units is implementation-defined. The
order of allocation of bit-fields within a unit (high-order
to low-order or low-order to high-order) is implementation-
defined. The alignment of the addressable storage unit is
unspecified.

Hence, exactly which bit the member "a" represents, within whatever
"unit of storage" the implementation chooses to use, is up to the
implementation. Or, in less general but perhaps more understandable
terms, you -- the C programmer -- have said: "Hey, Mister Compiler,
pick out some glob of bytes somewhere, and then pick out one bit
within that glob of bytes, and use that bit, but don't tell me
which bit in which bytes, because I don't care!"
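
One way to see this for yourself is a small sketch like the following
(using unsigned int bit-fields, which are always permitted); which byte
and bit change when "a" is set depends entirely on the implementation:

#include <stdio.h>
#include <stddef.h>

/* Which bit within which storage unit holds "a" is
 * implementation-defined; this just makes the compiler's
 * choice visible. */
struct bits {
    unsigned int a:1;
    unsigned int b:1;
    unsigned int c:1;
};

int main(void)
{
    union {
        struct bits s;
        unsigned char raw[sizeof(struct bits)];
    } u = { { 0 } };
    size_t i;

    u.s.a = 1;
    for (i = 0; i < sizeof u.raw; i++)
        printf("%02x ", (unsigned)u.raw[i]);
    putchar('\n');
    return 0;
}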

If you *do* care which bit(s) are used in which byte(s), do not
use bitfields.
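
A minimal sketch of the usual portable alternative, explicit masks and
shifts (the FLAG_* names are made up for the example):

#include <stdio.h>

/* Portable alternative: name each bit with a mask, so the layout
 * is exactly what the code says on every implementation. */
#define FLAG_A 0x01u    /* bit 0 */
#define FLAG_B 0x02u    /* bit 1 */
#define FLAG_C 0x04u    /* bit 2 */

int main(void)
{
    unsigned char flags = 0;

    flags |= FLAG_A | FLAG_C;       /* set a and c */
    flags &= ~FLAG_C;               /* clear c */

    printf("a=%d b=%d c=%d\n",
           (flags & FLAG_A) != 0,
           (flags & FLAG_B) != 0,
           (flags & FLAG_C) != 0);
    return 0;
}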
 

David Thompson

authoritative
authenticated (in principle data should be authenticated if and only
if it is in fact authentic, but an important part of security schemes
like DNSSEC is to deal with the exception cases where this fails)

C does not define BYTE_ORDER, and I am not -aware- of any C compilers
that define it. BYTE_ORDER and BIG_ENDIAN and LITTLE_ENDIAN are very
likely tested and defined by a program outside of the program you
are showing -- perhaps by a program such as "GNU automake". You
would have to look at that outside test <snip>

And even assuming that the BYTE_ORDER setting is correct, there is no
reason to expect that bitfield allocation within a storage unit is the
same 'endianness' as bytes; nor, as already noted, that the storage
unit is one byte. (Nor that a C byte is 8 bits, but that is at least
common and can be verified using CHAR_BIT from <limits.h>.)
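
For example, a build can check that assumption at compile time; the
negative-array-size typedef trick is a pre-C11 stand-in for
_Static_assert:

#include <limits.h>

/* Fails to compile if a byte is not 8 bits: the array size is
 * negative when CHAR_BIT != 8. */
typedef char byte_is_8_bits[CHAR_BIT == 8 ? 1 : -1];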

OTOH on the wire the counts, following the general practice for
multioctet numbers in IP-stack protocols, must be big-endian. Simply
declaring a bit-field of :16, even if it does in fact allocate two
octets (which is not required), won't get them right on a little-endian
machine. In principle the ID is also big-endian, but as it's arbitrary
anyway, that may not really matter as long as you're consistent.
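
For instance, a sketch of writing and reading a 16-bit header count in
network (big-endian) order with shifts, which works regardless of host
byte order or bit-field layout (the function names are made up):

#include <stdio.h>
#include <stdint.h>

/* Serialize a 16-bit count in network (big-endian) order using
 * shifts, independent of host byte order and of how the compiler
 * lays out bit-fields. */
static void put_be16(uint8_t *buf, uint16_t v)
{
    buf[0] = (uint8_t)(v >> 8);     /* most significant octet first */
    buf[1] = (uint8_t)(v & 0xFFu);
}

static uint16_t get_be16(const uint8_t *buf)
{
    return (uint16_t)(((unsigned)buf[0] << 8) | buf[1]);
}

int main(void)
{
    uint8_t wire[2];

    put_be16(wire, 258);            /* e.g. QDCOUNT = 258 = 0x0102 */
    printf("%02x %02x -> %u\n", (unsigned)wire[0], (unsigned)wire[1],
           (unsigned)get_be16(wire));
    return 0;
}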
But my advice in a case such as this is not to start by looking
at the program that is constructing the BYTE_ORDER test. My advice
in a case like this is to refer right back to the standard that
defines DNS headers. The standards that define DNS headers are always
very particular about the exact order of bits transmitted.

Pretty much all IP-stack standards (RFCs) are particular about the
_layout_ of bits within octets, and of larger things over octets. The
(time) _order_ of transmission, and particularly of bit transmission (if
serial), is left to the lowest-level link standards, if at all.

Code such as you have quoted is attempting to match the specification
in the standard. However (probably for ease of programming), the code
restricts itself to a small number of the possible meanings of
"big endian" and "little endian", taking the most -common- cases,
and Just Not Working Right for the less common cases.

- formerly david.thompson1 || achar(64) || worldnet.att.net
 
