GalenTX
I am looking for opinions on a possible approach to coping with
bitfields in a heterogeneous environment.
We've got a bunch of legacy code which maps to hardware using
bitfields. The code was developed for a PowerPC/VxWorks/Tornado
environment, with the bits packed in a big-endian manner: msb first. We
now have to develop an Intel/Windows/MSVC++ version (lsb assigned
first) which must maintain compatibility with the hardware interface
and use the same source for manipulating the related bitfields. So
discussion of better approaches than using bitfields is moot.
I am considering the following approach to minimize impact to the
legacy code. It uses nested macro calls to define the bitfields in
forward or reverse order according to the environment:
#ifdef BITFIELDS_LITTLE_ENDIAN
#define DEFINE_BITFIELDS_MSB_FIRST(A,B) B A
#else
#define DEFINE_BITFIELDS_MSB_FIRST(A,B) A B
#endif
struct bitfield_struct
{
DEFINE_BITFIELDS_MSB_FIRST( int bit31 : 1; ,
DEFINE_BITFIELDS_MSB_FIRST( int bit20_30 : 11; ,
DEFINE_BITFIELDS_MSB_FIRST( int bit10_19 : 10; ,
DEFINE_BITFIELDS_MSB_FIRST( int bit1_9 : 9; ,
int bit0 : 1;
))))
};
The preprocessor generates the following if BITFIELDS_LITTLE_ENDIAN is
defined:
struct bitfield_struct
{
int bit0 : 1; int bit1_9 : 9; int bit10_19 : 10; int bit20_30 : 11;
int bit31 : 1;
};
or otherwise,
struct bitfield_struct
{
int bit31 : 1; int bit20_30 : 11; int bit10_19 : 10; int bit1_9 : 9;
int bit0 : 1;
};
Of course, we will have to be careful to fill every bit position. We
will nest the calls to define exactly 32 bits' worth at a time, so the
maximum nesting of the macro calls would be 32. We will also have to
put up with multiple lines of code collapsing into a single line in
compiler error reports and during debugging.
If we are to avoid touching bitfield-processing code, the only other
option we have identified is to define the structures twice:
#ifdef BITFIELDS_LITTLE_ENDIAN
struct bitfield_struct
{
int bit0 : 1;
int bit1_9 : 9;
int bit10_19 : 10;
int bit20_30 : 11;
int bit31 : 1;
};
#else
struct bitfield_struct
{
int bit31 : 1;
int bit20_30 : 11;
int bit10_19 : 10;
int bit1_9 : 9;
int bit0 : 1;
};
#endif
I like the nested-macro approach because we only code the idea of the
hardware bit map once and inserting "DEFINE_BITFIELDS_MSB_FIRST(" in
front of the existing structures seems less error prone than trying to
manually invert the order of definition. I was surprised not to find
anything similar in previous discussions here and am wondering if it
has any drawbacks I have not considered. If it helps to focus the
discussion, we have no plans to go beyond the current 32-bit PowerPC
and Intel architectures, though we might migrate the Intel platform
from Windows/MSVC++ to Linux/gcc some day. We have 2000+ bitfield
definitions to cope with.
I look forward to your (constructive!) comments.
-Galen