Adam Ierymenko
I have a question that might have been asked before, but I have not been
able to find anything via groups.google.com or any web search that is
definitive.
I am writing an evolutionary AI application in C++. I want to use 32-bit
integers, and I want the application to be able to save its state in a
portable fashion.
The obvious choice would be to use the "long" type, as it is defined to be
at least 32 bits in size. However, the definition says "at least" and I
believe that on some platforms (gcc/AMD64???) long ends up being a 64-bit
integer instead. In other words, there are cases where sizeof(long) ==
sizeof(long long) if I'm not mistaken.
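For example, a quick check like this (just a sketch) shows what I mean; on
an LP64 platform such as gcc on AMD64 I'd expect it to print 8 8 4, while
on a typical 32-bit platform it prints 4 8 4:

    #include <iostream>

    int main() {
        // long matches long long on LP64 platforms (e.g. gcc/AMD64),
        // but is only 4 bytes on typical 32-bit platforms.
        std::cout << sizeof(long) << " "
                  << sizeof(long long) << " "
                  << sizeof(int) << std::endl;
        return 0;
    }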
The problem is this: it's an evolutionary AI application, so if I use long
as the data type and run it on a platform where sizeof(long) == 8, then it
will *evolve* to take advantage of this. Then, if I save its state, this
state will either be invalid or will get truncated into 32-bit ints if I
then reload this state on a 32-bit platform where sizeof(long) == 4. I
am trying to make this program *fast*, and so I want to avoid having to
waste cycles doing meaningless "& 0xffffffff" operations all over the
place or other nasty hacks to get around this type madness.
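By "& 0xffffffff" hacks I mean something like the following (a rough
sketch, with a hypothetical wrap32 helper, of the kind of masking I'd
have to sprinkle everywhere to force 32-bit behavior when long happens
to be 64 bits):

    // Hypothetical helper -- not real code from my program. Forces a
    // value back into the low 32 bits so results match what a 32-bit
    // long would have produced (for unsigned arithmetic, anyway).
    inline unsigned long wrap32(unsigned long x) {
        return x & 0xffffffffUL;
    }

    // ...and then every add, shift, multiply, etc. needs wrapping:
    // unsigned long a = wrap32(b + c);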
The nasty solution that I am trying to avoid is a typedefs.h type file
that defines a bunch of architecture-dependent typedefs. Yuck. Did I
mention that this C++ code is going to be compiled into a library that
will then get linked with other applications? I don't want the other
applications to have to compile with -DARCH_WHATEVER to get this library
to link correctly with its typedefs.h file nastiness.
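The kind of typedefs.h I mean looks roughly like this (a hypothetical
sketch; ARCH_LP64 and ARCH_ILP32 are made-up macro names, not ones from
any real header):

    /* typedefs.h -- the architecture-dependent nastiness I want to avoid */
    #if defined(ARCH_LP64)
    typedef int           int32;   /* int is 32 bits on LP64 platforms   */
    typedef unsigned int  uint32;
    #elif defined(ARCH_ILP32)
    typedef long          int32;   /* long is 32 bits on ILP32 platforms */
    typedef unsigned long uint32;
    #else
    #error "compile with -DARCH_LP64 or -DARCH_ILP32"
    #endif

Every application linking against the library would then have to pass the
right -D flag, which is exactly what I want to avoid.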
So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?