Manuel
Anyone know why OpenGL applications must use GLfloat (and GLint,
etc.) instead of float, int, etc.?
thx,
Manuel
W said: Portability. The GL Red Book says:
"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."
Manuel said: Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int" under
Linux or OS X?
Regards,
Manuel
Yes. There is no requirement for an int to be the same size on every
platform.
sizeof returns a number of bytes. You are guaranteed that
sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
CHAR_BIT >= 8
where CHAR_BIT, available by including the <climits> or <limits.h>
header, is the number of bits in a char (i.e. in a byte).
I believe there are also some *minimum* size guarantees for the
integral types. I'm not sure what they are, or whether they are
specified as a number of bits, a number of bytes, or a range of values
that must be accommodated.
Jack said: You can see the ranges for all the integer types, including the C
"long long" type that is not yet part of C++, here:
http://www.jk-technology.com/c/inttypes.html#limits
You can easily work out the required minimum number of bits from the
required ranges:
char types, at least 8 bits
short int, at least 16 bits
int, at least 16 bits
long, at least 32 bits
long long (in C since 1999, not yet official in C++), at least 64 bits
Ron said: Yes. This is why there are typedefs in the various APIs that use C
and C++ to nail this down on a particular implementation. Even in
the standard language we have things like size_t.