Difference between "float" and "GLfloat"?


Manuel

Does anyone know why OpenGL applications must use GLfloat (and GLint,
etc.) instead of float, int, etc.?

thx,

Manuel
 

W Marsh

Manuel said:
Does anyone know why OpenGL applications must use GLfloat (and GLint,
etc.) instead of float, int, etc.?

thx,

Manuel

Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."
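
For what it's worth, a typical <GL/gl.h> maps the GL names onto built-in
types roughly like this (a sketch only -- the exact choices are up to each
implementation, which is the whole point):

    typedef int           GLint;      /* the GL spec gives each GL type a  */
    typedef unsigned int  GLuint;     /* minimum size, e.g. 32 bits for    */
    typedef short         GLshort;    /* GLint and 16 bits for GLshort     */
    typedef float         GLfloat;
    typedef double        GLdouble;
    typedef unsigned char GLboolean;

Your code only ever says GLint or GLfloat, so if another platform's header
picks a different built-in type underneath, nothing on your side changes.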
 

Manuel

W said:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."


Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?


Regards,

Manuel
 

W Marsh

Manuel said:
Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?


Regards,

Manuel

It may be. Even a char can have more than 8 bits.
 

Gavin Deane

Manuel said:
Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?

Yes. There is no requirement for an int to be the same size on every
platform.

sizeof returns a number of bytes. You are guaranteed that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
CHAR_BIT >= 8

where CHAR_BIT, available by including the <climits> or <limits.h>
header, is the number of bits in a char (i.e. in a byte).
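
If you want to see what your own compiler actually uses, a small program
along these lines (nothing OpenGL-specific) prints the numbers; run it on
Windows, Linux and OS X and compare:

    #include <climits>
    #include <iostream>

    int main()
    {
        std::cout << "CHAR_BIT       = " << CHAR_BIT       << '\n'
                  << "sizeof(short)  = " << sizeof(short)  << '\n'
                  << "sizeof(int)    = " << sizeof(int)    << '\n'
                  << "sizeof(long)   = " << sizeof(long)   << '\n'
                  << "sizeof(float)  = " << sizeof(float)  << '\n'
                  << "sizeof(double) = " << sizeof(double) << '\n';
        return 0;
    }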

I believe there are also some *minimum* size guarantees for the
integral types. I'm not sure what they are, or whether they are
specified as a number of bits, a number of bytes, or a range of values
that must be accommodated.

Gavin Deane
 

Ron Natalie

Manuel said:
Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?

Yes. This is why there are typedefs in the various APIs that use C
and C++ to nail this down on a particular implementation. Even in
the standard language we have things like size_t.

The later version of the C standard even includes some predefined
typedefs that pin down particular traits of the integer types, for
example exact widths.
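
Assuming your compiler ships the C99 <stdint.h> header (not part of
standard C++, but many compilers provide it), those traits are spelled
out right in the typedef names:

    #include <stdint.h>   /* C99; many C++ compilers supply it too */
    #include <stddef.h>   /* size_t */

    int32_t       exact32;  /* exactly 32 bits, where such a type exists */
    uint8_t       exact8;   /* exactly 8 bits                            */
    int_least16_t least16;  /* smallest type with at least 16 bits       */
    int_fast32_t  fast32;   /* "fastest" type with at least 32 bits      */
    size_t        n;        /* the type that sizeof yields               */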
 

Jack Klein

Yes. There is no requirement for an int to be the same size on every
platform.

sizeof returns a number of bytes. You are guaranteed that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
CHAR_BIT >= 8

where CHAR_BIT, available by including the <climits> or <limits.h>
header, is the number of bits in a char (i.e. in a byte).

I believe there are also some *minimum* size guarantees for the
integral types. I'm not sure what they are, or whether they are
specified as a number of bits, a number of bytes, or a range of values
that must be accommodated.

They are specified as a range of values, but if you look at the binary
representation of those values you can easily work out the minimum
number of bits, although the actual bit usage may be greater because
padding bits are allowed in all but the "char" types.

You can see the ranges for all the integer types, including the C
"long long" type that is not yet part of C++, here:

http://www.jk-technology.com/c/inttypes.html#limits

You can easily work out the required minimum number of bits from the
required ranges:

char types, at least 8 bits
short int, at least 16 bits
int, at least 16 bits
long, at least 32 bits
long long (in C since 1999, not yet official in C++), at least 64 bits
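
If your own code quietly assumes more than those minimums (say, a 32-bit
int), it is worth making the assumption explicit. A minimal sketch of a
compile-time check, needing nothing beyond <climits>: the array size goes
negative, and compilation fails, wherever the assumption does not hold.

    #include <climits>

    /* fails to compile on any platform where int is narrower than 32 bits */
    typedef char assert_int_has_32_bits[INT_MAX >= 2147483647L ? 1 : -1];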
 

Gavin Deane

Jack said:
You can see the ranges for all the integer types, including the C
"long long" type that is not yet part of C++, here:

http://www.jk-technology.com/c/inttypes.html#limits

You can easily work out the required minimum number of bits from the
required ranges:

char types, at least 8 bits
short int, at least 16 bits
int, at least 16 bits
long, at least 32 bits
long long (in C since 1999, not yet official in C++), at least 64 bits

Thanks Jack. A useful reference.

Gavin Deane
 

Manuel

Ron said:
Yes. This is why there are typedefs in the various APIs that use C
and C++ to nail this down on a particular implementation. Even in
the standard language we have things like size_t.

Thanks.
But I'm having some difficulty understanding the problem.

Could a loop counter go "out of range"?
Is the problem not limited to OpenGL, but common to all multi-platform
applications?
If so, how do you solve it without using the GL types?

Can you show an example where using the various "int" types can crash or
ruin an application?

Thanks!
 
