Mantorok Redgormor
What have some of you guys read to get a solid understanding of how
floating-point numbers are represented and handled by the processor, and
what the difference between single and double precision is?
I found this: http://docs.sun.com/source/806-3568/ncg_goldberg.html
Not sure if this is what I should be reading. Maybe a more authoritative
document exists?
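
To make the single/double question concrete, here's a minimal C sketch,
assuming the IEEE 754 formats that nearly all modern processors use: a
float has 1 sign, 8 exponent, and 23 fraction bits, while a double has
1 sign, 11 exponent, and 52 fraction bits, so the same decimal value is
rounded with very different accuracy.

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 has no exact binary representation, so each format
           stores the nearest value it can hold. */
        float  f = 0.1f;  /* 32-bit: ~7 significant decimal digits  */
        double d = 0.1;   /* 64-bit: ~16 significant decimal digits */

        printf("float : %.20f\n", f);
        printf("double: %.20f\n", d);
        return 0;
    }

Printing 20 decimal places exposes the rounding: the float diverges from
0.1 after about 7 digits, the double after about 16.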