Why is 'float' called 'float', not 'real'?


DirtyHarry

Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?
 

ma740988

Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

What would you call 'double'? real64?

I'm certainly not well versed in the history of the languages (I
suspect this dates back to C); nonetheless, Fortran is the only
language I've encountered that used REAL. Granted, machine numbers,
rational, irrational, etc., are only approximations of real
arithmetic, and I suspect machine precision limitations play an
important role. Beyond that I'm unsure what the impetus is.
 

Gianni Mariani

DirtyHarry said:
Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

float stands for "floating point" as opposed to "fixed point". It is a
more accurate description than "real" since not all real numbers are
representable by a floating point number.
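
To make the fixed-point versus floating-point contrast concrete, here
is a minimal C++ sketch (assuming a typical IEEE-754 32-bit float; the
exact digits printed may vary by platform):

    #include <cstdio>

    int main() {
        // Fixed point: the (implied) decimal point never moves. Money
        // stored as integer cents is the classic example: 1.23 -> 123.
        long cents = 123;
        std::printf("fixed point: %ld.%02ld\n", cents / 100, cents % 100);

        // Floating point: the binary point "floats" with the exponent,
        // so the same 32 bits span wildly different magnitudes.
        float tiny = 1.5e-30f;
        float huge = 1.5e+30f;
        std::printf("floating point: %g and %g\n", tiny, huge);

        // The price: many reals, even 0.1, have no exact representation.
        std::printf("0.1 stored as float: %.10f\n", 0.1f);
        return 0;
    }

On a typical implementation the last line prints 0.1000000015, not 0.1
exactly.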
 

Jim Langston

DirtyHarry said:
Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

So if we call a float a real, does that mean we have to call an int a fake?
float is more descriptive, I believe.
 

Juha Nieminen

DirtyHarry said:
Is there any particular reason to call it 'float' instead of 'real'?

Because a 'float' is not a real number, but a floating point number.
There's a big difference.
 

SasQ

On Fri, 23 Mar 2007 21:49:04 -0700, ma740988 wrote:
Fortran is the only language I've encountered that used REAL.

Pascal used it too ;)
And it's probably the language the OP used before, which would explain
his confusion upon encountering 'float' ;J
 

Rolf Magnus

Jim said:
So if we call a float a real, does that mean we have to call an int a
fake?

I guess the OP is talking about the mathematical term as in "real number",
not the opposite of "fake".
float is more descriptive, I believe.

Agreed.
 

Rolf Magnus

DirtyHarry said:
Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

Counter question: Is there any particular reason to call it 'real' instead
of 'float'?
 

Pete Becker

DirtyHarry said:
Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

float is short for "floating-point". double is short for
"double-precision floating-point".
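
A small sketch of what "double precision" buys you; the printed values
assume the common IEEE-754 layout of a 4-byte float and an 8-byte
double:

    #include <iostream>
    #include <limits>

    int main() {
        // float: typically 4 bytes, 24-bit significand, ~6 decimal digits
        std::cout << "float : " << sizeof(float) << " bytes, "
                  << std::numeric_limits<float>::digits << "-bit significand, ~"
                  << std::numeric_limits<float>::digits10 << " decimal digits\n";

        // double: typically 8 bytes, 53-bit significand, ~15 decimal digits
        std::cout << "double: " << sizeof(double) << " bytes, "
                  << std::numeric_limits<double>::digits << "-bit significand, ~"
                  << std::numeric_limits<double>::digits10 << " decimal digits\n";
        return 0;
    }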

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 

ma740988

Rolf Magnus wrote:
Counter question: Is there any particular reason to call it 'real' instead
of 'float'?

I suspect the OP's thought process reflects what I, and perhaps all of
us, was taught in a mathematical sense. In that regard, I'd talk in
terms of integers, real numbers and complex numbers. These are terms
everyone understands, including C/C++ programmers. I had a similar
question when I first encountered 'float' and 'double'. I'd like to
believe Real32 and Real64 would suffice, but considering C/C++ has
been around for some time, you chalk it up to 'it is what it is' and
move on.

In the end, the OP could always, worst case, typedef float/double to
what he/she considers a more meaningful description. That's the beauty
of the language.
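
For example (real32/real64 are hypothetical names in the spirit of the
poster's Real32/Real64 suggestion, and the sketch assumes 'float' and
'double' are the usual 32- and 64-bit types; only the spelling changes,
not what the types can represent):

    #include <cstdio>

    // Hypothetical aliases; nothing about the underlying types changes.
    typedef float  real32;   // assumes 'float'  is 32-bit on this platform
    typedef double real64;   // assumes 'double' is 64-bit on this platform

    int main() {
        real32 radius = 2.5f;
        real64 area = 3.141592653589793 * radius * radius;
        std::printf("area = %f\n", area);
        return 0;
    }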
 

Giff

Gianni Mariani wrote:

It is a
more accurate description than "real" since not all real numbers are
representable by a floating point number.

neither are all the integers representable by int...
 

Victor Bazarov

Giff said:
Gianni Mariani wrote:
It is a more accurate description than "real" since not all real
numbers are representable by a floating point number.

neither are all the integers representable by int...

All integers from INT_MIN to INT_MAX are. For 'float' that is not so.
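
A quick illustration of this point, assuming a 32-bit int and an
IEEE-754 float with a 24-bit significand:

    #include <cstdio>

    int main() {
        // Every int in [INT_MIN, INT_MAX] is stored exactly.
        int exact = 16777217;          // 2^24 + 1, fits easily in 32 bits

        // float has only a 24-bit significand, so the same value
        // silently rounds to the nearest representable number.
        float rounded = 16777217.0f;   // stored as 16777216.0f
        std::printf("int  : %d\nfloat: %.1f\n", exact, rounded);
        return 0;
    }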

V
 

Greg Herlihy

ma740988 wrote:
I suspect the OP's thought process reflects what I, and perhaps all of
us, was taught in a mathematical sense. In that regard, I'd talk in
terms of integers, real numbers and complex numbers. These are terms
everyone understands, including C/C++ programmers. I had a similar
question when I first encountered 'float' and 'double'. I'd like to
believe Real32 and Real64 would suffice, but considering C/C++ has
been around for some time, you chalk it up to 'it is what it is' and
move on.

A "real" type would have to be able to represent irrational numbers -
which floating point numbers cannot represent, since all floating
point numbers (except for infinity) are rational.
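
A minimal sketch of this point, assuming a typical IEEE-754 double:
the value sqrt() returns is some exact fraction m/2^k, so squaring it
cannot give back exactly 2.

    #include <cstdio>
    #include <cmath>

    int main() {
        // sqrt(2) is irrational; the double we get back is a rational
        // approximation, so the residue below is tiny but nonzero.
        double r = std::sqrt(2.0);
        std::printf("r*r - 2 = %.17g\n", r * r - 2.0);
        return 0;
    }

On one common platform this prints about 4.4e-16.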

ma740988 wrote:
In the end, the OP could always, worst case, typedef float/double to
what he/she considers a more meaningful description. That's the beauty
of the language.

Defining a "real" typedef for float is not going to change the set of
numbers that a float can represent. So the typedef would not change
the fact that a float does not represent real numbers, but the "real"
name could mislead anyone who sees it into believing that it does.

Greg
 

Alf P. Steinbach

* Greg Herlihy:
A "real" type would have to be able to represent irrational numbers -
which floating point numbers cannot represent, since all floating
point numbers (except for infinity) are rational.


Defining a "real" typedef for float is not going to change the set of
numbers that a float can represent. So the typedef would not change
the fact that a float does not represent real numbers, but the "real"
name could mislead anyone who sees it into believing that it does.

Well, 'int' is rather misleading too. Come to think of it, is there
any type name in a programming language that isn't misleading when one
thinks of it with mathematically inspired expectations?
 

osmium

"DirtyHarry" worte:
Good day everyone. This sounds like a stupid question, but I just
became curious yesterday, and I looked through several textbooks.
However, no textbook on computer languages (that I have) mentions
this. So I am asking you, gurus...

Is there any particular reason to call it 'float' instead of 'real'?

My guess is that it originated with the hardware designers. It seemed
descriptive, so others who came along just adopted it.
My theory seems to be supported by this link.

http://www.oars.utk.edu/math_archives/.http/hypermail/historia/apr99/0144.html
 
