Single precision floating point calcs?

Grant Edwards

I'm pretty sure the answer is "no", but before I give up on the
idea, I thought I'd ask...

Is there any way to do single-precision floating point
calculations in Python?

I know the various array modules generally support arrays of
single-precision floats. I suppose I could turn all my
variables into single-element arrays, but that would be way
ugly...
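
For the record, the single-element-array workaround would look something
like this with the standard-library array module (the arithmetic itself
still happens in double; only the stores round to single):

from array import array

a = array('f', [1.0])    # 'f' holds C single-precision floats
a[0] = a[0] + 1e-8       # computed in double, rounded to single on store
print(a[0])              # 1.0 -- the 1e-8 is lost in the rounding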
 
Terry Reedy

| I'm pretty sure the answer is "no", but before I give up on the
| idea, I thought I'd ask...

| Is there any way to do single-precision floating point
| calculations in Python?

Make your own Python build from altered source. And run it on an ancient
processor/OS/C compiler combination that does not automatically convert C
floats to double when doing any sort of calculation.

Standard CPython does not have C single-precision floats.

The only point I can think of for doing this with single numbers, as
opposed to arrays of millions, is to show that there is no point. Or do
you have something else in mind?

Terry Jan Reedy
 
Robert Kern

Grant said:
| I'm pretty sure the answer is "no", but before I give up on the
| idea, I thought I'd ask...
|
| Is there any way to do single-precision floating point
| calculations in Python?
|
| I know the various array modules generally support arrays of
| single-precision floats. I suppose I could turn all my
| variables into single-element arrays, but that would be way
| ugly...

We also have scalar types of varying precisions in numpy:


In [9]: from numpy import *

In [10]: float32(1.0) + float32(1e-8) == float32(1.0)
Out[10]: True

In [11]: 1.0 + 1e-8 == 1.0
Out[11]: False


If you can afford to be slow, I believe there is an ASPN Python Cookbook recipe
for simulating floating point arithmetic of any precision.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
Grant Edwards

| | I'm pretty sure the answer is "no", but before I give up on the
| | idea, I thought I'd ask...
| |
| | Is there any way to do single-precision floating point
| | calculations in Python?

| Make your own Python build from altered source. And run it on
| an ancient processor/OS/C compiler combination that does not
| automatically convert C floats to double when doing any sort
| of calculation.

It wouldn't have to be that ancient. The current version of
gcc supports 32-bit doubles on quite a few platforms -- though
it doesn't seem to for IA32 :/

Simply storing intermediate and final results as
single-precision floats would probably be sufficient.

| Standard CPython does not have C single-precision floats.

I know.

| The only point I can think of for doing this with single numbers, as
| opposed to arrays of millions, is to show that there is no point.

I use Python to test algorithms before implementing them in C.
It's far, far easier to do experimentation/prototyping in
Python than in C. I also like to have two sort-of independent
implementations to test against each other (it's a good way to
catch typos).

In the C implementations, the algorithms will be implemented
in single precision, so doing my Python prototyping
in as close to single precision as possible would be "a good
thing".
 
Grant Edwards

| Grant said:
| | I'm pretty sure the answer is "no", but before I give up on the
| | idea, I thought I'd ask...
| |
| | Is there any way to do single-precision floating point
| | calculations in Python?
| |
| | I know the various array modules generally support arrays of
| | single-precision floats. I suppose I could turn all my
| | variables into single-element arrays, but that would be way
| | ugly...
|
| We also have scalar types of varying precisions in numpy:
|
| In [9]: from numpy import *
|
| In [10]: float32(1.0) + float32(1e-8) == float32(1.0)
| Out[10]: True

Very interesting. Converting a few key variables and
intermediate values to float32 and then back to CPython floats
each time through the loop would probably be more than
sufficient.
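
Something like this wrapper would probably do it (a minimal sketch;
f32 is just an illustrative name, not a numpy one):

import numpy as np

def f32(x):
    # Round x to the nearest IEEE single, returned as a plain Python float.
    return float(np.float32(x))

acc = 0.0
for sample in (1.0, 1e-8, -1.0):
    acc = f32(acc + sample)   # truncate each intermediate to single
print(acc)                    # 0.0, matching what 32-bit C code computes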

So far as I know, I haven't run into any cases where the
differences between 64-bit prototype calculations in Python and
32-bit production calculations in C have been significant. I
certainly try to design the algorithms so that it won't make
any difference, but it's a nagging worry...

| In [11]: 1.0 + 1e-8 == 1.0
| Out[11]: False

| If you can afford to be slow,

Yes, I can afford to be slow.

I'm not sure I can afford the decrease in readability.

| I believe there is an ASPN Python Cookbook recipe for
| simulating floating point arithmetic of any precision.

Thanks, I'll go take a look.
 
Ross Ridge

Grant Edwards said:
| In the C implementations, the algorithms will be implemented
| in single precision, so doing my Python prototyping in as
| close to single precision as possible would be "a good
| thing".

Something like numpy might give you reproducible IEEE 32-bit floating
point arithmetic, but you may find it difficult to get that out of an
IA-32 C compiler. IA-32 compilers set the x87 FPU's precision to
either 64 bits or 80 bits and only round results down to 32 bits when
storing values in memory. If you can target CPUs that support SSE,
then the compiler can use SSE math to do most single-precision
operations in single precision, although the compiler may not set the
required SSE flags for full IEEE compliance.
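
For illustration, a small numpy sketch of the kind of discrepancy this
causes (single-precision intermediates versus double intermediates
rounded only at the end):

import numpy as np

a, b, c = np.float32(1.0), np.float32(1e-8), np.float32(-1.0)

# Intermediates kept in single precision, as strict 32-bit hardware does:
single = (a + b) + c                 # b is lost in the first add -> 0.0

# Intermediates kept in double precision, as an x87 compiler may do:
mixed = np.float32(np.float64(a) + np.float64(b) + np.float64(c))

print(single, mixed)                 # 0.0 versus roughly 1e-08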

In other words, since you're probably going to have to allow for some
small differences in results anyway, it may not be worth the trouble
of trying to get Python to use 32-bit floats.

(You might also want to consider whether you want to use single
precision in your C code to begin with; on IA-32 CPUs it seldom makes
a difference in performance.)

Ross Ridge
 
sturlamolden

| Is there any way to do single-precision floating point
| calculations in Python?

Yes, use numpy.float32 objects.

| I know the various array modules generally support arrays of
| single-precision floats. I suppose I could turn all my
| variables into single-element arrays, but that would be way
| ugly...


Numpy has scalars as well.
 
Grant Edwards

| Something like numpy might give you reproducible IEEE 32-bit
| floating point arithmetic, but you may find it difficult to
| get that out of an IA-32 C compiler.

That's OK, I don't run the C code on an IA32. The C target is
a Hitachi H8/300.

| (You might also want to consider whether you want to use
| single precision in your C code to begin with; on IA-32 CPUs
| it seldom makes a difference in performance.)

Since I'm running the C code on a processor without HW floating
point support, using single precision makes a big difference.
 
Beliavsky

Off-topic, but maybe as practical as "[making] your own Python build
from altered source." ---

Fortran 95 (and earlier versions) has single and double precision
floats. One could write a Fortran code with variables declared REAL,
and compilers will by default treat the REALs as single precision, but
most compilers have an option to promote single precision variables to
double. In Fortran 90+ one can specify the KIND of a REAL, so if
variables are declared as

REAL (kind=rp) :: x,y,z

throughout the code, with rp being a global parameter, one can
switch from single to double by changing rp from 4 to 8. G95 is a
good, free compiler. F95 has most but not all of the array operations
of NumPy.
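
The rough numpy analogue of the KIND trick, for what it's worth (rp is
just an illustrative name mirroring the Fortran parameter):

import numpy as np

# Pick the working precision in one place; switching the whole
# prototype to double is then a one-line change, as with kind=rp.
rp = np.float32          # or np.float64

x, y, z = rp(0.1), rp(0.2), rp(0.3)
print(x + y + z)         # computed and rounded in the chosen precision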
 
