stevenj
Richard said: Myes. But are _much_ larger floating-point numbers really that much
faster than appropriately handled integers? Even if you don't have these
much larger FPs yet, and will have to emulate them? Don't get me wrong,
I see the value of the method, but I'm not so sure of its value to the
OP, who probably will have the same problems handling extended-precision
FPs that he has handling extended-size integers.
Apparently yes. Look at the source code of many major
arbitrary-precision arithmetic packages and you will see that they use
floating-point FFTs.
You typically just use double precision, which has a 53-bit
significand. The question is how many integer bits you pack into
each double-precision element. For an FFT up to size 2^19 =
524288 or so, it is sufficient to use 12 bits per double if I remember
correctly: each convolution output is then a sum of at most 2^19
products of 12-bit values, so it is bounded by 2^19 * (2^12 - 1)^2 < 2^43,
which leaves 10 bits of headroom in the significand for roundoff.
(That may even be too conservative, as FFT roundoff errors
generally grow at most logarithmically with transform size.)
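To make the technique concrete, here is a minimal sketch of my own (not code from any particular bignum package): split each operand into 12-bit limbs, convolve the limb sequences with a double-precision FFT, and round the convolution outputs back to integers. The rounding is exact as long as the accumulated roundoff stays below 0.5, which the headroom estimate above guarantees at these sizes.

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def bigmul(x, y, bits=12):
    # Multiply nonnegative integers by FFT convolution of their 12-bit limbs.
    mask = (1 << bits) - 1
    def limbs(v):
        out = []
        while v:
            out.append(v & mask)
            v >>= bits
        return out or [0]
    a, b = limbs(x), limbs(y)
    n = 1
    while n < len(a) + len(b):      # pad so the convolution doesn't wrap
        n *= 2
    fa = fft([complex(v) for v in a + [0] * (n - len(a))])
    fb = fft([complex(v) for v in b + [0] * (n - len(b))])
    fc = fft([u * v for u, v in zip(fa, fb)], invert=True)
    # Round each output back to an integer; roundoff must be below 0.5.
    c = [round(v.real / n) for v in fc]
    # Recombine limbs; Python's bignums absorb the un-propagated carries.
    result = 0
    for i, v in enumerate(c):
        result += v << (bits * i)
    return result
```

Here the heavy lifting (carry propagation) is hidden by Python's native big integers; a real package would do the carries explicitly, and would of course use a far faster FFT than this recursive sketch.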
Steven