Arithmetic with longs

Twisted

If two unsigned longs are summed, is there an easy and efficient way to
determine if there was an overflow? (An overflow would mean the actual
sum is 2^64 greater than what the result seemed to be.)
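
For concreteness, here's the wraparound in question (a sketch, with the
64-bit patterns read as unsigned):

<code>
long a = -1L;     // bit pattern of the unsigned value 2^64 - 1
long b = 2L;
long sum = a + b; // stored result is 1; the true unsigned sum is
                  // 2^64 + 1, i.e. exactly 2^64 more than stored
</code>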
 
Twisted

Twisted said:
If two unsigned longs are summed, is there an easy and efficient way to
determine if there was an overflow? (An overflow would mean the actual
sum is 2^64 greater than what the result seemed to be.)

Hrm, Java doesn't seem to even have an unsigned long type. Looks like
I'll have to use positive signed longs up to 2^63-1 and check if the
sum is mysteriously negative...
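
Something like this, I suppose (a quick sketch, assuming both operands
are kept in [0, 2^63 - 1]):

<code>
// Sketch: with both operands non-negative, a negative result can
// only mean the true sum exceeded Long.MAX_VALUE and wrapped.
static boolean addOverflows(long a, long b) {
    return a + b < 0; // assumes a >= 0 && b >= 0
}
</code>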
 
Twisted

Twisted said:
Hrm, Java doesn't seem to even have an unsigned long type. Looks like
I'll have to use positive signed longs up to 2^63-1 and check if the
sum is mysteriously negative...

May be easier to use BigInteger for this ... but would it be faster?
How efficient is the typical Java deployment's implementation of
BigInteger anyway? About as fast as native C code working with arrays
of hardware-word-size integers as "digits" with "carries" based on
low-level trapping of overflow? Somewhat slower? Enormously slower?

(While I'm asking, may as well ask how well BigDecimal compares
speed-wise with C implementations of extended-precision floating- or
fixed-point arithmetic. That BigDecimal apparently stores numbers
decimal digit by decimal digit rather than in chunks of 32 or 64 bits
doesn't leave me hopeful.)
 
Tim B

Twisted said:
Hrm, Java doesn't seem to even have an unsigned long type. Looks like
I'll have to use positive signed longs up to 2^63-1 and check if the
sum is mysteriously negative...

Perhaps BigInteger would be of help here.
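
For example (a sketch, assuming non-negative operands):

<code>
import java.math.BigInteger;

// Sketch: do the addition in BigInteger, where it cannot overflow,
// then check whether the exact sum still fits in a signed long.
static boolean addOverflows(long a, long b) {
    BigInteger exact = BigInteger.valueOf(a).add(BigInteger.valueOf(b));
    return exact.bitLength() > 63; // too wide for a non-negative long
}
</code>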
 
Patricia Shanahan

Twisted wrote:
....
(While I'm asking, may as well ask how well BigDecimal compares
speed-wise with C implementations of extended-precision floating- or
fixed-point arithmetic. That BigDecimal apparently stores numbers
decimal digit by decimal digit rather than in chunks of 32 or 64 bits
doesn't leave me hopeful.)

How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].
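
The representation is visible through the public API (a quick sketch):

<code>
import java.math.BigDecimal;

// 123.45 is stored as unscaled value 12345 (a BigInteger) with
// scale 2, i.e. 12345 x 10^-2 -- not digit-by-digit decimal.
BigDecimal d = new BigDecimal("123.45");
System.out.println(d.unscaledValue()); // 12345
System.out.println(d.scale());         // 2
</code>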

Patricia
 
EJP

Twisted said:
(While I'm asking, may as well ask how well BigDecimal compares
speed-wise with C implementations of extended-precision floating- or
fixed-point arithmetic. That BigDecimal apparently stores numbers
decimal digit by decimal digit rather than in chunks of 32 or 64 bits
doesn't leave me hopeful.)

Of course it does; that's its purpose! If you want a binary radix, use
BigInteger.
 
Twisted

Patricia said:
How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].

Don't have the source code here.

Hrm. So both might perform decently, though I assume they use O(n^2)
multiplication that isn't as good as some other algorithms (FFT,
matrices) at really large sizes.
 
Patricia Shanahan

Twisted said:
Patricia said:
How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].

Don't have the source code here.

Do you have a JDK installed? If so, unzip the file "src.zip" in its top
directory.

Patricia
 
Jeffrey Schwab

Twisted said:
Patricia said:
How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].

Don't have the source code here.

Hrm. So both might perform decently, though I assume they use O(n^2)
multiplication that isn't as good as some other algorithms (FFT,
matrices) at really large sizes.

For some definition of "good."
 
Twisted

Jeffrey said:
Twisted said:
Patricia said:
How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].

Don't have the source code here.

Hrm. So both might perform decently, though I assume they use O(n^2)
multiplication that isn't as good as some other algorithms (FFT,
matrices) at really large sizes.

For some definition of "good."

Er?

_Introduction_to_Algorithms_ (ISBN 0-262-03141-8); also www.mersenne.org
(they use FFT for multiplying bignums); there are O(n log n) bignum
multiplies, but they have a large constant factor, so they are less
efficient for bignums below a certain length in bits.
 
Jeffrey Schwab

Twisted said:
Jeffrey said:
Twisted said:
Patricia Shanahan wrote:
How did you reach that conclusion? My reading of the source code is that
BigDecimal is implemented as the combination of a BigInteger and an int
scale, and BigInteger uses an int[].
Don't have the source code here.

Hrm. So both might perform decently, though I assume they use O(n^2)
multiplication that isn't as good as some other algorithms (FFT,
matrices) at really large sizes.
For some definition of "good."

Er?

_Introduction_to_Algorithms_ (ISBN 0-262-03141-8); also www.mersenne.org
(they use FFT for multiplying bignums); there are O(n log n) bignum
multiplies, but they have a large constant factor, so they are less
efficient for bignums below a certain length in bits.

Don't those techniques lose precision in the low-order digits? I
believe they're fine for most applications, but not quite a direct
replacement for the brute force O(n^2) method.
 
Oliver Wong

Twisted said:
May be easier to use BigInteger for this ... but would it be faster?
How efficient is the typical Java deployment's implementation of
BigInteger anyway? About as fast as native C code working with arrays
of hardware-word-size integers as "digits" with "carries" based on
low-level trapping of overflow? Somewhat slower? Enormously slower?

I didn't do benchmarks against C, but I compared Java's BigInteger to
Java's primitive int, and in my results, BigInteger is approximately 7 times
slower in an Eratosthenes Prime Number Sieve benchmark.
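
To give an idea of the shape of the test (a sketch, not my actual
benchmark code):

<code>
import java.math.BigInteger;

// Sieve with BigInteger arithmetic in the loops; the int version is
// identical except for the counter types. Note that every step here
// allocates a fresh immutable BigInteger.
static int countPrimes(int limit) {
    boolean[] composite = new boolean[limit + 1];
    BigInteger n = BigInteger.valueOf(limit);
    int count = 0;
    for (BigInteger i = BigInteger.valueOf(2);
         i.compareTo(n) <= 0; i = i.add(BigInteger.ONE)) {
        if (!composite[i.intValue()]) {
            count++;
            for (BigInteger j = i.add(i);
                 j.compareTo(n) <= 0; j = j.add(i)) {
                composite[j.intValue()] = true;
            }
        }
    }
    return count;
}
</code>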

- Oliver
 
Mark Thornton

Jeffrey said:
Don't those techniques lose precision in the low-order digits? I
believe they're fine for most applications, but not quite a direct
replacement for the brute force O(n^2) method.

No, they are exact.
 
Jeffrey Schwab

Mark said:
No they are exact.

Of course they are. Thank you for correcting me, and please excuse the
brain-fart.

Twisted mentioned FFTs, which do the job nicely, and "matrices." I'm
not sure what technique uses matrices to do fast multiplication of
scalars, except maybe as a technique for parallelization...
 
Twisted

Jeffrey said:
Don't those techniques lose precision in the low-order digits? I
believe they're fine for most applications, but not quite a direct
replacement for the brute force O(n^2) method.

According to the documentation regarding mprime's n log n FFT method,
it does "lose precision in the low order digits" but the error
magnitude can be kept below 1/2, and when the ultimate purpose is
integer arithmetic, you can then snap to the nearest integer at the end
of the calculation.
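
A sketch of that final snap step (fftOutput is a hypothetical name for
the convolution result):

<code>
// Each coefficient comes back as a double within 1/2 of the true
// integer, so rounding recovers it exactly before carries propagate.
long digit = Math.round(fftOutput[k]);
</code>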

The matrix based method is, so far as I know, exact.
 
Twisted

Jeffrey said:
Twisted mentioned FFTs, which do the job nicely, and "matrices." I'm
not sure what technique uses matrices to do fast multiplication of
scalars, except maybe as a technique for parallelization...

It's in the book whose ISBN I posted. I couldn't find it again with a
quick vgrep through the index, though.

A Web search turns up
http://mathworld.wolfram.com/KaratsubaMultiplication.html which appears
to be the same divide-and-conquer idea; Karatsuba itself is O(n^(lg 3)),
about O(n^1.585), and the Schönhage-Strassen refinement gets down to
O(n log n log log n).
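
A sketch of the Karatsuba step itself (assuming non-negative inputs,
with BigInteger doing the bookkeeping and the base case):

<code>
import java.math.BigInteger;

// Three half-size multiplies instead of four:
// x*y = a*2^(2h) + ((xh+xl)(yh+yl) - a - b)*2^h + b
static BigInteger karatsuba(BigInteger x, BigInteger y) {
    int n = Math.max(x.bitLength(), y.bitLength());
    if (n <= 512) return x.multiply(y); // schoolbook below the cutoff
    int h = n / 2;
    BigInteger xh = x.shiftRight(h), xl = x.subtract(xh.shiftLeft(h));
    BigInteger yh = y.shiftRight(h), yl = y.subtract(yh.shiftLeft(h));
    BigInteger a = karatsuba(xh, yh);
    BigInteger b = karatsuba(xl, yl);
    BigInteger mid = karatsuba(xh.add(xl), yh.add(yl))
                         .subtract(a).subtract(b);
    return a.shiftLeft(2 * h).add(mid.shiftLeft(h)).add(b);
}
</code>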

http://citeseer.ist.psu.edu/451468.html has a paper in pdf, ps, and
other formats discussing several bigint multiply algorithms with better
than n^2 performance.

All can presumably be adapted to multiplying bignum reals to a given
precision (the easiest way being to use a bigint under the hood, and
fixed-point arithmetic, with double the precision used in the internal
bigints). In fact, if a BigDecimal in Java is just a movable-point
wrapper around a BigInteger, an analogous wrapper around a MyBigInteger
that uses FFT (or whatever) would do the same job as a BigDecimal, but
faster for very, very big mantissas.
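
The wrapper itself would be tiny (a sketch; MyBigInteger is the
hypothetical FFT-backed class, stubbed here with java.math.BigInteger
so the sketch compiles):

<code>
import java.math.BigInteger;

// Minimal movable-point wrapper: value = unscaled * 10^-scale.
// All the real work -- and any FFT speedup -- lives in the
// underlying integer multiply.
final class MyBigDecimal {
    private final BigInteger unscaled; // would be MyBigInteger
    private final int scale;

    MyBigDecimal(BigInteger unscaled, int scale) {
        this.unscaled = unscaled;
        this.scale = scale;
    }

    MyBigDecimal multiply(MyBigDecimal other) {
        return new MyBigDecimal(unscaled.multiply(other.unscaled),
                                scale + other.scale);
    }
}
</code>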

Re: BigInteger vs. int comparison -- was this using BigIntegers that
could fit in a normal int, or bigger? And was it with hotspot and
-server, which seems to give the fastest math performance relative to
native code?
 
Oliver Wong

Twisted said:
Re: BigInteger vs. int comparison -- was this using BigIntegers that
could fit in a normal int, or bigger? And was it with hotspot and
-server, which seems to give the fastest math performance relative to
native code?

If this is addressed to me, it'd probably be less confusing to the
message hierarchy if you posted it separately, as a reply to my message
instead of as a reply to Jeffrey Schwab's message. If it isn't addressed
to me, ignore the rest of this post.

The benchmark provided the exact same input to BigInteger and int, so it
would be BigIntegers that could fit in a normal int. I don't remember what
settings I had the JVM running under, and this might have been under 1.4. I
was originally planning on writing several benchmarks (not just the Prime
Sieve test), getting some initial results, then writing my own custom
BigInteger class to see if I could optimize for speed a bit, and publishing
the results. The intent was to see if I could make a blanket recommendation
of "always use BigInteger in preference to int so that you never have to
worry about overflow errors again". I got too busy with the rest of my life,
though, so I never finished the research.

That being said, in a lot of business applications the bottleneck is
probably not integer arithmetic, so it may make a lot of sense to use
BigInteger "by default", and to fall back on int (or some other "data
structure") in the special cases where performance is critical.

- Oliver
 
Twisted

Oliver said:
That being said, in a lot of business applications, the bottleneck is
probably not integer arithmatic, so it may make a lot of sense to use
BigInteger "by default", and to fall back on int (or some other "data
structure") in the special cases where performance is critical.

The bottleneck could be method calls, and it might actually be worse if
typical deployments send every BigInteger add or similar down through
JNI than if they let the JIT worry about speed, or let the user put
whole, much bigger algorithms in native code instead of each add and
multiply separately.

I'm frankly dubious if, with modern JITs, there's any use for JNI
whatsoever anymore, except for the few core library calls that need to
invoke system-dependent OS APIs at a low level, mainly to get drawing
surfaces and window handles for JFrames and so forth.

There is another problem with blanket use of BigIntegers though: the
things are inherently clunkier to use. Since most of the Java community
foams violently at the mouth as soon as anyone mentions the *other* OO
(op*r*t*r ov*rlo*ding, if I must risk using what might be considered by
some here an obscenity :)) this isn't likely to change anytime soon.

Then again, there is a possible compromise. Give the same special
status to BigInteger (and BigDecimal) as given to String ...

Also, how about letting us intern the things?
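
Nothing like String.intern() exists for BigInteger as far as I know,
but user-level interning is only a few lines (a sketch; a real version
would want weak references so the pool can be garbage-collected):

<code>
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

// Canonicalize equal values to a single shared instance.
final class BigIntegerPool {
    private static final Map<BigInteger, BigInteger> pool =
            new HashMap<BigInteger, BigInteger>();

    static synchronized BigInteger intern(BigInteger value) {
        BigInteger canonical = pool.get(value);
        if (canonical == null) {
            pool.put(value, value);
            canonical = value;
        }
        return canonical;
    }
}
</code>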
 
Oliver Wong

Twisted said:
There is another problem with blanket use of BigIntegers though: the
things are inherently clunkier to use. Since most of the Java community
foams violently at the mouth as soon as anyone mentions the *other* OO
(op*r*t*r ov*rlo*ding, if I must risk using what might be considered by
some here an obscenity :)) this isn't likely to change anytime soon.

Then again, there is a possible compromise. Give the same special
status to BigInteger (and BigDecimal) as given to String ...

Yeah, the "final stages" of my research, had I had the time, would have
been to devise a new language based on Java which would not have any
primitive types at all, and which would use BigInteger for integer
literals appearing in the code (I hadn't yet decided how to handle
decimal numbers -- e.g., should I use floating point or fixed point?).

So in my language, the code:

<code>
int i = 1000000000000000000000000000000000000;
</code>

would translate to something like this in Java:

<code>
BigInteger i = new BigInteger("1000000000000000000000000000000000000");
</code>

This was assuming I did not run into any show-stopping problems with my
blanket recommendation during the earlier research phases.

- Oliver
 
Twisted

So, does anyone have any benchmark results of int vs. small BigInteger
with a) -server or that expensive Java-to-native-code compiler that
proved to be no faster, and b) a throwaway run to let the JIT do its job
followed by a run that's actually timed, in each case? (Floating-point
performance of Java doubles vs. native doubles came up in this
newsgroup a month or two back and extensive data got posted.)
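
The methodology I mean, roughly (a sketch; work() is a hypothetical
stand-in for the arithmetic under test):

<code>
// work() is a hypothetical stand-in for the code being benchmarked
static long benchmark() {
    work();                           // throwaway pass: let the JIT
                                      // compile the hot code first
    long start = System.nanoTime();
    long result = work();             // the pass that's actually timed
    long elapsed = System.nanoTime() - start;
    System.out.println(elapsed + " ns, result " + result);
    return elapsed;
}
</code>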
 
