division problem


jamesonang

Suppose an unsigned int (32 bits) is the largest number that the computer can
represent with a single variable.

Now I have a big integer (less than 64 bits, but greater than 32 bits).
I represent it this way:

unsigned int dividend[2];

dividend[0] stores the low 32 bits of the big integer, dividend[1] stores the
high 32 bits.

My problem is how to perform a division like this:

quotient = big integer / divisor, remainder = big integer mod divisor
(divisor is a 32-bit unsigned integer).

How can I get the quotient and the remainder?

Thanks for your help!
 

Richard Bos

Suppose an unsigned int (32 bits) is the largest number that the computer can
represent with a single variable.

Now I have a big integer (less than 64 bits, but greater than 32 bits).
I represent it this way:

unsigned int dividend[2];

dividend[0] stores the low 32 bits of the big integer, dividend[1] stores the
high 32 bits.

My problem is how to perform a division like this:

quotient = big integer / divisor, remainder = big integer mod divisor
(divisor is a 32-bit unsigned integer).

How can I get the quotient and the remainder?

Long division.

HTH; HAND; DYODH.

Richard
 

Richard Heathfield

(e-mail address removed) said:
Suppose an unsigned int (32 bits) is the largest number that the computer can
represent with a single variable.

Now I have a big integer (less than 64 bits, but greater than 32 bits).
I represent it this way:

unsigned int dividend[2];

dividend[0] stores the low 32 bits of the big integer, dividend[1] stores the
high 32 bits.

My problem is how to perform a division like this:

quotient = big integer / divisor, remainder = big integer mod divisor
(divisor is a 32-bit unsigned integer).

How can I get the quotient and the remainder?

Let the big number be N:

1100111010011111100110100100101011101011010110101011010110101010

Let the divisor be D:

10111011101100111101010101010101

Let S0 be the difference between the top bits of N and the whole of D.

Q: 1
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
S0:00010010111010111100010011110101

Get the next bit from N:

Q: 10
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
00100101110101111000100111101011
D: 10111011101100111101010101010101

Get the next bit from N:

Q: 100
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
01001011101011110001001111010111
D: 10111011101100111101010101010101

Get the next bit from N:

Q: 1000
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
10010111010111100010011110101111
D: 10111011101100111101010101010101

Get the next bit from N:

Q: 10001
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
100101110101111000100111101011110
D: 10111011101100111101010101010101
-------------------------------
S1: 1110011000010000111101000001000

Get the next bit from N:

Q: 100011
-------------------------------------------------------------------
N: 1100111010011111100110100100101011101011010110101011010110101010
D: 10111011101100111101010101010101
-----------------------------------
100101110101111000100111101011110
D: 10111011101100111101010101010101
-------------------------------
11100110000100001111010000010001
D: 10111011101100111101010101010101

etc etc etc. (I might have got some of the bits mixed up because I did this
by hand, but you'll be doing this automatically.)

When you run out of bits, Q is your quotient and whatever is left over is,
strangely enough, your remainder.

Division is much simpler in binary than in any other number base, since your
program only has to know your 1-times table. Either D is greater than the
set of bits you're currently comparing with (in which case the Q-bit is 0),
or it isn't (in which case the Q-bit is 1). Subtract and move on.
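The procedure above translates almost directly into C. Here is a minimal sketch for the two-word case in the original question, assuming a 32-bit unsigned int and the high/low layout described there (the function name div64_32 and the hi/lo interface are illustrative, not from any particular library):

```c
/* Shift-and-subtract division of a 64-bit value (hi:lo) by a 32-bit
 * divisor, using only 32-bit unsigned arithmetic.  Returns the
 * remainder; the 64-bit quotient is written to q_hi:q_lo. */
unsigned int div64_32(unsigned int hi, unsigned int lo,
                      unsigned int divisor,
                      unsigned int *q_hi, unsigned int *q_lo)
{
    unsigned int rem = 0;
    int i;

    *q_hi = *q_lo = 0;
    for (i = 63; i >= 0; i--) {
        /* Shift the next dividend bit into the partial remainder. */
        unsigned int bit = (i >= 32) ? (hi >> (i - 32)) & 1u
                                     : (lo >> i) & 1u;
        unsigned int msb = rem >> 31;   /* bit about to be shifted out */
        rem = (rem << 1) | bit;

        /* If rem (viewed as a 33-bit value) >= divisor, subtract
         * and set the quotient bit. */
        if (msb || rem >= divisor) {
            rem -= divisor;   /* unsigned wraparound is correct when msb = 1 */
            if (i >= 32) *q_hi |= 1u << (i - 32);
            else         *q_lo |= 1u << i;
        }
    }
    return rem;
}
```

The msb test handles the one awkward spot: after the shift, the true 33-bit partial remainder may not fit in 32 bits, but unsigned wraparound still makes the subtraction come out right.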
 

David T. Ashley

Suppose an unsigned int (32 bits) is the largest number that the computer can
represent with a single variable.

Now I have a big integer (less than 64 bits, but greater than 32 bits).
I represent it this way:

unsigned int dividend[2];

dividend[0] stores the low 32 bits of the big integer, dividend[1] stores the
high 32 bits.

My problem is how to perform a division like this:

quotient = big integer / divisor, remainder = big integer mod divisor
(divisor is a 32-bit unsigned integer).

How can I get the quotient and the remainder?

Richard Heathfield suggested bit-shifting, and for your problem this will
work just fine. However, it is [very!] suboptimal, even in the case you
proposed.

The key issue -- and it is the same situation for addition, subtraction, and
multiplication -- is that the processor inherently has integer addition,
subtraction, multiplication and division instructions; but they handle
operands smaller than you are interested in.

The key question is whether it is possible to use these "small" instructions
multiple times to deal with larger operands.

In the case of addition and subtraction, the answer is clearly YES. Even if
one is programming in 'C' and doesn't have direct access to the CARRY bit of
the processor, if the result of an addition is smaller than either of the
input arguments (which must be unsigned), then a carry occurred. With a
little thought, one can code multi-precision integer addition that doesn't
perform badly (although it won't be as efficient as assembly-language).

For example:

unsigned int input1[2];  /* LSI (least significant int) first for all of these */
unsigned int input2[2];
unsigned int output[3];  /* result has one more int to hold the carry out */
unsigned int carryout;

output[0] = input1[0] + input2[0];

/* With unsigned arithmetic, the sum wrapped around (i.e., a carry
   occurred) exactly when the result is smaller than an input. */
if (output[0] < input1[0])
    carryout = 1;
else
    carryout = 0;

output[1] = input1[1] + input2[1];

if (output[1] < input1[1])
    output[2] = 1;
else
    output[2] = 0;

/* Now propagate the carry out of the LSI. */
if (carryout)
{
    output[1]++;
    if (!output[1])   /* the increment itself wrapped: carry again */
        output[2]++;
}

I don't claim that I didn't make some kind of mistake in the above (I'm
doing this from scratch). The bottom line is that one can do addition and
subtraction of large integers in 'C' fairly effectively.

Note that the solution above uses the inherent ability of the processor to
add using native machine instructions.

The solution suggested by Richard Heathfield is just as crude as doing
addition one bit at a time (compared to the technique above).

I won't get into multiplication ... but there is a way to do that using the
processor's multiplication ability, too. You'll figure it out quickly if
you think about it.
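For what it's worth, one hedged sketch of the idea: split each 32-bit operand into 16-bit halves, form the four partial products with the machine's native multiply, and add them at the right weights. (The name mul32_64 and the hi/lo interface are illustrative, not from any particular library.)

```c
/* Multiply two 32-bit unsigned values into a 64-bit result (hi:lo),
 * using only 32-bit arithmetic, by splitting each operand into
 * 16-bit halves. */
void mul32_64(unsigned int a, unsigned int b,
              unsigned int *hi, unsigned int *lo)
{
    unsigned int a_lo = a & 0xFFFFu, a_hi = a >> 16;
    unsigned int b_lo = b & 0xFFFFu, b_hi = b >> 16;

    unsigned int p0 = a_lo * b_lo;   /* weight 2^0  */
    unsigned int p1 = a_lo * b_hi;   /* weight 2^16 */
    unsigned int p2 = a_hi * b_lo;   /* weight 2^16 */
    unsigned int p3 = a_hi * b_hi;   /* weight 2^32 */

    /* Sum of the middle column; at most 3 * 0xFFFF, so it fits. */
    unsigned int mid = (p0 >> 16) + (p1 & 0xFFFFu) + (p2 & 0xFFFFu);

    *lo = (p0 & 0xFFFFu) | (mid << 16);
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}
```

The same column-by-column scheme extends to operands of any number of words; it is exactly schoolbook multiplication with base-2^16 digits.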

And finally, division ... which is the toughest case.

If you try to do the algebra and figure out whether you can use "small" machine
division instructions to accomplish a larger division, you'll rapidly come
to the conclusion that you can't. (Fortunately, Donald Knuth and his peers
are just a bit more experienced than you or I. It turns out there is a way to
do it.)

The process is very similar to the way people do longhand division. In
longhand division, people estimate one digit at a time, then multiply and
subtract, then go on to the next digit. Occasionally, one guesses wrong and
has to increase or decrease the quotient digit and re-do that digit.

In the algorithm, a "digit" is 16-32 bits; and the result of estimation may
be off by as much as 2 counts in one direction only (there is a simple and
cheap correction procedure).

The classic algorithm assumes that the machine has a division instruction
that takes a dividend of bitsize 2w, divides it by a divisor of bitsize w,
and produces a quotient of bitsize w and a remainder of bitsize w, with the
possibility of overflow (which is deliberately avoided by the algorithm).

For a typical desktop processor, w is 32 bits, so that in one machine
instruction you can do a 64/32 division. However, typically compilers will
only allow you to do a 32/32 division, so you have to use w=16 and apply the
standard algorithm.

The algorithm essentially will produce 16 bits of the result at a time
(versus 1 bit at a time from the bit-shifting approach).

If you have access to the assembly-language of the machine, normally you can
get [at least] 32 bits at a time.

The standard integer division algorithm is a bit awkward to code in 'C', but
when the data sizes are known in advance (as they are in your case), it gets
less awkward.
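As an illustration of the easiest special case: when the divisor fits in a single "digit", Knuth's algorithm collapses into ordinary longhand short division, one w-bit digit at a time. A sketch with w = 16, using only the 32/16 divisions a C compiler will happily emit (the name short_div is illustrative):

```c
/* Divide a big integer, stored as n big-endian 16-bit digits, by a
 * divisor that fits in 16 bits, using the machine's 32/16 division.
 * This is the single-digit case of the classic algorithm (TAOCP
 * Vol. 2, section 4.3.1).  The quotient overwrites digits[] in
 * place; the return value is the remainder. */
unsigned int short_div(unsigned short digits[], int n,
                       unsigned short divisor)
{
    unsigned int rem = 0;
    int i;

    for (i = 0; i < n; i++) {
        /* rem < divisor <= 0xFFFF, so cur < divisor * 2^16 fits in
         * 32 bits, and cur / divisor < 2^16 fits back in one digit. */
        unsigned int cur = (rem << 16) | digits[i];
        digits[i] = (unsigned short)(cur / divisor);
        rem = cur % divisor;
    }
    return rem;
}
```

The multi-digit divisor case needs the quotient-digit estimation and correction steps described above, but this invariant (partial remainder always smaller than the divisor) is the same engine driving both.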

I won't include the code here (too much thought would be required).

Here are the resources you should look up.

a) Knuth covers this in Volume 2 of his classic work:

http://www.amazon.com/Art-Computer-..._bbs_sr_1/104-3393570-9901529?ie=UTF8&s=books

b) The GMP library has some division code that is compiled in the event
assembly language isn't available for the specific processor. This will use
the "digit estimation" technique I described.

c) Also, this URL should be helpful. It also cites Knuth.
http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

Heathfield's suggestion will work just fine. However, in the general case,
you don't want to do that.

Dave.
 

Richard Heathfield

David T. Ashley said:

Richard Heathfield suggested bit-shifting, and for your problem this will
work just fine. However, it is [very!] suboptimal, even in the case you
proposed.

I agree. I was about to suggest TAOCP 2 to him (which is what I used), but
at the last minute I figured he'd prefer simple. :)

<good stuff snipped>
 

jamesonang

Thanks very much.

Richard Heathfield suggested doing the division bit by bit (the division is
replaced by many subtractions), and it works, thanks. However, I guess this
method works better in hardware than in a software implementation.

Also, thank you, David T. Ashley. I will turn to TAOCP Vol. 2 for more
help.
 

David T. Ashley

Richard Heathfield said:
David T. Ashley said:

Richard Heathfield suggested bit-shifting, and for your problem this will
work just fine. However, it is [very!] suboptimal, even in the case you
proposed.

I agree. I was about to suggest TAOCP 2 to him (which is what I used), but
at the last minute I figured he'd prefer simple. :)

My natural assumption was that you were unaware of the classic division
algorithm (which was apparently not the case). I agree that simpler is
better for someone doing it the first time.

I've had this same discussion multiple times with people who _should_ know
the best algorithms. In one case a compiler vendor was not aware of the
existence of the better algorithm (their 32/32 divide routine was suboptimal
on a small micro that has a 16/8=8 division instruction). It seems to be
the norm that about 99% of people know the classic ADD/ADC algorithms for
addition and subtraction, 85% know or can figure out how to use the
processor's MUL instruction to handle long integer multiplication, but only
about 25% are aware of the classic division algorithm. 75% bit-shift. It
is logically correct -- just depends how efficient you need to be.
 

dcorbit

David said:
Richard Heathfield said:
David T. Ashley said:

Richard Heathfield suggested bit-shifting, and for your problem this will
work just fine. However, it is [very!] suboptimal, even in the case you
proposed.

I agree. I was about to suggest TAOCP 2 to him (which is what I used), but
at the last minute I figured he'd prefer simple. :)

My natural assumption was that you were unaware of the classic division
algorithm (which was apparently not the case). I agree that simpler is
better for someone doing it the first time.

I've had this same discussion multiple times with people who _should_ know
the best algorithms. In one case a compiler vendor was not aware of the
existence of the better algorithm (their 32/32 divide routine was suboptimal
on a small micro that has a 16/8=8 division instruction). It seems to be
the norm that about 99% of people know the classic ADD/ADC algorithms for
addition and subtraction, 85% know or can figure out how to use the
processor's MUL instruction to handle long integer multiplication, but only
about 25% are aware of the classic division algorithm. 75% bit-shift. It
is logically correct -- just depends how efficient you need to be.

To do division efficiently, the classical solution is to use Newton's
method.
http://en.wikipedia.org/wiki/Division_(digital)
It might be a bit crunchy to use it on integers, so why not just
perform a double division and then assign it? I guess that it is a
lot faster than emulating hardware.

The IBM Jikes Compiler has a 64 bit number class for 32 bit machines.
It is C++ but it would not be too hard to convert it to be C function
calls.
 

jamesonang

To do division efficiently, the classical solution is to use Newton's method.
http://en.wikipedia.org/wiki/Division_(digital)
It might be a bit crunchy to use it on integers, so why not just
perform a double division and then assign it? I guess that it is a
lot faster than emulating hardware.

The IBM Jikes Compiler has a 64 bit number class for 32 bit machines.
It is C++ but it would not be too hard to convert it to be C function
calls.

I want to know the general method to do division. If the dividend is
1024 bits and the divisor 256 bits, the computer's division instructions
can hardly do the job directly.

Following Richard Heathfield's suggestion, I wrote a function;
however, its efficiency is poor as the integers grow larger.

============================================================

#define BITS_PER_INTEGER 32

/* a helper function: fetch one bit from the big integer */

unsigned int get_bit(unsigned int dividend[], int pos)
/* pos : position of the requested bit, starting from 0 */
{
    unsigned int bit;
    unsigned int mask;
    int i, j;

    i = pos / BITS_PER_INTEGER;
    j = pos % BITS_PER_INTEGER;

    mask = 1U << j;
    bit = (dividend[i] & mask) >> j;

    return bit;
}

unsigned int div_binary(unsigned int dividend[],
                        unsigned int divisor,
                        unsigned int quotient[])
/* 64-bit integer dividend divided by a 32-bit integer divisor;
 * quotient[] receives the quotient, the return value is the remainder
 */
{
    unsigned int bit, shifted_out;
    unsigned int num_bits = 64;
    unsigned int part = 0;
    unsigned int mask;
    int i, j;

    if (divisor == 0)
        return 0;   /* division by zero: no meaningful result */

    mask = 1U << (BITS_PER_INTEGER - 1);

    for (i = num_bits - 1; i >= 0; i--) {
        /* each round handles one bit */
        bit = get_bit(dividend, i);
        shifted_out = (part & mask) >> (BITS_PER_INTEGER - 1);
        part = (part << 1) | bit;
        j = i / BITS_PER_INTEGER;

        if (shifted_out || part >= divisor) {
            /* if a bit was shifted out, the true partial remainder is
             * 2^32 + part >= divisor, and the unsigned wraparound of
             * part - divisor yields the correct 32-bit remainder */
            part = part - divisor;
            quotient[j] = (quotient[j] << 1) | 1U;
        }
        else {
            quotient[j] = (quotient[j] << 1) | 0U;
        }
    }

    return part;
}

============================================================

I'm reading TAOCP Vol. 2; Knuth's method is very attractive and powerful. I
want to write a division function that can handle division between integers
of any size, following Knuth's method (as an exercise). As soon as I finish,
I will post it here.
 

dcorbit

I want to know the general method to do division. If the dividend is
1024 bits and the divisor 256 bits, the computer's division instructions
can hardly do the job directly.
[snip]

The answer is Newton's method. If you use a binary schoolbook method,
it's going to suck eggs.

You might need extended precision number functions like GMP
http://www.swox.com/gmp/

Or HPALIB:
http://www.nongnu.org/hpalib/

Or NTL:
http://www.cnr.berkeley.edu/~casterln/ntl/tour.html

etc.

This also might be worth a look:
http://www.mindspring.com/~pate/

P.S.
This problem has already been solved by the above mentioned software
sets. But if you want to do it yourself it will be a worthwhile
exercise.
 

Richard Heathfield

(e-mail address removed) said:

I want to know the general method to do division. If the dividend is
1024 bits and the divisor 256 bits, the computer's division instructions
can hardly do the job directly.

Following Richard Heathfield's suggestion, I wrote a function;
however, its efficiency is poor as the integers grow larger.

You specifically asked in your original article for advice on the case where
your largest value had fewer than 64 bits but more than 32, and so you were
using an array of exactly two 32-bit values (low-32 in one element, and the
remaining high bits in the other) as your dividend.

Under those circumstances, the simple algorithm I described works reasonably
well.

When you change the spec, do not be surprised if the advice you were given
for the simpler case is no longer reasonable.

I'm reading TAOCP Vol. 2; Knuth's method is very attractive and powerful.

Good.
 

David T. Ashley

To do division efficiently, the classical solution is to use Newton's
method.
http://en.wikipedia.org/wiki/Division_(digital)
It might be a bit crunchy to use it on integers, so why not just
perform a double division and then assign it? I guess that it is a
lot faster than emulating hardware.

The IBM Jikes Compiler has a 64 bit number class for 32 bit machines.
It is C++ but it would not be too hard to convert it to be C function
calls.

It appears that the method won't pay off over classic division until the
operands are fairly large:

http://www.swox.com/list-archives/gmp-discuss/2006-November/002549.html

For the kind of work I do, 64 bits is a HUGE integer (eight bytes, wow!). I
need to read up on this.
 

dcorbit

David said:
It appears that the method won't pay off over classic division until the
operands are fairly large:

http://www.swox.com/list-archives/gmp-discuss/2006-November/002549.html

For the kind of work I do, 64 bits is a HUGE integer (eight bytes, wow!). I
need to read up on this.

They are not comparing Newton's method to classic method.
See:
http://swox.com/gmp/devel/
in the section: "New division code"

If you use the floating point hardware to approximate the answer, and
then use the approximation given (it will be good to about 15 digits on
most machines) then one iteration of Newton's method would give you 30
digits of precision, two iterations of Newton's method would give you
60 digits, three iterations 120 digits, etc.
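That doubling is the quadratic convergence of Newton's iteration for 1/d, x' = x * (2 - d * x). A toy illustration in C with doubles (in a real bignum package the same iteration would run at steadily increasing precision; the function name is illustrative):

```c
/* Newton's iteration for the reciprocal of d: x' = x * (2 - d * x).
 * Each step roughly squares the relative error (e' = d * e^2), so the
 * number of correct digits doubles per iteration. */
double reciprocal(double d, double x0, int iterations)
{
    double x = x0;
    int i;

    for (i = 0; i < iterations; i++)
        x = x * (2.0 - d * x);
    return x;
}
```

Seeding x0 from a hardware double division, as suggested above, gives roughly 15 correct digits for free, so only a handful of iterations are needed even for very wide operands.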
 
