inverse of a matrix with Fraction entries

Daniel Fetchinson

I guess this is a question to folks with some numpy background (but
not necessarily).

I'm using fractions.Fraction as entries in a matrix because I need to
have very high precision and fractions.Fraction provides infinite
precision (as I've learned from advice from this list). Now I need to
calculate its inverse. Can numpy help in this regard? Can I tell numpy
that the inverse matrix should also have entries in
fractions.Fraction? Or numpy can only do floating point calculations?

Probably it doesn't matter but the matrix has all components non-zero
and is about a thousand by thousand in size.

Cheers,
Daniel
 
Peter Otten

Daniel said:
I guess this is a question to folks with some numpy background (but
not necessarily).

I'm using fractions.Fraction as entries in a matrix because I need to
have very high precision and fractions.Fraction provides infinite
precision (as I've learned from advice from this list).

"Infinite" precision only for values that can be expressed as a fraction:
Now I need to
calculate its inverse. Can numpy help in this regard? Can I tell numpy
that the inverse matrix should also have entries in
fractions.Fraction? Or numpy can only do floating point calculations?

I tried it, and numpy.linalg.inv() converted the Fraction values to float.
If you aren't concerned about efficiency it should be easy to do it
yourself, in pure python.
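
A minimal sketch of that, assuming Gauss-Jordan elimination on the augmented
matrix [M | I] (the name `invert` is made up, and no care is taken to pick
numerically "nice" pivots, since the arithmetic is exact anyway):

from fractions import Fraction

def invert(M):
    # Invert a square matrix of Fractions by Gauss-Jordan elimination.
    n = len(M)
    # Augment M with the identity matrix: [M | I].
    A = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(M)]
    for k in range(n):
        # Swap a row with a nonzero entry in column k into the pivot slot.
        p = next((i for i in range(k, n) if A[i][k] != 0), None)
        if p is None:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]
        pivot = A[k][k]
        A[k] = [x / pivot for x in A[k]]        # scale the pivot row to 1
        for i in range(n):                      # clear column k in other rows
            if i != k:
                f = A[i][k]
                A[i] = [x - f * y for x, y in zip(A[i], A[k])]
    return [row[n:] for row in A]               # the right half is now M**-1

For instance, invert([[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]])
returns [[Fraction(-2, 1), Fraction(1, 1)], [Fraction(3, 2), Fraction(-1, 2)]].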

You could also ask on the numpy mailing list.
Probably it doesn't matter but the matrix has all components non-zero
and is about a thousand by thousand in size.

Hmm, where did you get that million exact fractions from? If some real-world
data is involved, the error introduced by the floating point
calculation may be negligible in comparison with the initial measurement
uncertainty.

Peter
 
Daniel Fetchinson

"Infinite" precision only for values that can be expressed as a fraction:

<type 'float'>

True, but I only need to add, multiply and divide my fractions in
order to end up with the entries in the matrix.
I tried it, and numpy.linalg.inv() converted the Fraction values to float.
If you aren't concerned about efficiency it should be easy to do it
yourself, in pure python.

If there is no other way, that's what I'll try to do.
You could also ask on the numpy mailing list.


Hmm, where did you get that million exact fractions from?

The million exact fractions come from a recursion relation.
This recursion only adds, multiplies and divides numbers, so
the end result is always a rational number.
If some real-world
data is involved, the error introduced by the floating point
calculation may be negligible in comparison with the initial measurement
uncertainty.

It's a mathematical problem, so no uncertainty is present in the
initial values. And even if there were, when there are many orders of
magnitude difference between the entries of the matrix, floating point
does not suffice for things like eigenvalue calculation.

Cheers,
Daniel
 
Peter Otten

Daniel said:
True, but I only need to add, multiply and divide my fractions in
order to end up with the entries in the matrix.


If there is no other way, that's what I'll try to do.


The million exact fractions come from a recursion relation.
This recursion only adds, multiplies and divides numbers, so
the end result is always a rational number.


It's a mathematical problem, so no uncertainty is present in the
initial values. And even if there were, when there are many orders of
magnitude difference between the entries of the matrix, floating point
does not suffice for things like eigenvalue calculation.

It may be worthwhile to have a look at http://www.sagemath.org/

sage: Matrix([[1,2],[3,4]])**-1
[ -2    1]
[3/2 -1/2]
sage: a = Matrix([[1,2],[3,4]])
sage: b = Matrix([[1,2],[3,4]])**-1
sage: a*b
[1 0]
[0 1]
sage: type(b[1,1])
<type 'sage.rings.rational.Rational'>

Peter
 
Peter Pearson

On Wed, 24 Nov 2010 14:02:21 +0100, Daniel Fetchinson wrote:
[snip]
I'm using fractions.Fraction as entries in a matrix because I need to
have very high precision and fractions.Fraction provides infinite
precision . . . [snip]

Probably it doesn't matter but the matrix has all components non-zero
and is about a thousand by thousand in size.

I wonder how big the numerators and denominators in those
fractions are going to get during the matrix inversion. Would
it be surprising if the elements of the inverse matrix had
numerators and denominators a million times longer than those
in the original matrix?
 
Robert Kern

It's a mathematical problem, so no uncertainty is present in the
initial values. And even if there were, when there are many orders of
magnitude difference between the entries of the matrix, floating point
does not suffice for things like eigenvalue calculation.

Well, if you want to do eigenvalue calculations, you are going to have to start
doing numerical approximations anyway. There is no analytical solution for
matrices larger than 4x4: the eigenvalues are the roots of the characteristic
polynomial, and polynomials of degree five or more have no general solution
in radicals.

Sympy will do inverses of matrices over rationals for you, though:

|4> from sympy import *

|6> m = Matrix([[S(1)/2, S(1)/3], [S(1)/4, S(1)/5]])

|7> m
[1/2, 1/3]
[1/4, 1/5]

|8> m.inv()
[ 12, -20]
[-15, 30]

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
Daniel Fetchinson

True, but I only need to add, multiply and divide my fractions in
order to end up with the entries in the matrix.


If there is no other way, that's what I'll try to do.


The million exact fractions come from a recursion relation.
This recursion only adds, multiplies and divides numbers, so
the end result is always a rational number.


It's a mathematical problem, so no uncertainty is present in the
initial values. And even if there were, when there are many orders of
magnitude difference between the entries of the matrix, floating point
does not suffice for things like eigenvalue calculation.

It may be worthwhile to have a look at http://www.sagemath.org/

sage: Matrix([[1,2],[3,4]])**-1
[ -2    1]
[3/2 -1/2]
sage: a = Matrix([[1,2],[3,4]])
sage: b = Matrix([[1,2],[3,4]])**-1
sage: a*b
[1 0]
[0 1]
sage: type(b[1,1])
<type 'sage.rings.rational.Rational'>

This sounds like a good idea!

Cheers,
Daniel
 
Daniel Fetchinson

I wonder how big the numerators and denominators in those
fractions are going to get during the matrix inversion. Would
it be surprising if the elements of the inverse matrix had
numerators and denominators a million times longer than those
in the original matrix?

I've checked this with Maple for matrix size 150 x 150 and the
numerators and denominators do get pretty long. But that's okay as
long as it is kept exact.

The whole story is that I have a matrix A and a matrix B, both of which
have rational entries, and pretty crazy entries at that: their
magnitudes span many orders of magnitude. But inverse(A)*B is an
okay matrix and I can deal with it using floating point numbers. I
only need this exact fraction business for computing inverse(A)*B (yes, a
preconditioner would be useful :))

And I wouldn't want to write the whole matrix into a file, call Maple
on it, parse the result, etc.

So after all I might just code the inversion via Gauss elimination
myself in a way that can deal with fractions; it shouldn't be that hard.

Cheers,
Daniel
 
Daniel Fetchinson

Well, if you want to do eigenvalue calculations, you are going to have to start
doing numerical approximations anyway. There is no analytical solution for
matrices larger than 4x4: the eigenvalues are the roots of the characteristic
polynomial, and polynomials of degree five or more have no general solution
in radicals.

Sure! (I didn't explain the whole thing yet, see the other reply where
I actually do.)
Sympy will do inverses of matrices over rationals for you, though:

|4> from sympy import *

|6> m = Matrix([[S(1)/2, S(1)/3], [S(1)/4, S(1)/5]])

|7> m
[1/2, 1/3]
[1/4, 1/5]

|8> m.inv()
[ 12, -20]
[-15, 30]

Thanks a lot! This sounds like the simplest solution so far.
I don't need to call Maple after all :)

Cheers,
Daniel
 
Robert Kern

The whole story is that I have a matrix A and a matrix B, both of which
have rational entries, and pretty crazy entries at that: their
magnitudes span many orders of magnitude. But inverse(A)*B is an
okay matrix and I can deal with it using floating point numbers. I
only need this exact fraction business for computing inverse(A)*B (yes, a
preconditioner would be useful :))

And I wouldn't want to write the whole matrix into a file, call Maple
on it, parse the result, etc.

So after all I might just code the inversion via Gauss elimination
myself in a way that can deal with fractions; it shouldn't be that hard.

+1000. This is almost always the right thing to do whether you have floats or
rationals.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
Mark Wooding

Daniel Fetchinson said:
So after all I might just code the inversion via Gauss elimination
myself in a way that can deal with fractions; it shouldn't be that hard.

I wouldn't do it that way. Let M be your matrix. Work out the LCM l of
the denominators, and multiply the matrix by that to make it an integer
matrix N = l M. Then work out the determinant d of that integer matrix.
Next, the big step: use Gaussian elimination to find a matrix A (the
`adjugate matrix') such that A N = d I. This should be doable entirely
using integer arithmetic, and I think without needing any divisions.
Finally, we have l A M = d I, so (l/d A) M = I and l/d A is the inverse
you seek.
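
In code, the first two steps might look something like this (a sketch only;
the function names are made up, and the determinant uses the standard Bareiss
fraction-free elimination, where every integer division below is exact):

from math import gcd

def clear_denominators(M):
    # Step 1: l = LCM of all denominators of the Fraction matrix M,
    # so that N = l*M is an integer matrix.
    l = 1
    for row in M:
        for x in row:
            l = l * x.denominator // gcd(l, x.denominator)
    return l, [[int(x * l) for x in row] for row in M]

def bareiss_det(N):
    # Step 2: determinant of an integer matrix, in integer arithmetic only.
    W = [row[:] for row in N]
    n = len(W)
    sign, prev = 1, 1
    for k in range(n - 1):
        if W[k][k] == 0:                 # zero pivot: swap in a usable row
            for i in range(k + 1, n):
                if W[i][k] != 0:
                    W[k], W[i] = W[i], W[k]
                    sign = -sign
                    break
            else:
                return 0                 # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss step: this division is always exact.
                W[i][j] = (W[i][j] * W[k][k] - W[i][k] * W[k][j]) // prev
            W[i][k] = 0
        prev = W[k][k]
    return sign * W[n - 1][n - 1]

I think the same fraction-free scheme, run Jordan-style on the augmented
matrix [N | I], leaves d on the diagonal and the adjugate A on the right,
after which each entry of the inverse is the single Fraction l*A[i][j]/d.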

Does that make sense?

-- [mdw]
 
Daniel Fetchinson

I wouldn't do it that way. Let M be your matrix. Work out the LCM l of
the denominators, and multiply the matrix by that to make it an integer
matrix N = l M. Then work out the determinant d of that integer matrix.
Next, the big step: use Gaussian elimination to find a matrix A (the
`adjugate matrix') such that A N = d I. This should be doable entirely
using integer arithmetic, and I think without needing any divisions.
Finally, we have l A M = d I, so (l/d A) M = I and l/d A is the inverse
you seek.

Does that make sense?

Absolutely! But there is nothing wrong with working out the inverse
directly using fractions.Fraction arithmetic, I'd think.

Cheers,
Daniel
 
Robert Kern

+1000. This is almost always the right thing to do whether you have floats or
rationals.

By this I meant that using Gaussian elimination to *solve* for A^-1*B is the
right thing to do, rather than explicitly forming the inverse of A (no matter
which algorithm you use). I hope that's what you meant too.
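
A sketch of the distinction (the `solve` name is made up; it is the same
exact Gauss-Jordan elimination as before, but augmenting with B instead of
the identity, so a single elimination pass yields A^-1*B without ever
forming A^-1):

from fractions import Fraction

def solve(A, B):
    # Compute X with A*X = B exactly, via Gauss-Jordan on [A | B].
    n = len(A)
    M = [[Fraction(x) for x in arow] + [Fraction(x) for x in brow]
         for arow, brow in zip(A, B)]
    for k in range(n):
        p = next((i for i in range(k, n) if M[i][k] != 0), None)
        if p is None:
            raise ValueError("matrix is singular")
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [x / pivot for x in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]    # this is A^-1 * B, with exact entries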

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
Mark Wooding

Daniel Fetchinson said:
Absolutely! But there is nothing wrong with working out the inverse
directly using fractions.Fraction arithmetic, I'd think.

It'll work, certainly; but the Fraction implementation will have to do a
buttload of GCD computations that it wouldn't need to do if you worked
with plain integers. And GCDs are relatively hard, as arithmetical
computations go: the usual algorithms require either a number of
divisions (which are themselves rather costly) or a bitwise traversal of
one of the operands.

A million entries seems nontrivial for a matrix, and Gaussian
elimination has cubic running time if I remember rightly; I suspect that
the transformations I describe would reduce the running time by a fair
amount. Of course, I might be drastically underestimating the
performance of modern hardware -- I often do -- so this may not be
especially relevant. Anyway, the possibility's there.

-- [mdw]
 
Daniel Fetchinson

It'll work, certainly; but the Fraction implementation will have to do a
buttload of GCD computations that it wouldn't need to do if you worked
with plain integers. And GCDs are relatively hard, as arithmetical
computations go: the usual algorithms require either a number of
divisions (which are themselves rather costly) or a bitwise traversal of
one of the operands.

A million entries seems nontrivial for a matrix, and Gaussian
elimination has cubic running time if I remember rightly; I suspect that
the transformations I describe would reduce the running time by a fair
amount. Of course, I might be drastically underestimating the
performance of modern hardware -- I often do -- so this may not be
especially relevant. Anyway, the possibility's there.

Okay, I see your point and I completely agree.
Surely it will be faster to do it with integers; I'll give it a shot.

Cheers,
Daniel
 
casevh

Okay, I see your point and I completely agree.
Surely it will be faster to do it with integers; I'll give it a shot.

Cheers,
Daniel

You may want to look at using GMPY. GMPY wraps the GNU Multiple
Precision library (GMP) and includes support for rational arithmetic. I just
tested a few calculations involving rational numbers with hundreds to
thousands of digits and GMPY's mpq type was between 10x and 100x
faster than fractions.Fraction.

GMPY is available at http://code.google.com/p/gmpy/
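
For a feel of the API (a tiny sketch; mpq is essentially a drop-in
replacement for Fraction in plain arithmetic, shown here with the newer
gmpy2 module name):

>>> from gmpy2 import mpq   # with the original package: from gmpy import mpq
>>> mpq(1, 3) + mpq(1, 6)
mpq(1,2)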

casevh

Disclaimer: I'm the current maintainer of GMPY.
 
John Nagle

+1000. This is almost always the right thing to do whether you have
floats or rationals.

Right.

Inverting a matrix of rationals works fine. There's even a standard
use for that, in theorem proving. See

G. Nelson and D. C. Oppen. Simplification by cooperating
decision procedures. ACM Transactions on Programming
Languages and Systems, 1(2):245–257, 1979.

It turns out that theorem proving for problems in
rational arithmetic which use only addition, subtraction,
multiplication by constants, comparisons for equal, greater and
less, and the boolean operators is completely decidable.
(This covers most of what programs do, especially in the
parts that affect control flow.) There's a neat
algorithm for solving such problems; you either get "True"
or a counterexample. The algorithm uses the simplex method
from linear programming, but with rational arithmetic.

But that's an exotic application. For ordinary number crunching,
rational arithmetic is completely inappropriate.

John Nagle
 
casevh

As you perform repeated calculations with rationals, the size of the
values (usually) keeps growing, where size is measured as the length of
the numerator and denominator. The speed and memory requirements of each
operation are therefore no longer constant.

I coded a quick matrix inversion function and measured running times
using GMPY2 rational and floating point types. For the floating point
tests, I used a precision of 1000 bits. With floating point values,
the running time grew as n^3. With rational values, the running time
grew as n^4*ln(n).

On my system, inverting a 1000x1000 matrix with 1000-bit precision
floating point would take about 30 minutes. Inverting the same matrix
using rationals would take a month or longer and require much more
memory. But the rational solution would be exact.

casevh
 
Mark Wooding

casevh said:
I coded a quick matrix inversion function and measured running times
using GMPY2 rational and floating point types. For the floating point
tests, I used a precision of 1000 bits. With floating point values,
the running time grew as n^3. With rational values, the running time
grew as n^4*ln(n).

Did you clear the denominators before you started?

-- [mdw]
 
