arbitrary precision linear algebra

Discussion in 'Python' started by Ben123, Mar 2, 2011.

  1. Ben123

    Ben123 Guest

    Hello. I have written a Python program which currently uses numpy to
    perform linear algebra operations. Specifically, I do matrix*matrix,
    matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    operations. Now I am interested in allowing arbitrary precision. I
    have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    to easily implement any with my current program. I suspect I have to
    change some commands but I am unsure what.

    My question is which of the arbitrary precision implementations will
    most easily handle linear algebra? I don't care about speed, just ease
    of use. Online tutorials for arbitrary precision linear algebra
    operations would be useful.

    For example, it looks like mpmath can handle matrix operations
    http://fredrik-j.blogspot.com/search?q=matrix
    but I was unable to find a clear tutorial. The tutorials for most of
    the arbitrary precision implementations demonstrate simple scalar
    examples.
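
    For what it's worth, from the mpmath documentation it looks like the
    basic operations should go something like this (a rough, untested
    sketch on my part, so the exact calls may be off):

    from mpmath import mp, matrix

    mp.dps = 50                   # work with 50 significant digits
    A = matrix([[1, 2], [3, 4]])  # mpmath's own matrix type, not numpy's
    v = matrix([1, 1])            # column vector
    print(A * A)                  # matrix*matrix
    print(A * v)                  # matrix*vector
    print(A**-1)                  # matrix inverse

    What I haven't found is an equally simple, documented route to
    eigenvalues at that precision.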

    Thanks in advance
    Ben123, Mar 2, 2011
    #1

  2. Ben123

    Ben123 Guest

    On Mar 2, 8:42 am, Ben123 <> wrote:
    > Hello. I have written a Python program which currently uses numpy to
    > perform linear algebra operations. Specifically, I do matrix*matrix,
    > matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > operations. Now I am interested in allowing arbitrary precision. I
    > have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > to easily implement any with my current program. I suspect I have to
    > change some commands but I am unsure what.
    >
    > My question is which of the arbitrary precision implementations will
    > most easily handle linear algebra? I don't care about speed, just ease
    > of use. Online tutorials for arbitrary precision linear algebra
    > operations would be useful.
    >
    > For example, it looks like mpmath can handle matrix operations
    > http://fredrik-j.blogspot.com/search?q=matrix
    > but I was unable to find a clear tutorial. The tutorials for most of
    > the arbitrary precision implementations demonstrate simple scalar
    > examples.
    >
    > Thanks in advance


    Hello again. I forgot to mention I'm using
    Python 2.6.4
    mpmath 0.17
    bigfloat 0.2.1
    gmp 5.01
    gmpy2 2.0.0a1
    mpfr 3.0.0
    all on Ubuntu x64
    Ben123, Mar 2, 2011
    #2

  3. What do you mean by "arbitrary precision"? Each method of calculating
    something has its own precision...
    Arthur Mc Coy, Mar 2, 2011
    #3
  4. Ben123

    Ben123 Guest

    On Mar 2, 9:04 am, Arthur Mc Coy <> wrote:
    > What do you mean by "arbitrary precision"? Each method of calculating
    > something has its own precision...


    If you are unfamiliar with arbitrary precision, I'm referring to
    http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

    Suppose I find the eigenvalues of a matrix and the eigenvalues range
    from 1 to 0.0001. This can be handled by numpy in Python because the
    smallest eigenvalue is larger than the numerical precision of 1E-19.
    However, if the range of eigenvalues is 1 to 1E-40, then I will need
    to increase the precision of all calculations leading up to finding
    the eigenvalues.

    I am working with complex valued matrices and I expect to get real
    eigenvalues back (based on the physics of the system). The precision
    of numpy is apparent from the imaginary component of the eigenvalues I
    find, currently 1E-19 or 1E-20. I need better precision for small
    eigenvalues.
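
    (The limit I keep running into is just double precision itself; a
    quick check in numpy, for anyone following along:

    import numpy as np
    print(np.finfo(np.float64).eps)   # ~2.2e-16, the relative precision of a double

    so anything much more than ~16 orders of magnitude below the largest
    eigenvalue is indistinguishable from roundoff.)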

    In case you are curious, the complex-valued matrices are 20x20.

    Thanks
    Ben123, Mar 2, 2011
    #4
  5. On Mar 2, 5:26 pm, Ben123 <> wrote:
    > On Mar 2, 9:04 am, Arthur Mc Coy <> wrote:
    >
    > > What do you mean by "arbitrary precision"? Each method of calculating
    > > something has its own precision...

    >
    > If you are unfamiliar with arbitrary precision, I'm referring to
    > http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
    >
    > Suppose I find the eigenvalues of a matrix and the eigenvalues range
    > from 1 to 0.0001. This can be handled by numpy in Python because the
    > smallest eigenvalue is larger than the numerical precision of 1E-19.
    > However, if the range of eigenvalues is 1 to 1E-40, then I will need
    > to increase the precision of all calculations leading up to finding
    > the eigenvalues.
    >
    > I am working with complex valued matrices and I expect to get real
    > eigenvalues back (based on the physics of the system). The precision
    > of numpy is apparent from the imaginary component of the eigenvalues I
    > find, currently 1E-19 or 1E-20. I need better precision for small
    > eigenvalues.
    >
    > In case you are curious, the complex-valued matrices are 20x20.
    >
    > Thanks


    You probably have to change the method of finding eigenvalues.
    Which one do you use? The power method or an algebraic one?
    Do you use the Gaussian method to simplify the matrices?

    Languages can't support infinitely large or small numbers, so try
    multiplying the inner variables by 10^n to increase their values, if
    this does not interfere with the method. For example, I did this when
    I was calculating geometric means of computer benchmarks.
    That way you will be storing the number of zeros as n.

    Also, I am curious what you are calculating.
    Arthur Mc Coy, Mar 2, 2011
    #5
  6. Ben123

    Ben123 Guest

    On Mar 2, 10:21 am, Arthur Mc Coy <> wrote:
    > On Mar 2, 5:26 pm, Ben123 <> wrote:
    > > On Mar 2, 9:04 am, Arthur Mc Coy <> wrote:
    > > > What do you mean by "arbitrary precision"? Each method of calculating
    > > > something has its own precision...
    >
    > > If you are unfamiliar with arbitrary precision, I'm referring to
    > > http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
    >
    > > Suppose I find the eigenvalues of a matrix and the eigenvalues range
    > > from 1 to 0.0001. This can be handled by numpy in Python because the
    > > smallest eigenvalue is larger than the numerical precision of 1E-19.
    > > However, if the range of eigenvalues is 1 to 1E-40, then I will need
    > > to increase the precision of all calculations leading up to finding
    > > the eigenvalues.
    >
    > > I am working with complex valued matrices and I expect to get real
    > > eigenvalues back (based on the physics of the system). The precision
    > > of numpy is apparent from the imaginary component of the eigenvalues I
    > > find, currently 1E-19 or 1E-20. I need better precision for small
    > > eigenvalues.
    >
    > > In case you are curious, the complex-valued matrices are 20x20.
    >
    > > Thanks
    >
    > You probably have to change the method of finding eigenvalues.
    > Which one do you use? The power method or an algebraic one?


    I'm not sure what you mean by this. As I mentioned, in Python I am
    using linalg.eig() from numpy on complex matrices. I have not
    investigated how this is implemented.

    > Do you use the Gaussian method to simplify the matrices?


    No

    >
    > Languages can't support infinitely large or small numbers, so try
    > multiplying the inner variables by 10^n to increase their values, if
    > this does not interfere with the method. For example, I did this when
    > I was calculating geometric means of computer benchmarks.


    Currently I have values between 1 and 1E-300 (not infinitely small). I
    don't see how scaling by powers of 10 will increase precision.

    > That way you will be storing the number of zeros as n.


    Are you saying python cares whether I express a number as 0.001 or
    scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
    need the full range of eigenvalues from 1 to 1E-300, so the entire
    range could be scaled by 1E300 but I would still need better precision
    than 1E19
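
    (Scaling only changes the exponent; the mantissa still carries the same
    ~16 significant digits either way. A quick illustration of the relative
    nature of double precision:

    print(1.0 + 1e-20 == 1.0)        # True: 1e-20 is lost next to 1.0
    print(1e300 + 1e280 == 1e300)    # True: the same loss after scaling by 1e300

    so the small eigenvalues would still drown in roundoff.)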

    >
    > Also, I am curious what you are calculating.
    Ben123, Mar 2, 2011
    #6
  7. > Are you saying python cares whether I express a number as 0.001 or
    > scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
    > need the full range of eigenvalues from 1 to 1E-300, so the entire
    > range could be scaled by 1E300 but I would still need better precision
    > than 1E19


    If a Python package can't handle numbers like 1E-300, then you can't
    use it. You can try to split your task into smaller subtasks, each of
    which can be handled by a specific Python package.

    Then you need to think of a way to do that, mathematically. If there is
    no way, then you may have to leave the task alone.
    Programming is limiting, that is true.

    I have heard about grid computing. You could find scientists working on
    grids and ask how they split their tasks.
    But I suspect they do not have an answer either.

    Also, Google uses its matrix to rank web pages. It computes the maximum
    eigenvalue of a matrix which contains near-zero entries too. Maybe you
    can find out how they store those values.

    Sorry, can't help anymore. I also have computing problems which I
    can't yet solve :)
    Arthur Mc Coy, Mar 2, 2011
    #7
  8. Robin Becker

    Robin Becker Guest

    On 02/03/2011 16:39, Ben123 wrote:
    ............
    >> Languages can't support infinitely large or small numbers, so try
    >> multiplying the inner variables by 10^n to increase their values, if
    >> this does not interfere with the method. For example, I did this when
    >> I was calculating geometric means of computer benchmarks.

    >
    > Currently I have values between 1 and 1E-300 (not infinitely small). I
    > don't see how scaling by powers of 10 will increase precision.
    >
    >> That way you will be storing the number of zeros as n.

    >
    > Are you saying python cares whether I express a number as 0.001 or
    > scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
    > need the full range of eigenvalues from 1 to 1E-300, so the entire
    > range could be scaled by 1E300 but I would still need better precision
    > than 1E19
    >

    ........

    If you enter a number as 1e-19 then Python will treat it as a float; by default
    I believe that float is IEEE double precision, so you're getting a 53-bit mantissa
    (others may have better details). That means you've already lost any idea of
    arbitrary precision.

    When you say you have numbers like 1E-300 are those actually numerically zero or
    have you some valid inputs that vary over a huge range. It should be possible to
    compute determinants/inverses etc to arbitrary precision as those are known to
    be polynomial functions of the input elements. However, eigenvalues are not.

    eg

    [0 2]
    [1 0]

    has eigenvalues +/- sqrt(2) so even though you can represent the matrix in
    finite precision the eigenvalues require infinite precision.

    Eigenvalues are roots of a polynomial in the elements and root solving may
    require an infinite number of steps so it will be difficult with arbitrary
    matrices to keep arbitrary precision.
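
    That said, "arbitrary" in the sense of "as many digits as you ask for"
    is achievable numerically; something like mpmath ought to be able to
    chase the roots of the characteristic polynomial to whatever precision
    you request (a rough sketch, untested by me):

    from mpmath import mp, polyroots

    mp.dps = 50                   # ask for 50 significant digits
    # characteristic polynomial of [[0, 2], [1, 0]] is x^2 - 2
    print(polyroots([1, 0, -2]))  # roots ~ +/-1.4142135623730950488...

    but only ever to the precision you request, never exactly.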
    --
    Robin Becker
    Robin Becker, Mar 2, 2011
    #8
  9. Ben123

    Ben123 Guest

    On Mar 2, 11:28 am, Robin Becker <> wrote:
    > On 02/03/2011 16:39, Ben123 wrote:
    > ...........
    > >> Languages can't support infinitely large or small numbers, so try
    > >> multiplying the inner variables by 10^n to increase their values, if
    > >> this does not interfere with the method. For example, I did this when
    > >> I was calculating geometric means of computer benchmarks.
    >
    > > Currently I have values between 1 and 1E-300 (not infinitely small). I
    > > don't see how scaling by powers of 10 will increase precision.
    >
    > >> That way you will be storing the number of zeros as n.
    >
    > > Are you saying python cares whether I express a number as 0.001 or
    > > scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
    > > need the full range of eigenvalues from 1 to 1E-300, so the entire
    > > range could be scaled by 1E300 but I would still need better precision
    > > than 1E19
    >
    > .......
    >
    > If you enter a number as 1e-19 then Python will treat it as a float; by default
    > I believe that float is IEEE double precision, so you're getting a 53-bit mantissa
    > (others may have better details). That means you've already lost any idea of
    > arbitrary precision.
    >
    > When you say you have numbers like 1E-300 are those actually numerically zero or
    > have you some valid inputs that vary over a huge range. It should be possible to
    > compute determinants/inverses etc to arbitrary precision as those are known to
    > be polynomial functions of the input elements. However, eigenvalues are not.


    I meant that I expect the eigenvalues to be in a range from 1 to
    1E-300. The values in the matrix are from sine and cosine values and
    have range [-10,10] for the real and imaginary parts. Thus, I could
    specify the matrix elements to arbitrary precision prior to
    diagonalization. However, I'm looking for the resulting eigenvalues to
    have precision down to 1E-300.

    >
    > eg
    >
    > [0 2]
    > [1 0]
    >
    > has eigenvalues +/- sqrt(2) so even though you can represent the matrix in
    > finite precision the eigenvalues require infinite precision.


    Here's an example of a matrix I am diagonalizing:
    from numpy import *
    exmpl = array([
        [ 9.39582700e-04 +1.21760986e-21j,  3.32958159e-04 -6.03359588e-04j,
         -3.71551527e-04 -1.77143102e-04j,  2.87113322e-04 -9.84374562e-04j],
        [ 3.32958159e-04 +6.03359588e-04j,  5.05441331e-04 +6.80604208e-21j,
         -1.79123354e-05 -3.01368276e-04j,  7.33866807e-04 -1.64459142e-04j],
        [-3.71551527e-04 +1.77143102e-04j, -1.79123354e-05 +3.01368276e-04j,
          1.80324968e-04 -2.63622461e-21j,  7.20508934e-05 +4.43394732e-04j],
        [ 2.87113322e-04 +9.84374562e-04j,  7.33866807e-04 +1.64459142e-04j,
          7.20508934e-05 -4.43394732e-04j,  1.11903651e-03 -6.61744490e-21j]])

    The eigenvalues I get back using linalg.eig(exmpl)[0] are

    2.74438550e-03 +7.67536143e-20j
    6.54324852e-12 +2.03438929e-20j
    1.39471366e-13 +4.68335525e-20j
    1.76591707e-12 +2.20243218e-20j

    Here I would want to have better precision in the eigenvalues, a
    smaller imaginary component.

    To use your example, I'd like to increase the number of digits shown
    in the eigenvalue:
    from numpy import *
    test=array([[0, 2],[1, 0]])
    linalg.eig(test)[0]

    but using

    import mpmath
    from mpmath import *
    mp.dps = 50
    from numpy import *
    test=array([[0, 2],[1, 0]])
    linalg.eig(test)[0]

    does not help
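
    I suspect mpmath only helps if the matrix itself is an mpmath matrix
    and mpmath's own eigenvalue routine is used instead of numpy's.
    Something like the following is what I have in mind, although I
    believe mpmath.eig is newer than the 0.17 I have installed, so treat
    this as an untested sketch:

    from mpmath import mp, matrix, eig

    mp.dps = 50
    test = matrix([[0, 2], [1, 0]])
    E, ER = eig(test)   # eigenvalues and right eigenvectors
    print(E)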

    >
    > Eigenvalues are roots of a polynomial in the elements and root solving may
    > require an infinite number of steps so it will be difficult with arbitrary
    > matrices to keep arbitrary precision.
    > --
    > Robin Becker
    Ben123, Mar 2, 2011
    #9
  10. On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    > Hello. I have written a Python program which currently uses numpy to
    > perform linear algebra operations. Specifically, I do matrix*matrix,
    > matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > operations. Now I am interested in allowing arbitrary precision. I
    > have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > to easily implement any with my current program. I suspect I have to
    > change some commands but I am unsure what.
    >
    > My question is which of the arbitrary precision implementations will
    > most easily handle linear algebra? I don't care about speed, just ease
    > of use. Online tutorials for arbitrary precision linear algebra
    > operations would be useful.
    >
    > For example, it looks like mpmath can handle matrix operations
    > http://fredrik-j.blogspot.com/search?q=matrix
    > but I was unable to find a clear tutorial. The tutorials for most of
    > the arbitrary precision implementations demonstrate simple scalar
    > examples.
    >
    > Thanks in advance


    Have you looked at Sage[0]? I don't know for a fact, but you should be
    able to define a matrix over RealField(precision_in_bits) and then
    take the eigenvalue of it. I don't know if it will actually produce
    the precision you need though.

    Geremy Condra
    geremy condra, Mar 2, 2011
    #10
  11. On Wed, Mar 2, 2011 at 10:21 AM, geremy condra <> wrote:
    > On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    >> Hello. I have written a Python program which currently uses numpy to
    >> perform linear algebra operations. Specifically, I do matrix*matrix,
    >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    >> operations. Now I am interested in allowing arbitrary precision. I
    >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    >> to easily implement any with my current program. I suspect I have to
    >> change some commands but I am unsure what.
    >>
    >> My question is which of the arbitrary precision implementations will
    >> most easily handle linear algebra? I don't care about speed, just ease
    >> of use. Online tutorials for arbitrary precision linear algebra
    >> operations would be useful.
    >>
    >> For example, it looks like mpmath can handle matrix operations
    >> http://fredrik-j.blogspot.com/search?q=matrix
    >> but I was unable to find a clear tutorial. The tutorials for most of
    >> the arbitrary precision implementations demonstrate simple scalar
    >> examples.
    >>
    >> Thanks in advance

    >
    > Have you looked at Sage[0]? I don't know for a fact, but you should be
    > able to define a matrix over RealField(precision_in_bits) and then
    > take the eigenvalue of it. I don't know if it will actually produce
    > the precision you need though.
    >
    > Geremy Condra
    >


    Apologies, forgot the links:

    http://www.sagemath.org/doc/constructions/linear_algebra.html
    http://www.sagemath.org/doc/reference/sage/rings/complex_field.html

    Geremy Condra
    geremy condra, Mar 2, 2011
    #11
  12. Ben123

    Ben123 Guest

    On Mar 2, 12:22 pm, geremy condra <> wrote:
    > On Wed, Mar 2, 2011 at 10:21 AM, geremy condra <> wrote:
    > > On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    > >> Hello. I have written a Python program which currently uses numpy to
    > >> perform linear algebra operations. Specifically, I do matrix*matrix,
    > >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > >> operations. Now I am interested in allowing arbitrary precision. I
    > >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > >> to easily implement any with my current program. I suspect I have to
    > >> change some commands but I am unsure what.

    >
    > >> My question is which of the arbitrary precision implementations will
    > >> most easily handle linear algebra? I don't care about speed, just ease
    > >> of use. Online tutorials for arbitrary precision linear algebra
    > >> operations would be useful.

    >
    > >> For example, it looks like mpmath can handle matrix operations
    > >> http://fredrik-j.blogspot.com/search?q=matrix
    > >> but I was unable to find a clear tutorial. The tutorials for most of
    > >> the arbitrary precision implementations demonstrate simple scalar
    > >> examples.

    >
    > >> Thanks in advance

    >
    > > Have you looked at Sage[0]? I don't know for a fact, but you should be
    > > able to define a matrix over RealField(precision_in_bits) and then
    > > take the eigenvalue of it. I don't know if it will actually produce
    > > the precision you need though.

    >
    > > Geremy Condra

    >
    > Apologies, forgot the links:
    >
    > http://www.sagemath.org/doc/constru...g/doc/reference/sage/rings/complex_field.html
    >
    > Geremy Condra


    I've already implemented the algorithm in Mathematica for exactly this
    reason - increasing precision there is trivial. I was interested in
    Python because it is free and more portable. I knew of Sage but wasn't
    sure if it was more appropriate than Python.
    Ben123, Mar 2, 2011
    #12
  13. Ben123

    Ben123 Guest

    On Mar 2, 12:22 pm, geremy condra <> wrote:
    > On Wed, Mar 2, 2011 at 10:21 AM, geremy condra <> wrote:
    > > On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    > >> Hello. I have written a Python program which currently uses numpy to
    > >> perform linear algebra operations. Specifically, I do matrix*matrix,
    > >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > >> operations. Now I am interested in allowing arbitrary precision. I
    > >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > >> to easily implement any with my current program. I suspect I have to
    > >> change some commands but I am unsure what.

    >
    > >> My question is which of the arbitrary precision implementations will
    > >> most easily handle linear algebra? I don't care about speed, just ease
    > >> of use. Online tutorials for arbitrary precision linear algebra
    > >> operations would be useful.

    >
    > >> For example, it looks like mpmath can handle matrix operations
    > >> http://fredrik-j.blogspot.com/search?q=matrix
    > >> but I was unable to find a clear tutorial. The tutorials for most of
    > >> the arbitrary precision implementations demonstrate simple scalar
    > >> examples.

    >
    > >> Thanks in advance

    >
    > > Have you looked at Sage[0]? I don't know for a fact, but you should be
    > > able to define a matrix over RealField(precision_in_bits) and then
    > > take the eigenvalue of it. I don't know if it will actually produce
    > > the precision you need though.

    >
    > > Geremy Condra

    >
    > Apologies, forgot the links:
    >
    > http://www.sagemath.org/doc/constru...g/doc/reference/sage/rings/complex_field.html
    >
    > Geremy Condra


    I'm not sufficiently familiar with Sage, but from
    http://www.sagemath.org/doc/constructions/linear_algebra.html

    "currently Sage does not implement multiprecision numerical
    eigenvalues/eigenvectors"

    I'll ask on the Sage forums about this. In the mean time, I'm still
    trying to get arbitrary precision linear algebra in Python
    Ben123, Mar 2, 2011
    #13
  14. Paul Rubin

    Paul Rubin Guest

    Ben123 <> writes:
    > I'll ask on the Sage forums about this. In the mean time, I'm still
    > trying to get arbitrary precision linear algebra in Python


    You probably have to use something like gmpy.mpq to implement your
    favorite eigenvalue computation algorithm. Maxima might be able to do
    it out of the box, but is perhaps more hassle to use than you want to
    deal with. See: http://en.wikipedia.org/wiki/Maxima_(software)
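
    For the pieces that are friendly to exact arithmetic (products,
    inverses, determinants), rolling your own over rationals is not much
    code. Here is a rough Gauss-Jordan sketch using the stdlib's
    fractions.Fraction, just to illustrate the idea; gmpy's mpq would drop
    in the same way, and complex entries or eigenvalues need more work:

    from fractions import Fraction

    def inverse(m):
        """Invert a square matrix exactly using Gauss-Jordan elimination."""
        n = len(m)
        # augment the matrix with the identity on the right
        a = [[Fraction(x) for x in row] +
             [Fraction(1 if i == j else 0) for j in range(n)]
             for i, row in enumerate(m)]
        for col in range(n):
            # find a row at or below 'col' with a nonzero pivot and swap it up
            piv = next(r for r in range(col, n) if a[r][col] != 0)
            a[col], a[piv] = a[piv], a[col]
            scale = a[col][col]
            a[col] = [x / scale for x in a[col]]
            # eliminate this column from every other row
            for r in range(n):
                if r != col and a[r][col] != 0:
                    factor = a[r][col]
                    a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
        return [row[n:] for row in a]

    # inverse of [[4, 7], [2, 6]] is [[3/5, -7/10], [-1/5, 2/5]]
    print(inverse([[4, 7], [2, 6]]))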
    Paul Rubin, Mar 2, 2011
    #14
  15. On Wed, Mar 2, 2011 at 10:47 AM, Ben123 <> wrote:
    > On Mar 2, 12:22 pm, geremy condra <> wrote:
    >> On Wed, Mar 2, 2011 at 10:21 AM, geremy condra <> wrote:
    >> > On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    >> >> Hello. I have written a Python program which currently uses numpy to
    >> >> perform linear algebra operations. Specifically, I do matrix*matrix,
    >> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    >> >> operations. Now I am interested in allowing arbitrary precision. I
    >> >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    >> >> to easily implement any with my current program. I suspect I have to
    >> >> change some commands but I am unsure what.

    >>
    >> >> My question is which of the arbitrary precision implementations will
    >> >> most easily handle linear algebra? I don't care about speed, just ease
    >> >> of use. Online tutorials for arbitrary precision linear algebra
    >> >> operations would be useful.

    >>
    >> >> For example, it looks like mpmath can handle matrix operations
    >> >> http://fredrik-j.blogspot.com/search?q=matrix
    >> >> but I was unable to find a clear tutorial. The tutorials for most of
    >> >> the arbitrary precision implementations demonstrate simple scalar
    >> >> examples.

    >>
    >> >> Thanks in advance

    >>
    >> > Have you looked at Sage[0]? I don't know for a fact, but you should be
    >> > able to define a matrix over RealField(precision_in_bits) and then
    >> > take the eigenvalue of it. I don't know if it will actually produce
    >> > the precision you need though.

    >>
    >> > Geremy Condra

    >>
    >> Apologies, forgot the links:
    >>
    >> http://www.sagemath.org/doc/constru...g/doc/reference/sage/rings/complex_field.html
    >>
    >> Geremy Condra

    >
    > I'm not sufficiently familiar with Sage, but from
    > http://www.sagemath.org/doc/constructions/linear_algebra.html
    >
    > "currently Sage does not implement multiprecision numerical
    > eigenvalues/eigenvectors"
    >
    > I'll ask on the Sage forums about this. In the mean time, I'm still
    > trying to get arbitrary precision linear algebra in Python


    I'd suggest you read that slightly more carefully. It's speaking
    specifically of RR and CC, i.e., double-width reals and complex values.
    Using RealField and ComplexField (the arbitrary precision real and
    complex fields) seems to be working. Using the earlier example:

    sage: M1 = Matrix(RealField(1000), [[0, 2], [1, 0]])
    sage: M2 = Matrix(RR, [[0, 2], [1, 0]])
    sage: M1.eigenvalues()
    [1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799,
    -1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799]
    sage: M2.eigenvalues()
    [1.41421356237310, -1.41421356237310]

    Converting the first of the latter values to an element of
    RealField(1000) yields much what I would expect from higher precision
    arithmetic:

    sage: R = RealField(1000)
    sage: x = M1.eigenvalues()[0]
    sage: y = R(M2.eigenvalues()[0])
    sage: x
    1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799
    sage: y
    1.41421356237309514547462185873882845044136047363281250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

    So, while I don't know for a fact that it's using the precision you
    need, it certainly does seem to be using high precision arithmetic
    here. Furthermore, repeating it for various precisions seems to
    increase the difference, as would be expected from better
    approximations, and the number of digits in the result is consistent
    with the interpretation that it is using the specified precision.

    All of this to say that it seems to be doing what you want it to do.

    Geremy Condra
    geremy condra, Mar 2, 2011
    #15
  16. Nobody

    Nobody Guest

    On Wed, 02 Mar 2011 06:42:22 -0800, Ben123 wrote:

    > Hello. I have written a Python program which currently uses numpy to
    > perform linear algebra operations. Specifically, I do matrix*matrix,
    > matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > operations. Now I am interested in allowing arbitrary precision. I
    > have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > to easily implement any with my current program. I suspect I have to
    > change some commands but I am unsure what.


    numpy is implemented in C, and is limited to the language's primitive
    types. It provides a "float96" type which I would guess is actually a
    "long double" (80 bits on x86). You can't extend numpy's precision by
    using a separate arbitrary-precision library.
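
    You can see how little the extended type buys you directly (the exact
    dtype name varies by platform, so treat this as a sketch):

    import numpy as np
    print(np.finfo(np.float64).eps)     # ~2.2e-16: ordinary double precision
    print(np.finfo(np.longdouble).eps)  # ~1.1e-19 on x86: 80-bit extended, still a fixed format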
    Nobody, Mar 2, 2011
    #16
  17. Ben123

    Ben123 Guest

    On Mar 2, 4:48 pm, Nobody <> wrote:
    > On Wed, 02 Mar 2011 06:42:22 -0800, Ben123 wrote:
    > > Hello. I have written a Python program which currently uses numpy to
    > > perform linear algebra operations. Specifically, I do matrix*matrix,
    > > matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > > operations. Now I am interested in allowing arbitrary precision. I
    > > have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > > to easily implement any with my current program. I suspect I have to
    > > change some commands but I am unsure what.

    >
    > numpy is implemented in C, and is limited to the language's primitive
    > types. It provides a "float96" type which I would guess is actually a
    > "long double" (80 bits on x86). You can't extend numpy's precision by
    > using a separate arbitrary-precision library.


    Hello. To clarify, I don't need numpy explicitly. I'm looking for an
    arbitrary precision library which can also do linear algebra
    operations: matrix*matrix, matrix*vector, matrix inverse, and
    eigenvalues of a matrix.
    Ben123, Mar 3, 2011
    #17
  18. Ben123

    Ben123 Guest

    On Mar 2, 1:34 pm, geremy condra <> wrote:
    > On Wed, Mar 2, 2011 at 10:47 AM, Ben123 <> wrote:
    > > On Mar 2, 12:22 pm, geremy condra <> wrote:
    > >> On Wed, Mar 2, 2011 at 10:21 AM, geremy condra <> wrote:
    > >> > On Wed, Mar 2, 2011 at 6:42 AM, Ben123 <> wrote:
    > >> >> Hello. I have written a Python program which currently uses numpy to
    > >> >> perform linear algebra operations. Specifically, I do matrix*matrix,
    > >> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
    > >> >> operations. Now I am interested in allowing arbitrary precision. I
    > >> >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
    > >> >> to easily implement any with my current program. I suspect I have to
    > >> >> change some commands but I am unsure what.

    >
    > >> >> My question is which of the arbitrary precision implementations will
    > >> >> most easily handle linear algebra? I don't care about speed, just ease
    > >> >> of use. Online tutorials for arbitrary precision linear algebra
    > >> >> operations would be useful.

    >
    > >> >> For example, it looks like mpmath can handle matrix operations
    > >> >> http://fredrik-j.blogspot.com/search?q=matrix
    > >> >> but I was unable to find a clear tutorial. The tutorials for most of
    > >> >> the arbitrary precision implementations demonstrate simple scalar
    > >> >> examples.

    >
    > >> >> Thanks in advance

    >
    > >> > Have you looked at Sage[0]? I don't know for a fact, but you should be
    > >> > able to define a matrix over RealField(precision_in_bits) and then
    > >> > take the eigenvalue of it. I don't know if it will actually produce
    > >> > the precision you need though.

    >
    > >> > Geremy Condra

    >
    > >> Apologies, forgot the links:

    >
    > >>http://www.sagemath.org/doc/constructions/linear_algebra.htmlhttp://w....

    >
    > >> Geremy Condra

    >
    > > I'm not sufficiently familiar with Sage, but from
    > > http://www.sagemath.org/doc/constructions/linear_algebra.html

    >
    > > "currently Sage does not implement multiprecision numerical
    > > eigenvalues/eigenvectors"

    >
    > > I'll ask on the Sage forums about this. In the mean time, I'm still
    > > trying to get arbitrary precision linear algebra in Python

    >
    > I'd suggest you read that slightly more carefully. It's speaking
    > specifically of RR and CC, i.e., double-width reals and complex values.
    > Using RealField and ComplexField (the arbitrary precision real and
    > complex fields) seems to be working. Using the earlier example:
    >
    > sage: M1 = Matrix(RealField(1000), [[0, 2], [1, 0]])
    > sage: M2 = Matrix(RR, [[0, 2], [1, 0]])
    > sage: M1.eigenvalues()
    > [1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799,
    > -1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799]
    > sage: M2.eigenvalues()
    > [1.41421356237310, -1.41421356237310]
    >
    > Converting the first of the latter values to an element of
    > RealField(1000) yields much what I would expect from higher precision
    > arithmetic:
    >
    > sage: R = RealField(1000)
    > sage: x = M1.eigenvalues()[0]
    > sage: y = R(M2.eigenvalues()[0])
    > sage: x
    > 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851741864088919860955232923048430871432145083976260362799525140799
    > sage: y
    > 1.41421356237309514547462185873882845044136047363281250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    >
    > So, while I don't know for a fact that it's using the precision you
    > need, it certainly does seem to be using high precision arithmetic
    > here. Furthermore, repeating it for various precisions seems to
    > increase the difference, as would be expected from better
    > approximations, and the number of digits in the result is consistent
    > with the interpretation that it is using the specified precision.
    >
    > All of this to say that it seems to be doing what you want it to do.
    >
    > Geremy Condra


    Hello. I was able to install Sage 4.6.1 and get your example to work.
    I will update this thread once I get my program to work with Sage.
    Ben123, Mar 3, 2011
    #18
