# numpy (matrix solver) - python vs. matlab

Discussion in 'Python' started by someone, Apr 29, 2012.

1. ### someone (Guest)

Hi,

Note the cross-post; I hope you'll bear with me for doing that (and I
imagine that some of you in the matlab group also like python, like
myself)...

------------------------------------------
Python vs. Matlab:
------------------------------------------

Python:
========
from numpy import matrix
from numpy import linalg
A = matrix( [[1,2,3],[11,12,13],[21,22,23]] )
print "A="
print A
print "A.I (inverse of A)="
print A.I

A.I (inverse of A)=
[[ 2.81466387e+14 -5.62932774e+14 2.81466387e+14]
[ -5.62932774e+14 1.12586555e+15 -5.62932774e+14]
[ 2.81466387e+14 -5.62932774e+14 2.81466387e+14]]

Matlab:
========
>> A=[1 2 3; 11 12 13; 21 22 23]

A =

1 2 3
11 12 13
21 22 23

>> inv(A)

Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 1.067522e-17.

ans =

1.0e+15 *

0.3002 -0.6005 0.3002
-0.6005 1.2010 -0.6005
0.3002 -0.6005 0.3002

------------------------------------------
Python vs. Matlab:
------------------------------------------

So Matlab at least warns about "Matrix is close to singular or badly
scaled", which python (and I guess most other languages) does not...

Which is the most accurate/best, even for such a bad matrix? Is it
possible to say something about that? It looks like python has a lot
more digits, but maybe that's just a random result... I mean, element
1,1 = 2.81e14 in Python, but something like 3e14 in Matlab and so forth -
there's a small difference in the results...

With python, I would also kindly ask how to avoid this problem in the
future. I mean, does this mean that I have to check the condition
number every time before doing anything at all? How do I do that?

I hope you matlabticians like this topic; at least I find it
interesting myself, and many of you probably also program in some other
language.

someone, Apr 29, 2012

2. ### someone (Guest)

On 04/30/2012 12:39 AM, Kiuhnm wrote:

>> So Matlab at least warns about "Matrix is close to singular or badly
>> scaled", which python (and I guess most other languages) does not...

>
> A is not just close to singular: it's singular!

Ok. When do you define it to be singular, btw?

>> Which is the most accurate/best, even for such a bad matrix? Is it
>> possible to say something about that? Looks like python has a lot more
>> digits but maybe that's just a random result... I mean.... Element 1,1 =
>> 2.81e14 in Python, but something like 3e14 in Matlab and so forth -
>> there's a small difference in the results...

>
> Both results are *wrong*: no inverse exists.

What's the best solution of the two wrong ones? Best least-squares
solution or whatever?

>> With python, I would also kindly ask about how to avoid this problem in
>> the future, I mean, this maybe means that I have to check the condition
>> number at all times before doing anything at all ? How to do that?

>
> If cond(A) is high, you're trying to solve your problem the wrong way.

So you're saying that in another language (python) I should check the
condition number, before solving anything?

> You should try to avoid matrix inversion altogether if that's the case.
> For instance you shouldn't invert a matrix just to solve a linear system.

What then?

Cramer's rule?

someone, Apr 30, 2012

3. ### Nasser M. Abbasi (Guest)

On 04/29/2012 05:17 PM, someone wrote:

> I would also kindly ask about how to avoid this problem in
> the future, I mean, this maybe means that I have to check the condition
> number at all times before doing anything at all ? How to do that?
>

I hope you'll check the condition number all the time.

You could be designing a building where people will live.

If you do not check the condition number, you'll end up with a building
that will fall down when a small wind hits it, and many people will die,
all because you did not bother to check the condition number when you
solved the equations you used in your design.

Also, as was said, do not use INV(A) directly to solve equations.

--Nasser

Nasser M. Abbasi, Apr 30, 2012
4. ### Paul Rubin (Guest)

someone <> writes:
>> A is not just close to singular: it's singular!

> Ok. When do you define it to be singular, btw?

Singular means the determinant is zero, i.e. the rows or columns
are not linearly independent. Let's give names to the three rows:

a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a. So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.
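In numpy you can check all of this directly; here's a quick sketch
(standard numpy calls, applied to the matrix from this thread):

```python
import numpy as np

# The matrix from the thread
A = np.array([[1, 2, 3],
              [11, 12, 13],
              [21, 22, 23]])

# A singular matrix has determinant 0 (up to floating-point round-off)
print(np.linalg.det(A))

# ...and rank below its size: here 2 instead of 3
print(np.linalg.matrix_rank(A))

# The dependency pointed out above: row 2 == 2*row 1 - row 0
print(np.allclose(A[2], 2 * A[1] - A[0]))   # -> True
```

`matrix_rank` uses an SVD with a tolerance internally, so it is the more
robust test; the determinant of a singular matrix comes out as a tiny
nonzero number in floating point rather than exactly zero.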

>>> Which is the most accurate/best, even for such a bad matrix?

What are you trying to do? If you are trying to calculate stuff
with matrices, you really should know some basic linear algebra.

Paul Rubin, Apr 30, 2012
5. ### someone (Guest)

On 04/30/2012 02:38 AM, Nasser M. Abbasi wrote:
> On 04/29/2012 05:17 PM, someone wrote:
>
>> I would also kindly ask about how to avoid this problem in
>> the future, I mean, this maybe means that I have to check the condition
>> number at all times before doing anything at all ? How to do that?
>>

>
> I hope you'll check the condition number all the time.

So how big can it (the condition number) be before I should do something
else? And what then? Cramer's rule or the pseudoinverse or something else?

> You could be designing a building where people will live in it.
>
> If do not check the condition number, you'll end up with a building that
> will fall down when a small wind hits it and many people will die all
> because you did not bother to check the condition number when you solved
> the equations you used in your design.
>
> Also, as was said, do not use INV(A) directly to solve equations.

In Matlab I used x=A\b.

I used inv(A) in python. Should I use some kind of pseudo-inverse or
what do you suggest?

someone, Apr 30, 2012
6. ### Nasser M. Abbasi (Guest)

On 04/29/2012 07:59 PM, someone wrote:

>>
>> Also, as was said, do not use INV(A) directly to solve equations.

>
> In Matlab I used x=A\b.
>

good.

> I used inv(A) in python. Should I use some kind of pseudo-inverse or
> what do you suggest?
>

I do not use python much myself, but a quick google showed that Python's
scipy has an API for linalg, so use the following code example, which is
from the documentation:

X = scipy.linalg.solve(A, B)

But you still need to check cond(). If it is too large, that's not good.
How large, and all that, depends on the problem itself. But as a rule of
thumb, the lower the better. Less than 100 can be good in general, but I
really can't give you a fixed number to use, as I am not an expert in
this subject; others who know more about it might have better
recommendations.
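Putting the two together, a minimal sketch (the 2x2 system is my own
made-up example, and the 100 cutoff below is just the rule of thumb from
above, not a universal constant):

```python
import numpy as np
from scipy import linalg

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # a small, well-conditioned example system
b = np.array([3.0, 5.0])

c = np.linalg.cond(A)
if c > 100:                      # rule-of-thumb cutoff; problem-dependent
    print("warning: cond(A) = %g, results may be inaccurate" % c)

x = linalg.solve(A, b)           # solves A x = b without forming inv(A)
print(x)                         # -> [0.8 1.4]
```

scipy.linalg.solve (like Matlab's backslash) factorizes A and
back-substitutes, which is both cheaper and numerically better behaved
than computing inv(A) and multiplying.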

--Nasser

Nasser M. Abbasi, Apr 30, 2012
7. ### Nasser M. Abbasi (Guest)

On 04/29/2012 07:17 PM, someone wrote:

> Ok. When do you define it to be singular, btw?
>

There are things you can see right away about a matrix A being singular
without doing any computation. By just looking at it.

For example, if you see a column (or row) that is a linear combination
of other column(s) (or row(s)), then this is a no-no.

1 2 3
11 12 13
21 22 23

You can see right away that if you multiply the second row by 2 and
subtract the first row from that, you obtain the third row.

Hence the third row is a linear combination of the first row and the
second row. No good.

When you get a row (or a column) that is a linear combination of other
rows (or columns), then this means the matrix is singular.

--Nasser

Nasser M. Abbasi, Apr 30, 2012
8. ### Russ P. (Guest)

On Apr 29, 5:17 pm, someone <> wrote:
> On 04/30/2012 12:39 AM, Kiuhnm wrote:
>
> >> So Matlab at least warns about "Matrix is close to singular or badly
> >> scaled", which python (and I guess most other languages) does not...

>
> > A is not just close to singular: it's singular!

>
> Ok. When do you define it to be singular, btw?
>
> >> Which is the most accurate/best, even for such a bad matrix? Is it
> >> possible to say something about that? Looks like python has a lot more
> >> digits but maybe that's just a random result... I mean.... Element 1,1 =
> >> 2.81e14 in Python, but something like 3e14 in Matlab and so forth -
> >> there's a small difference in the results...

>
> > Both results are *wrong*: no inverse exists.

>
> What's the best solution of the two wrong ones? Best least-squares
> solution or whatever?
>
> >> With python, I would also kindly ask about how to avoid this problem in
> >> the future, I mean, this maybe means that I have to check the condition
> >> number at all times before doing anything at all ? How to do that?

>
> > If cond(A) is high, you're trying to solve your problem the wrong way.

>
> So you're saying that in another language (python) I should check the
> condition number, before solving anything?
>
> > You should try to avoid matrix inversion altogether if that's the case.
> > For instance you shouldn't invert a matrix just to solve a linear system.

>
> What then?
>
> Cramer's rule?

If you really want to know just about everything there is to know
about a matrix, take a look at its Singular Value Decomposition (SVD).
I've never used numpy, but I assume it can compute an SVD.
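It can; numpy.linalg.svd works fine even on the singular matrix from
this thread. A quick sketch:

```python
import numpy as np

A = np.array([[1.0, 2, 3],
              [11, 12, 13],
              [21, 22, 23]])

# A = U @ diag(s) @ Vt, with the singular values s sorted largest first
U, s, Vt = np.linalg.svd(A)

print(s)    # the smallest singular value is ~0, exposing the singularity
print(np.allclose(U @ np.diag(s) @ Vt, A))   # the factorization reproduces A
```

The number of singular values that are effectively nonzero is the rank
of the matrix, which is why the SVD tells you "just about everything".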

Russ P., May 1, 2012
9. ### Eelco (Guest)

There is linalg.pinv, which computes a pseudoinverse based on SVD that
works on all matrices, regardless of the rank of the matrix. It merely
approximates A*A.I = I as well as A permits though, rather than being
a true inverse, which may not exist.
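A sketch of what that means for the singular matrix from this thread
(the right-hand side b below is made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2, 3],
              [11, 12, 13],
              [21, 22, 23]])
b = np.array([1.0, 2.0, 3.0])

A_pinv = np.linalg.pinv(A)

# x is the minimum-norm least-squares solution of A x = b
x = A_pinv @ b

# A @ A_pinv is NOT the identity: no true inverse exists...
print(np.allclose(A @ A_pinv, np.eye(3)))        # -> False

# ...but the Moore-Penrose condition A @ A_pinv @ A == A still holds
print(np.allclose(A @ A_pinv @ A, A))            # -> True
```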

Anyway, there are no general answers for this kind of thing. In all
non-textbook problems I can think of, the properties of your matrix
are highly constrained by the problem you are working on; which
additional tests are required to check for corner cases thus depends
on the problem. Often, if you have found an elegant solution to your
problem, no such corner cases exist. In that case, MATLAB is just
wasting your time with its automated checks.

Eelco, May 1, 2012
10. ### someone (Guest)

On 04/30/2012 02:57 AM, Paul Rubin wrote:
> someone<> writes:
>>> A is not just close to singular: it's singular!

>> Ok. When do you define it to be singular, btw?

>
> Singular means the determinant is zero, i.e. the rows or columns
> are not linearly independent. Let's give names to the three rows:
>
> a = [1 2 3]; b = [11 12 13]; c = [21 22 23].
>
> Then notice that c = 2*b - a. So c is linearly dependent on a and b.
> Geometrically this means the three vectors are in the same plane,
> so the matrix doesn't have an inverse.

Oh, thank you very much for a good explanation.

>>>> Which is the most accurate/best, even for such a bad matrix?

>
> What are you trying to do? If you are trying to calculate stuff
> with matrices, you really should know some basic linear algebra.

Actually I know some... I just didn't think about this as much as I
should have before writing the question. I know there's also something
like singular value decomposition that I think can help solve otherwise
ill-posed problems, although I'm not an expert like others in this
forum, that I know for sure.

someone, May 1, 2012
11. ### someone (Guest)

On 05/01/2012 08:56 AM, Russ P. wrote:
> On Apr 29, 5:17 pm, someone<> wrote:
>> On 04/30/2012 12:39 AM, Kiuhnm wrote:
>>> You should try to avoid matrix inversion altogether if that's the case.
>>> For instance you shouldn't invert a matrix just to solve a linear system.

>>
>> What then?
>>
>> Cramer's rule?

>
> If you really want to know just about everything there is to know
> about a matrix, take a look at its Singular Value Decomposition (SVD).

I know a bit about SVD - I used it for a short period of time in Matlab,
though I'm definitely not an expert in it and I don't understand the
whole theory with orthogonality behind it that makes it work as
elegantly as it does.

> I've never used numpy, but I assume it can compute an SVD.

I'm making my first steps now with numpy, so there's a lot I don't know
and haven't tried with numpy...

someone, May 1, 2012
12. ### someone (Guest)

On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:
> On 04/29/2012 07:59 PM, someone wrote:
> I do not use python much myself, but a quick google showed that Python's
> scipy has an API for linalg, so use the following code example, which is
> from the documentation:
>
> X = scipy.linalg.solve(A, B)
>
> But you still need to check cond(). If it is too large, that's not good.
> How large, and all that, depends on the problem itself. But as a rule of
> thumb, the lower the better. Less than 100 can be good in general, but I
> really can't give you a fixed number to use, as I am not an expert in
> this subject; others who know more about it might have better
> recommendations.

Ok, that's a number...

Does anyone want to participate - do I hear something better than "less
than 100 can be good in general"?

If I don't hear anything better, the limit is now 100...

What's the limit in matlab (on the condition number of the matrices), by
the way, before it comes up with a warning?

someone, May 1, 2012
13. ### Colin J. Williams (Guest)

On 01/05/2012 2:43 PM, someone wrote:
[snip]
>> a = [1 2 3]; b = [11 12 13]; c = [21 22 23].
>>
>> Then notice that c = 2*b - a. So c is linearly dependent on a and b.
>> Geometrically this means the three vectors are in the same plane,
>> so the matrix doesn't have an inverse.

>

Does it not mean that there are three parallel planes?

Consider the example in two-dimensional space.

Colin W.
[snip]

Colin J. Williams, May 1, 2012
14. ### Russ P. (Guest)

On May 1, 11:52 am, someone <> wrote:
> On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:
>
> > On 04/29/2012 07:59 PM, someone wrote:
> > I do not use python much myself, but a quick google showed that Python's
> > scipy has an API for linalg, so use the following code example, which is
> > from the documentation:

>
> > X = scipy.linalg.solve(A, B)

>
> > But you still need to check cond(). If it is too large, that's not good.
> > How large, and all that, depends on the problem itself. But as a rule of
> > thumb, the lower the better. Less than 100 can be good in general, but I
> > really can't give you a fixed number to use, as I am not an expert in
> > this subject; others who know more about it might have better
> > recommendations.

>
> Ok, that's a number...
>
> Anyone wants to participate and do I hear something better than "less
> than 100 can be good in general" ?
>
> If I don't hear anything better, the limit is now 100...
>
> What's the limit in matlab (on the condition number of the matrices), by
> the way, before it comes up with a warning ???

The threshold of acceptability really depends on the problem you are
trying to solve. I haven't solved linear equations for a long time,
but off hand, I would say that a condition number over 10 is
questionable.

A high condition number suggests that the selection of independent
variables for the linear function you are trying to fit is not quite
right. For a poorly conditioned matrix, your modeling function will be
very sensitive to measurement noise and other sources of error, if
applicable. If the condition number is 100, then any input on one
particular axis gets magnified 100 times more than other inputs.
Unless your inputs are very precise, that is probably not what you
want.

Or something like that.
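That sensitivity is easy to see numerically; a sketch with a
deliberately ill-conditioned 2x2 system (the numbers are made up purely
for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly parallel rows: badly conditioned
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)            # exact data gives x == [1, 1]

b_noisy = b + np.array([0.0, 1e-4])  # a tiny 1e-4 perturbation of the input...
x_noisy = np.linalg.solve(A, b_noisy)

print(np.linalg.cond(A))             # roughly 4e4
print(x_noisy - x)                   # ...moves the solution by order 1
```

A change of 1e-4 in b changes x by about 1: the input error is amplified
by roughly the condition number, which is the practical meaning of
"badly conditioned".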

Russ P., May 1, 2012
15. ### someone (Guest)

On 05/01/2012 09:59 PM, Colin J. Williams wrote:
> On 01/05/2012 2:43 PM, someone wrote:
> [snip]
>>> a = [1 2 3]; b = [11 12 13]; c = [21 22 23].
>>>
>>> Then notice that c = 2*b - a. So c is linearly dependent on a and b.
>>> Geometrically this means the three vectors are in the same plane,
>>> so the matrix doesn't have an inverse.

>>

>
> Does it not mean that there are three parallel planes?
>
> Consider the example in two dimensional space.

I actually drew it and saw it... It means that you can construct a 2D
plane and all 3 vectors lie in this 2D plane...

someone, May 1, 2012
16. ### someone (Guest)

On 05/01/2012 10:54 PM, Russ P. wrote:
> On May 1, 11:52 am, someone<> wrote:
>> On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:

>> What's the limit in matlab (on the condition number of the matrices), by
>> the way, before it comes up with a warning ???

>
> The threshold of acceptability really depends on the problem you are
> trying to solve. I haven't solved linear equations for a long time,
> but off hand, I would say that a condition number over 10 is
> questionable.

Does anyone know the threshold Matlab uses for warning when solving
x=A\b? I tried "edit slash" but this seems to be internal, so I cannot
see what criteria the warning is based on...

> A high condition number suggests that the selection of independent
> variables for the linear function you are trying to fit is not quite
> right. For a poorly conditioned matrix, your modeling function will be
> very sensitive to measurement noise and other sources of error, if
> applicable. If the condition number is 100, then any input on one
> particular axis gets magnified 100 times more than other inputs.
> Unless your inputs are very precise, that is probably not what you
> want.
>
> Or something like that.

Ok. So it's like a frequency response function: output divided by input...

someone, May 1, 2012
17. ### Paul Rubin (Guest)

someone <> writes:
> Actually I know some... I just didn't think about this as much as I
> should have before writing the question. I know there's also something
> like singular value decomposition that I think can help solve
> otherwise ill-posed problems,

You will probably get better advice if you are able to describe what
problem (ill-posed or otherwise) you are actually trying to solve. SVD
just separates out the orthogonal and scaling parts of the
transformation induced by a matrix. Whether that is of any use to you
is unclear since you don't say what you're trying to do.

Paul Rubin, May 2, 2012
18. ### Russ P. (Guest)

On May 1, 4:05 pm, Paul Rubin <> wrote:
> someone <> writes:
> > Actually I know some... I just didn't think about this as much as I
> > should have before writing the question. I know there's also something
> > like singular value decomposition that I think can help solve
> > otherwise ill-posed problems,

>
> You will probably get better advice if you are able to describe what
> problem (ill-posed or otherwise) you are actually trying to solve.  SVD
> just separates out the orthogonal and scaling parts of the
> transformation induced by a matrix.  Whether that is of any use to you
> is unclear since you don't say what you're trying to do.

I agree with the first sentence, but I take slight issue with the word
"just" in the second. The "orthogonal" part of the transformation is
non-distorting, but the "scaling" part essentially distorts the space.
At least that's how I think about it. The larger the ratio between the
largest and smallest singular value, the more distortion there is. SVD
may or may not be the best choice for the final algorithm, but it is
useful for visualizing the transformation you are applying. It can
provide clues about the quality of the selection of independent
variables, state variables, or inputs.
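The ratio of largest to smallest singular value is exactly the 2-norm
condition number; a small sketch (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 0.5]])

s = np.linalg.svd(A, compute_uv=False)  # singular values, largest first

# the distortion ratio sigma_max / sigma_min equals cond(A) in the 2-norm
print(s[0] / s[-1])
print(np.linalg.cond(A))                # the same number
```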

Russ P., May 2, 2012
19. ### someone (Guest)

On 05/02/2012 01:05 AM, Paul Rubin wrote:
> someone<> writes:
>> Actually I know some... I just didn't think about this as much as I
>> should have before writing the question. I know there's also something
>> like singular value decomposition that I think can help solve
>> otherwise ill-posed problems,

>
> You will probably get better advice if you are able to describe what
> problem (ill-posed or otherwise) you are actually trying to solve. SVD

I don't understand what else I should write. I gave the singular matrix
as an example; it would be nice to learn some things for future use (for
instance, understanding SVD more - perhaps someone can explain SVD
geometrically; that would be really nice, I hope)...

> just separates out the orthogonal and scaling parts of the
> transformation induced by a matrix. Whether that is of any use to you
> is unclear since you don't say what you're trying to do.

Still, I don't think I completely understand SVD. SVD (at least in
Matlab) returns 3 matrices, one of which is a diagonal matrix, I think.
I think I would understand it better with geometric examples, if someone
would be so kind as to write something about that... I can plot 3D
vectors in matlab very easily, so maybe I'd understand SVD better if I
hear/read the geometric explanation (references to textbooks/similar are
also appreciated).

someone, May 2, 2012
20. ### someone (Guest)

On 05/02/2012 01:38 AM, Russ P. wrote:
> On May 1, 4:05 pm, Paul Rubin<> wrote:
>> someone<> writes:
>>> Actually I know some... I just didn't think about this as much as I
>>> should have before writing the question. I know there's also something
>>> like singular value decomposition that I think can help solve
>>> otherwise ill-posed problems,

>>
>> You will probably get better advice if you are able to describe what
>> problem (ill-posed or otherwise) you are actually trying to solve. SVD
>> just separates out the orthogonal and scaling parts of the
>> transformation induced by a matrix. Whether that is of any use to you
>> is unclear since you don't say what you're trying to do.

>
> I agree with the first sentence, but I take slight issue with the word
> "just" in the second. The "orthogonal" part of the transformation is
> non-distorting, but the "scaling" part essentially distorts the space.
> At least that's how I think about it. The larger the ratio between the
> largest and smallest singular value, the more distortion there is. SVD
> may or may not be the best choice for the final algorithm, but it is
> useful for visualizing the transformation you are applying. It can
> provide clues about the quality of the selection of independent
> variables, state variables, or inputs.

I would like to hear more!

I would really appreciate it if anyone could post a simple SVD example
and explain what the vectors from the SVD represent geometrically /
visually, because I don't understand it well enough and I'm sure it's
very important when it comes to solving matrix systems...

someone, May 2, 2012