Numerical Recipes Rational Approximation Algorithm


lcw1964

Greetings groups! I am a rank novice in both C programming and
numerical analysis, so I ask in advance your indulgence. Also, this
question is directed specifically to those familiar with Numerical
Recipes in C (not C++), in practice or at least in theory.

I have taken an interest in the least-squares SVD alternative to the
Remes algorithm offered in section 5.13 of NR, 2nd ed. (see here,
http://www.library.cornell.edu/nr/bookcpdf/c5-13.pdf, for reference).

I own the NR code files (so, yes, I am legal!) and, by paying scrupulous
attention to the various file dependencies, I have been able to set up
and compile a project in Borland C++ 3.1 (yes, an old compiler, but the
code is about as old) that is supposed to demonstrate the ratlsq
routine. Those who own the same file set will know about the xratlsq.c
file, which is not in the book. Since the ratlsq routine uses double
rather than float versions of the key SVD procedures, I have had to go
in and scrupulously change the headers and the relevant variable
declarations from float to double.

Things compile fine, but at run time, when I feed the example program
some values for a small test problem, I get a runtime error straight
out of the nrutil file: "allocation error in matrix()" or, at times,
"allocation error in vector()", or something similar.

I know that the routine is having trouble reserving the appropriate
memory for the various vectors and matrices it requires, but on a new,
fast Intel-type machine there should be heap memory to spare.

I have compiled and run other sample programs that allocate memory for
vectors and matrices through the NR "wrappers" provided in the nrutil
files, and they work fine. Mind you, they are a little less complicated
(the SVD routines are among the lengthiest and most complex in NR), and
there was no post hoc fiddling with the float-to-double issue. I wonder
whether I am gobbling up memory by trying to allocate space for arrays
of double-size floats, but in the examples I am trying to test this
should not be too taxing for any modern computer.
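
For concreteness, here is the shape of the allocation wrapper I am
relying on (a minimal sketch in the nrutil style, not the verbatim NR
source):

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate a 1-indexed vector of doubles, NR-style: bail out with
       a message if malloc() fails -- the kind of message I am seeing. */
    double *dvector_sketch(long nl, long nh)
    {
        double *v = (double *)malloc((size_t)(nh - nl + 1) * sizeof(double));
        if (!v) {
            fprintf(stderr, "allocation error in vector()\n");
            exit(1);
        }
        return v - nl;   /* shift so the caller can write v[nl..nh] */
    }

The float version is identical except for the element type, which is
exactly where a missed conversion could hide.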

Is this related to the memory model under which I compile? I have tried
the options ranging from tiny to huge, but to no avail!

This is a highly specialized question, I know, and I have probably made
little sense to anyone but those directly familiar with the routines
that perplex me. But if you are familiar with dynamic memory allocation
issues in NR or, better yet, are specifically interested in the very
code in question, maybe you can help demystify this for me.

Many thanks in advance,

Les
 

S. Janssens

lcw1964 said:
[...] Things compile fine, but at run time I get a runtime error
straight out of the nrutil file: "allocation error in matrix()" or, at
times, "allocation error in vector()". [...] Is this related to the
memory model under which I compile?

I had similar problems "converting" NR C code to double precision.
Nowadays, when I use NR (not very often; I find their code hard to
follow), it is the C++ code, which is in double precision by default.

My problem could be traced back to all kinds of TINY and EPS variables
whose values had been hard-coded somewhere in their source, but from
what you describe I'm not so sure that's the reason in your case.
Perhaps you could install a more up-to-date compiler like gcc under
MinGW (I assume you're using Windows). Then you could try to run your
executable under valgrind (valgrind.org) to see where in your source
the allocation goes wrong. That may give you a clue.
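
(A typical invocation, assuming your example builds to an executable
named xratlsq, which is just my guess at the name:

    valgrind ./xratlsq

The default memcheck tool flags out-of-bounds heap reads and writes
and shows the call stack at the offending line.)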

Hope that helps,
S.
 

S. Janssens

lcw1964 said:
[...]

Forgot to mention that you might also try posting on the nr.com forum.
Likely you're not the only victim.
 

Hans Mittelmann

Hi,
what about netlib/cephes/remes?
Hans M
 

Damien

It's been a while since I used NRC, but check to make sure that in
your float-to-double conversion you are now allocating double arrays
as well.
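
In nrutil terms, something like this (a minimal sketch; the dimensions
are made up):

    #include "nrutil.h"

    void alloc_example(int m, int n)
    {
        /* If the declarations became double, the allocators must
           match: dvector()/dmatrix() instead of the float versions
           vector()/matrix(). */
        double **u = dmatrix(1, m, 1, n);
        double *w  = dvector(1, n);
        /* ... hand u and w to the double-precision SVD routines ... */
        free_dmatrix(u, 1, m, 1, n);
        free_dvector(w, 1, n);
    }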

I also seem to remember that NRC supplies double-precision versions in
the archive.

Damien
 

~Glynne

lcw1964 said:
[...] I am just wondering if I am gobbling up memory in trying to
allocate space for arrays of double-size floats, but in the examples I
am trying to test this should not be too taxing for any modern
computer. [...]


While your tests are probably not too taxing for any modern operating
system, they probably ARE too taxing for an old C/C++ compiler that
employs a segmented memory architecture. What you need is a compiler
that supports a flat memory model... and uses disk caching when the
size of your problem exceeds available RAM.
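
The arithmetic makes the point (a quick illustration, not
Borland-specific code). A 16-bit segment caps a single allocation at
65,536 bytes, and since NR's matrix() grabs the element storage as one
contiguous block, the whole block has to fit:

    #include <stdio.h>

    int main(void)
    {
        long limit = 65536L;   /* one 16-bit segment */
        /* Largest square matrix whose data block fits in one segment: */
        printf("floats:  %ld elements (about 128 x 128)\n",
               limit / (long)sizeof(float));
        printf("doubles: %ld elements (about 90 x 90)\n",
               limit / (long)sizeof(double));
        return 0;
    }

Going from float to double halves the largest matrix that fits, which
is why a conversion that ran fine in float can start failing.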

Use the GNU gcc compiler and these run-time errors will disappear.

~Glynne
 

lcw1964

Thanks, this is something I should try.

I must confess that I am Unix/Linux naive and am enslaved to the
Windows OS, so wrapping my brain around an "emulator" interface like
Cygwin or MinGW is a first step. I have been looking into the former,
but I must confess to being utterly perplexed about what to do with the
various .gz files that get downloaded from whatever mirror I have been
lucky enough to connect with.

On the other hand, the routine that interests me is readily recast in
either Maple or Mathematica. (I am more experienced with the former.)
The computation would be slower, but I would have the advantage of
arbitrary floating point precision and the ability to output
mathematical results in a visually meaningful way--matrices, formulae,
etc.--and be able to manipulate such things easily in the CAS
environment.

I will let you know how far I get.

Les
 

lcw1964

Damien said:
It's been a while since I used NRC, but check to make sure that in
your float-to-double conversion you are now allocating double arrays
as well.

I thought I had the issue covered, but it is quite possible I missed
something.
I also seem to remember that NRC supplies double-precision versions in
the archive.

Regrettably, not in the archive I have, version 2.08h, which I got in
the late 1990s. Headers for double-precision code versions are in nr.h,
but not the converted code itself.

In NRC++ all the code is DP by default, but there is a whole extra
level of interdependencies and objects and type definitions, etc.

This may be scurrilous to say here, but I really wish the NR people
had released a 2nd edition of the Pascal version.

Les
 

lcw1964

lcw1964 said:
I will let you know how far I get.

I gave up on Cygwin, but a barebones download and installation of
MinGW has been a good place to start.

I have been able to compile the sample program in the documentation,
and once I learn how to compile and link multi-file projects I will try
to crack this nut.

Thanks for the tip. As a rank novice I have avoided command-line
compilers like the plague, but if this is free and it works, it is
about time I learned!

Les
 

lcw1964

~Glynne said:
Use the GNU gcc compiler and these run-time errors will disappear.


I gave up on deciphering the pseudo-Unix interface of MSYS with MinGW
and tried Dev-C++ instead. It acts and looks a lot like BC++ or MSVC++,
with which I am familiar.

Code compiles fine. Same runtime allocation error occurs.

So at least now I am more satisfied that this is something within the
code and my attempted "tweaking" of it, and not just an artifact of an
ancient compiler.

Thanks for the tip. What I may need to do is cut and paste only those
headers and utility routines the code actually uses, rather than just
linking in the whole darn files. This has helped me get rid of compile
errors in the past, but this is the first time I have ever had to deal
with a runtime error.

Les
 

jasen

lcw1964 said:
[...] I have taken an interest in the least-squares SVD alternative to
the Remes algorithm offered in section 5.13 of NR, 2nd ed. [...]

I too have been working on least-squares code. I did mine from scratch,
pre-allocating memory in blocks for each matrix in the computation and
making extensive use of pointer arithmetic.

The client said it had to be fast, and it is.
lcw1964 said:
[...] I get a runtime error straight out of the nrutil file:
"allocation error in matrix()" or, at times, "allocation error in
vector()", or something similar.

It sounds to me like you've messed up somewhere and left float where
it should now say double, or someone put '4' where they should have
put 'sizeof(float)', and that's giving you problems.

It's also possible that you've got an error somewhere else that's corrupting
the heap.
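
Something like this (illustrative, not a line from the NR source):

    #include <stdlib.h>

    void resize_bug(size_t n)
    {
        double *v;

        /* The conversion bug: '4' was really sizeof(float), so after
           the float-to-double change this allocates half the bytes
           needed... */
        v = (double *)malloc(n * 4);
        /* ...and filling v[0..n-1] would scribble past the end of the
           block, corrupting the heap for whatever allocation comes
           next. */
        free(v);

        /* The robust spelling survives any change of element type: */
        v = (double *)malloc(n * sizeof *v);
        free(v);
    }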

Bye.
Jasen
 

Evgenii Rudnyi

lcw1964 said:
I gave up on Cygwin, but a barebones download and installation of MinGW
has been a good place to start.

You can find a small tutorial on how to install Cygwin at

http://www.imtek.uni-freiburg.de/simulation/mor4ansys/lab/compiling/

and, in the same directory, a simple example of how to compile a simple
application with gcc.
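
For a multi-file project like yours it is the same idea; you just list
all the sources on one line (the file names below are only an
illustration, not the exact NR dependency list):

    gcc -o xratlsq xratlsq.c ratlsq.c nrutil.c -lm

The -lm at the end links in the math library.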
lcw1964 said:
Thanks for the tip. As a rank novice I have avoided command-line
compilers like the plague, but if this is free and it works, it is
about time I learned!

The command line is not too difficult. Its advantage over a GUI is
that the expressive power of the scripting language is much higher than
that of a GUI. You have to learn it, no doubt, but then many things can
be done much more easily.

It also makes sense to learn make and be able to read/write makefiles.
Again, it is not too difficult.

Best wishes,

Evgenii
 
