64 bit Python


Mathias Waack

Hi,

one of my colleagues got some trouble with a program handling large
amounts of data. I figured out that a 32 bit application on HP-UX
cannot address more than 1 GB of memory. In fact (I think due to the
overhead of memory management done by python) a python application
cannot use much more than 500 MB of "real" data. For this reason
I've created a 64 bit version of python 2.3.5. I've tested a simple
C program before to make sure it's able to address the whole
available memory (8 GB main memory and about 12 GB swap). The simple
example in C worked, but the python script didn't. I was able to
create only 2 GB of python objects. The application occupied
approximately 2.2 GB of memory. After that, python failed with a
memory error.

Is there any internal restriction on the size of the heap?

Mathias
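
As a sanity check on such a build (a sketch of my own, not something from
the post), the interpreter's word size can be confirmed from the standard
library before blaming the heap:

    import platform, sys

    # On a genuine 64-bit (LP64) build, the architecture string reports
    # '64bit' and Python 2.x's sys.maxint is 2**63 - 1 rather than 2**31 - 1.
    print 'architecture:', platform.architecture()[0]
    print 'sys.maxint  :', sys.maxint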
 

Ivan Voras

Mathias said:
one of my colleagues got some trouble with a program handling large
amounts of data. I figured out that a 32 bit application on HP-UX
cannot address more than 1 GB of memory. In fact (I think due to the
overhead of memory management done by python) a python application
cannot use much more than 500 MB of "real" data. For this reason

I don't think this is likely. Don't know about HP-UX but on some
platforms, FreeBSD for example, there is a soft memory-cap for
applications. By default, a single application on FreeBSD cannot use
more than 512MB of memory, period. The limit can be modified by root
(probably involves rebooting).
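
Such per-process caps can usually be inspected from inside Python via the
standard resource module (a sketch, assuming a Unix build; the limit names
below are the usual POSIX ones):

    import resource

    # Report the soft and hard per-process limits that could cap the heap.
    # RLIMIT_AS is not defined on every platform, hence the getattr guard.
    for name in ('RLIMIT_DATA', 'RLIMIT_AS', 'RLIMIT_STACK'):
        rlim = getattr(resource, name, None)
        if rlim is not None:
            soft, hard = resource.getrlimit(rlim)
            print '%-12s soft=%s hard=%s' % (name, soft, hard)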
 

Mathias Waack

Ivan said:
I don't think this is likely. Don't know about HP-UX but on some
platforms, FreeBSD for example, there is a soft memory-cap for
applications. By default, a single application on FreeBSD cannot
use more than 512MB of memory, period. The limit can be modified by
root (probably involves rebooting).

As I stated, I wrote a simple C program before. The C program was able
to allocate a bit more than 900MB in 32 bit mode.

My python script allocates a bunch of strings each of 1024 characters
and writes them into a cStringIO. And it fails after writing 512K of
strings. Don't know how python restricts the heap size - but I'm
fairly sure it's not a restriction of the OS.

But that's not the point, I don't want to talk about one or two
megabytes - I'm just missing some GB of heap;)

Mathias
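
The original script isn't shown in the thread, but a minimal reconstruction
of the test as described (all variable names are my own) would look roughly
like this:

    from cStringIO import StringIO

    # Write 1 KB strings into a single cStringIO buffer until Python gives up.
    buf = StringIO()
    chunk = 'x' * 1024
    written_kb = 0
    try:
        while True:
            buf.write(chunk)
            written_kb += 1
    except MemoryError:
        print 'MemoryError after about %d MB in cStringIO' % (written_kb / 1024)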
 

Ivan Voras

Mathias said:
As I stated, I wrote a simple C program before. The C program was able
to allocate a bit more than 900MB in 32 bit mode.

Sorry, I should've paid more attention :)
 

Mike C. Fletcher

Mathias Waack wrote:
....
My python script allocates a bunch of strings each of 1024 characters
and writes them into a cStringIO. And it fails after writing 512K of
strings. Don't know how python restricts the heap size - but I'm
fairly sure it's not a restriction of the OS.
Does cStringIO require contiguous blocks of memory? I'm wondering if
maybe there's something going on where you just can't allocate more than
a GB of contiguous space? What happens if you just store the strings in
a list (or use StringIO instead of cStringIO)? I'd imagine there might
be problems with creating an individual string of multiple GB as well,
for the same reason.

Just an idea,
Mike

________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
PyCon is coming...
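
A rough sketch of the list-based alternative Mike suggests: each 1 KB string
stays a separate object, so no single multi-gigabyte contiguous buffer is
ever required (only the list's pointer array has to be contiguous).

    # Each iteration builds a fresh 1 KB string object; the list stores only
    # pointers to them, so the payload stays split into small allocations.
    chunks = []
    count = 0
    try:
        while True:
            chunks.append('x' * 1024)
            count += 1
    except MemoryError:
        print 'MemoryError after about %d MB held in the list' % (count / 1024)

If memory serves, the pure-Python StringIO behaves much the same way
internally, keeping a list of the written pieces and only joining them when
the value is read back.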
 

Jeff Epler

There's not enough information to guess the "real problem", but it could
be this:

"variable size" objects (declared with PyObject_VAR_HEAD) are limited to
INT_MAX items since the ob_size field is declared as 'int'.

This means that a Python string, tuple, or list (among other types) may
be limited to about 2 billion items on ILP32 and LP64 architectures.
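
Putting rough numbers on that (my own arithmetic, not from the post):

    # ob_size is a C int, so a variable-size object tops out near INT_MAX items.
    INT_MAX = 2**31 - 1
    print 'max items per string/tuple/list: %d' % INT_MAX   # ~2.1 billion
    print 'a string of that many bytes    : about %.1f GB' % (INT_MAX / float(2**30))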

Dicts and probably sets are also limited to INT_MAX elements, because
the "ma_fill" and "ma_used" fields are ints.

If you don't mind recompiling all your extensions, you could change the
type of ob_size to long in the "#define PyObject_VAR_HEAD". I don't
know what breaks when you do this, but maybe a few google or google
groups searches could help you find others who have tried this.

Jeff
PS the limit of 500MB of "real data" in the 32-bit system may be because
a realloc may temporarily require (old size + new size) storage when it
does the equivalent of
new_ptr = malloc(new_size)
memcpy(new_ptr, old_ptr, old_size)
free(old_ptr)
which will temporarily use >900MB of data when resizing a ~450MB object.
Python frequently uses realloc() to grow structures like lists. If you
are working with strings, then
s = s + t
doesn't ever use realloc (in 2.3.x anyway) but always allocates and
fills the result 's+t', only freeing the old value of s later when it is
no longer reachable (which could be as soon as the assignment statement
completes)
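
The usual way around that copy-on-grow cost is to collect the pieces in a
list and join them once at the end, so the full-size allocation happens only
once; a minimal sketch:

    # Appending to a list only ever reallocates the list's pointer array,
    # never the string payloads themselves.
    parts = []
    for i in xrange(100000):
        parts.append('line %d\n' % i)

    result = ''.join(parts)      # one allocation of the final size
    print len(result)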

 

Mathias Waack

Mike said:
Mathias Waack wrote:
...

Does cStringIO require contiguous blocks of memory? I'm wondering
if maybe there's something going on where you just can't allocate
more than a GB of contiguous space?

That was the problem. I used cStringIO to reduce the overhead of
Python's memory management. But now I know - that was a bad idea
in this case.

What happens if you just store the strings in a list (or use
StringIO instead of cStringIO)?

It consumed 10 GB really fast, another GB really slow, some hundred
MB much slower and then it died;)

Mathias
 
