perl maximum memory

Discussion in 'Perl Misc' started by david, Nov 5, 2008.

  1. david

    david Guest

    Hi all,

    Is there a maximum memory limit for a perl process ?
    Is there a difference between 32 and 64 bit perl ?

    Thanks,
    David
    david, Nov 5, 2008
    #1

  2. On 2008-11-05 12:47, david <> wrote:
    > Is there a maximum memory limit for a perl process ?


    The same as for any other process.

    > Is there a difference between 32 and 64 bit perl ?


    Yes.

    32-bit processes are theoretically limited to 4 GB; in practice the
    limit is more likely to be 2 GB (Linux/i386 is rather unusual with
    its 3 GB limit).

    For 64-bit processes the limit is theoretically 16 Exabytes, but that's
    well beyond the capabilities of current hardware.
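
    A quick way to check which flavor of perl you are running (a minimal
    sketch using the core Config module; what the keys report depends on
    how your perl was built):

        use strict;
        use warnings;
        use Config;   # core module exposing perl's build configuration

        # ptrsize is the size of a C pointer in bytes: 4 => 32-bit, 8 => 64-bit perl
        printf "pointer size: %d bytes (%d-bit perl)\n",
            $Config{ptrsize}, $Config{ptrsize} * 8;

        # ivsize is the size of perl's native integers (IVs) in bytes
        printf "integer size: %d bytes\n", $Config{ivsize};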

    hp
    Peter J. Holzer, Nov 5, 2008
    #2

  3. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Peter J. Holzer
    <>], who wrote in article <>:
    > For 64-bit processes the limit is theoretically 16 Exabytes, but that's
    > well beyond the capabilities of current hardware.


    ??? Well *within* capabilities of current hardware. One ethernet
    card, and you have *a possibility* of unlimited (read: limited only by
    software) expansion of available disk space; which may be
    memory-mapped.

    So it is a question of money only. Last time I checked, 2^32 bytes of
    read/write storage cost about 40 cents; the overhead to network it is
    not large.... So it is less than $2B to get 2^64 bytes...
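
    (A back-of-the-envelope check of that figure; a rough sketch using the
    assumed price of 40 cents per 2^32 bytes:

        # 2**32 four-gigabyte chunks are needed for 2**64 bytes
        perl -e 'printf "about \$%.2e\n", (2**64 / 2**32) * 0.40'
        # prints: about $1.72e+09

    i.e. on the order of $1.7B.)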

    It may be much more cost-effective to have a robotic CD changer; I do
    not know the price/reliability, though...

    Yours,
    Ilya
    Ilya Zakharevich, Nov 5, 2008
    #3
  4. On 2008-11-05 21:13, Ilya Zakharevich <> wrote:
    ><>], who wrote in article <>:
    >> For 64-bit processes the limit is theoretically 16 Exabytes, but that's
    >> well beyond the capabilities of current hardware.

    >
    > ??? Well *within* capabilities of current hardware. One ethernet
    > card, and you have *a possibility* of unlimited (read: limited only by
    > software) expansion of available disk space; which may be
    > memory-mapped.


    Last time I looked there was no processor which would actually use all
    64 bits in the MMU. The usable number of bits is typically somewhere
    between 36 and 48, which limits the usable virtual memory (including
    memory-mapped files, etc.) to 2^36 to 2^48 bytes.
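
    For scale, here is a rough perl sketch of what those usable-bit counts
    translate to:

        # usable address bits => size of the resulting virtual address space
        perl -e 'printf "%2d bits => %.0f GiB\n", $_, 2**$_ / 2**30 for 36, 40, 48'
        # prints: 36 bits => 64 GiB, 40 bits => 1024 GiB, 48 bits => 262144 GiB

    (262144 GiB is 256 TiB - a lot, but nowhere near 16 EiB.)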

    > So it is a question of money only.


    If you have enough money to develop a new MMU for your CPU, you are
    right ;-).

    hp
    Peter J. Holzer, Nov 8, 2008
    #4
  5. [OT]: maximum memory

    [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Peter J. Holzer
    <>], who wrote in article <>:
    > On 2008-11-05 21:13, Ilya Zakharevich <> wrote:
    > ><>], who wrote in article <>:
    > >> For 64-bit processes the limit is theoretically 16 Exabytes, but that's
    > >> well beyond the capabilities of current hardware.

    > >
    > > ??? Well *within* capabilities of current hardware. One ethernet
    > > card, and you have *a possibility* of unlimited (read: limited only by
    > > software) expansion of available disk space; which may be
    > > memory-mapped.

    >
    > Last time I looked there was no processor which would actually use all
    > 64 bits in the MMU. The usable number of bits is typically somewhere
    > between 36 and 48, which limits the usable virtual memory (including
    > memory-mapped files, etc.) to 2^36 to 2^48 bytes.


    So, IIUC, I misinterpreted your remark. I thought you were saying that
    currently one can't get enough MEMORY to overflow 64 bits. And now you
    say that one can't get enough MEMORY ADDRESS SPACE to overflow 64 bits.

    > > So it is a question of money only.


    > If you have enough money to develop a new MMU for your CPU, you are
    > right ;-).


    About 10 years ago I looked through the notes for a hardware design 101
    class, and one of the first homework assignments was to design an MMU:
    simple, but good enough to bootstrap a processor via (a hard
    disk/whatever) sitting on a bus. They needed to catch memory accesses
    to a segment in memory and translate them into bus access commands; and
    I think the requirement was to design this in terms of discrete
    components (transistors). So, IIRC, I think even I have enough money
    for such a design. ;-)

    Yours,
    Ilya

    P.S. Thinking about it more: the price estimate I gave is in the
    ballpark of the price of a particle physics detector (LHC, Tevatron).
    Given that the current design is to throw away 99.99999% (or
    whatever) of the information as early as possible, any money spent
    on larger storage and memory throughput improves the chance that
    the data from the experiments may (later) be used for unrelated
    purposes...

    P.P.S. I tried to imagine other scenarios which may quickly produce
    much more than 2^64 bytes of info. First I thought of LLST
    (https://www.llnl.gov/str/November05/Brase.html), but it is
    only 2^55 B/year. The only other "realistic" scenario I found
    is a very anxious bigbrother: a "good" video camera (I'm
    thinking about IMAX-like quality, 4K x 3K x 3 x 50p; maybe not
    available this year, but RSN) can easily saturate 10Gb-BASE
    connection (in RAW stream with minimal compression).

    So if London authorities decide to replace their spycams by
    such beasts, AND would like to preserve RAW streams, they
    would generate 10TB/sec. This is 25e18 B/month, which is
    >2^64 B/month. Viva the bigbrother!
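
    (A quick sanity check of these rates, sketched in perl with the camera
    parameters assumed above:

        # per-camera: 4K x 3K pixels, 3 bytes/pixel, 50 frames/sec
        my $per_camera = 4000 * 3000 * 3 * 50;         # ~1.8e9 B/s, ~14.4 Gb/s
        # aggregate 10 TB/s sustained for a 30-day month
        my $per_month  = 10e12 * 30 * 24 * 3600;       # ~2.6e19 B
        printf "camera: %.1f Gb/s; month: %.2e B; 2**64 = %.2e B\n",
            $per_camera * 8 / 1e9, $per_month, 2**64;

    which indeed lands comfortably above 2^64.)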
    Ilya Zakharevich, Nov 8, 2008
    #5
  6. david

    Guest

    Re: [OT]: maximum memory

    On Sat, 8 Nov 2008 23:54:47 +0000 (UTC), Ilya Zakharevich <> wrote:

    >[...]
    >P.P.S. I tried to imagine other scenarios which may quickly produce
    > much more than 2^64 bytes of info. [...] The only "realistic" scenario
    > I found is a very anxious bigbrother: a "good" video camera [...]
    > can easily saturate 10Gb-BASE connection (in RAW stream with minimal
    > compression).
    >
    > So if London authorities decide to replace their spycams by
    > such beasts, AND would like to preserve RAW streams, they
    > would generate 10TB/sec. This is 25e18 B/month, which is
    > >2^64 B/month. Viva the bigbrother!


    I think this falls under the category of navel research

    sln
    , Nov 9, 2008
    #6
  7. Re: [OT]: maximum memory

    [A complimentary Cc of this posting was sent to
    <>], who wrote in article <>:
    > > is a very anxious bigbrother: a "good" video camera (I'm
    > > thinking about IMAX-like quality, 4K x 3K x 3 x 50p; maybe not
    > > available this year, but RSN) can easily saturate 10Gb-BASE
    > > connection (in RAW stream with minimal compression).


    > I think this falls under the category of navel research


    Hmm, all the navel research I have seen was done in (super?) 35mm; never in
    (15-sprocket) IMAX format. Do you have a particular one in mind?

    Yours,
    Ilya
    Ilya Zakharevich, Nov 9, 2008
    #7
  8. Re: [OT]: maximum memory

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > <>], who wrote in article <>:
    >>> is a very anxious bigbrother: a "good" video camera (I'm
    >>> thinking about IMAX-like quality, 4K x 3K x 3 x 50p; maybe not
    >>> available this year, but RSN) can easily saturate 10Gb-BASE
    >>> connection (in RAW stream with minimal compression).

    >
    >> I think this falls under the category of navel research

    >
    > Hmm, all the navel research I have seen was done in (super?) 35mm;


    Maybe you are thinking of Super 8 (8mm)?

    > never in (15-sprocket) IMAX format.


    AFAIK IMAX is in 70mm, but I don't know how many sprockets the projector
    has, or if that matters. (Although a video camera wouldn't have
    sprockets [unless you're thinking of Sprockets on SNL but that only
    appeared 14 times.])


    John
    --
    Perl isn't a toolbox, but a small machine shop where you
    can special-order certain sorts of tools at low cost and
    in short order. -- Larry Wall
    John W. Krahn, Nov 10, 2008
    #8
  9. Re: [OT]: maximum memory

    [A complimentary Cc of this posting was sent to
    John W. Krahn
    <>], who wrote in article <JmQRk.408$>:
    > > Hmm, all the navel research I have seen was done in (super?) 35mm;


    > Maybe you are thinking of Super 8 (8mm)?


    Nope, in the years I was using Super 8, navels were not at the top of
    my priority list.

    For 35mm, see, e.g., http://www.imdb.com/title/tt0114134/technical

    > > never in (15-sprocket) IMAX format.

    >
    > AFAIK IMAX is in 70mm, but I don't know how many sprockets the projector
    > has, or if that matters.


    With 70mm, the stuff is tricky. First, it is 65mm when shot, 70mm
    when projected. Then the usual mess strikes: how you put the actual
    frames on a strip of film [*].

    In fact, the mess is much nicer than with 35mm film. IIRC, during the
    last 40 or so years mostly two formats were used: 65/15-sprocket
    (=IMAX) and 65/5-sprocket. (One is 3 times larger than the other! [**])
    This is the same difference as landscape/portrait printing: on one the
    film travels vertically, so you need to fit the width of the frame into
    65mm minus the perforations; on the other the film travels
    horizontally, so you fit the height of the frame.

    Yours,
    Ilya

    [*] For 35mm, yesterday I found these gems:

    http://www.cinematography.net/edited-pages/Shooting35mm_Super35mmFor_11.85.htm
    http://www.arri.de/infodown/cam/ti/format_guide.pdf

    [**] I expect that for most applications, current digicams (e.g., the
    Red One) exceed the capabilities of 65/5 [***]. However, 65/15 at
    60fps should be significantly better than what the Red One gives.
    I think one would need something like 4K x 3K x 3CCD x 48fps to
    get a comparable experience. (Which led to the bandwidth I
    mentioned before. ;-)

    [***] On the other hand, this calculation is based on interpolation
    from digital photos. With photos, the very high extinction
    resolution of the film does not enter the equation (since the eye
    does not care about *extinction* resolution, only about
    noise-limited resolution - the one where the S/N ratio drops below
    about 3 - and digital has an order of magnitude less noise).

    But with movies, the eye averages out the noise very effectively,
    so the higher extinction resolution of film may have some
    relevance... I do not think I have seen this investigated anywhere;
    I may want to create some digital simulations...

    On the gripping hand, some people in the film industry believe
    that HD video (2K x 1K) is comparable to (digitized) Super 35,
    and this ratio is quite similar to what one gets from photos.
    With photos, the best-processed color 36mm x 24mm frame is
    approximately equivalent in resolution to a digital shot
    rescaled down to 4 or 5 MPix (except that the digital shot would
    have practically no noise).
    Ilya Zakharevich, Nov 10, 2008
    #9
