IPC::Shareable Problem with multidimensional hash

Discussion in 'Perl Misc' started by Sébastien Cottalorda, Sep 9, 2005.

  1. Hi all,

    I need to use this kind of shared hash:

    %shared_hash = (
        '123456' => {
            'bla' => 123,
            'bli' => 465,
            etc ...
        },
        etc ...
    );

    with about 500 primary keys.
    When I "feed" %shared_hash, I notice that the system creates as many
    shared memory segments and semaphores as there are primary keys.

    As you can imagine, my process died with "Could not create semaphore
    set: No space left on device at line ..."

    If I create a small complex structure (10 primary keys), the program
    works perfectly.

    If I understand the perldoc IPC::Shareable documentation correctly, I
    should be able to create as complex a shared structure as I want, so I
    don't understand why it crashes.

    Here is a small piece of code that always crashes:

    #!/usr/bin/perl -w
    use strict;
    use IPC::Shareable;

    my $glue = 'data';
    my %options = (
        create    => 1,
        exclusive => 1,
        mode      => 0644,
        destroy   => 1,
    );
    my %colours;
    tie %colours, 'IPC::Shareable', $glue, { %options }
        or die "server: tie failed\n";
    my ($i, $j);
    for $i (1 .. 300) {
        for $j (1 .. 3) {
            $colours{$i}{$j} = "fall";
            print " 500 ... \n" if $i == 500;
            print "1000 ... \n" if $i == 1000;
            print "1500 ... \n" if $i == 1500;
            print "2000 ... \n" if $i == 2000;
        }
    }
    print "Charged - Press a key to continue -\n";
    my $e = <STDIN>;
    foreach $i (keys %colours) {
        print "$i => \n";
        foreach $j (keys %{ $colours{$i} }) {
            print "    $colours{$i}{$j}\n";
        }
    }
    exit;


    Does anyone have a clue?
    Thanks in advance for any help.

    Sebastien
    Sébastien Cottalorda, Sep 9, 2005
    #1

  2. Sébastien Cottalorda

    Guest

    Sébastien_Cottalorda <> wrote:
    > Hi all,
    >
    > I need to use this kind of shared hash:
    > %shared_hash = (
    >     '123456' => {
    >         'bla' => 123,
    >         'bli' => 465,
    >         etc ...
    >     },
    >     etc ...
    > );
    >
    > with about 500 primary keys.
    > When I "feed" %shared_hash, I notice that the system creates as many
    > shared memory segments and semaphores as there are primary keys.


    Yes. This fact is documented.


    > As you can imagine, my process died with "Could not create semaphore
    > set: No space left on device at line ..."
    >
    > If I create a small complex structure (10 primary keys), the program works
    > perfectly.
    >
    > If I understand the perldoc IPC::Shareable documentation correctly, I can
    > create as complex a shared structure as I want,


    As you said yourself, a small "complex" structure works perfectly. It is
    not the complexity, but the cardinality, that is the problem.

    > I don't understand why it
    > crashes.


    What is there to not understand?

    ....
    [2] IPC::Shareable provides no pre-set limits, but the system does. Namely,
    there are limits on the number of shared memory segments that can be
    allocated and the total amount of memory usable by shared memory. ...

    You could try using Storable or Data::Dumper to collapse the inner
    structures into flat strings for storage.

    Xho

    --
    -------------------- http://NewsReader.Com/ --------------------
    Usenet Newsgroup Service $9.95/Month 30GB
    , Sep 10, 2005
    #2

  3. wrote:
    > Sébastien_Cottalorda <> wrote:
    >
    >>Hi all,
    >>
    >>I need to use this kind of shared hash:
    >>%shared_hash = (
    >>    '123456' => {
    >>        'bla' => 123,
    >>        'bli' => 465,
    >>        etc ...
    >>    },
    >>    etc ...
    >>);
    >>
    >>with about 500 primary keys.
    >>When I "feed" %shared_hash, I notice that the system creates as many
    >>shared memory segments and semaphores as there are primary keys.

    >
    >
    > Yes. This fact is documented.


    Where ... ?
    I understand the limits, but not the way it stores data such as hashes.

    >
    >>As you can imagine, my process died with "Could not create semaphore
    >>set: No space left on device at line ..."
    >>
    >>If I create a small complex structure (10 primary keys), the program works
    >>perfectly.
    >>
    >>If I understand the perldoc IPC::Shareable documentation correctly, I can
    >>create as complex a shared structure as I want,

    >
    >
    > As you said yourself, a small "complex" structure works perfectly. It is
    > not the complexity, but the cardinality, that is the problem.
    >
    >
    >>I don't understand why it
    >>crashes.

    >
    >
    > What is there to not understand?


    No, I do know why it crashes.
    In fact, I just want to find a way to share a complex hash without hitting
    that limit.


    > ...
    > [2] IPC::Shareable provides no pre-set limits, but the system does. Namely,
    > there are limits on the number of shared memory segments that can be
    > allocated and the total amount of memory usable by shared memory. ...
    >
    > You could try using Storable or Data::Dumper to collapse the inner
    > structures into flat strings for storage.
    >
    > Xho


    I've read, re-read, and re-re-read perldoc IPC::Shareable: it seems to
    already use Storable to store hash structures.

    Concerning Data::Dumper, I don't understand it at all (maybe it's me ;-) ).
    I don't understand how I can use that module to store and manipulate a
    hash of hashes in a shared structure.
    If you have an example, I'd appreciate it a lot.


    Sebastien
    Sébastien Cottalorda, Sep 12, 2005
    #3
  4. Sébastien Cottalorda

    Guest

    Sébastien_Cottalorda <> wrote:
    > wrote:
    > > Sébastien_Cottalorda <> wrote:
    > >
    > >>Hi all,
    > >>
    > >>I need to use this kind of shared hash:
    > >>%shared_hash = (
    > >>    '123456' => {
    > >>        'bla' => 123,
    > >>        'bli' => 465,
    > >>        etc ...
    > >>    },
    > >>    etc ...
    > >>);
    > >>
    > >>with about 500 primary keys.
    > >>When I "feed" %shared_hash, I notice that the system creates as many
    > >>shared memory segments and semaphores as there are primary keys.

    > >
    > >
    > > Yes. This fact is documented.

    >
    > Where ... ?
    > I understand the limits, but not the way it stores data such as hashes.


    It is not the hashes, it is the references to the hashes:

    REFERENCES

    When a reference to a non-tied scalar, hash, or array is assigned to a
    tie()d variable, IPC::Shareable will attempt to tie() the thingy being
    referenced[4] ....
    Secondly, since a new shared memory segment is created for each thingy
    being referenced, the liberal use of references could cause the system to
    approach its limit for the total number of shared memory segments allowed.



    > >
    > > You could try using Storable or Data::Dumper to collapse the inner
    > > structures into flat strings for storage.
    > >
    > > Xho

    >
    > I've read, re-read, and re-re-read perldoc IPC::Shareable: it seems to
    > already use Storable to store hash structures.


    It uses Storable to store the top level hash structure. It also uses
    Storable to store each sub-hash too, but it does each of those in a
    different shmem segment. If you preemptively use Storable on the inner
    hashes, so that they are just strings and not hashrefs by the time they get
    stuck in the main hash, then IPC::Shareable would just store them as
    strings in the main shmem segment.

    >
    > Concerning Data::Dumper, I don't understand it at all (maybe it's me ;-) ).
    > I don't understand how I can use that module to store and manipulate a
    > hash of hashes in a shared structure.


    You use Data::Dumper the same way you use Storable, but the thawing is
    more awkward.

    > If you have an example, I'd appreciate it a lot.


    It ain't pretty, but I don't think using shared memory ever is pretty.


    # (replaces the fill loop in your original script; assumes the same
    # tie() setup for %colours)
    use Storable qw(freeze thaw);

    my ($i, $j);
    for $i (1 .. 300) {
        my %x;
        for $j (1 .. 3) {
            $x{$j} = "fall";
        }
        $colours{$i} = freeze \%x;    # store the inner hash as one flat string
    }
    print "Charged - Press a key to continue -\n";
    my $e = <STDIN>;
    foreach $i (keys %colours) {
        print "$i => \n";
        my $x = thaw $colours{$i};    # $x is a hashref
        foreach $j (keys %$x) {
            print "    $x->{$j}\n";
        }
    }
    exit;

    Note that this won't be very fast, because it is freezing and thawing the
    entire data structure just to access one part of it. I suspect that that
    is why Shareable was implemented the way it was (using separate segments
    for each piece) in the first place.

    Shared memory is not one of the hard things which Perl has made easy.
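    In isolation, the freeze/thaw round-trip being discussed looks like the
    sketch below. It uses only the core Storable module, with no shared
    memory involved, and the values are illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# The inner hash collapses to one opaque byte string...
my %inner  = (1 => 'fall', 2 => 'fall', 3 => 'fall');
my $frozen = freeze \%inner;

# ...so reading or updating a single element means thawing the
# whole thing, modifying the copy, and freezing it all again.
my $copy = thaw $frozen;      # hashref; a deep copy of %inner
$copy->{2} = 'spring';
$frozen = freeze $copy;

my $again = thaw $frozen;
print "$again->{2}\n";        # prints "spring"
```

    The cost is proportional to the whole inner structure on every access,
    which is exactly the slowness described above.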

    Xho

    , Sep 12, 2005
    #4
  5. wrote:
    > Sébastien_Cottalorda <> wrote:
    >
    >> wrote:
    >>
    >>>Sébastien_Cottalorda <> wrote:
    >>>
    >>>
    >>>>Hi all,
    >>>>
    >>>>I need to use that kind of shared hash:
    >>>>%shared_hash = (
    >>>> '123456' => {
    >>>> 'bla' => 123,
    >>>> 'bli' => 465,
    >>>> ect ...
    >>>> },
    >>>> ect ...
    >>>>);
    >>>>
    >>>>with about 500 primary keys.
    >>>>When I "feed" the %shared_hash, I note that the system create as many
    >>>>shared segment and semaphore than there are primary keys.
    >>>
    >>>
    >>>Yes. This fact is documented.

    >>
    >>Where ... ?
    >>I understand the limits, but not the way it stores data such as hashes.

    >
    >
    > It is not the hashes, it is the references to the hashes:
    >
    > REFERENCES
    >
    > When a reference to a non-tied scalar, hash, or array is assigned to a
    > tie()d variable, IPC::Shareable will attempt to tie() the thingy being
    > referenced[4] ....
    > Secondly, since a new shared memory segment is created for each thingy
    > being referenced, the liberal use of references could cause the system to
    > approach its limit for the total number of shared memory segments allowed.
    >


    OK, I understand now.

    >
    >>>You could try using Storable or Data::Dumper to collapse the inner
    >>>structures into flat strings for storage.
    >>>
    >>>Xho

    >>
    >>I've read, re-read, and re-re-read perldoc IPC::Shareable: it seems to
    >>already use Storable to store hash structures.

    >
    >
    > It uses Storable to store the top level hash structure. It also uses
    > Storable to store each sub-hash too, but it does each of those in a
    > different shmem segment. If you preemptively use Storable on the inner
    > hashes, so that they are just strings and not hashrefs by the time they get
    > stuck in the main hash, then IPC::Shareable would just store them as
    > strings in the main shmem segment.
    >
    >
    >>Concerning Data::Dumper, I don't understand it at all (maybe it's me ;-) ).
    >>I don't understand how I can use that module to store and manipulate a
    >>hash of hashes in a shared structure.

    >
    >
    > You use Data::Dumper the same way you use Storable, but the thawing is
    > more awkward.
    >
    >
    >>If you have an example, I'd appreciate it a lot.

    >
    >
    > It ain't pretty, but I don't think using shared memory ever is pretty.
    >
    >
    > my ($i,$j);
    > for $i (1 .. 300){
    > my %x;
    > for $j (1..3){
    > $x{$j} = "fall";
    > }
    > $colours{$i} = freeze \%x;
    > }
    > print "Charged - Press a key to continue -\n";
    > my $e=<STDIN>;
    > foreach $i ( keys %colours){
    > print "$i => \n";
    > my $x = thaw $colours{$i};
    > foreach $j (keys %$x){
    > print "    $x->{$j}\n";
    > }
    > }
    > exit;
    >
    > Note that this won't be very fast, because it is freezing and thawing the
    > entire data structure just to access one part of it. I suspect that that
    > is why Shareable was implemented the way it was (using separate segments
    > for each piece) in the first place.
    >
    > Shared memory is not one of the hard things which Perl has made easy.
    >
    > Xho
    >

    Thanks a lot for your help.
    I'll try your solution.

    Maybe I have another idea:
    my complex hash %complex_hash is particular; it looks like this:

    %complex_hash = (
        'code1' => {
            "Startdate" => ...,
            "Enddate"   => ...,
            ...
            total: about 20 keys
        },
        'code2' => {
            "Startdate" => ...,
            "Enddate"   => ...,
            ...
            total: about 20 keys
        },
        ...
        'code300' => {
            "Startdate" => ...,
            "Enddate"   => ...,
            ...
            total: about 20 keys
        }
    );

    If I change it like this:

    %complex_hash = (
        "Startdate" => {
            'code1' => ...,
            'code2' => ...,
            ...
            'code300' => ...
        },
        "Enddate" => {
            'code1' => ...,
            'code2' => ...,
            ...
            'code300' => ...
        },
        ...
        about 20 primary keys
    );

    that will partly solve my problem, because now it will create only about
    20-21 segments, each of them containing all the codes.

    What do you think of that?
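    The "dimension swap" described above is a simple nested-loop transform.
    A sketch with hypothetical field names and only two codes for brevity:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Code-major layout: one inner hash per code (~300 codes x ~20 fields),
# which would cost one shared segment per code under IPC::Shareable.
my %by_code = (
    code1 => { Startdate => '20050101', Enddate => '20050102' },
    code2 => { Startdate => '20050201', Enddate => '20050202' },
);

# Field-major layout: one inner hash per field (~20 keys), so the tied
# hash would need only ~20 extra segments instead of ~300.
my %by_field;
for my $code (keys %by_code) {
    for my $field (keys %{ $by_code{$code} }) {
        $by_field{$field}{$code} = $by_code{$code}{$field};
    }
}

print "$by_field{Startdate}{code2}\n";   # prints "20050201"
```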

    Sebastien
    Sébastien Cottalorda, Sep 12, 2005
    #5
  6. Sébastien Cottalorda

    Guest

    Sébastien_Cottalorda <> wrote:
    ....
    > >
    > > Note that this won't be very fast, because it is freezing and thawing
    > > the entire data structure just to access one part of it. I suspect
    > > that that is why Shareable was implemented the way it was (using
    > > separate segments for each piece) in the first place.
    > >
    > > Shared memory is not one of the hard things which Perl has made easy.
    > >
    > > Xho
    > >

    > Thanks a lot for your help.
    > I'll try your solution.
    >
    > Maybe I have another idea:
    > my complex hash %complex_hash is particular; it looks like this:
    >
    > %complex_hash =(
    > 'code1' => {
    > "Startdate" => ...,
    > "Enddate" => ...,
    > ...
    > total = about 20 keys
    > },
    > 'code2' => {
    > "Startdate" => ...,
    > "Enddate" => ...,
    > ...
    > total = about 20 keys
    > },
    > ...
    > 'code300' => {
    > "Startdate" => ...,
    > "Enddate" => ...,
    > ...
    > total = about 20 keys
    > }
    > );
    >
    > If I change it like this:
    >
    > %complex_hash = (
    > "Startdate" => {
    > 'code1' => ...,
    > 'code2' => ...,
    > ...
    > 'code300' => ...
    > },
    > "Enddate" => {
    > 'code1' => ...,
    > 'code2' => ...,
    > ...
    > 'code300' => ...
    > },
    > ...
    > about 20 primary keys
    > );
    >
    > that will partly solve my problem, because now it will create only about
    > 20-21 segments, each of them containing all the codes.
    >
    > What do you think of that?


    I think that could work. Of course, it makes deleting an entry harder,
    because it might have to be deleted from all ~20 parallel hashes. And it
    is still freezing and thawing a lot of data just to access one small
    element.

    In fact, I was going to make exactly this "dimension swap" as a second
    suggestion, but then I decided that if I were going to do that, it would
    be better to step back one more level, ask what problem you are trying to
    use shared memory to solve, and see if there was another solution that
    didn't use shared memory at all, or only used it for flags rather than
    the entire data set. Then I decided I might not be ambitious enough to
    dig into so many details, so I probably shouldn't ask for them.


    Xho

    , Sep 12, 2005
    #6
  7. [snip]
    >>If I change it like this:
    >>
    >>%complex_hash = (
    >> "Startdate" => {
    >> 'code1' => ...,
    >> 'code2' => ...,
    >> ...
    >> 'code300' => ...
    >> },
    >> "Enddate" => {
    >> 'code1' => ...,
    >> 'code2' => ...,
    >> ...
    >> 'code300' => ...
    >> },
    >> ...
    >> about 20 primary keys
    >>);
    >>
    >>that will partly solve my problem, because now it will create only about
    >>20-21 segments, each of them containing all the codes.
    >>
    >>What do you think of that?

    >
    >
    > I think that could work. Of course, it makes deleting an entry harder,
    > because it might have to be deleted from all ~20 parallel hashes. And it
    > is still freezing and thawing a lot of data just to access one small
    > element.
    >
    > In fact, I was going to make exactly this "dimension swap" as a second
    > suggestion, but then I decided that if I were going to do that, it would
    > be better to step back one more level, ask what problem you are trying
    > to use shared memory to solve, and see if there was another solution
    > that didn't use shared memory at all, or only used it for flags rather
    > than the entire data set. Then I decided I might not be ambitious enough
    > to dig into so many details, so I probably shouldn't ask for them.
    >
    >
    > Xho
    >

    I need to share data between several cooperating programs.
    I'm developing a car park access control system based on internet
    reservations. My customers reserve their car park space online, and then
    come to the site.
    In the shared memory, I need to share all the information concerning the
    customers' reservations, but I also need to share the equipment status,
    the real-time counters (how full each of my car parks is, etc.), etc.
    I plan to use shared memory because the access is faster than with a
    flat file (I think).
    Certain API screens (the main one, in fact) contain the real-time
    counters and the latest equipment errors => I need to display them very
    fast.

    That's why I plan to use shared memory.

    I found another solution to my problem:
    creating a one-level hash like this, and processing it with string
    replacement:

    %hash = (
        'code1' => '2005010112345620050101232323...',
        'code2' => '...',
        ...
    );

    I only need another hash table to map each 'key name' to a
    (position, length) pair, like this:

    %detail_hash = (
        'startdate' => [ 0, 14 ],
        'enddate'   => [ 14, 14 ],
        ...
    );

    It's more complicated because I need to validate every field before
    setting or replacing it, but the shared memory part stays quite small.
    The advantage: one segment per hash.
    The drawback: it's hard to manage.
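    The fixed-width scheme above can be sketched with plain substr, using
    its four-argument form for in-place replacement. The field layout and
    values below are the illustrative ones from the post, and the fetch /
    modify / store-back pattern in set_field is written so it would also
    work if %hash were tied:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One flat string per code; every field lives at a fixed
# (position, length) recorded in %detail_hash.
my %detail_hash = (
    startdate => [ 0,  14 ],
    enddate   => [ 14, 14 ],
);

my %hash = (
    code1 => '2005010112345620050101232323',
);

# Read a field: three-argument substr.
sub get_field {
    my ($code, $field) = @_;
    my ($pos, $len) = @{ $detail_hash{$field} };
    return substr($hash{$code}, $pos, $len);
}

# Write a field: validate the length, then replace in place with
# four-argument substr and store the whole record back.
sub set_field {
    my ($code, $field, $value) = @_;
    my ($pos, $len) = @{ $detail_hash{$field} };
    die "bad length for $field" unless length($value) == $len;
    my $record = $hash{$code};            # fetch (tie-safe)
    substr($record, $pos, $len, $value);  # in-place replacement
    $hash{$code} = $record;               # store back
}

print get_field('code1', 'startdate'), "\n";   # prints "20050101123456"
set_field('code1', 'enddate', '20051231235959');
print get_field('code1', 'enddate'), "\n";     # prints "20051231235959"
```

    With a tied hash, each set_field is then a single string assignment,
    so only the one top-level segment is touched.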

    Sebastien
    Sébastien Cottalorda, Sep 12, 2005
    #7
  8. Sébastien Cottalorda

    Guest

    Sébastien_Cottalorda <> wrote:

    > I need to share data between several cooperating programs.
    > I'm developing a car park access control system based on internet
    > reservations. My customers reserve their car park space online, and then
    > come to the site.
    > In the shared memory, I need to share all the information concerning the
    > customers' reservations, but I also need to share the equipment status,
    > the real-time counters (how full each of my car parks is, etc.), etc.
    > I plan to use shared memory because the access is faster than with a
    > flat file (I think).


    You should almost certainly use a database (like MySQL, for example).

    > Certain API screens (the main one, in fact) contain the real-time
    > counters and the latest equipment errors => I need to display them very
    > fast.
    >
    > That's why I plan to use shared memory.


    Using a database will be easier, less buggy, and will most likely be
    faster, too, because of the finer granularity of the semaphores.

    Xho

    , Sep 13, 2005
    #8
  9. wrote:
    > Sébastien_Cottalorda <> wrote:
    >
    >
    >>I need to share data between several cooperating programs.
    >>I'm developing a car park access control system based on internet
    >>reservations. My customers reserve their car park space online, and then
    >>come to the site.
    >>In the shared memory, I need to share all the information concerning the
    >>customers' reservations, but I also need to share the equipment status,
    >>the real-time counters (how full each of my car parks is, etc.), etc.
    >>I plan to use shared memory because the access is faster than with a
    >>flat file (I think).

    >
    >
    > You should almost certainly use a database (like MySQL, for example).


    Of course: PostgreSQL, for all of my data.

    >>Certain API screens (the main one, in fact) contain the real-time
    >>counters and the latest equipment errors => I need to display them very
    >>fast.
    >>
    >>That's why I plan to use shared memory.

    >
    >
    > Using a database will be easier, less buggy, and will most likely be
    > faster, too, because of the finer granularity of the semaphores.
    >
    > Xho
    >

    You're right, but I think that with the number of concurrent accesses to
    the database (Intranet API, billing process, DBMS synchronization between
    Intranet and Internet, etc.), the response time will be affected.

    That's why I think it's preferable, for the low-level jobs, to use
    shared memory.
    Of course, the shared memory needs to be synchronized with the database.
    I've already written such a process, and it can take all the time it
    needs to update the database: it contains buffers.

    I've made an estimate of the shared memory size I need: about 500 KB for
    all of my shared data.

    At this time, I've already developed about 70% of it.

    I have a question I haven't had time to check myself; maybe you could
    answer it: what about DB_File flat files with respect to concurrent
    access, locking, unlocking, and tie()ing with complex hashes (like I
    need)?
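    On that question: DB_File, like any DBM file, stores only flat strings,
    so nested hashes need the same freeze/thaw treatment (the MLDBM module
    automates it), and DBM files do no locking of their own, so concurrent
    writers are commonly serialized with flock on a side lockfile. A sketch
    using only core modules (SDBM_File standing in for DB_File; file names
    are hypothetical, and note SDBM limits each entry to about 1 KB, a
    limit DB_File does not share):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);
use SDBM_File;
use Storable qw(freeze thaw);
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# A DBM file stores flat strings only, so freeze nested hashes on the
# way in and thaw them on the way out.
my %db;
tie %db, 'SDBM_File', "$dir/resa", O_RDWR | O_CREAT, 0644
    or die "tie failed: $!";

# DBM files have no built-in locking: serialize writers with flock()
# on a separate lockfile.
open my $lock, '>', "$dir/resa.lock" or die "lockfile: $!";
flock($lock, LOCK_EX) or die "flock: $!";
$db{code1} = freeze({ Startdate => '20050101', Enddate => '20050102' });
flock($lock, LOCK_UN);

my $rec = thaw $db{code1};
print "$rec->{Startdate}\n";   # prints "20050101"

untie %db;
```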

    Sebastien.

    PS: Thank you, Xho, for all of your time; you've helped me a lot.
    Sébastien Cottalorda, Sep 13, 2005
    #9
