Sorted Hash


palexvs

I filled a hash and then printed it, sorted by key:

my %hs = (
    key10 => 5,
    key5  => 'b',
    aey9  => 7,
);
foreach my $k (sort keys %hs) { print "$k $hs{$k}\n"; }

The keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?
 

Ben Morrow

Quoth palexvs:
> I filled a hash and then printed it, sorted by key: [...]
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

Tie::IxHash, or maintain an array of keys yourself. See perldoc -q
sorted.
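A minimal sketch of the "maintain an array of keys yourself" approach (my own illustration, not from Ben's post; all names are made up): a binary search finds the insertion point, and splice keeps the key array sorted, so printing needs no sort at all.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my ( %hash, @sorted_keys );

# Insert $key => $value, keeping @sorted_keys in ascending string order.
sub insert_sorted {
    my ( $key, $value ) = @_;
    if ( !exists $hash{$key} ) {
        my ( $lo, $hi ) = ( 0, scalar @sorted_keys );
        while ( $lo < $hi ) {                    # binary search for the slot
            my $mid = int( ( $lo + $hi ) / 2 );
            if ( $sorted_keys[$mid] lt $key ) { $lo = $mid + 1 }
            else                              { $hi = $mid }
        }
        splice @sorted_keys, $lo, 0, $key;
    }
    $hash{$key} = $value;
}

insert_sorted( 'key10', 5 );
insert_sorted( 'key5',  'b' );
insert_sorted( 'aey9',  7 );

print "$_ $hash{$_}\n" for @sorted_keys;   # aey9, key10, key5
```

This trades an O(n) splice on every insert for a free sorted traversal at print time, so it only pays off when you print far more often than you insert.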

Ben
 

A. Sinan Unur

> I filled a hash and then printed it, sorted by key: [...]
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

Well, it depends on what you mean by 'more effective'.

If you are talking about speed, note that the only place where you can
really get any meaningful improvement is IO.

I am using Windows XP SP2 with ActiveState Perl 5.8.8.822 on a 1.66 GHz Intel
Core Duo with 1 GB of physical memory, with a whole bunch of other apps open.

Let's start with:

#!/usr/bin/perl

use strict;
use warnings;

my %hash;

for my $k ( 1 .. 50_000 ) {
    $hash{ make_key() } = $k;
}

### printing code here

sub make_key {
    join '', map { sprintf '%.16f', $_ } ( rand, rand, rand, rand );
}

__END__
C:\DOCUME~1\asu1\LOCALS~1\Temp\t> timethis t

TimeThis : Command Line : t
TimeThis : Start Time : Thu Nov 29 19:32:35 2007
TimeThis : End Time : Thu Nov 29 19:32:36 2007
TimeThis : Elapsed Time : 00:00:01.109

So, let's add printing to this script:

my @keys = sort keys %hash;
print "$_ $hash{$_}\n" for @keys;

TimeThis : Command Line : t
TimeThis : Start Time : Thu Nov 29 19:37:11 2007
TimeThis : End Time : Thu Nov 29 19:37:14 2007
TimeThis : Elapsed Time : 00:00:03.156

So, we are talking about 2.047 seconds added by printing the hash.

Assuming 80 bytes per print statement and 50,000 print statements, that's
3906.25 K of output in about two seconds.

That's not bad and does not leave a lot of room for improvement.

Following the printing late strategy and trying to trade-off memory for
speed, let's accumulate all output before printing:

my $out;
for ( @keys ) {
    $out .= "$_ $hash{$_}\n";
}

{
    local $| = 0;
    print $out;
}

TimeThis : Command Line : t
TimeThis : Start Time : Thu Nov 29 19:56:33 2007
TimeThis : End Time : Thu Nov 29 19:56:39 2007
TimeThis : Elapsed Time : 00:00:05.859

Well, that wasn't good (and I really did not expect it to be). But we don't
have to accumulate all the output; we can print after every 9K or so. (I came
up with that magic number on my system after some trial and error: time the
program, go do something in Word, load cnn.com in Firefox, come back and
re-time the script ... a poor man's cache clearing.)

my $out = q{};
for ( @keys ) {
    $out .= "$_ $hash{$_}\n";
    if ( length $out > 9_000 ) {
        print $out;
        $out = q{};
    }
}
print $out if length $out;    # flush the final partial buffer

TimeThis : Command Line : t
TimeThis : Start Time : Thu Nov 29 20:04:57 2007
TimeThis : End Time : Thu Nov 29 20:05:00 2007
TimeThis : Elapsed Time : 00:00:03.062

So, after all that tinkering, I got an improvement of 0.094 seconds, that
is about 3%.

Now, against my gut feeling, I did try:

print "$_ $hash{$_}\n" for sort keys %hash;

TimeThis : Command Line : t
TimeThis : Start Time : Thu Nov 29 20:11:01 2007
TimeThis : End Time : Thu Nov 29 20:11:04 2007
TimeThis : Elapsed Time : 00:00:02.953

Hmmm ... It does not look like I can beat the simplest alternative by
trying to be clever.

That took 1.844 seconds to output about 3906.25K.

Of course, I may have wasted my time because I misunderstood your vaguely
phrased question.
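For what it's worth, the variants above can be compared head to head with the core Benchmark module. The sketch below is my own, not from the thread: it writes to an in-memory filehandle so terminal speed does not dominate, and uses smaller fake data so each iteration is quick.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# 10K fake records with [0-9A-F]{72}-style keys.
my %hash = map { sprintf( '%072X', $_ ) => $_ } 1 .. 10_000;

cmpthese( -1, {
    # print one line at a time
    direct => sub {
        open my $fh, '>', \my $out or die $!;
        print {$fh} "$_ $hash{$_}\n" for sort keys %hash;
    },
    # accumulate ~9K chunks, then print
    buffered => sub {
        open my $fh, '>', \my $out or die $!;
        my $buf = q{};
        for ( sort keys %hash ) {
            $buf .= "$_ $hash{$_}\n";
            if ( length $buf > 9_000 ) { print {$fh} $buf; $buf = q{} }
        }
        print {$fh} $buf if length $buf;
    },
} );
```

On a different machine the relative numbers may well differ; the point is only that cmpthese gives a repeatable comparison without the Word-and-Firefox cache-clearing ritual.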

Sinan
 
 

Jürgen Exner

> I filled a hash and then printed it, sorted by key: [...]
> foreach my $k (sort keys %hs) { print "$k $hs{$k}\n"; }
>
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

Your algorithm as posted should be 100% effective. Or are you observing
unsorted keys?

jue
 

xhoster

> I filled a hash and then printed it, sorted by key: [...]
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

What is ineffective about the current way?

You could keep the structure sorted throughout its lifetime by using
a tied hash or a non-hash structure, but the overhead of doing so
is almost certainly going to be greater than a one-time sort.

Xho

 

Salvador Fandino

> I filled a hash and then printed it, sorted by key: [...]
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

You can use the radix sort implementation available from Sort::Key::Radix,
which is usually faster for this kind of data than the merge sort used
internally by perl:


#!/usr/bin/perl

use strict;
use warnings;

use Benchmark qw(cmpthese);
use Sort::Key::Radix qw(ssort);

my @l = ('a'..'z','A'..'Z','1'..'9');

sub genkey { join '', map $l[rand @l], 0..71 }

my @keys = map genkey, 0..50_000;


sub psort { my @sorted = sort @keys }
sub rsort { my @sorted = ssort @keys }

cmpthese( -1, { psort => \&psort,
                rsort => \&rsort } );
 

A. Sinan Unur

> I filled a hash and then printed it, sorted by key: [...]
> Keys are [0-9A-F]{72} strings, 50K records. How can I do this more efficiently?

> You can use the radix sort implementation available from Sort::Key::Radix,
> which is usually faster for this kind of data than the merge sort used
> internally by perl.

Does sorting speed matter when most of the time is spent printing?
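One quick way to check is to time the two phases separately with the core Time::HiRes module. This is a rough sketch of my own, not from the thread; the split between sort and print time will vary by machine and by where stdout goes.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# 50K fake records with [0-9A-F]{72}-style keys.
my %hash = map { sprintf( '%072X', $_ ) => $_ } 1 .. 50_000;

my $t0   = [gettimeofday];
my @keys = sort keys %hash;        # phase 1: sorting
my $t1   = [gettimeofday];

print "$_ $hash{$_}\n" for @keys;  # phase 2: printing
my $t2 = [gettimeofday];

printf STDERR "sort: %.3fs  print: %.3fs\n",
    tv_interval( $t0, $t1 ), tv_interval( $t1, $t2 );
```

If the print phase dwarfs the sort phase, a faster sort cannot help much; redirecting stdout to a fast disk changes that balance.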

Sinan
 

Damian Lukowski

A. Sinan Unur said:
Does sorting speed matter when most of the time is spent printing?

You never know; he might be redirecting stdout to a file on a ramdisk. :)
 
