Ruby on Unix vs. Windows

Rick Nooner

Yesterday at work we took an analysis program written in Ruby that we had been
running on a Solaris box (Sunblade 1500, 1 GB RAM, 1.5 GHz SPARC) and moved
it to a Windows box (HP D530, 1 GB RAM, 2.8 GHz Pentium) to do performance
comparisons.

The analysis builds a profile in memory of over 3.6 GB of data on disk. On
the Solaris box, it takes about 35 minutes and uses about 700 MB of RAM. It
would not complete on the Windows box using the full data set, bombing with
"failed to allocate memory (NoMemoryError)". There was nearly 800 MB of
RAM free on the Windows box, as well as 4 GB of swap available.

Is Windows that inefficient with memory allocation, or is this a Ruby
implementation issue on Windows?

Before anyone asks, I would have done the comparison on a Linux box if I
had one available, since what I really wanted to know was the speed boost
that an Intel processor would give me. I didn't have one readily available,
though.

Thanks,
Rick
 
Bill Kelly

Rick Nooner said:
Yesterday at work we took an analysis program written in Ruby that we had been
running on a Solaris box (Sunblade 1500, 1 GB RAM, 1.5 GHz SPARC) and moved
it to a Windows box (HP D530, 1 GB RAM, 2.8 GHz Pentium) to do performance
comparisons.

The analysis builds a profile in memory of over 3.6 GB of data on disk. On
the Solaris box, it takes about 35 minutes and uses about 700 MB of RAM. It
would not complete on the Windows box using the full data set, bombing with
"failed to allocate memory (NoMemoryError)". There was nearly 800 MB of
RAM free on the Windows box, as well as 4 GB of swap available.

Is Windows that inefficient with memory allocation, or is this a Ruby
implementation issue on Windows?

Any chance the program does any large block allocations?
Like more than 250 MB in one chunk? I've noticed Windows
performs poorly when a program churns lots of small
memory allocations, then occasionally wants to grab a big
block of memory. I have a test program (two, actually -
one in Ruby, the other in C using malloc) that doesn't
fail on Linux and Darwin, and does fail on Windows, even
though the Linux and Darwin systems had less physical
RAM and less swap than the Windows system with its 2 GB
RAM and 4 GB swap. When it fails on Windows, there's
plenty of system memory still available - but the Windows
heap management has allowed the process virtual memory
space to become so fragmented that there's no room to map
the large block allocation into the process's virtual
address space.
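A minimal Ruby sketch of that kind of test (an illustration, not the actual test program mentioned above; the sizes and counts are arbitrary):

```ruby
# Churn many small allocations, free them, then request one large
# contiguous block. On an allocator that fragments the process's
# address space, the big request can fail even when plenty of free
# memory remains overall.
def fragment_then_allocate(small_count, small_size, big_size)
  small = Array.new(small_count) { "x" * small_size }  # many small blocks
  small = nil
  GC.start                       # release them; the heap may stay fragmented
  "x" * big_size                 # one large contiguous request
  true
rescue NoMemoryError
  false
end

puts fragment_then_allocate(100_000, 64, 64 * 1024 * 1024)
```

Whether the final call prints true or false depends on the platform's allocator, which is exactly the difference being discussed here.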


Regards,

Bill
 
Rick Nooner

Any chance the program does any large block allocations?
Like more than 250 MB in one chunk? I've noticed Windows
performs poorly when a program churns lots of small
memory allocations, then occasionally wants to grab a big
block of memory. I have a test program (two, actually -
one in Ruby, the other in C using malloc) that doesn't
fail on Linux and Darwin, and does fail on Windows, even
though the Linux and Darwin systems had less physical
RAM and less swap than the Windows system with its 2 GB
RAM and 4 GB swap. When it fails on Windows, there's
plenty of system memory still available - but the Windows
heap management has allowed the process virtual memory
space to become so fragmented that there's no room to map
the large block allocation into the process's virtual
address space.

You are describing exactly the behavior that I'm seeing.

My program does a lot of small block allocations. How that's handled by Ruby under the
hood, I don't know. If the allocator sees lots of small allocations and starts asking
the OS for larger and larger chunks of memory, what you are describing could happen.

If this has happened to you using malloc, it's probably beyond Ruby's control, unless
Ruby replaces the default memory allocation mechanism.

I'll write a test program to test this theory.
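For what it's worth, later Ruby versions grew allocation counters that make this sort of test easier to instrument (a sketch; `GC.stat` and its `:total_allocated_objects` key only exist in newer interpreters, and key names vary across releases):

```ruby
# Count how many Ruby objects a workload allocates, using GC.stat
# (available in newer Ruby versions; not in the 1.8-era interpreters
# discussed in this thread).
before = GC.stat(:total_allocated_objects)
100_000.times { "x" * 64 }          # the kind of small-block churn described above
after = GC.stat(:total_allocated_objects)
puts "objects allocated: #{after - before}"
```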

Thanks,
Rick
 
Joel VanderWerf

Rick said:
If this has happened to you using malloc, it's probably beyond Ruby's control, unless
Ruby replaces the default memory allocation mechanism.

What about testing with cygwin or mingw32 ruby? I'm guessing those use
different allocators than win32 ruby.
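A quick way to check which build (and hence which C runtime and allocator family) a given interpreter is, is the `RUBY_PLATFORM` constant (the example values in the comment are illustrative):

```ruby
# RUBY_PLATFORM identifies the build target, e.g. "i386-mswin32",
# "i386-cygwin", or "i386-mingw32" for the various Windows builds.
puts RUBY_PLATFORM
```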
 
TLOlczyk

Yesterday at work we took an analysis program written in Ruby that we had been
running on a Solaris box (Sunblade 1500, 1 GB RAM, 1.5 GHz SPARC) and moved
it to a Windows box (HP D530, 1 GB RAM, 2.8 GHz Pentium) to do performance
comparisons.

The analysis builds a profile in memory of over 3.6 GB of data on disk. On
the Solaris box, it takes about 35 minutes and uses about 700 MB of RAM. It
would not complete on the Windows box using the full data set, bombing with
"failed to allocate memory (NoMemoryError)". There was nearly 800 MB of
RAM free on the Windows box, as well as 4 GB of swap available.

Is Windows that inefficient with memory allocation, or is this a Ruby
implementation issue on Windows?

Before anyone asks, I would have done the comparison on a Linux box if I
had one available, since what I really wanted to know was the speed boost
that an Intel processor would give me. I didn't have one readily available,
though.

Look around for a Live CD distribution like Knoppix, but find one
without Ruby. Set aside a partition for Linux (it can be FAT32),
mount the partition somewhere, and install Ruby there (you DON'T want
Ruby running off the CD for benchmarks, even hand-waving ones).
The hard part will be finding a Live CD without Ruby.

It's been a while, but I seem to remember that Intel CPUs make memory
management for things like GC much harder, so it would be a good idea
to compare OSes running on the same CPU. If not, maybe you can scrounge
an Alpha running NT and see whether you get different results on it.



**
Thaddeus L. Olczyk, PhD

There is a difference between
*thinking* you know something,
and *knowing* you know something.
 
Bill Kelly

Robert said:
As a Windows user this is
a worrisome thing to me. If it is a "known" problem then is it going to
be fixed at some point in time?

I encountered this problem a year or so ago working
on my employer's application which is all C / C++.
Our customers sometimes browse very large images,
upwards of 200 or 300 megs, which our software will
attempt to load entirely into memory if requested to
do so.

When we contacted Microsoft about it, they referred us
to their Low Fragmentation Heap:
http://support.microsoft.com/kb/816542

I haven't tried it yet - and I'd also be interested to
learn more about any "drop in replacements" that Lothar
hinted at.


Regards,

Bill
 
Alex Verhovsky

Lothar said:
Hello Bill,


BK> I haven't tried it yet - and I'd also be interested to
BK> learn more about any "drop in replacements" that Lothar
BK> hinted at.

For example take this one
http://www.nedprod.com/programs/Win32/ptmalloc2/index.html
Oh my! That's ptmalloc on Windows, isn't it?! That little thing
seriously rocks in multi-threaded systems with a lot of dynamic memory
allocation. It can resolve nasty scalability bottlenecks there by
getting rid of a mutex in malloc/free.

Not sure if it will do much good for Ruby though - Ruby threads are
green, not native.

If you get some results, please share with the list.

Best regards,
Alexey Verkhovsky
 
Lothar Scholz

Hello Alex,

AV> Oh my! That's ptmalloc on Windows, isn't it?! That little thing
AV> seriously rocks in multi-threaded systems with a lot of dynamic memory
AV> allocation. It can resolve nasty scalability bottlenecks there by
AV> getting rid of a mutex in malloc/free.

AV> Not sure if it will do much good for Ruby though - Ruby threads are
AV> green, not native.

Yes it will improve the runtime speed but it also has lower
fragmentation than the original msvcrt.dll implementation.
 
Lothar Scholz

Hello Lothar,


LS> Yes it will improve the runtime speed but it also has lower

Sorry, there is a "not" missing in this sentence.
I meant: "Yes it will not improve..."
 
Tesla

Rick said:
Yesterday at work we took an analysis program written in Ruby that we had been
running on a Solaris box (Sunblade 1500, 1 GB RAM, 1.5 GHz SPARC) and moved
it to a Windows box (HP D530, 1 GB RAM, 2.8 GHz Pentium) to do performance
comparisons.

The analysis builds a profile in memory of over 3.6 GB of data on disk. On
the Solaris box, it takes about 35 minutes and uses about 700 MB of RAM. It
would not complete on the Windows box using the full data set, bombing with
"failed to allocate memory (NoMemoryError)". There was nearly 800 MB of
RAM free on the Windows box, as well as 4 GB of swap available.

Is Windows that inefficient with memory allocation, or is this a Ruby
implementation issue on Windows?

Before anyone asks, I would have done the comparison on a Linux box if I
had one available, since what I really wanted to know was the speed boost
that an Intel processor would give me. I didn't have one readily available,
though.

Thanks,
Rick

My only experience with this is through SQL Server 2000. Large processes
were not completing even though the machine had plenty of memory. Our
theory was that it was a combination of NTFS and fragmentation of the
swap file. It seemed as though NTFS security was trying to keep track of
everything and using even more memory. The swap file also could not
respond fast enough to the calls because SQL Server was busy on the same
disks. What we did was take an old 9 GB SCSI disk with FAT32 and put the
swap file onto it. Not a perfect solution, but it worked.
 
