/var/adm/wtmp, malloc, and ENOMEM

Maja

Greetings,

/var/adm/wtmp files were getting out of hand
and I wanted to filter the file rather than zeroing it out.
I found some code that would only keep the last x days
but it required creating a temporary file in /tmp
and then copying it back to /var/adm. I rewrote the code
to load the contents of the wtmp file into memory and then
write the filtered records in memory directly back to /var/adm/wtmp,
eliminating the need for a temporary file.

This worked great until I encountered a wtmp file that was
370 MB. malloc fails, setting errno to ENOMEM (not enough
storage space). My system has 16GB of memory and 9GB of paging
(33% used). So, I should have plenty of memory.
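
For reference, the in-memory approach boils down to something like the sketch
below (keep_record() is only a placeholder for the date filter, not the actual
code, and error handling is abbreviated); the malloc() call is the one that
fails with ENOMEM:

#include <stdio.h>
#include <stdlib.h>
#include <utmp.h>

/* Placeholder filter: the real code compares each record's timestamp
 * against a cutoff of X days ago. */
static int keep_record(const struct utmp *rec) { (void)rec; return 1; }

int filter_wtmp(const char *path)
{
    FILE *fp;
    char *buf;
    long size;
    size_t nrec, i;

    fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;

    fseek(fp, 0L, SEEK_END);           /* find the file size ...        */
    size = ftell(fp);
    rewind(fp);                        /* ... and go back to the start  */

    buf = malloc((size_t)size);        /* fails with ENOMEM on the 370 MB file */
    if (buf == NULL) {
        perror("malloc");
        fclose(fp);
        return -1;
    }

    nrec = fread(buf, sizeof(struct utmp), (size_t)size / sizeof(struct utmp), fp);
    fclose(fp);

    /* Rewrite wtmp in place with only the records we keep. */
    fp = fopen(path, "wb");
    if (fp == NULL) {
        free(buf);
        return -1;
    }
    for (i = 0; i < nrec; i++) {
        struct utmp *rec = (struct utmp *)(buf + i * sizeof(struct utmp));
        if (keep_record(rec))
            fwrite(rec, sizeof *rec, 1, fp);
    }
    fclose(fp);
    free(buf);
    return 0;
}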

My question: is there a workaround for this type of malloc error?
Thanks,
Maja
 
dandelion

Maja said:
This worked great until I encountered a wtmp file that was
370 MB. malloc fails, setting errno to ENOMEM (not enough
storage space). My system has 16GB of memory and 9GB of paging
(33% used). So, I should have plenty of memory.

Hmmm... By the sound of it (you do not provide the actual call to malloc
which fails), I'd say you're right and this is a bug somewhere. However, in
order to make a specific diagnosis, I'd need to know which implementation of
'malloc' you are using (there are quite a few around) and check it out.
Something I'm not planning to do.

The call *should* succeed, AFAICT, so if it doesn't, it's an implementation
issue, which you can best pass on to the vendor of your compiler or ask this
question again in a newsgroup dedicated to your OS/compiler.

HTH (a tiny bit)

regards,

dandelion.
 
David Resnick

Maja said:
Greetings,

/var/adm/wtmp files were getting out of hand
and I wanted to filter the file rather than zeroing it out.
I found some code that would only keep the last x days
but it required creating a temporary file in /tmp
and then copying it back to /var/adm. I rewrote the code
to load the contents of the wtmp file into memory and then
write the filtered records in memory directly back to /var/adm/wtmp,
eliminating the need for a temporary file.

This worked great until I encountered a wtmp file that was
370 MB. malloc fails setting errno to ENOMEM (not enough
storage space). My system has 16GB of memory and 9GB of paging
(33% used). So, I should have plenty of memory.

My question: is there a workaround for this type of malloc error.
Thanks,
Maja

Your answer is system-specific. You would probably get a better answer
in (looking at what you have) comp.unix.programmer. That said, I think
the temporary file is a good idea, because you may some day hit a file big
enough (say, > 2 or 4 GB) that you'd likely have problems with it, at least
on a 32-bit machine.

[OT] -- I can reproduce the behavior on Linux if I set "ulimit -v" to
something less than the amount I'm trying to malloc. Perhaps
you have some limits imposed that way... If that doesn't help,
perhaps you have an issue in your standard library's malloc...
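
If it helps, a quick way to see whether such a limit is in effect is to ask
the process itself. A minimal sketch using getrlimit() (RLIMIT_AS is what
"ulimit -v" controls on Linux; the relevant resource may differ on other
systems):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("address-space limit: unlimited\n");
    else
        printf("address-space limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}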

-David
 
Lawrence Kirby

Maja said:
Greetings,

/var/adm/wtmp files were getting out of hand
and I wanted to filter the file rather than zeroing it out.
I found some code that would only keep the last x days
but it required creating a temporary file in /tmp
and then copying it back to /var/adm. I rewrote the code
to load the contents of the wtmp file into memory and then
write the filtered records in memory directly back to /var/adm/wtmp,
eliminating the need for a temporary file.

This worked great until I encountered a wtmp file that was
370 MB. malloc fails setting errno to ENOMEM (not enough
storage space). My system has 16GB of memory and 9GB of paging
(33% used). So, I should have plenty of memory.

The limitations on what malloc() can allocate are down to your platform,
well beyond the scope of the C language itself. Perhaps there are
configurable per-process limits, etc. Also, if you are only storing the
filtered records in memory, the memory used would most likely be less,
perhaps a lot less, than 370 MB. Perhaps you have a bug in your code and
you are requesting much more memory than you thought. That's easy to log.

Maja said:
My question: is there a workaround for this type of malloc error?

Request less memory or configure your system to allow more memory to be
requested.

However, I question the whole approach of storing the data in memory. If
your program has a problem after it starts to write data back to wtmp, the
data not yet written will be lost. Writing to a temporary file is a much
better approach: just write to a temporary file in /var/adm and rename it
back to wtmp when complete. This could well be MORE efficient than
storing large amounts of data in memory before writing, if efficiency is
your concern.
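
A minimal sketch of that approach, assuming the same placeholder
keep_record() filter as before; a real version would also carry over wtmp's
original ownership and permissions, since mkstemp() creates the file mode 0600:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <utmp.h>

/* Placeholder filter, as before. */
static int keep_record(const struct utmp *rec) { (void)rec; return 1; }

int filter_wtmp_via_tempfile(const char *path)
{
    char tmppath[] = "/var/adm/wtmp.XXXXXX";
    FILE *in, *out;
    struct utmp rec;
    int fd;

    fd = mkstemp(tmppath);            /* temp file on the same filesystem */
    if (fd == -1)
        return -1;

    in = fopen(path, "rb");
    out = fdopen(fd, "wb");
    if (in == NULL || out == NULL) {
        if (in)  fclose(in);
        if (out) fclose(out); else close(fd);
        unlink(tmppath);
        return -1;
    }

    /* One record at a time: nothing ever has to fit in memory. */
    while (fread(&rec, sizeof rec, 1, in) == 1) {
        if (keep_record(&rec))
            fwrite(&rec, sizeof rec, 1, out);
    }
    fclose(in);

    if (fclose(out) != 0) {           /* make sure everything was written */
        unlink(tmppath);
        return -1;
    }

    /* rename() replaces wtmp in a single step, so an interrupted run never
     * leaves a half-written wtmp behind. */
    if (rename(tmppath, path) != 0) {
        unlink(tmppath);
        return -1;
    }
    return 0;
}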

Lawrence
 
Keith Thompson

Maja said:
This worked great until I encountered a wtmp file that was
370 MB. malloc fails setting errno to ENOMEM (not enough
storage space). My system has 16GB of memory and 9GB of paging
(33% used). So, I should have plenty of memory.

The amount of memory available on your system isn't necessarily the
same as the amount of memory available to your program.
 
Maja

It should have occurred to me that I might be up against
a system limit. There's a -bmaxdata compile flag, which I used
to allow the program to allocate more than 256 MB of memory.

Many thanks for pointing me in the right direction.
Maja
 
Michael Wojcik

dandelion said:
Hmmm... By the sound of it (you do not provide the actual call to malloc
which fails), I'd say you're right and this is a bug somewhere.

Nonsense. malloc is allowed to fail for any reason. Assuming a bug
here is like assuming that someone stole your car if you can't find
your keys.

Off topic: The reference to wtmp implies this is a POSIX system. Most
POSIX systems (maybe all; I'm not going to check the standard to see
if this is required) provide a mechanism to limit how much memory a
process can allocate. This is a Good Thing, and has been part of Unix
for a long time. Consult the documentation for the setrlimit system
call and the ulimit command (really commands, since various shells
provide various flavors of ulimit builtins).
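
As a small illustration of what that looks like from inside a program (which
resource actually bites, RLIMIT_DATA vs. RLIMIT_AS, depends on the OS; the
hard limit cannot be raised without privilege):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("data segment limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* A process may raise its own soft limit, but only up to the hard limit. */
    if (rl.rlim_cur < rl.rlim_max) {
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_DATA, &rl) != 0)
            perror("setrlimit");
    }
    return 0;
}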

dandelion said:
The call *should* succeed, AFAICT,

You can't tell. None of us can; we don't have nearly enough
information. Certainly the amount of physical and virtual memory in
the system doesn't suffice. (In fact, depending on the OS, it may
not even be relevant.)

And, once again, we have an excellent demonstration why answering OT
questions here is a bad idea.

dandelion said:
so if it doesn't, it's an implementation issue,

Or it isn't, since this is a hosted implementation, and it may just
be reporting - correctly - that the hosting OS has denied its request.

dandelion said:
you can best pass on to the vendor of your compiler or ask this
question again in a newsgroup dedicated to your OS/compiler.

Indeed.

--
Michael Wojcik (e-mail address removed)

The antics which have been drawn together in this book are huddled here
for mutual protection like sheep. If they had half a wit apiece each
would bound off in many directions, to unsimplify the target. -- Walt Kelly
 
