malloc beyond 20MB fails. Why so limited?


itsolution

Hi Guru,

I need a data cache (one able to hold tens of thousands of entries)
containing DB data in my Unix process. Each cache entry is around
0.5k bytes, so I want to create the cache area when the process
starts up.
Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

Shouldn't the heap space in a Unix process be much bigger than 20MB?
Why is malloc failing on such small allocations?
Any workaround or advice?


thanks
 

Malcolm McLean

I need a data cache (one able to hold tens of thousands of entries)
containing DB data in my Unix process. Each cache entry is around
0.5k bytes, so I want to create the cache area when the process
starts up.
Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

Shouldn't the heap space in a Unix process be much bigger than 20MB?
Why is malloc failing on such small allocations?
Any workaround or advice?
First, check everything else with a process monitor to make sure you
don't have a memory hog running at the same time.
You should have much more than 20MB available. Either you are passing
the wrong size, or there is a leak somewhere. Blocks of 500 bytes or
so are rather inefficient, but not that inefficient.
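
One way to rule out a wrong size or a leak (a minimal sketch; the
wrapper name and the logging are my own invention) is to funnel every
allocation through a counting wrapper and look at the totals at the
point of failure:

#include <stdio.h>
#include <stdlib.h>

/* Counting wrapper: tracks how much has actually been requested,
   to expose a wrong size argument or a hidden leak. */
static size_t total_bytes = 0;
static size_t total_calls = 0;

void *counted_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL) {
        total_bytes += n;
        total_calls++;
    } else {
        fprintf(stderr, "malloc(%zu) failed after %zu calls, %zu bytes total\n",
                n, total_calls, total_bytes);
    }
    return p;
}

If the total at failure is far above the expected 2MB, something
other than the cache is consuming the heap.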
 

vippstar

Hi Guru,

I need a data cache (one able to hold tens of thousands of entries)
containing DB data in my Unix process. Each cache entry is around
0.5k bytes, so I want to create the cache area when the process
starts up.
Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

Shouldn't the heap space in a Unix process be much bigger than 20MB?
Why is malloc failing on such small allocations?
Any workaround or advice?

As far as ISO C is concerned, malloc can allocate anything from 1 to
SIZE_MAX bytes; any tighter limit comes from your implementation or
environment.
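
A quick way to see where the real limit sits (a throwaway probe, not
production code; it deliberately never frees) is to count how many
512-byte allocations succeed:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 0;

    /* Allocate until malloc gives up, with a safety cap so the
       probe cannot thrash the machine indefinitely. */
    while (count < 10000000) {
        if (malloc(512) == NULL)
            break;
        count++;
    }
    printf("%zu allocations succeeded (about %zu MB)\n",
           count, count * 512 / (1024 * 1024));
    return 0;
}

If this stops near 4000 on the poster's box, the limit is external
to the program.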
 

jacob navia

Hi Guru,

I need a data cache (one able to hold tens of thousands of entries)
containing DB data in my Unix process. Each cache entry is around
0.5k bytes, so I want to create the cache area when the process
starts up.
Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

Shouldn't the heap space in a Unix process be much bigger than 20MB?
Why is malloc failing on such small allocations?
Any workaround or advice?


thanks

Besides checking what Mr McLean told you, check the resource limits
(ulimit). Sometimes administrators limit how much memory a user
process can allocate.
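
Such a limit can also be checked from inside the program. A minimal
sketch using POSIX getrlimit() (RLIMIT_DATA covers the data segment
that malloc typically draws from; some systems use RLIMIT_AS instead):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Soft/hard limits on the data segment; roughly "ulimit -d". */
    if (getrlimit(RLIMIT_DATA, &rl) == 0)
        printf("soft: %llu, hard: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}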
 

Wolfgang Draxinger

Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

On a modern system a dynamic allocation cannot consume less memory
than a certain lower bound; in most cases this is 64 KiB, but on
some systems it is up to 256 KiB. Some C runtime libraries manage a
set of memory chunks from which smaller allocations are carved,
which keeps you from running into that problem.

Just do the figures:
(4000 * 65536) / (1024 * 1024) = 250 MiB.
I.e., with a dumb malloc implementation those 4000 small
allocations eat almost 250 MiB.

If you need a lot of memory, it is better to allocate one large
chunk at the start of your program and manage things within it.

You might also be interested in a concept called "obstacks", which
provides memory management for cases in which a lot of small
objects, or objects of similar size, are needed.

GLibC implements obstacks; documentation is available through

info obstack
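
A minimal sketch of the obstack approach, assuming glibc (the two
macros are required by the obstack API to obtain its underlying
chunks):

#include <obstack.h>
#include <stdlib.h>

#define obstack_chunk_alloc malloc
#define obstack_chunk_free  free

int main(void)
{
    struct obstack pool;
    size_t i;

    obstack_init(&pool);

    /* Thousands of small allocations are carved out of a few
       large underlying chunks. */
    for (i = 0; i < 4000; i++) {
        char *entry = obstack_alloc(&pool, 512);
        entry[0] = '\0';
    }

    obstack_free(&pool, NULL);  /* releases everything at once */
    return 0;
}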

Wolfgang Draxinger
 

jacob navia

Wolfgang said:
On a modern system a dynamic allocation cannot consume less memory
than a certain lower bound; in most cases this is 64 KiB, but on
some systems it is up to 256 KiB.
[snip]
If you need a lot of memory, it is better to allocate one large
chunk at the start of your program and manage things within it.

All that is true, but the user is running on a Unix system, which
is unlikely to have such a bad malloc implementation...
 

Wolfgang Draxinger

jacob said:
All that is true, but the user is running on a Unix system, which
is unlikely to have such a bad malloc implementation...

I wouldn't be so sure about that. Some Unix libcs have been
developed for speed and simplicity at the expense of memory
efficiency.

Wolfgang Draxinger
 

itsolution

The issue I have encountered occurs on a network device running BSD.
malloc(1024 * 20000) is OK, but malloc(1024 * 30000) always fails.
Looks like it's due to ulimit or a similar limit ... right?


Follow-up question (to overcome that limitation):

When implementing a memory pool of 30MB or so, is using global
static memory instead of the heap the better way?


thanks
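
For what it's worth, a sketch of the static-pool idea the follow-up
asks about (names and sizes are illustrative; whether it dodges the
limit depends on how the system accounts for static data):

#include <stddef.h>

#define POOL_SIZE  (30 * 1024 * 1024)   /* 30MB, fixed at compile time */
#define ENTRY_SIZE 512

/* Lives in static storage rather than the heap, so malloc never
   sees it; the trade-off is that the size cannot grow at run time. */
static char   pool[POOL_SIZE];
static size_t pool_used = 0;

void *pool_alloc(void)
{
    void *p;

    if (pool_used + ENTRY_SIZE > POOL_SIZE)
        return NULL;                    /* pool exhausted */
    p = &pool[pool_used];
    pool_used += ENTRY_SIZE;
    return p;
}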
 

Randy Howard

Besides checking what Mr McLean told you, check the resource limits
(ulimit). Sometimes administrators limit how much memory a user
process can allocate.

Yes, and there are also per-process limits; 2GB is common for a
32-bit Linux system, for example. You might be bumping up against
that because of the size of other code and data in the process, an
unintentional memory leak, or the inclusion of a bloatware library;
there are a lot of potential reasons. In small apps you can
/usually/ grab something quite close to 2GB with a single malloc()
call; somewhere in the 1700MB range wouldn't surprise me at all.

However, none of this is really a standard C issue; it's more an
issue with a particular platform/system setup, or an undetected
error in the code. Taking it up in a forum dedicated to your
platform would make a lot more sense.
 

Keith Thompson

The issue I have encountered occurs on a network device running BSD.
malloc(1024 * 20000) is OK, but malloc(1024 * 30000) always fails.
Looks like it's due to ulimit or a similar limit ... right?

We don't know. "ulimit" isn't defined by the C language. You're
asking an operating system question, not a C question. You'll
probably get a better answer in comp.unix.programmer (though
consulting your system's documentation for "ulimit" would be a good
first step).
Follow-up question (to overcome that limitation):

When implementing a memory pool of 30MB or so, is using global
static memory instead of the heap the better way?

See above.
 

Serve Lau

Hi Guru,

I need a data cache (one able to hold tens of thousands of entries)
containing DB data in my Unix process. Each cache entry is around
0.5k bytes, so I want to create the cache area when the process
starts up.
Contrary to my expectation, once malloc(0.5k) has been called more
than 4000 times, every subsequent malloc() fails.

Shouldn't the heap space in a Unix process be much bigger than 20MB?
Why is malloc failing on such small allocations?
Any workaround or advice?

If you know you will call malloc 4000 times for 512 bytes each
(that is 2MB, by the way), why not call malloc once for the whole
amount and write a function that returns a pointer to the next 512
bytes on each call? That way all the entries will be contiguous in
memory.
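
A minimal sketch of that suggestion (function and variable names
invented here):

#include <stdlib.h>

#define ENTRY_SIZE  512
#define ENTRY_COUNT 4000

static char  *cache_base = NULL;
static size_t cache_next = 0;

/* One malloc up front; every later call hands out the next
   fixed-size slice of the same contiguous block. */
void *cache_alloc(void)
{
    if (cache_base == NULL) {
        cache_base = malloc((size_t)ENTRY_SIZE * ENTRY_COUNT);
        if (cache_base == NULL)
            return NULL;
    }
    if (cache_next >= ENTRY_COUNT)
        return NULL;                 /* all entries handed out */
    return cache_base + cache_next++ * ENTRY_SIZE;
}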
 

SM Ryan

# Shouldn't the heap space in a Unix process be much bigger than 20MB?

No. Yes. No. It depends: are there resource limits? Do you have to
manually add sectors to VM on your unix? Is your boot drive track
limiting?

# Why is malloc failing on such small allocations?

Is that what errno is saying?

# Any workaround or advice?

Many unices have performance tools that help you figure out
if you're exhausting VM, but it depends on which unix you've
got.
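
To answer that errno question concretely, a small sketch (ISO C does
not require malloc to set errno, but POSIX systems report ENOMEM on
failure):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    void *p;

    errno = 0;
    p = malloc(30000 * 1024);        /* the size the poster saw fail */
    if (p == NULL)
        fprintf(stderr, "malloc: %s\n", strerror(errno));
    free(p);
    return 0;
}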
 
