Will the memory allocated by malloc get released when the program exits?

Mark F. Haigh

Malcolm said:
Unfortunately this advice, though well-meaning, is impractical. Errors occur
mid-processing, often leaving structures in a corrupted or half-built state.
So freeing everything can be complicated. It is then even more difficult to
test the freeing code, and it might itself generate errors.

If a user types incorrect input into a program, should it immediately
abort? Not in my book. If a file contains unexpected input, should
the program immediately abort? Again, no. Both are "errors", and may
occur mid-processing.

In cases like this, the program should free its allocated memory and
perform an orderly shutdown.
So unless freeing all memory in all circumstances is an absolute
requirement, which is unusual, it is much better to simply exit with an
error message if it becomes necessary to abort.

It's simply good practice to clean up your own messes. First off,
memory is *not* guaranteed to be freed on program exit. Pretending
like it is doesn't make it so. Secondly, as Christian Bau mentioned,
programs have a way of becoming libraries in yet bigger programs, and
aborting from a library is rarely considered appropriate.
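The "clean up your own messes" approach usually takes a concrete shape in C: a single cleanup exit path so that a mid-processing error frees whatever was allocated so far instead of aborting. A minimal sketch (the `process` function and its error policy are illustrative, not from the thread):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Process an input string; on any mid-processing error, free what was
 * allocated so far and report failure, rather than aborting outright. */
int process(const char *input, char **out)
{
    int rc = -1;
    char *copy = NULL, *result = NULL;

    copy = malloc(strlen(input) + 1);
    if (copy == NULL)
        goto cleanup;
    strcpy(copy, input);

    /* "Unexpected input" is an error, but not a reason to abort. */
    if (copy[0] == '\0')
        goto cleanup;

    result = malloc(strlen(copy) + 1);
    if (result == NULL)
        goto cleanup;
    strcpy(result, copy);

    *out = result;
    result = NULL;   /* ownership transferred to the caller */
    rc = 0;

cleanup:
    free(copy);      /* free(NULL) is safe, so no extra checks needed */
    free(result);
    return rc;
}
```

With all allocations funneled through one cleanup label, the freeing code runs on every path and therefore gets exercised by ordinary testing, which addresses the "difficult to test the freeing code" objection.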

As far as handling programmer error (UB) goes (via signals), anywhere
from dumping stack traces to rebooting may be appropriate. It depends
on the circumstance and the platform.


Mark F. Haigh
(e-mail address removed)
 
Mark F. Haigh

Lawrence said:
Really? I can understand this for low level embedded systems, where
termination of the program is a pretty much terminal fault anyway. But
memory reclamation doesn't generate obvious issues for a real-time system;
it can be given a low priority. A RTOS that doesn't reclaim memory
properly risks running out of resources i.e. not being able to guarantee
what a program needs to run. That strikes me as a rather significant flaw
in a RTOS.

RTOSs aren't always built to guarantee proper resource management;
they're built to guarantee bounded response times for certain events or
operations. As long as programs free the resources they allocate
before termination, there are generally no big resource management
problems besides memory fragmentation issues (which are worked around
in several ways).
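One of the common workarounds for fragmentation alluded to above is a fixed-size block pool: because every block is the same size, allocation and freeing are O(1) and the pool cannot fragment. A minimal sketch, not modeled on any particular RTOS's API:

```c
#include <stddef.h>

/* Fixed-size block pool: all blocks share one size, so alloc and free
 * are constant-time list operations and fragmentation cannot occur. */
#define BLOCK_SIZE  64
#define NUM_BLOCKS  16

typedef union block {
    union block *next;                 /* free-list link while unused */
    unsigned char payload[BLOCK_SIZE]; /* user data while allocated   */
} block_t;

static block_t pool_storage[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void)
{
    int i;
    free_list = NULL;
    for (i = 0; i < NUM_BLOCKS; i++) {
        pool_storage[i].next = free_list;
        free_list = &pool_storage[i];
    }
}

void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                          /* NULL when the pool is empty */
}

void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

Note that the pool is also statically sized, which fits the RT practice of forcing allocation failures to initialization time rather than discovering them mid-flight.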

An example of a significant flaw in a RTOS would be failing to service
a timer interrupt within a specified time bound, causing too much
radiation to be released into a cancer patient's body.

Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues and is an increase in footprint and overall
system complexity.

As a footnote, a low-priority task used to reclaim memory may never
actually run in a highly loaded system, which is exactly when you're
most likely to need the extra memory!
For a RTOS it is a fundamental quality issue.

No, it's a design tradeoff. Memory that is malloc-ed may be still in
use by different parts of the system. In many circumstances, the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.

When people write clean, conformant code that frees what it allocates,
it can be easy to port a program to a RTOS. When that's not done, it
can be a real PITA.


Mark F. Haigh
(e-mail address removed)
 
S.Tobias

CBFalconer said:
Not necessarily. Especially if you are using a malloc/free package
that has O(N*N) performance when there are many items to free.

(I understand that N is the number of `free's called at a time.)
I have noticed that glibc `free' runs slower in the presence of
many allocations; I don't know the exact reason, though.
But why should `free' behave _that_ badly? Is it really related
to not handling adjacent blocks properly (I think you suggested
as much in the later part, which I already snipped)? I think it's
so basic that every implementation should handle that.
This is quite common - the thing that is not so common is programs
that end with large numbers of items allocated and then free them
all.

You can demonstrate the phenomenon easily with the test program for
my hashlib system, which has provision for avoiding the final
frees. I forget the number where the effect became obvious, it
might have been 200,000 or maybe 2,000,000 items. It shows up as a
huge delay between telling the program to exit and getting a prompt
back. I found the O(N*N) effect on DJGPP, LCC_WIN32, and VC6.

I have noticed a similar delay in my program (where I allocate large
amounts of memory, but for speed I cache and reuse already allocated
chunks, so the number of blocks is probably not that huge).
I haven't gone so far as to investigate it yet. I have assumed
that the delay is probably related to returning the resources
to the system by a terminated process, although in some cases
it took astonishingly long (process real time was significantly
larger than user time for small test programs, say by a factor
of 1.5). And it seemed to behave differently on different
computers and OSs (different glibc or kernel versions?).

I don't see any reason why an implementation should do anything
to the heap after exit()ing (even if you have unfreed blocks), except
when it has to free its own resources (close files, etc.).
 
Lawrence Kirby

RTOSs aren't always built to guarantee proper resource management;
they're built to guarantee bounded response times for certain events or
operations. As long as programs free the resources they allocate
before termination, there are generally no big resource management
problems besides memory fragmentation issues (which are worked around
in several ways).

An example of a significant flaw in a RTOS would be failing to service
a timer interrupt within a specified time bound, causing too much
radiation to be released into a cancer patient's body.

That would certainly be bad; being a little late in turning off the
radiation is not good. It could be worse, though. Failing completely at
that point due to running out of resources such as memory could be
literally fatal. Of course any sane system would have secondary safeguards
and checks against resources getting low, but in terms of ensuring that an
action is performed successfully within a certain time, ensuring the
necessary resources is just as vital as scheduling, perhaps even more so.
Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues and is an increase in footprint and overall
system complexity.

Not really, it is typically not difficult for the system to reclaim memory
for a terminated application and it can take away potentially much larger
complexity from the application. It also makes the overall system MUCH
easier to validate for long term stability.
As a footnote, a low-priority task used to reclaim memory may never
actually run in a highly loaded system, which is exactly when you're
most likely to need the extra memory!

In which case you've failed from a RT point of view anyway. Even if the
application has to release the resources it still needs to spend the CPU
cycles to do it and if they are all in use then something has been
starved. RT principles can be applied to resource reclamation by the OS
just as much as anything else. If the RT system has insufficient CPU
resources to meet the RT requirements of its components then it has failed.
It makes no difference to this where memory freeing occurs, it still has
to happen somewhere.
No, it's a design tradeoff. Memory that is malloc-ed may be still in
use by different parts of the system. In many circumstances, the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.

I agree that shared memory is different. While malloc() could be used to
allocate shared memory in an unprotected environment, it makes more sense
to have a separate mechanism so that the OS can tell the difference. Good
design should keep shared objects small in number and, preferably, in size.
They should not be treated like normal "local" objects.
When people write clean, conformant code that frees what it allocates,
it can be easy to port a program to a RTOS. When that's not done, it
can be a real PITA.

But the flaw that highlights is in the design of the RTOS. I can
understand this sort of thing for really small systems like embedded
systems on 8 bit processors where you have to cut corners, but there is
nothing in RT technology itself that warrants this.

Lawrence
 
Lawrence Kirby

Let's examine for a moment how one can come across a SIGSEGV:

1. Through undefined behavior leading to "an invalid access
to storage" (C99 7.14#3), although such conditions are not
required to be detected (C99 7.14#4).

2. As a result of calling raise (or other POSIX or system-
specific functions, e.g. kill).
....

Users of POSIX-compliant systems may be able to differentiate #1 from
#2 via the si_code member of siginfo_t.

What's the point? If something causes a SIGSEGV using raise or POSIX kill
then presumably it wants the code to act as if a segmentation fault (or
whatever on the system causes it) had occurred. If it wanted it to act in
some other way it would make more sense to use a different signal.

Lawrence
 
Lawrence Kirby

Not only that. OK, many OSes have garbage collectors for processes that
ended, in one way or another, but don't rely on that.

You HAVE to rely on it for collection of freed memory, program code,
automatic variables, static variables etc.
Moreover,
allocating memory just for fun and not deallocating it makes your
program huge and slower for no reason, so deallocate the memory you
don't use anymore.

For the most part, yes. A tighter wording is to free all memory
that becomes inaccessible to the program, i.e. don't create memory
leaks. But that doesn't cover everything. Say you have a program whose
purpose is to maintain a large data structure in memory, some in-memory
database, or, for example, a document in a text editor. As you edit that
document the program allocates and possibly frees memory as you add and
delete parts of the document. You write the file out, fine, it is still
also in memory. Then you quit the text editor. The question is whether it
is worth the program going through the whole document data structure in
memory and freeing it when the OS will reclaim the memory whether you do
so or not. For some applications this freeing process could take a
SIGNIFICANT time.

Note that there is no memory leak here, the allocated memory is accessible
by the program up until the point it terminates.

Lawrence
 
Ben Pfaff

CBFalconer said:
Not necessarily. Especially if you are using a malloc/free package
that has O(N*N) performance when there are many items to free.

Also, with virtual memory freeing every item may force blocks to
be brought in from disk, which is very slow.
 
CBFalconer

S.Tobias said:
.... snip ...

I have noticed a similar delay in my program (where I allocate large
amounts of memory, but for speed I cache and reuse already allocated
chunks, so the number of blocks is probably not that huge).
I haven't gone so far as to investigate it yet. I have assumed
that the delay is probably related to returning the resources
to the system by a terminated process, although in some cases
it took astonishingly long (process real time was significantly
larger than user time for small test programs, say by a factor
of 1.5). And it seemed to behave differently on different
computers and OSs (different glibc or kernel versions?).

I don't see any reason why an implementation should do anything
to the heap after exit()ing (even if you have unfreed blocks), except
when it has to free its own resources (close files, etc.).

It is not the amount of allocated memory that counts here, but the
count of allocated blocks. The program post-exit code (in the OS)
doesn't have to discriminate between allocated and free blocks or
anything else, it just takes back the one big mess that the malloc
(and other) system has been divvying up during the run, and can be
very fast. You can follow most of it in my nmalloc code to which I
referred earlier.
 
Mark F. Haigh

Lawrence said:
What's the point? If something causes a SIGSEGV using raise or POSIX kill
then presumably it wants the code to act as if a segmentation fault (or
whatever on the system causes it) had occurred. If it wanted it to act in
some other way it would make more sense to use a different signal.

No point, really. Since the standard specifies different behaviors
depending on the source of the signal, I thought it may be useful to
point out how to differentiate the two on POSIX systems. YMMV.


Mark F. Haigh
(e-mail address removed)
 
Mark F. Haigh

Lawrence Kirby wrote:

That would certainly be bad; being a little late in turning off the
radiation is not good. It could be worse, though. Failing completely at
that point due to running out of resources such as memory could be
literally fatal. Of course any sane system would have secondary safeguards
and checks against resources getting low, but in terms of ensuring that an
action is performed successfully within a certain time, ensuring the
necessary resources is just as vital as scheduling, perhaps even more so.

Memory is nearly universally pre-allocated for critical paths.
Resource allocation errors are forced to initialization time. Stacks
are pre-allocated as well, and their sizes are based on worst case
scenario analysis multiplied by a large safety factor.
Not really, it is typically not difficult for the system to reclaim memory
for a terminated application and it can take away potentially much larger
complexity from the application. It also makes the overall system MUCH
easier to validate for long term stability.

Maybe in an ideal world. In reality, it is sometimes difficult to
reclaim memory in a safe way, as memory is shared amongst other
programs, ISRs, coprocessors, and other hardware. The NASA Mars probe
made it there and managed to function using such a design (VxWorks).
In which case you've failed from a RT point of view anyway. Even if the
application has to release the resources it still needs to spend the CPU
cycles to do it and if they are all in use then something has been
starved. RT principles can be applied to resource reclamation by the OS
just as much as anything else. If the RT system has insufficient CPU
resources to meet the RT requirements of its components then it has failed.
It makes no difference to this where memory freeing occurs, it still has
to happen somewhere.

Usually malloc and free just supply chunks to special purpose
allocators that meet the necessary execution time bounds. You'd be
surprised how often people try to shoehorn things into low priority
tasks only to grapple with the sometimes bizarre effects of high loads
on such a design.
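A typical shape for such a special-purpose allocator is an arena (region): it takes one chunk from malloc up front, hands out pieces with an O(1) bump pointer, and releases everything with a single free. A sketch under those assumptions (illustrative, not any particular RTOS's API; max_align_t requires C11):

```c
#include <stdlib.h>
#include <stddef.h>

/* Arena (region) allocator: one malloc up front, constant-time bump
 * allocation, and the whole region released with a single free(). */
typedef struct {
    unsigned char *base;
    size_t         used;
    size_t         size;
} arena_t;

int arena_init(arena_t *a, size_t size)
{
    a->base = malloc(size);
    a->used = 0;
    a->size = size;
    return a->base != NULL ? 0 : -1;
}

void *arena_alloc(arena_t *a, size_t n)
{
    /* Round up so every allocation stays aligned for any object type
     * (assumes sizeof(max_align_t) is a power of two, as is typical). */
    size_t mask = sizeof(max_align_t) - 1;
    void *p;

    n = (n + mask) & ~mask;
    if (a->size - a->used < n)
        return NULL;               /* arena exhausted */
    p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_release(arena_t *a)
{
    free(a->base);
    a->base = NULL;
    a->used = a->size = 0;
}
```

The execution-time bound on arena_alloc is a handful of instructions regardless of how much has been allocated, which is exactly the property the general-purpose malloc cannot promise.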
I agree that shared memory is different. While malloc() could be used to
allocate shared memory in an unprotected environment, it makes more sense
to have a separate mechanism so that the OS can tell the difference. Good
design should keep shared objects small in number and, preferably, in size.
They should not be treated like normal "local" objects.

Many people do not agree with you and would argue it's not the business
of a RTOS to do such bookkeeping. Applications can do that for
themselves.
But the flaw that highlights is in the design of the RTOS. I can
understand this sort of thing for really small systems like embedded
systems on 8 bit processors where you have to cut corners, but there is
nothing in RT technology itself that warrants this.

It's only a flaw if your allocations do not match your deallocations.
If you can manage to do that, there's no real problem. Your disdain
for the aesthetics of such a system doesn't mean other people don't
find them useful.


Mark F. Haigh
(e-mail address removed)
 
Yes, the memory allotted to a program should get freed by the OS.

How exactly depends on how the OS does memory management, so it is OS-specific too. By memory management I mean whether the OS supports virtual memory (VM).

Some OSes may not support VM because they were developed for a specific purpose, such as real-time support, where VM would be overhead. Others, like Linux, Windows, and several Unix variants, do.

With VM, when a process is created it is assigned a virtual address space in terms of virtual memory pages, which are then bound to physical memory (RAM) pages. All the memory allotted for the data, text, and bss segments is allocated from these pages.

Dynamic memory allocated from the heap segment also comes from this virtual address space. It is managed by the C library functions (malloc, free, etc.), which keep track of which memory is in use and which has been freed.

Whenever required, virtual memory pages are swapped out and in by the operating system, but the program doesn't know about that and need not bother.

When the program exits, the entire virtual address space where all the allocation happened (the stack, data, bss, and heap segments) is released by the OS, and hence all the corresponding physical memory is released too. This is how a program's dynamically allocated memory ends up released by the OS.

I hope this explanation is helpful to anybody trying to understand more about this issue.
 
