Memory Management Question



Nephi Immortal

Can you please be kind enough to answer my question? I know that this is
not a C++ topic for this newsgroup. I did a lot of research, but I did
not find valuable information.

First question:

After you allocate memory using the new operator, you accidentally call
the delete operator twice. It causes a memory leak. The C++
program exits the main function before it calls the EndProcess function
(probably a Win32 API). Can EndProcess deallocate remaining objects and
memory leaks automatically? Memory leaks will not affect other
applications or resources.
Sometimes the terminate or abort function does not release some
allocated objects, which can cause memory leaks prior to EndProcess. The
client will then need to reboot the operating system to clear up all
available memory.
Please confirm whether I am correct.

Second question:

When you create a free-heap object, how many bytes of allocated memory
does the free-heap object take before you use the new operator to
allocate a small or large array?

For example, you allocate 16 bytes. The heap reserves an extra 8 bytes:
one for the array pointer and another for the heap pointer.
Perhaps it is an extra 16 bytes.

You can allocate multiple 16-byte arrays, say 256 of them. The
total for 256 16-byte arrays is 4,096 bytes. The heap object might
reserve an extra 2,048 bytes for all 256 16-byte arrays.

I am interested in knowing more. If you have information, please
point me to a website.
 

Goran

Can you please be kind enough to answer my question? I know that this is
not a C++ topic for this newsgroup. I did a lot of research, but I did
not find valuable information.

First question:

        After you allocate memory using the new operator, you accidentally call
the delete operator twice. It causes a memory leak.

Not true. This causes undefined behavior.
 The C++
program exits the main function before it calls the EndProcess function
(probably a Win32 API). Can EndProcess deallocate remaining objects and
memory leaks automatically? Memory leaks will not affect other
applications or resources.

Yes, under Windows, Unix variants and other systems where there exists a
thing known as "process isolation". Such systems take it upon themselves
to free any resources allocated by the program when the program
terminates. That goes not only for memory, but for all kinds of
operating system resources (files, sockets, mutexes...).
        Sometimes, the terminate or abort function does not release some
allocated objects, which can cause memory leaks prior to EndProcess.

Only if there is a bug in the operating system. However, these are
very, very rare occurrences in today's systems.
The client
will then need to reboot the operating system to clear up all available
memory.

Yes, if indeed there is a bug. However, there is a very, very small
chance of that, compared to the chance that e.g. you are seeing weird
behaviour in your own code.

Second question:

        When you create a free-heap object, how many bytes of allocated memory
does the free-heap object take before you use the new operator to
allocate a small or large array?

        For example, you allocate 16 bytes. The heap reserves an extra 8 bytes:
one for the array pointer and another for the heap pointer.
Perhaps it is an extra 16 bytes.

That depends on your system. There's nothing in the C++ language that
can tell you this. An extra 16 bytes sounds reasonable on a 32-bit
system, though.
        You can allocate multiple 16-byte arrays, say 256 of them. The
total for 256 16-byte arrays is 4,096 bytes. The heap object might
reserve an extra 2,048 bytes for all 256 16-byte arrays.

        I am interested in knowing more. If you have information, please
point me to a website.

You need to know how heap handling is implemented in the library you
use (typically, C runtime that comes with your compiler). Ask there.
However, know this: any information is volatile and subject to change
over time.

Goran.
 

Nick Keighley

Can you please be kind enough to answer my question? I know that this is
not a C++ topic for this newsgroup. I did a lot of research, but I did
not find valuable information.

First question:

        After you allocate memory using the new operator, you accidentally call
the delete operator twice.

Don't Do This. It's Undefined Behaviour, which simply means the
standard places no obligation on the implementor; he can do whatever he
damn well pleases. Crashing or trashing the heap are likely
possibilities.
 It causes a memory leak.
Unlikely.

 The C++
program exits the main function before it calls the EndProcess function
(probably a Win32 API). Can EndProcess deallocate remaining objects and
memory leaks automatically? Memory leaks will not affect other
applications or resources.

As others have stated, modern OSs clean up on exit. This includes
Windows XP and everything that came after it, Linux and MacOS.

Second question:

        When you create a free-heap object, how many bytes of allocated memory
does the free-heap object take [...]

Very implementation dependent. I'd say 4 bytes per allocation, but
others are quoting more. There may be more bytes for alignment (a
32-bit integer may have to start on an address divisible by 4).
[...] before you use the new operator to allocate a small or
large array.

I don't get this bit.
        For example, you allocate 16 bytes. The heap reserves an extra 8 bytes:
one for the array pointer and another for the heap pointer.

what is this "array pointer" of which you speak?
Perhaps it is an extra 16 bytes.

Sounds like a lot.
        You can allocate multiple 16-byte arrays, say 256 of them.
what?

 The
total for 256 16-byte arrays is 4,096 bytes. The heap object might
reserve an extra 2,048 bytes for all 256 16-byte arrays.

        I am interested in knowing more. If you have information, please
point me to a website.

Have you tried your implementation's documentation? I've always found
MSDN quite reasonable.
 

Juha Nieminen

Nick Keighley said:
As others have stated, modern OSs clean up on exit. This includes
Windows XP and everything that came after it, Linux and MacOS.

I don't know why, but I find the notion that an OS would not free the
resources reserved by a process so antiquated that it's almost amusing.

In most systems when you do a "new" (or "malloc()" or whatever), that's
*not* a system call (i.e. a call to the operating system). It simply
causes the C runtime environment to allocate a block of memory from the
heap. The heap is not managed by the code that the programmer writes;
it's managed by the C runtime environment (e.g. in most unix systems
that would be the libc.so shared library). This runtime interacts with
the operating system: basically, when the runtime needs more heap, it
says to the OS "hey, increase my heap by this much" (which is usually a
bigger chunk than what was requested by the code).

When the program ends, the runtime usually just tells the OS "I don't
need this heap anymore". It doesn't matter whether the code has forgotten
some deletes or not. It all goes away when the program ends.

Even if the program is for some reason killed abruptly (eg. by sending
it SIGKILL), the OS will remove the process and take its heap back. Again,
whether the program code forgot some deletes is completely irrelevant.
That memory management is at a completely different level.

In short: The OS sees the memory taken by a process as just one big
chunk of heap. The OS couldn't care less how the program uses that heap.
If the program internally "leaks" objects inside the heap, that's
completely inconsequential. When the program ends, the OS takes that
heap back (if the runtime didn't release it explicitly) and that's it.

The reason why leaks are bad is if they happen in a loop or slowly over
long periods of time. This will cause the runtime to request more and more
heap space from the OS, consuming physical RAM and eventually running out
of it.

There are other resources that are much closer to the notion that the
program requests a resource directly from the OS, and the OS then has to
release it if the program itself doesn't. These may include (depending
on the OS) things like file handles (although these are often also
process-specific and not systemwide) and sockets. Temporary files are an
interesting case as well (if the temp file was requested from the OS).
 

Jorgen Grahn

I don't know why, but I find the notion that an OS would not free the
resources reserved by a process so antiquated that it's almost amusing.

Your view is narrow. There are interesting things you can easily do
if there's no explicit ownership of memory -- it's just that those of
us who have used Real OSes for ~15 years have forgotten about it.

I have forgotten too, but I remember back when I thought virtual
memory etc was just a way to throw hardware at a problem which was
really about sloppy programming ...

/Jorgen
 

Christopher

Yes, under Windows, Unix variants and other systems where there exists a
thing known as "process isolation". Such systems take it upon themselves
to free any resources allocated by the program when the program
terminates. That goes not only for memory, but for all kinds of
operating system resources (files, sockets, mutexes...).

I've got quite a few executables that, if run 10+ times and allowed to
crash 10+ times, will bring XP to a crawl.
Process Explorer shows the process exited, yet the system is no longer
as responsive and won't be until a reboot.

What possible reason could there be aside from some resource getting
used and not reclaimed?

Throw COM in the mix and you've got processes that were started by
other processes! Then your program can really do some damage even
after exit.
 

88888 Dihedral

Your view is narrow. There are interesting things you can easily do
if there's no explicit ownership of memory -- it's just that those of
us who have used Real OSes for ~15 years have forgotten about it.

I have forgotten too, but I remember back when I thought virtual
memory etc was just a way to throw hardware at a problem which was
really about sloppy programming ...

/Jorgen
Well, the large virtual address space of the heap misleads some novice
programmers into thinking that a big chunk returned from malloc or new
really occupies that much physical memory, when in fact it may use no
more than a page of real memory until the OS, which talks to the
hardware directly, has to back it.
 

Goran

I've got quite a few executables that if run 10+ times and allowed to
crash 10+ times, will bring XP to a crawl.
Process Explorer shows the process exited, yet the system is no longer
as responsive and won't be until a reboot.

What kinds of OS resources do they handle, and how?

You might, of course, be hitting an OS bug, but without knowing more,
I am inclined to say "you don't know what your processes are doing".
What possible reason could there be aside from some resource getting
used and not reclaimed?

Did you look at resource usage (even Task Manager can show you some of
that)? Without info, who knows. It's mighty improbable that the reason
is indeed an OS bug (compared to the probability that it's code running
atop the OS that causes the slowdown).
Throw COM in the mix and you've got processes that were started by
other processes! Then your program can really do some damage even
after exit.

Unless you hit an OS bug, my claim is that you can't.

One thing you might try is to restart the shell (Explorer.exe). That's
because you might have OS resources being passed onto explorer through
COM and shell extensions.

Goran.
 

Joe keane

I don't know why, but I find the notion that an OS would not free the
resources reserved by a process so antiquated that it's almost amusing.

It will release process-specific VM and close file descriptors, but
that doesn't mean it solves all your problems:

a) a process creates a file, intending to delete it when it's done, but
if it crashes the file is still there;

b) lots more cases I can think of.
 
