Implementing Malloc()


Gordon Burditt

The worst thing that can happen is that the programmer _does_ write to
the end of the mallocated block. In this case, either there's a SIGSEGV
again (no worse off than before), or if the 512Kb is in the middle of
the heap malloc() is drawing from then the writes might well succeed,
and the program can continue albeit with some possible minor data
corruption.

"possible minor data corruption", especially the kind you don't
notice, is about the worst case possible. You finally realize
what's happening, and then you discover that last year's backups
are corrupted and you've lost lots of work.

Remind me never to fly on any airplanes with your software running
them.

Remember, anything can run in zero time and zero memory if you don't
require the result to be correct.
 

James Fang

This is very bad engineering practice, especially when the error is
non-deterministic and it is impossible to find its root cause.

Especially on embedded systems without memory protection, this kind of
malloc implementation will make your program stagger around like a drunk.

BRs
James Fang
 

Flash Gordon

Richard Tobin wrote, On 27/11/07 00:56:
If you design it without mechanisms for returning error conditions that
is true. However, if you design it properly it is not true.

Complete and utter rubbish. One is predictable and occurs at the actual
point of failure; the other is not guaranteed.

Not guaranteed by the C standard, but probably guaranteed on whatever
you're writing a window system for.

OK, let's try it with Windows 3.1, which IIRC did not have much in the
way of memory protection (and is probably still in use in some places,
since I have very good reason to believe DOS is still used on some PCs),
or under GEM.
The C standard isn't the only
relevant source of guarantees for most programmers.

True. However it is better to rely on the C standard where it can
sensibly provide for what is wanted, and then work your way up through
the steadily less portable standards and system specifics only when
necessary.
Not that I advocate doing it. If it really isn't useful to handle the
error gracefully (which is often the case), then something like
xmalloc() is the obvious solution.

I accept that a "tidy up and exit" malloc wrapper is sometimes
appropriate, also a "prompt the user and then retry" wrapper is
sometimes appropriate. I know that my customers would be *much* happier
with either of those than a segmentation violation, since a segmentation
violation implies (correctly IMHO) a bug in the program.
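
A minimal sketch of such a "tidy up and exit" wrapper (the xmalloc name
and the tidy_up_and_save() cleanup hook are purely illustrative
assumptions, not a standard API):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cleanup hook supplied by the application. */
extern void tidy_up_and_save(void);

/* "Tidy up and exit" wrapper: never returns NULL to the caller. */
void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL && size != 0) {
        fprintf(stderr, "out of memory (requested %lu bytes)\n",
                (unsigned long)size);
        tidy_up_and_save();   /* flush buffers, close files, etc. */
        exit(EXIT_FAILURE);
    }
    return p;
}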

So I think you (Richard) and I are in agreement :)
 

Flash Gordon

Dik T. Winter wrote, On 27/11/07 00:51:
Not only that. It goes into a tight loop which is not user-friendly at
all, probably tying up many resources.

I've not seen Malcolm's implementation. If it prompts the users to free
up space and then waits (preferably with options to retry or abort) then
it is useful. If, as you are implying, it is simply a loop repeatedly
calling *alloc, then I would also consider it completely unacceptable.
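
A rough sketch of the "prompt the user and then retry" style described
above (a console prompt is shown purely for illustration; a real GUI
application would put up a dialog instead):

#include <stdio.h>
#include <stdlib.h>

/* "Prompt and retry" wrapper: on failure, ask the user to free some
   memory elsewhere and retry, or give up cleanly. */
void *malloc_retry(size_t size) {
    for (;;) {
        void *p = malloc(size);
        int c;

        if (p != NULL || size == 0)
            return p;

        fprintf(stderr, "Out of memory.  Free some memory elsewhere, then "
                        "press 'r' to retry or anything else to abort: ");
        c = getchar();
        if (c != 'r' && c != 'R')
            exit(EXIT_FAILURE);          /* give up cleanly */
        while (c != '\n' && c != EOF)    /* discard the rest of the line */
            c = getchar();
    }
}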
 

santosh

Eric Sosman said:
CJ wrote On 11/26/07 16:40:

This idea can be extended to produce the following
extremely efficient implementation of malloc() and its
companions:

#include <stdlib.h>

static unsigned long memory;

void *malloc(size_t bytes) {
    return &memory;
}

void *calloc(size_t esize, size_t ecount) {
    memory = 0;
    return &memory;
}

void *realloc(void *old, size_t bytes) {
    return old;
}

void free(void *ptr) {
#ifdef DEBUGGING
    memory = 0xDEADBEEF;
#endif
}

Not only does this implementation avoid the processing
overhead of maintaining potentially large data structures
describing the state of memory pools, but it also reduces
the "memory footprint" of every program that uses it, thus
lowering page fault rates, swap I/O rates, and out-of-memory
problems.

ROTFL
 

Spoon

CJ said:
We were discussing implementing malloc(), in particular the following
situation.

Suppose the user requests 1Mb of memory. Unfortunately, we only have
512Kb available. In this situation, most mallocs() would return null.
The huge majority of programmers won't bother to check malloc() failure
for such a small allocation, so the program will crash with a SIGSEGV as
soon as the NULL pointer is dereferenced.

So why not just return a pointer to the 512Kb that's available? It's
quite possible that the user will never actually write into the upper
half of the memory he's allocated, in which case the program will have
continued successfully where before it would have crashed.

The worst thing that can happen is that the programmer _does_ write to
the end of the mallocated block. In this case, either there's a SIGSEGV
again (no worse off than before), or if the 512Kb is in the middle of
the heap malloc() is drawing from then the writes might well succeed,
and the program can continue albeit with some possible minor data
corruption.

On a related note, the Linux kernel may be configured so as to
overcommit memory. (It is even the default.)

http://lxr.linux.no/source/Documentation/vm/overcommit-accounting
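
For reference, the policy is controlled by /proc/sys/vm/overcommit_memory
(0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict
accounting). A quick, Linux-specific way to inspect the current setting
from C, for illustration:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode;

    if (f == NULL) {
        fprintf(stderr, "could not open overcommit setting\n");
        return 1;
    }
    if (fscanf(f, "%d", &mode) != 1) {
        fclose(f);
        fprintf(stderr, "could not read overcommit setting\n");
        return 1;
    }
    fclose(f);
    printf("vm.overcommit_memory = %d (0=heuristic, 1=always, 2=strict)\n",
           mode);
    return 0;
}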
 

CJ

Not only does this implementation avoid the processing
overhead of maintaining potentially large data structures
describing the state of memory pools, but it also reduces
the "memory footprint" of every program that uses it, thus
lowering page fault rates, swap I/O rates, and out-of-memory
problems.

All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.


=======================================
There once was an old man from Esser,
Whose knowledge grew lesser and lesser.
It at last grew so small,
He knew nothing at all,
And now he's a College Professor.
 

jacob navia

CJ said:
All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.


=======================================
There once was an old man from Esser,
Whose knowledge grew lesser and lesser.
It at last grew so small,
He knew nothing at all,
And now he's a College Professor.

Well, I proposed the function
mallocTry
that could give you the best of both worlds. Why do
you ignore that suggestion?
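
The mallocTry interface itself isn't quoted in this thread, so the
following is only a hypothetical sketch of the general idea, assuming it
lets the caller state a preferred and a minimum size and learn exactly
how much memory it actually got:

#include <stdlib.h>

/* Hypothetical sketch only: try to get 'wanted' bytes, falling back by
   halving down to 'minimum'.  The caller is told via *got how much it
   actually received, so nothing is hidden from it. */
void *malloc_try(size_t wanted, size_t minimum, size_t *got) {
    size_t n = wanted;

    while (n > 0 && n >= minimum) {
        void *p = malloc(n);
        if (p != NULL) {
            *got = n;
            return p;
        }
        n /= 2;                    /* settle for a smaller block */
    }
    *got = 0;
    return NULL;                   /* even 'minimum' bytes unavailable */
}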
 

Stephen Sprunk

CJ said:
Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

A program only crashes on malloc() returning NULL if the programmer is
incompetent.
The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.

The change you propose would mean that malloc() _might_ save some
incompetent programmers but would _definitely_ make it impossible for
competent programmers to write correct programs, because there would no
longer be a way to detect whether they got the amount of memory they
requested. That goes against both the fundamental philosophy of C and
common sense.
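
A self-contained illustration of the standard idiom that relies on this
guarantee (the function name and message are just an example; the
overflow check on n * sizeof *v is omitted for brevity):

#include <stdio.h>
#include <stdlib.h>

/* NULL is the one and only "allocation failed" signal from malloc(). */
double *make_vector(size_t n) {
    double *v = malloc(n * sizeof *v);

    if (v == NULL) {
        /* With standard malloc(), this branch is the only case in which
           the n elements are not usable.  If malloc() silently returned
           a smaller block instead, there would be no portable way to
           detect the shortfall here. */
        fprintf(stderr, "could not allocate %lu elements\n",
                (unsigned long)n);
    }
    return v;
}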

S
 

dj3vande

Not only does this implementation avoid the processing
overhead of maintaining potentially large data structures
describing the state of memory pools, but it also reduces
the "memory footprint" of every program that uses it, thus
lowering page fault rates, swap I/O rates, and out-of-memory
problems.

All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.
[...]

The idea is that instead of a guaranteed crash,
the programmer should handle a failure to allocate resources properly,
and the implementation should make it possible to do so.

I think you're the one who's missing the point.


dave
 

santosh

CJ said:
All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Of course an implementation of malloc should return a pointer to as
much memory as requested whenever that's possible! The discussion was
about the rare failure case. Typically, programs assume malloc returns
a non-null pointer, so if it returns null on failure then the program
will crash and burn.

A properly written non-trivial program is not going to "crash and burn"
when malloc() returns NULL. It will shut down gracefully, or even notify
the user to free up some memory and perhaps try again.
The idea is that instead of a guaranteed crash,

No. There is no guaranteed crash. Consider:

#include <stdlib.h>
#include <stdint.h>   /* for SIZE_MAX */

int main(void) {
    char *p = malloc(SIZE_MAX);
    if (p) { free(p); return 0; }
    else return EXIT_FAILURE;
}

Where is the "crash"?
isn't it better to
make a last-ditch effort to save the program's bacon by returning a
pointer to what memory there _is_ available? If the program's going to
crash anyway, it's got to be worth a shot.

If malloc() returns NULL for failure, the program can at least exit
gracefully or even try and recover some memory and keep going, but if
it is going to indicate success but return insufficient space, then a
memory overwrite is almost certainly going to lead to either hard to
debug data corruption or a core dump from a segmentation violation.
This is (for any serious program) worse than a clean, controlled exit.

Besides, your function (for whatever it's worth) can easily be written
on top of malloc() for anyone mad enough to want it. No need to add
another gets() to the Standard.
 

Eric Sosman

CJ wrote On 11/27/07 12:58:
All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Well, yes: I understood it as a joke. But if
you're actually serious ...
Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

One could take issue with your assertion about
"typically," and even about "rare." But that's a
side-issue.
The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available?

No. For starters, this makes what you consider an
"atypical" program as bad as a "typical" one, because
it becomes impossible to detect a malloc() failure: if
malloc() returns a non-NULL pointer, how can the caller
tell whether it was or wasn't able to supply the memory?
If the program's going to crash
anyway, it's got to be worth a shot.

Your unstated assumption is that a crash is worse
than all other outcomes, and I dispute that assumption.
A program that ceases to operate also ceases to produce
wrong answers. It probably also draws attention to the
fact that something has gone wrong, instead of plunging
silently ahead doing who-knows-what.

In my house I have smoke alarms that use nine-volt
batteries to supply their modest electrical needs. The
service life of the battery in this application is quite
long, well over a year, so the battery's "failure rate"
is quite low: When the alarm "requests more electricity"
from the battery, it nearly always succeeds. Yet on the
extremely rare occasion where the battery cannot deliver
enough, the alarm starts beeping at intervals to alert me
to the fact. Then I change the battery, and all is well.

If the battery were designed to work the way you want
malloc() to operate, this wouldn't happen. The battery
would (somehow) pretend to be able to supply the requested
voltage even when it could not, and the alarm would never
discover that the battery was dead. Instead, the alarm
would sit silently on my wall, giving the illusion of
performing its function without actually doing so, pretending
to protect me and my family while leaving us at risk. You
may not admire me much, but you know nothing of my family
and I request that you not condemn them to the flames.
 

Flash Gordon

CJ wrote, On 27/11/07 17:58:
All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

No, I believe that Eric and most of the other respondents understood it
perfectly, and that is why they all considered it a terrible idea.
Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case.

Rare to you maybe, but I actually use machines fairly hard and it is not
uncommon for them to run out of memory.
Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

Not if it is written properly, and a lot of programs *are* written properly.
The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.

Oh dear, you have just caused my copy of VMware to crash, possibly
corrupting the guest system's file system and forcing me to restart a
long and complex set of tests involving a number of VMware sessions. You
have also prevented one of the applications I develop from tidying up
cleanly behind itself and notifying the user, possibly corrupting a
company's corporate accounts.

No, it is FAR better to do what the standard requires and actually give
the application a shot at doing something sensible when it runs out of
memory, such as warning the user and giving them a chance to free some
memory up, or shutting down tidily so that data is not corrupted etc.
 

John Gordon

CJ said:
Typically, programs assume malloc returns a non-null pointer, so if it
returns null on failure then the program will crash and burn.

Huh? If malloc returns null, the program detects it and handles it as best
it can. It certainly doesn't *use* the null pointer gotten from malloc.

Where's this "guaranteed crash?"
 

Eric Sosman

jacob said:
Well, I proposed the function
mallocTry
that could give you the best of both worlds. Why do
you ignore that suggestion?

He's begun by assuming a caller who's too [insert
derogatory adjective] even to compare malloc's value
to NULL. Would this [ida] person have the smarts to
make effective use of mallocTry?
 

Kenneth Brody

CJ said:
We were discussing implementing malloc(), in particular the following
situation.

Suppose the user requests 1Mb of memory. Unfortunately, we only have
512Kb available. In this situation, most mallocs() would return null.

AFAIK, _all_ conforming mallocs return NULL. (With the possible
exception of things like Linux's overcommit scheme. However, it is my
understanding that it is the kernel that lies to malloc, and malloc
does, in fact, believe that the memory is available.)
The huge majority of programmers won't bother to check malloc() failure

I have to take exception to that statement.
for such a small allocation, so the program will crash with a SIGSEGV as
soon as the NULL pointer is dereferenced.

Someone who doesn't check for malloc() failure, especially on such
a large size, deserves what he gets.

Consider, too, the ease in debugging the NULL pointer dereference.
So why not just return a pointer to the 512Kb that's available?

Eww...

It's quite possible that the user will never actually write into the upper
half of the memory he's allocated, in which case the program will have
continued successfully where before it would have crashed.

I would consider that strategy "asking for trouble".
The worst thing that can happen is that the programmer _does_ write to
the end of the mallocated block. In this case, either there's a SIGSEGV
again (no worse off than before), or if the 512Kb is in the middle of
the heap malloc() is drawing from then the writes might well succeed,
and the program can continue albeit with some possible minor data
corruption.

So continuing "with some possible minor data corruption" is a
viable programming strategy? You want to make buffer overruns a
design strategy?

Consider the difference in difficulty between debugging "malloc returned
NULL" and "I malloced a million bytes, but something else is mysteriously
writing into my buffer, and writing into my buffer causes some other code
to mysteriously crash".

How many Windows updates were to fix "malicious code could allow an
attacker to take over your computer" bugs which were caused by such
buffer overruns?
Do any implementations of malloc() use a strategy like this?

I sincerely hope not. I don't want my implementation lying to me.
("Lazy mallocs" are bad enough. We don't need "outright lying
mallocs".)

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 

Default User

CJ wrote:

Of course an implementation of malloc should return a pointer to as
much memory as requested whenever that's possible! The discussion was
about the rare failure case. Typically, programs assume malloc
returns a non-null pointer, so if it returns null on failure then the
program will crash and burn.

You're either an idiot or a troll. You haven't understood, or have
chosen to ignore, all the follow-ups.

As such, you're a waste of my time (the greatest usenet sin).

*plonk*





Brian
 

user923005

All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.

Surely, surely. This is a troll.

If I asked for 4 MB and the implementation returned 2 MB, I can't
really imagine anything worse than that. Not only is it undefined
behavior, but the behavior is very unlikely even to be repeatable. Why
(on earth) would the program 'crash anyway'? Anyone who does not
check the return of malloc() in a production application is an
incompetent dimwit of the highest order. So there is no way that the
application is going to crash. But of course, anyone who ever did
real-life programming in C knows all of this already.

If malloc() fails, and I could succeed with less memory (e.g. if I am
creating a hash table with a small fill factor, I can increase it),
then I am going to try to allocate less memory. To simply return a
smaller block... you're a riot.
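
A sketch of that fallback approach for the hash-table case just described
(the entry type and fill factors are made up for illustration; overflow
checks are omitted):

#include <stdlib.h>

struct entry;   /* hash-table entry; details don't matter here */

/* Try for a roomy table first (low fill factor), and if that allocation
   fails, retry with progressively smaller tables rather than giving up
   immediately.  The size actually used is reported via *nslots. */
struct entry **alloc_table(size_t nitems, size_t *nslots) {
    size_t slots = nitems * 4;          /* prefer ~25% fill */

    while (slots > 0 && slots >= nitems) {
        struct entry **tab = calloc(slots, sizeof *tab);
        if (tab != NULL) {
            *nslots = slots;
            return tab;
        }
        slots /= 2;                     /* accept a denser table */
    }
    *nslots = 0;
    return NULL;                        /* genuinely out of memory */
}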

Quite an effective troll, sir. I congratulate you.
 

jameskuyper

CJ said:
All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

No, we understood the context; we just have a well-justified lack of
respect for the attitudes that could create such a context.
Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.

No, it is not. The sooner and more frequently such incompetently
written code fails, the sooner the programmer will realize why it was
a bad idea to ignore null pointers returned by malloc(). If the
programmer never realizes this, then the sooner and more frequently
the programs fail, the sooner the programmer will be fired and
replaced with someone more competent. Everyone who deserves to be will
be better off as a result. Even the fired programmer will be better
off in the long run, because getting fired will give the ex-programmer
an opportunity to change to a career which doesn't require as much
attention to detail as programming, which should make the ex-
programmer happier in the long run.

Competent programmers check for failed memory allocation, and take
appropriate action. If allocation fails, their programs will either
fail gracefully (NOT crash-and-burn), or they will free up memory
elsewhere and retry the allocation. Your proposed change would cause
competently written programs that rely upon the failure indication to
fail without warning, in order to provide very questionable protection
for incompetently written code.
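
A minimal sketch of that "free up memory elsewhere and retry" pattern
(the release_caches() hook is a hypothetical application-supplied
function, named here only for illustration):

#include <stdlib.h>

/* Hypothetical hook: release application-level caches and return how
   many bytes were reclaimed. */
extern size_t release_caches(void);

/* Allocate, and on failure try once more after reclaiming memory
   elsewhere in the program.  The caller must still check for NULL. */
void *malloc_with_reclaim(size_t size) {
    void *p = malloc(size);

    if (p == NULL && size != 0 && release_caches() > 0)
        p = malloc(size);       /* second attempt after reclaiming */

    return p;
}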
 
