Debugging standard C library routines

Richard Heathfield

jacob navia said:
The OP is running linux. Linux has memory protection, as far
as I remember.

That is irrelevant in comp.lang.c - even if the OP is running Linux, that
doesn't necessarily mean that Linux is the only system on which his code
will be run. Advice given here about C should be /general/ advice; it
should be good for /any/ hosted implementation (and, if applicable, on
freestanding implementations).
 
Michael Mair

jacob said:
The OP is running linux. Linux has memory protection, as far
as I remember.

You know, it's comp.lang.c -- you either give off-topic advice (when
qualifying it as a Linux-only answer) or not entirely correct advice.
Even though your statement may be correct for Linux and also for many
hosted environments, I'd still rather not follow it.

But better treat people as stupid immediately, especially if
it's Jacob Navia...

There are two sides to something like that.

You often get into heated discussions and present your positions
in a rather unfavourable and unfortunate way. I am glad that people
like you, Paul Hsieh, and Chris Hills participate because you are
clearly intelligent and the positions you represent are decidedly
off the group's mainstream -- this inspires thought. As always in
comp.lang.c, there are issues with the given representation and
its interpretation :)

That doesn't cost anything since the guy never gets upset...

:-/


Cheers
Michael
 
jacob navia

Richard said:
That is irrelevant in comp.lang.c - even if the OP is running Linux, that
doesn't necessarily mean that Linux is the only system on which his code
will be run. Advice given here about C should be /general/ advice; it
should be good for /any/ hosted implementation (and, if applicable, on
freestanding implementations).

Who are YOU to tell ME what advice should I give?

I give advice adapted to the OP question!

He is running linux, then I assume memory protection!
Who cares about each possible CPU in the planet?
He is in a certain situation, and I give specific advice to THAT
situation!
 
Richard Heathfield

jacob navia said:
Who are YOU to tell ME what advice should I give?

I'm a guy who subscribes to comp.lang.c, which is for discussions about C,
not for discussions about Linux. That's why it says "c", not "linux", in
the group's name. Who are YOU arbitrarily to CHANGE the topic of discussion
of an INTERNATIONAL NEWSGROUP to suit YOUR whim?
I give advice adapted to the OP question!

Yes, and if you give advice that has absolutely nothing to do with C, then
nobody can stop you doing that, but on the other hand you can't stop other
people complaining about it either. Newsgroups have topics for the
excellent reason that, if they didn't, there'd just be the one newsgroup,
with 10,000,000 posts an hour, on every subject from aardvarks to zymurgy,
and nobody would be able to find anything they were interested in
discussing because everything would be all mixed in together.

I would expect to have to explain this to a Usenet newbie, but you've been
posting to Usenet for longer than I have. You should have worked it out by
now! Are you being deliberately stupid, or just accidentally stupid? It
must be one or the other - you cannot reasonably claim ignorance of
topicality after over a decade of posting to Usenet!
He is running linux, then I assume memory protection!

If Linux is relevant to his question, he's in the wrong group.
Who cares about each possible CPU in the planet?

C programmers.
He is in a certain situation, and I give specific advice to THAT
situation!

The best advice you can give him is to ask his question in a newsgroup where
it's topical.
 
Keith Thompson

Frederick Gotham said:
Richard Heathfield posted: [...]
Check every malloc/calloc/realloc to ensure it succeeded before you rely
on its return value.

If malloc'ing less than a kilobyte, I wouldn't bother.

That is extraordinarily bad advice.

(Frederick, I'm still waiting for your apology.)
 
Walter Roberson

jacob navia said:
The OP is running linux. Linux has memory protection, as far
as I remember.

Does Linux promise memory protection on every implementation?
If a system that Linux is ported to cannot do the kind of memory
protection you are thinking of, then is it prohibited from calling
itself Linux? uClinux has been ported to a number of embedded
environments: is it Linux on those environments or is it not?

Does Linux promise that the kind of memory protection you are thinking
of is active on -every- page of memory? What if there happens to be
writable memory within the first "page" of process memory?


SGI IRIX "has memory protection" too, but on IRIX 6.4 and later on
"big" systems that run with 16 Kb virtual pages, virtual address 0 is
*not* write protected for reasons having to do with the graphics
subsystems (which were designed to expect memory at the 8 Kb mark
so that the smallest address format could be used, for speed; that
design was done at a time when IRIX only supported 4 Kb virtual pages.)
 
jacob navia

Walter said:
Does Linux promise memory protection on every implementation?
If a system that Linux is ported to cannot do the kind of memory
protection you are thinking of, then is it prohibited from calling
itself Linux? uClinux has been ported to a number of embedded
environments: is it Linux on those environments or is it not?

Does Linux promise that the kind of memory protection you are thinking
of is active on -every- page of memory? What if there happens to be
writable memory within the first "page" of process memory?


SGI IRIX "has memory protection" too, but on IRIX 6.4 and later on
"big" systems that run with 16 Kb virtual pages, virtual address 0 is
*not* write protected for reasons having to do with the graphics
subsystems (which were designed to expect memory at the 8 Kb mark
so that the smallest address format could be used, for speed; that
design was done at a time when IRIX only supported 4 Kb virtual pages.)

The OP said:

"I am working on MontaVista Linux on a PPC board"

The PowerPC is a very modern chip with memory protection.
Linux on a PPC chip is surely a combination where any NULL
access will crash, as it does on x86 and on most
modern OS/chip combinations.

There can be exceptions, of course, but they are that: exceptions,
and obviously you have to check for NULL in those implementations.
 
Frederick Gotham

Keith Thompson posted:
That is extraordinarily bad advice.

Of course, it depends on the requirements of the project and so forth, but
there are many circumstances in which I'd just assume that malloc succeeds.
At the very most, I might do something like:

#include <stddef.h>
#include <stdlib.h>

void *ExitiveMalloc(size_t const size)
{
    void *const p = malloc(size);

    if(!p) exit(EXIT_FAILURE);

    return p;
}


Here's an example of where I'd be wreckless with malloc:

#include <stddef.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <ctype.h>

char *const TruncateAndMakeUppercase(char const *const arg, size_t const len)
{
    int const assert_dummy = (assert(!!arg), 0);

    char *const retval = ExitiveMalloc(len + 1), *p = retval;

    strncpy(retval, arg, len);

    while( *p++ = toupper((char unsigned)*p) );

    return (void)assert_dummy, retval;
}

(I'm not entirely sure if I violate a sequence point with the "*p++ = ").
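For reference, here is one way to write that loop with no sequence-point
question at all: p is modified exactly once per iteration, and every other
access to it is just a read. This is only a minimal sketch (the helper name
is mine), and it assumes the buffer is already null-terminated, which
strncpy does not guarantee when the source is at least len characters long.

#include <ctype.h>

/* Hypothetical helper: uppercase a null-terminated string in place. */
static void UppercaseInPlace(char *p)
{
    for ( ; *p != '\0'; ++p)
        *p = (char)toupper((unsigned char)*p);
}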
 
Richard Heathfield

Frederick Gotham said:
Of course, it depends on the requirements of the project and so forth, but
there are many circumstances in which I'd just assume that malloc
succeeds.

It's still a stupid assumption, no matter how many times you make it.
At the very most, I might do something like:

#include <stddef.h>
#include <stdlib.h>

void *ExitiveMalloc(size_t const size)
{
    void *const p = malloc(size);

    if(!p) exit(EXIT_FAILURE);

Bye-bye user's data. Who cares?

    return p;
}


Here's an example of where I'd be wreckless with malloc:

We have no shortage of examples of stupid code. Why add to the mess? And
your code is far from wreckless. Spelling flames for the sake of it are
lame, but the difference in meaning between "reckless" and "wreckless" is
significant.
 
David Resnick

Frederick said:
Of course, it depends on the requirements of the project and so forth, but
there are many circumstances in which I'd just assume that malloc succeeds.

Do you do this on code that others use and count on, or just personal
toy programs?

If you do this on code that others use and count on, then I think
poorly of your judgement and moreover of the abilities of those that
review and approve of your code. This is bad practice, and bad advice
to give. Small *alloc calls can and do fail, and dealing with the
failure (even if by just an error message and exit) at the point of
failure is really the thing to do in basically all serious projects.
If you think the opinions of the posters on the board are without
weight, how about this from Kernighan and Pike's "The Practice of
Programming", page 14: "...in a real program the return value of
malloc, realloc, strdup, or any other allocation routine should always
be checked". I suggest reading that book; it has lots of helpful ideas
about developing quality software.
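In practice, the rule Kernighan and Pike give usually takes the shape of a
small wrapper that reports the failure at the point where it happens and
then exits. A minimal sketch, assuming exiting is an acceptable response
(the name xmalloc and the message format are illustrative, not from the
thread):

#include <stdio.h>
#include <stdlib.h>

/* Check at the point of failure and say what failed before exiting. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);

    if (p == NULL) {
        fprintf(stderr, "xmalloc: allocation of %lu bytes failed\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}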

You are currently arguing for this assertion.

Doing B depends on A succeeding. Don't check that A succeeded before
doing B.

Sounds good, no?

-David
 
Frederick Gotham

Richard Heathfield posted:
It's still a stupid assumption, no matter how many times you make it.


(1) Let's say we're running our program.
(2) Let's say we want to allocate 8 bytes.
(3) We call malloc.
(4) It fails.
(5) We must save our user's data and exit the program.
(6) But we must allocate 74 kilobytes in order to save the user's data.


The point I'm making is that it depends on the amount of memory you're trying
to allocate. If you're trying to allocate 6.7 MB, then by all means, take
precautions and preserve the user's data. If you're trying to allocate 13
bytes... well... abandon ship.

We have no shortage of examples of stupid code.


You cannot assert that the code is stupid unless you know the details of the
project. If the failure of the invocation of "TruncateAndMakeUppercase" leads
to an irreparable situation, then it might be quite pertinent to exit.

Why add to the mess? And your code is far from wreckless. Spelling
flames for the sake of it are lame, but the difference in meaning
between "reckless" and "wreckless" is significant.


Forgive my ignorance, but allow me a minute's recess to visit
dictionary.com...

I should have written "reckless" rather than "wreckless".
 
Richard Heathfield

Frederick Gotham said:
(1) Let's say we're running our program.
(2) Let's say we want to allocate 8 bytes.
(3) We call malloc.
(4) It fails.

You'll never know this unless you check, of course. And that's why you have
to check. So That You Know.
(5) We must save our user's data and exit the program.
(6) But we must allocate 74 kilobytes in order to save the user's data.

Fine, so you allocate an emergency reserve at the beginning of the program,
to cover your needs (and you try to find a smarter way to preserve the
data, if possible - one that doesn't need so much memory). I've been
explaining this for years, to anyone who'll listen. Few do. The cliffs look
so inviting, and the sea down there is so pretty...
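A minimal sketch of that emergency-reserve idea, with illustrative names and
an arbitrary 64 KB figure (neither is from the thread):

#include <stdlib.h>

static void *emergency_reserve;

/* Call once at program start; returns 0 if even the reserve could not
   be obtained. */
static int reserve_acquire(void)
{
    emergency_reserve = malloc(64 * 1024);
    return emergency_reserve != NULL;
}

/* Call from the allocation-failure path: give the reserve back so the
   save-and-exit code has memory to work with. */
static void reserve_release(void)
{
    free(emergency_reserve);
    emergency_reserve = NULL;
}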
The point I'm making is that it depends on the amount of memory you're
trying to allocate.

No, it doesn't. That's ludicrous. What it depends on is whether you want
your program to work robustly. The kind of attitude you are espousing here
is typical amongst a great many "commercial" programmers, and it's one
reason why a great many "commercial" programs are a load of rubbish.
If you're trying to allocate 6.7 MB, then by all
means, take precautions and preserve the user's data. If you're trying to
allocate 13 bytes... well... abandon ship.

So you are claiming that the user's data is less important if you're only
trying to allocate a small amount of memory? Where's the logic in that?
That's just ridiculous! What if your word processor behaved that way? You'd
be furious, and rightly so.

You cannot assert that the code is stupid unless you know the details of
the project.

Yes, I can. It was evident from looking at the code that it was stupid. Even
the author wasn't sure it was correctly written. And of course there is my
rule of thumb: "never trust the code of anyone too stupid to check malloc,
too boorish to apologise for their libels, or both".
 
Charlton Wilbur

jacob navia said:
He is running linux, then I assume memory protection!
Who cares about each possible CPU in the planet?
He is in a certain situation, and I give specific advice to THAT
situation!

<OT> And even in that specific situation, it's bad advice. There's no
guarantee that a pointer with a random value will point to memory that
that process does not have access to. Memory protection only works
when you attempt to scribble on memory belonging to another process,
not when you attempt to scribble on memory belonging to your own
process. </OT>

This is why you're consistently admonished to keep your posts on the
topic of Standard C.

Charlton
 
Frederick Gotham

Richard Heathfield posted:
Yes, I can. It was evident from looking at the code that it was stupid.
Even the author wasn't sure it was correctly written.

You have convinced me. My main peeve about checking mallocs of small
quantities of memory is that it's tedious, and so I think "ExitiveMalloc"
would relieve a bit of stress, not to mention improve the signal-to-noise
ratio of the code (who wants to read countless lines of error-checking?).
Taking your advice on board, though, with regard to pre-allocating some
reserve memory in case of emergency, I've cooked up the following. It's a
quick sample of a program which holds important user data in memory -- data
which must be preserved even when the program fails. I think ExitiveMalloc
works quite well.

---------- FILE BEGIN: exitmal.h ----------

#include <stddef.h>
void *ExitiveMalloc(size_t);

---------- FILE END: exitmal.h ----------

---------- FILE BEGIN: exitmal.c ----------

#include <stddef.h>
#include <stdlib.h>

void *ExitiveMalloc(size_t const size)
{
    void *const p = malloc(size);

    if(!p) exit(EXIT_FAILURE);

    return p;
}

---------- FILE END: exitmal.c ----------

---------- FILE BEGIN: core.c ----------

#include <stdlib.h>

#include "exitmal.h"

void *mem_needed_for_saving;
int backup_required = 0;

void SetupBackupSystem(void)
{
    void InitiateEmergencyBackupSequence(void);

    mem_needed_for_saving = ExitiveMalloc(98304U);
    atexit(InitiateEmergencyBackupSequence);
}

void InitiateEmergencyBackupSequence(void)
{
    if(backup_required)
    {
        /* Use the pre-allocated memory to facilitate saving */
    }
}

int main(void)
{
    SetupBackupSystem();

    /* Let's pretend that this is a circuit schematic package,
       and that the user is working on a document. */

    /* Now let's say that TruncateAndMakeUppercase is called, and
       that it fails to allocate 23 bytes. TruncateAndMakeUppercase
       contains a call to ExitiveMalloc, and so "exit" is called
       upon the allocation failure. */

    return 0;
}

---------- FILE END: core.c ----------


If an allocation fails in any function that uses ExitiveMalloc, we still
preserve our user's precious data. What do you think?
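One way the pre-allocated block can be used is simply as headroom: hand it
back at the start of the handler so the save path's own allocations
(including stdio's buffers) have a chance of succeeding. A sketch of a
possible body for InitiateEmergencyBackupSequence, in which document,
document_len, and "backup.dat" are hypothetical names, not part of the
program above:

#include <stdio.h>
#include <stdlib.h>

extern void *mem_needed_for_saving;   /* from core.c above */
extern int backup_required;           /* from core.c above */

extern const char *document;          /* hypothetical document buffer */
extern size_t document_len;

void InitiateEmergencyBackupSequence(void)
{
    if(backup_required)
    {
        FILE *fp;

        /* Release the reserve first, so stdio can allocate its own
           buffers even though malloc has already started failing. */
        free(mem_needed_for_saving);
        mem_needed_for_saving = NULL;

        fp = fopen("backup.dat", "wb");
        if(fp != NULL)
        {
            fwrite(document, 1, document_len, fp);
            fclose(fp);
        }
    }
}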
 
Nelu

Frederick said:
You have convinced me. My main peeve about checking mallocs of small
quantities of memory is that it's tedious, and so I think "ExitiveMalloc"
would relieve a bit of stress, not to mention improve the signal-to-noise
ratio of the code (who wants to read countless lines of error-checking?).

I would like to read the countless lines of error checking,
because I have had bad experiences with other people's code when they
don't do it: I end up writing the checks myself to track down the
problem in buggy code that has too many unchecked failure points.
 
Walter Roberson

Charlton Wilbur said:
<OT> And even in that specific situation, it's bad advice. There's no
guarantee that a pointer with a random value will point to memory that
that process does not have access to. Memory protection only works
when you attempt to scribble on memory belonging to another process,
not when you attempt to scribble on memory belonging to your own
process. </OT>

Depends on the system. It isn't uncommon at all for systems to be
able to offer protection against writing to particular pages.
That would protect a process from itself.

For an obvious example of this, think of compilers that put
string literals into non-writable memory.
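A minimal illustration of that case; the C standard only says the behaviour
is undefined, so whether the write actually faults is entirely up to the
implementation:

#include <stdio.h>

int main(void)
{
    char *s = "hello"; /* the literal may live in a read-only page */

    s[0] = 'H';        /* undefined behaviour: may fault, may silently "work" */
    printf("%s\n", s);
    return 0;
}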

Per-page memory protection bits that I've heard of on various systems
include: Read, Write, Execute, and Copy-on-Write.

On most of those systems, Execute implies Read as well, but on
some apparently it doesn't: those systems can be configured so that
no -explicit- read accesses to the pages are made.

This is why you're consistently admonished to keep your posts on the
topic of Standard C.

Unfortunately, your technical correction was itself flawed. And
probably -my- technical correction to your correction has some flaw
as well. It would have been better if Jacob hadn't made the
non-portable assumption, which probably wasn't even completely
portable within the domain of systems ("Linux") he was intending
his advice to apply to.
 
Keith Thompson

Frederick Gotham said:
Of course, it depends on the requirements of the project and so forth, but
there are many circumstances in which I'd just assume that malloc succeeds.
[snip]

People should of course decide for themselves, but I personally
recommend avoiding any advice offered by Frederick Gotham. Anyone who
(a) advocates failing to check the result of malloc() and (b)
stubbornly refuses to apologize for repeated libel is not to be
trusted.

Frederick: I again call on you to apologize for your earlier insults.
(You know perfectly well what I'm talking about.)
 
jacob navia

Charlton said:
<OT> And even in that specific situation, it's bad advice. There's no
guarantee that a pointer with a random value will point to memory that
that process does not have access to. Memory protection only works
when you attempt to scribble on memory belonging to another process,
not when you attempt to scribble on memory belonging to your own
process. </OT>

This is why you're consistently admonished to keep your posts on the
topic of Standard C.

Charlton
According to the standard, malloc can return either NULL or
a VALID pointer so your argumentation makes no sense.

Malloc can never return an invalid pointer (at least according
to the specs).

You are confusing two different things. I am speaking about not testing
the result of malloc, not about using uninitialized variables or whatever!!!

The idea is that in a debug setting, a failed
malloc means that a bad size was passed in. In that case it
is better to let the program crash right there and see
with the debugger what is happening.

This is one way of debugging software. It is maybe not
the best, and it relies on specific OS characteristics.
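A minimal sketch of that debugging style (the wrapper name is mine, not
jacob's): in a build with NDEBUG not defined, a failed allocation halts
right at the assert, where the offending size is visible in the debugger.

#include <assert.h>
#include <stdlib.h>

static void *debug_malloc(size_t size)
{
    void *p = malloc(size);

    assert(p != NULL); /* stop here in a debug build if malloc failed */
    return p;
}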
 
