When to use automatic variables and when to use malloc

  • Thread starter Jorge Peixoto de Morais Neto

Eric Sosman

Jorge said:
[...]
When I said "If there is no memory, just let it crash" I was
thinking of applications like video encoding. If, at the beginning of
the encoding, a buffer cannot be allocated, the program should just
exit, and I don't care about cleanup (since we have no useful data to
be flushed). But of course, I wouldn't do this in the main encoding
loop, since there may be useful data to be flushed, so I should call
exit. Heck, it may happen that the program runs out of memory on the
very last frame of the video...

<off-topic level="mostly">

Something of a reach, but the Marx Brothers addressed this
issue in a scene from "A Night at the Opera." For some reason
Chico and Harpo are pretending to be early aviators who have
flown the Atlantic and are now receiving a heroes' welcome in
America. Chico takes the microphone to describe their exploit
(paraphrase, from memory):

First time we fly to America, we get halfway across, we
run outta gas. So we gotta go back. Second time we
take plennya gas, we get almost all the way across, and
whaddya know? We run outta gas. So we gotta go back.
Third time, we take-a the boat.
</off-topic>

Forgive my ignorance, but so what i

Something seems to have been lost -- is this where you ran
outta gas?

After the failure (assuming you choose to exit the program),
there are two likely candidates for the argument to exit(): either
EXIT_FAILURE, which means "failure (of some sort)" on all systems,
or some system-specific failure code that provides more information
("failed in such-and-such a way" rather than just "failed"). There
does not seem to be any standardization of what these other failure
codes might mean, or even whether they're available: You've got
everything from VMS' highly-structured system of condition codes
to Unix' "zero is good, all else is bad."

The program's exit status should not be confused with the errno
value from any particular operation. It is tempting to think of
errno as the "exit status" of a library call, but the analogy is
not good enough. For one thing, errno always has a value but that
value is only meaningful if a function has actually failed. The
program's exit status, on the other hand, always has its meaning
(however that meaning is interpreted by the host system).

By the way, only a subset of library functions are actually
required to set errno when they fail; others may do so (and often
do), but practice varies from implementation to implementation.
The Standard does not require malloc() to set errno; some versions
do and some don't, and some set it "sometimes." There are people
who argue that this means you shouldn't call perror() after failure
in malloc() or fopen() or any other function that isn't guaranteed
to set errno, because you risk printing a meaningless and possibly
misleading error: "Not a typewriter" for an out-of-memory condition,
for example. I belong to what might be called the optimistic camp:
I'm willing to take the risk of emitting a garbage message in the
hope that sometimes the message may be helpful. Take your pick.
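To make the optimistic camp's trade-off concrete, here is a minimal sketch of a hypothetical allocation wrapper (xmalloc is my name for it, not anything standard) that reports errno only when the implementation appears to have set it, sidestepping the "Not a typewriter" problem:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper: report errno only if malloc() appears to
 * have set it; the Standard does not require malloc() to do so. */
void *xmalloc(size_t n)
{
    errno = 0;                 /* clear any stale value first */
    void *p = malloc(n);
    if (p == NULL) {
        if (errno != 0)        /* this implementation chose to set it */
            fprintf(stderr, "malloc: %s\n", strerror(errno));
        else
            fprintf(stderr, "malloc: out of memory\n");
        exit(EXIT_FAILURE);    /* the one portable failure status */
    }
    return p;
}
```

Clearing errno before the call is what makes the check meaningful; a value left over from an earlier, unrelated failure would otherwise be reported as if it described this one.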
 

websnarf

I've heard of all these things, have experienced some of
them, and do not see how they are due to malloc().

Ok, so you see how the OP wrote "The problems and dangers"? See, he's
referring to the same things I am. And some people in the world
actually measure for those kinds of risk, and don't really give a
rat's ass about *YOUR* personal experience on the matter. See how that
works? When you talk about things that have been objectively measured
and observed, you are able to assume them with rational people and
have rational discourse about the real issues at hand rather than
having to reiterate the obvious. Objective reality, and things that
are measured inform some people about one way of looking at the world,
while your experience informs *YOU* in an entirely different kind of
way. Your way works for discourse with people who are your fans or
something of that nature, while the objective way is more suited for
this special class of people called "rational" who I don't have to
have ever met before (or even agree with, on very much) to be able to
converse with. Of course I do give up the opportunity to talk to the
"irrationalists", but I guess that's just the price I have to pay.
 

Andrew Poelstra

Ok, so you see how the OP wrote "The problems and dangers"? See, he's
referring to the same things I am. And some people in the world
actually measure for those kinds of risk, and don't really give a
rat's ass about *YOUR* personal experience on the matter. See how that
works? When you talk about things that have been objectively measured
and observed, you are able to assume them with rational people and
have rational discourse about the real issues at hand rather than
having to reiterate the obvious. Objective reality, and things that
are measured inform some people about one way of looking at the world,
while your experience informs *YOU* in an entirely different kind of
way. Your way works for discourse with people who are your fans or
something of that nature, while the objective way is more suited for
this special class of people called "rational" who I don't have to
have ever met before (or even agree with, on very much) to be able to
converse with. Of course I do give up the opportunity to talk to the
"irrationalists", but I guess that's just the price I have to pay.

http://www.linux-watch.com/news/NS5656359853.html

"As of 2006, at least 20 percent of all in-house business programs were
still being written in Basic."

(And it goes on to say that most of these Basic programmers can't do
anything outside of Basic.)

*** Stupid programmers are the cause of far more problems than poorly
designed languages are. C is not difficult to use safely. The problem is
when non-C programmers try to use it without having learnt how to
program properly. ***

Therefore, /more/ C, at least in our schools, would help code quality
more than /less/ C would.
 

santosh

Andrew said:
Ok, so you see how the OP wrote "The problems and dangers"? See, he's
referring to the same things I am.
[ ... ] Your way works for discourse with people who are your fans or
something of that nature, while the objective way is more suited for
this special class of people called "rational" who I don't have to
have ever met before (or even agree with, on very much) to be able to
converse with. Of course I do give up the opportunity to talk to the
"irrationalists", but I guess that's just the price I have to pay.

http://www.linux-watch.com/news/NS5656359853.html

"As of 2006, at least 20 percent of all in-house business programs were
still being written in Basic."

(And it goes on to say that most of these Basic programmers can't do
anything outside of Basic.)

*** Stupid programmers are the cause of far more problems than poorly
designed languages are. C is not difficult to use safely. The problem is
when non-C programmers try to use it without having learnt how to
program properly. ***

Therefore, /more/ C, at least in our schools, would help code quality
more than /less/ C would.

IMHO, introducing C as the second programming language is good. Since
C doesn't automate many things that some other languages do, it
provides an excellent opportunity to learn careful programming.
 

Nick Keighley

Jorge said:
Good example. It reminds me that a wrapper like mine should not be
blindly used. I still think it is OK to use in a lot of circumstances,
though, as long as you know what you are doing.

the people who wrote the code for my car's anti-lock brakes
used malloc()?!!

<snip>

--
Nick Keighley

"High Integrity Software: The SPARK Approach to Safety and Security"
Customers interested in this title may also be interested in:
"Windows XP Home"
(Amazon)
 

Jorge Peixoto de Morais Neto

there are two likely candidates for the argument to exit(): either
EXIT_FAILURE, which means "failure (of some sort)" on all systems,
or some system-specific failure code that provides more information
("failed in such-and-such a way" rather than just "failed"). There
does not seem to be any standardization of what these other failure
codes might mean, or even whether they're available: You've got
everything from VMS' highly-structured system of condition codes
to Unix' "zero is good, all else is bad."
There is sysexits.h (an attempt at standardization), although from
reading it I don't know which error I should use. Maybe EX_OSERR, but
I'm not sure.
The program's exit status should not be confused with the errno
value from any particular operation. It is tempting to think of
errno as the "exit status" of a library call, but the analogy is
not good enough. For one thing, errno always has a value but that
value is only meaningful if a function has actually failed. The
program's exit status, on the other hand, always has its meaning
(however that meaning is interpreted by the host system).

By the way, only a subset of library functions are actually
required to set errno when they fail; others may do so (and often
do), but practice varies from implementation to implementation.
The Standard does not require malloc() to set errno; some versions
do and some don't, and some set it "sometimes." There are people
who argue that this means you shouldn't call perror() after failure
in malloc() or fopen() or any other function that isn't guaranteed
to set errno, because you risk printing a meaningless and possibly
misleading error: "Not a typewriter" for an out-of-memory condition,
for example. Funny!
I belong to what might be called the optimistic camp:
I'm willing to take the risk of emitting a garbage message in the
hope that sometimes the message may be helpful. Take your pick.
Well, Unix98 requires malloc to set errno to ENOMEM (from the malloc
man page), and that's good enough for me, in my particular situation.
 

Roberto Waltman

Nick Keighley said:
the people who wrote the code for my car's anti-lock brakes
used malloc()?!!

Hopefully not! My comment was with regard to "... I just close the
program ...", not to the actual usage of malloc().

But just to be safe, next time you go for an oil change, ask your
mechanic to check the malloc()'s on any on-board computer.
And mail 'the people' a copy of the MISRA standard ... ;)

Roberto Waltman

[ Please reply to the group,
return address is invalid ]
 

Keith Thompson

CBFalconer said:
(e-mail address removed) wrote: [...]
Is an array really equivalent to a non-mutable non-NULL pointer?

Yes, except that that pointer is a different beast than the kind
you normally bandy about and pass hither and yon. It includes such
information as the size and type of the array, the scope, the
location, and what that location is relative to (local or global
storage, which local storage, etc.) (using the common usage for
global/local). It typically lives in the compiler's internal
tables.

May I assume that by "Yes", you really meant "No"?

Arrays are not pointers. Pointers are not arrays. An array object
declaration implies a pointer *value* -- but so does any object
declaration:

int i;
/* &i is a pointer to the object */

The only thing special about arrays is the implicit array-to-pointer
conversion.
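A short illustration of the distinction (the function name is mine, not anything from the thread):

```c
#include <stdbool.h>

/* An array is not a pointer: sizeof sees the whole array, and the
 * implicit conversion yields a pointer *value*, namely &a[0]. */
bool array_vs_pointer_demo(void)
{
    int a[10];
    int *p = a;                           /* implicit array-to-pointer conversion */

    return sizeof a == 10 * sizeof(int)   /* size of the whole array  */
        && sizeof p == sizeof(int *)      /* size of just a pointer   */
        && p == &a[0];                    /* what the conversion gives */
}
```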
 

Keith Thompson

Jorge Peixoto de Morais Neto said:
There is sysexits.h (an attempt of standardization) , although by
reading it I don't know which error should I use. Maybe EX_OSERR, but
I'm not sure.
[...]

sysexits.h is system-specific. I see it on Solaris and Linux, so it
may be common to Unix-like systems, but it doesn't seem to be
mentioned by POSIX.

Furthermore, it doesn't appear to be an attempt to standardize exit
codes in general. Here's part of a comment from the Solaris version:

* SYSEXITS.H -- Exit status codes employed by the mail subsystem.
*
* This include file attempts to categorize possible error
* exit statuses for mail subsystem.
*
* Error numbers begin at EX__BASE to reduce the possibility of
* clashing with other exit statuses that random programs may
* already return.

EX__BASE is 64; few Unix programs that I've seen use exit codes that
large.

0, EXIT_SUCCESS, and EXIT_FAILURE are the only guaranteed portable
exit codes.

<OT>
Common practice on Unix is to use 0 for success, 1 for general error,
and possibly other small integers for specific failure modes
documented for each command.
</OT>

I've seen code that passes errno values to exit(), but it's not common
practice on any systems I'm aware of. In a more strongly-typed
language, errno and the parameter to exit() would likely have
incompatible types.
 

Jorge Peixoto de Morais Neto

Jorge Peixoto de Morais Neto said:
reading it I don't know which error should I use. Maybe EX_OSERR, but
I'm not sure.

[...]

sysexits.h is system-specific. I see it on Solaris and Linux, so it
may be common to Unix-like systems, but it doesn't seem to be
mentioned by POSIX.

Furthermore, it doesn't appear to be an attempt to standardize exit
codes in general. Here's part of a comment from the Solaris version:

* SYSEXITS.H -- Exit status codes employed by the mail subsystem.
*
* This include file attempts to categorize possible error
* exit statuses for mail subsystem.
I read somewhere that sysexits was an attempt at general
standardization.
My version says
"This include file attempts to categorize possible error
* exit statuses for system programs, notably delivermail
* and the Berkeley network."

Anyway, programs are left to make an arbitrary choice of exit
status, with the recommendation (AFAIK) that it be a small integer.
So I arbitrarily choose 12 (the value of ENOMEM on my system). But it
is better to use exit(MYENOMEM), with MYENOMEM defined to 12, and
document it, than to use exit(ENOMEM), because we don't want the
return value of the program to change from system to system.
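A sketch of what Jorge describes: a program-specific, documented exit code rather than the host's ENOMEM value (MYENOMEM and alloc_or_die are hypothetical names, not library facilities):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical program-specific exit status, documented as "out of
 * memory". Fixed at 12 so the program's exit status does not change
 * from system to system the way exit(ENOMEM) would. */
#define MYENOMEM 12

void *alloc_or_die(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fputs("out of memory\n", stderr);
        exit(MYENOMEM);
    }
    return p;
}
```

The point of the #define is documentation: a caller of the program can rely on "exit status 12 means out of memory" without knowing anything about the build host's errno values.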
 

Keith Thompson

Your way works for discourse with people who are your fans or
something of that nature, while the objective way is more suited for
this special class of people called "rational" who I don't have to
have ever met before (or even agree with, on very much) to be able to
converse with. Of course I do give up the opportunity to talk to the
"irrationalists", but I guess that's just the price I have to pay.

You give up the opportunity to talk to a lot of people, many of whom
are quite rational. It's the price you pay for being rude.
 

Andrew Poelstra

IMHO, introducing C as the second programming language is good. Since
C doesn't automate many things that some other languages do, it
provides an excellent opportunity to learn careful programming.

I think that C should be a /first/ language, and others taught on top of
that with the explanation that they do feature <x> behind the scenes or
automatically. Learning high-level languages without any idea of what's
happening is, IMHO, a recipe for security mistakes. How can a student
protect against buffer overflows if he's never heard of "array
boundaries"?

Learning C after the fact wouldn't have the same effect, I think.
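Andrew's point can be made concrete with a tiny sketch: C itself never checks array boundaries, so the check is the programmer's job (checked_store is a hypothetical helper, not a library function):

```c
#include <stdbool.h>
#include <stddef.h>

/* Store v at buf[i] only if i is in range. Without such a check,
 * writing past len is undefined behaviour -- the classic buffer
 * overflow that C will not catch for you. */
bool checked_store(int *buf, size_t len, size_t i, int v)
{
    if (buf == NULL || i >= len)
        return false;          /* refuse the out-of-bounds write */
    buf[i] = v;
    return true;
}
```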
 

user923005

(e-mail address removed) writes:

[...]
Your way works for discourse with people who are your fans or
something of that nature, while the objective way is more suited for
this special class of people called "rational" who I don't have to
have ever met before (or even agree with, on very much) to be able to
converse with. Of course I do give up the opportunity to talk to the
"irrationalists", but I guess that's just the price I have to pay.

You give up the opportunity to talk to a lot of people, many of whom
are quite rational. It's the price you pay for being rude.

The real tragedy here is that he is interesting and intelligent, but
even more curt than Dan Pop. I still read what he has to say, but I
guess that he will get himself killfiled by a lot of readers. If a
whinging twit with no brains is brusque and crusty, then I killfile
them or ignore them or whatever. But there are clearly exceptions who
are still worth reading.

On the other hand, we could all learn a lesson from posters like
Tanmoy Bhattacharya or Chris Torek who are never, ever rude (even when
someone is screaming for a clue-by-four upside the head).

IMO-YMMV.
 

CBFalconer

Andrew said:
I think that C should be a /first/ language, and others taught on
top of that with the explanation that they do feature <x> behind
the scenes or automatically. Learning high-level languages without
any idea of what's happening is, IMHO, a recipe for security
mistakes. How can a student protect against buffer overflows if
he's never heard of "array boundaries"?

Learning C after the fact wouldn't have the same effect, I think.

No, Pascal should be the first language. That will teach
organization without the confusion of C's arcane symbology. The
later addition of C to the repertoire is quite easy. The student
may need to rework his i/o semantics, or write C functions to
implement Pascal standard procedures.
 

CBFalconer

Keith said:
You give up the opportunity to talk to a lot of people, many of whom
are quite rational. It's the price you pay for being rude.

s/rude/boorish/
 

Default User

CBFalconer said:
s/rude/boorish/

And that's the important distinction. Most of us have occasion to be
rude (excepting Chris Torek, perhaps). Paul's consistent ill-mannered
approach to discourse landed him in my killfile long ago.

Whether I miss out on some pearl of wisdom or not doesn't keep me up at
night. Besides, if he actually says something interesting, someone else
will likely reply and quote it.





Brian
 

Randy Howard

No, Pascal should be the first language.

I don't disagree, but in reality, the important thing is the quality of
the instructor and the curriculum, regardless of the language. From
the "homework" assignments seen here regularly, I'd say that quality
versions of them are in very short supply.
That will teach organization without the confusion of C's arcane symbology.
The later addition of C to the repertoire is quite easy. The student
may need to rework his i/o semantics, or write C functions to
implement Pascal standard procedures.

It's also quite important to learn low-level system architecture issues
as well, once part of degree plans, now extinct.
 

Jorge Peixoto

Unfortunately, there is no elegant way to catch VLA allocation errors
and there is no way to reliably expect or detect them either.
You could do the old 'stack address subtraction' trick, but you can't
really (portably) know for sure how much stack is available.

On my system, using huge automatic variables gives me a message of
Segmentation fault. However, when I write a handler for SIGSEGV, the
handler is ignored and the program just crashes. But if I modify the
program to read a NULL pointer, the handler is called.

What is happening? Can't segmentation faults caused by stack overflow
be caught?
 

Eric Sosman

Jorge Peixoto wrote on 03/01/07 08:51:
On my system, using huge automatic variables gives me a message of
Segmentation fault. However, when I write a handler for SIGSEGV, the
handler is ignored and the program just crashes. But if I modify the
program to read a NULL pointer, the handler is called.

What is happening? Can't segmentation faults caused by stack overflow
be caught?

The C language itself makes no promises about what can
and can't be done in such situations. Your system might make
additional promises. Check the system's documentation.

<off-topic>

One thing you might find in the system's documentation is
that signal handlers might not be called if there's no stack
space remaining to call them with ... You might also find
that you can prepare for this problem by using an "alternate
signal stack" or something of the kind.

</off-topic>

... but I emphasize: all such might-be's are properties of
the system you happen to be running C on, not of C.
 
