malloc syntax


pertheli

Hello all

What is the difference between Method 1 and Method 2 below? Is Method 2 safe to use?


typedef short Word;
typedef unsigned char Char;

int nAllocSize = large number;

//Method 1 crashes in my machine for large nAllocSize
Word* pShort = (Word*)malloc(nAllocSize*sizeof(Word));

//Method 2 seems ok even if nAllocSize is large
Word* pShort;
Char* pChar = (Char*)malloc(nAllocSize*sizeof(Char)*2);
pShort = (Word*)pChar;
 

Dan Pop

pertheli said:
What is the difference between Method 1 and Method 2 below? Is Method 2 safe to use?

typedef short Word;
typedef unsigned char Char;

int nAllocSize = large number;

//Method 1 crashes in my machine for large nAllocSize
Word* pShort = (Word*)malloc(nAllocSize*sizeof(Word));

//Method 2 seems ok even if nAllocSize is large
Word* pShort;
Char* pChar = (Char*)malloc(nAllocSize*sizeof(Char)*2);
pShort = (Word*)pChar;

If sizeof(Word) == 2, the two methods are perfectly equivalent.
The right way of doing it, however, is:

size_t size = nAllocSize * sizeof *pShort;

if (size < nAllocSize || size < sizeof *pShort) {
/* we want to allocate more bytes than size_t can represent */
}

pShort = malloc(size);


Dan
 

Robert Stankowic

pertheli said:
Hello all

What is the difference between Method 1 and Method 2 below? Is Method 2 safe to use?


typedef short Word;
typedef unsigned char Char;

Why??
What's wrong with short and char?
Every C programmer knows these types and so the typedefs just add noise to
the code and force a maintainer to look the typedefs up...
int nAllocSize = large number;

//Method 1 crashes in my machine for large nAllocSize
Word* pShort = (Word*)malloc(nAllocSize*sizeof(Word));

#include <stdlib.h>
and make that
short *short_array = malloc(<large number> * sizeof *short_array);
Are you _sure_ that sizeof(short) == 2 in your implementation??
Method 1 crashing and Method 2 succeeding suggests otherwise.
//Method 2 seems ok even if nAllocSize is large
Word* pShort;
Char* pChar = (Char*)malloc(nAllocSize*sizeof(Char)*2);

char *other_array = malloc(<large number> * sizeof *other_array * 2);
And if sizeof(short) > 2 and you use the malloc()ed space you will probably
trigger WW3

The two methods are _not_ repeat _not_ equivalent.
Robert
 

John Bode

pertheli said:
Hello all

What is the difference between Method 1 and Method 2 below? Is Method 2 safe to use?


typedef short Word;
typedef unsigned char Char;

int nAllocSize = large number;

//Method 1 crashes in my machine for large nAllocSize
Word* pShort = (Word*)malloc(nAllocSize*sizeof(Word));

Lose the cast. You don't need it (in C, anyway), and having it there
could suppress a diagnostic if you don't have a prototype for malloc()
in scope (i.e., you forgot to #include stdlib.h -- you *did* remember
to #include stdlib.h, right?).

Word *pShort = malloc (nAllocSize * sizeof *pShort);

works just as well, and is a little cleaner to look at.

As to why one crashes when the other doesn't, I don't know. Provided
sizeof (short) == 2 * sizeof (char), they should allocate the same
amount of space.
 

August Derleth

pertheli said:
Hello all

What is the difference between Method 1 and Method 2 below? Is Method 2 safe to use?

As a matter of fact, both are incorrect.
pertheli said:
typedef short Word;
typedef unsigned char Char;

This typedef could be confusing. Try uchar next time.
int nAllocSize = large number;

//Method 1 crashes in my machine for large nAllocSize
Word* pShort = (Word*)malloc(nAllocSize*sizeof(Word));

First, don't cast the return value of malloc. It returns a pointer to
void, which will be converted to the correct type automatically
without any casting. Casting only serves to hide the fact that you
didn't include stdlib.h like you should have.

Second, always check the return value of malloc. Always. Every time.
It's real easy, too. Watch:

#include <stdio.h>
#include <stdlib.h>

int *intbuf;

intbuf = malloc(50 * sizeof(*intbuf));
if (NULL == intbuf) {
    fprintf(stderr, "malloc failed to allocate core\n");
    exit(EXIT_FAILURE);
}

There. That's all. It will save you many headaches later on.

Third, while the rest of the code is correct, you should usually apply
sizeof to objects, not type names. That way, when you change the type
of an object, you don't have to hunt through your code changing a
bunch of sizeofs.
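
To illustrate that point (a minimal, hypothetical sketch; the name 'buf' is not from the original post):

#include <stdlib.h>

int main(void)
{
    /* The element type appears only in the declaration; every sizeof
       refers to the object, so changing 'long' to 'double' here needs
       no other edits. */
    long *buf = malloc(100 * sizeof *buf);
    if (buf != NULL) {
        /* ... use buf ... */
    }
    free(buf);
    return 0;
}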
//Method 2 seems ok even if nAllocSize is large
Word* pShort;
Char* pChar = (Char*)malloc(nAllocSize*sizeof(Char)*2);
pShort = (Word*)pChar;

Again, don't cast the return value of malloc. (Yes, I'm bound to say
that every time. ;)) This will probably work, but when you try to
access pShort you may not get sensible results. For one thing, it
assumes that sizeof(short) is 2*. That could easily be false on a
given system. You could do it better with
sizeof(*pShort)/sizeof(*pChar). For another thing, you didn't copy the
array, only the pointer to the first element of the array. A change to
*pShort will modify *pChar and vice versa, and this could easily be
surprising.

*sizeof(char) is defined to be 1. You can rely on this.

And now I have a question: When I put a Char in pChar[0] and
immediately after read a Word out of pShort[0], is the result defined?
Or have I effectively accessed uninitialized memory?
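
For concreteness, a minimal sketch of the scenario being asked about (variable names borrowed from the posts above; this is an illustration of the question, not an answer to it):

#include <stdlib.h>

int main(void)
{
    unsigned char *pChar;
    short *pShort;
    short w;

    pChar = malloc(2);
    if (pChar != NULL) {
        pShort = (short *)pChar;  /* malloc's result is suitably aligned
                                     for any object type */
        pChar[0] = 42;            /* write one byte */
        w = pShort[0];            /* read two bytes: pChar[1] was never
                                     written, so part of w comes from
                                     indeterminate memory */
        (void)w;
        free(pChar);
    }
    return 0;
}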
 

August Derleth

Dan Pop wrote on Thu 20 Nov 2003:


As a matter of fact, both are correct. Stylistically objectionable,
but technically correct.

Dan

I can't be held accountable for all of the crap the Standard implicitly
condones. ;)

The fact is that casting the return value of malloc (and all of the 'alloc
clan) is rather stupid and is a habit that should be broken in all C
programmers. It hides another error and implies a fundamental
misunderstanding of the whole point of having a pointer to void type.
 

Sidney Cadot

August said:
I can't be held accountable for all of the crap the Standard implicitly
condones. ;)

The fact is that casting the return value of malloc (and all of the 'alloc
clan) is rather stupid and is a habit that should be broken in all C
programmers. It hides another error and implies a fundamental
misunderstanding of the whole point of having a pointer to void type.

In all seriousness, I think the implication is a false one. I think I do
understand the point of having pointers to void in a language like C,
yet I am in favor of malloc() casts, both from a practical and a
theoretical point-of-view.

Admittedly, a few weeks ago when this came up some genuine disadvantages
were identified, but none of them was a completely convincing reason
for me to change (some would say mend) my ways in this respect.

Now I am not usually one to invoke Bjarne Stroustrup as a source of
profound wisdom, but I hope that you will agree that he knows a thing or
two about C. Of course, C++ doesn't allow implicit casts from void* to
pointers of any other types, and the reason is given in $5.6 of "The C++
Programming Language". After stating allowed operations on void*, it
continues "Other operations would be unsafe because the compiler cannot
know what kind of object is really pointed to."

Now in C of course, other operations are defined, such as implicit casts
to other pointer types. Hesitantly, but surely, I agree deeply with Mr.
Stroustrup that this is a bad idea, and I consider this one of the messy
points in C. The fact that it is _allowed_ doesn't make it right, as far
as I am concerned.

Please understand that I don't want to re-open this particular issue for
debate; I would simply urge you to be careful with implying lack of
understanding of C features from things that are at least partly a
matter of style. I think casting malloc() results is not a practice that
merits disqualification of the practitioner out of hand.

Surely, I don't like casting any more than the next guy; I've had to
wade through students' code littered with them a couple of hundred
times too many for that. A cast, to me, is a legit operation if and only
if it serves to correct the idea that the compiler has about the type
of an expression, based on knowledge that I, the programmer have - but
the compiler doesn't. malloc(100*sizeof(double)), to me, is such a
situation. Concluding from this that I don't get the point of void*,
borders on being offensive. But I'm sure it wasn't intended that way.


Best regards,

Sidney
 

Simon Biber

Dan Pop said:
The right way of doing it, however, is:

size_t size = nAllocSize * sizeof *pShort;

if (size < nAllocSize || size < sizeof *pShort) {
/* we want to allocate more bytes than size_t can represent */
}

pShort = malloc(size);

Assuming:
SIZE_MAX == 65535
nAllocSize == 30000
sizeof *pShort == 4
Then:
nAllocSize * sizeof *pShort == 120000
120000 % 65536 == 54464
54464 > nAllocSize
54464 > sizeof *pShort

Or are you relying on the assumption that sizeof *pShort is 2? This
is not good code then...
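
The wraparound can be made concrete with uint16_t standing in for a 16-bit size_t (a sketch; <stdint.h> and uint16_t are assumed available):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t size = (uint16_t)(30000UL * 4UL); /* 120000 mod 65536 */

    printf("%u\n", (unsigned)size); /* prints 54464, which is greater
                                       than both 30000 and 4, so the
                                       check quoted above never fires */
    return 0;
}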
 

Simon Biber

Sidney Cadot said:
Now I am not usually one to invoke Bjarne Stroustrup as a source of
profound wisdom, but I hope that you will agree that he knows a thing
or two about C. Of course, C++ doesn't allow implicit casts from void*
to pointers of any other types, and the reason is given in $5.6 of
"The C++ Programming Language". After stating allowed operations on
void*, it continues "Other operations would be unsafe because the
compiler cannot know what kind of object is really pointed to."

The objection that you "don't know what kind of object is really pointed to"
is not satisfied by adding casts.

For example:
void foo(int type, void *vp)
{
    if (type == 1)
    {
        float *fp = vp;
        *fp = 1.0;
    }
    else if (type == 2)
    {
        double *dp = vp;
        *dp = 2.0;
    }
}

Now, this function is valid C but invalid C++ because of the void* issue.
I agree that you cannot be sure whether the user of the function will
actually pass in the valid and correctly-aligned address of the specified
type, float or double respectively.

However, if I change it to be valid C++:
void bar(int type, void *vp)
{
    if (type == 1)
    {
        float *fp = (float *)vp;
        *fp = 1.0;
    }
    else if (type == 2)
    {
        double *dp = (double *)vp;
        *dp = 2.0;
    }
}

How has this actually helped the situation? The compiler still does not
check whether the conversion is valid.
 

Sidney Cadot

Simon said:
The objection that you "don't know what kind of object is really pointed to"
is not satisfied by adding casts.

That is not my point. The point is that, IMHO, this is a valid argument
for outlawing implicit casts from void* to something_else*.

It isn't outlawed in C, and to me that doesn't feel right.

For example:
void foo(int type, void *vp)
{
    if (type == 1)
    {
        float *fp = vp;
        *fp = 1.0;
    }
    else if (type == 2)
    {
        double *dp = vp;
        *dp = 2.0;
    }
}

Now, this function is valid C but invalid C++ because of the void* issue.
I agree that you cannot be sure whether the user of the function will
actually pass in the valid and correctly-aligned address of the specified
type, float or double respectively.

However, if I change it to be valid C++:
void bar(int type, void *vp)
{
    if (type == 1)
    {
        float *fp = (float *)vp;
        *fp = 1.0;
    }
    else if (type == 2)
    {
        double *dp = (double *)vp;
        *dp = 2.0;
    }
}

How has this actually helped the situation? The compiler still does not
check whether the conversion is valid.

Casts are few and far between, in any reasonable program. To me, they
serve the purpose of putting out a warning sign: "something potentially
dangerous is going on here". I miss that with implicit casting.

Surely, for the compiler it doesn't matter much (it doesn't hurt to
include the casts either).

Along the same lines, I don't particularly like implicit casts from
double to int in C. These go one step further than pointer casts, in
that they actually *do* generate code. I don't like that for one bit.

Best regards,

Sidney
 

August Derleth

Sidney Cadot said:
That is not my point. The point is that, IMHO, this is a valid argument
for outlawing implicit casts from void* to something_else*.

It isn't outlawed in C, and to me that doesn't feel right.

Well, it feels very right to the people who don't want to have to
worry about getting casts right all the time, and who think casts in
general should be reserved for situations that are more questionable
than pointer conversion.

Don't think about the contortions the machine must go through to get
the pointers converted correctly: You're using C, not assembly. If you
code portably, you can legitimately say that it isn't your fault if
the pointers don't behave as they should. (You still might have to fix
the code, but at least you'll be able to say that the compiler
designer screwed up.)

Casts are few and far between, in any reasonable program. To me, they
serve the purpose of putting out a warning sign: "something potentially
dangerous is going on here". I miss that with implicit casting.

Surely, for the compiler it doesn't matter much (it doesn't hurt to
include the casts either).

It doesn't hurt to include casts, except that it clutters the code
visually and obscures an important reason to use C instead of
assembly: You get nice features you don't have to think about. You
don't need to think about how floating-point numbers are represented,
how integer promotion needs to work, or how pointer conversion must
happen.

Casts are ugly, and they should only be used when something /really/
odd is happening. Pointer conversion isn't that odd: The Standard
Library depends upon it, and it can be used to great advantage to
write code that approaches polymorphism as closely as is possible in a
non-OO language.
Along the same lines, I don't particularly like implicit casts from
double to int in C. These go one step further than pointer casts, in
that they actually *do* generate code. I don't like that for one bit.

Again, don't worry about it. Or, if you must worry about it, choose a
type-safe language like Pascal or Ada. (Hell, some Pascal variants
don't even allow you to cast /at all/: Once you have a floating-point
number, you'll never be able to perform integer arithmetic on it!) C
was designed for a different group and with a different philosophy.
 

Sidney Cadot

August Derleth said:
Well, it feels very right to the people who don't want to have to
worry about getting casts right all the time, and who think casts in
general should be reserved for situations that are more questionable
than pointer conversion.

I don't agree, but well - that's life.
Don't think about the contortions the machine must go through to get
the pointers converted correctly: You're using C, not assembly. If you
code portably, you can legitimately say that it isn't your fault if
the pointers don't behave as they should. (You still might have to fix
the code, but at least you'll be able to say that the compiler
designer screwed up.)

I have no issue with the work the compiler needs to do in any
circumstance. My issue is that I want the source code to clearly reflect
the semantics of what is going on. I simply don't like C's ability to
'help' me by providing an implicit cast in this case.
It doesn't hurt to include casts, except that it clutters the code
visually and obscures an important reason to use C instead of
assembly: You get nice features you don't have to think about.

A line has to be drawn somewhere. In the days prior to C99, you could
happily not declare a result type for a function returning int; the
compiler would "help" by providing it for you. I didn't like that in
pretty much the same way as I don't like implicit void* casts.
You don't need to think about how floating-point numbers are represented,

Actually, in my work, I really *do* need to do this.
how integer promotion needs to work,

For any piece of non-trivial low-level C code, you'll get burned pretty
soon if you don't know this.
or how pointer conversion must happen.

The "how" doesn't interest me all that much... It's more an issue of who
is in control of deciding whether two given types are assignment
compatible. I maintain that this is an issue that is for the programmer
to decide.
Casts are ugly, and they should only be used when something /really/
odd is happening.

...Like, changing the type of something, because I, the programmer, know
that the type inferred by the compiler is not correct? That's my rule
for determining whether a cast is warranted.

Could you define the circumstances where you feel a cast is warranted?
Pointer conversion isn't that odd: The Standard Library depends upon
it, and it can be used to great advantage to write code that approaches
polymorphism as closely as is possible in a non-OO language.

I'm not claiming pointer conversions are odd. I'm saying that "oddness"
is a pretty bad criterion to decide whether casts are ok, since it
doesn't mean much.
Again, don't worry about it.

That would be, well, foolish. I need to be able to know what this kind
of code does:


#include <stdio.h>

int main(void)
{
    double x;
    int y;

    x = 1.9; y = x;
    printf("%g -> %d\n", x, y);   /* truncates toward zero: prints 1 */

    x = -1.9; y = x;
    printf("%g -> %d\n", x, y);   /* prints -1 */

    x = 1e30; y = x;              /* value exceeds int's range: the
                                     conversion is undefined behavior */
    printf("%g -> %d\n", x, y);

    x = -1e30; y = x;             /* likewise undefined */
    printf("%g -> %d\n", x, y);

    return 0;
}
Or, if you must worry about it, choose a
type-safe language like Pascal or Ada.

There are many things about C that make it the better choice for my kind
of work. My software needs to be distributed and compiled by users, for
one thing; I can only safely assume that they'll have a C compiler.
(Hell, some Pascal variants don't even allow you to cast /at all/:
Once you have a floating-point number, you'll never be able to perform
integer arithmetic on it!)

As to the first remark: if memory serves, ISO Pascal doesn't support casts.

As to the second statement, well, that is of course simply wrong. You
have trunc() and round() for that, at least. Which I think is a very
good idea.
C was designed for a different group and with a different philosophy.

Both have evolved. Unfortunately, it seems that there is no market
pressure nowadays to bring the Pascal standard up-to-date; in principle,
I think it has the potential of being a very useful and practical
language for many domains where C now reigns supreme. If you look at the
Delphi and Freepascal languages, there are few applications that they
cannot handle as well or better than C. Just my $0.05.


Best regards,

Sidney
 

Kevin Goodsell

Sidney said:
That is not my point. The point is that, IMHO, this is a valid argument
for outlawing implicit casts from void* to something_else*.

It isn't outlawed in C, and to me that doesn't feel right.

That's fine, but it's not an argument in favor of malloc-casting, as far
as I can tell. We are talking about the C language, not some theoretical
language similar to C but without implicit conversions from void* to
other pointer types. Within the C language as it actually exists,
omitting the cast is usually (slightly) safer.
Casts are few and far between, in any reasonable program. To me, they
serve the purpose of putting out a warning sign: "something potentially
dangerous is going on here". I miss that with implicit casting.

There is no such thing as "implicit casting". A cast is, by definition,
an explicit conversion. An implicit conversion is not a cast.

As for your warning sign analogy, that's fine. But why would you want to
flag a perfectly safe malloc call with such a warning? Warning signs are
only useful when they are accurate.
Surely, for the compiler it doesn't matter much (it doesn't hurt to
include the casts either).

It doesn't "hurt" in the same sense that any unnecessary text (that
neither changes nor clarifies the meaning of the program) doesn't hurt.
You could cast every expression you type, throw in a few dozen
parentheses for good measure, and have a haiku comment after every line
in your program if you wanted to, and it wouldn't "hurt", at least in
the sense that the program would still behave the same.

Of course, this only applies to casts as long as you haven't made a
mistake. Unnecessary casts are harmful (for normal people) mainly
because they can hide mistakes. If you never make a mistake in your
code, then you can certainly cast all you want without any harm.
Along the same lines, I don't particularly like implicit casts from
double to int in C. These go one step further than pointer casts, in
that they actually *do* generate code. I don't like that for one bit.

What makes you think a pointer cast doesn't generate code?

If you don't like implicit floating point to integer conversions, you
are free to not use them. Since such a conversion could actually be a
problem (unlike the implicit conversion of malloc's return value), using
a cast as a "warning sign" like you mentioned before may be appropriate.
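
For instance (a minimal sketch of that kind of warning-sign cast):

#include <stdio.h>

int main(void)
{
    double x = 3.7;
    int y = (int)x;   /* explicit cast flags the value-changing
                         truncation toward zero */

    printf("%d\n", y);  /* prints 3 */
    return 0;
}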

-Kevin
 

Sidney Cadot

Kevin said:
That's fine, but it's not an argument in favor of malloc-casting, as far
as I can tell. We are talking about the C language, not some theoretical
language similar to C but without implicit conversions from void* to
other pointer types.

There are features of C I do not use because I consider them not useful
or downright ugly. This is one of them; I would be surprised if the same
wouldn't be true for most of us here (trigraphs come to mind).

So for me, at least, this /is/ an argument against implicit void*
conversions, and therefore for explicit malloc casts.
Within the C language as it actually exists, omitting the cast is usually
(slightly) safer.

This came up in the earlier discussion a few weeks ago. The "missing
#include stdlib.h" argument I think is not very strong, given that any
quality compiler can be made to warn about that nowadays. The argument
that you have to maintain consistency between no less than three type
specifiers when you do:

double *x = (double *)malloc(n * sizeof(double))

...is somewhat stronger, but this is still "taste" territory IMHO. All
in all, I'd have to say that the discussion is not an open-and-shut case
as many here feel it is, since there are also arguments /for/ malloc
casting.

I still see some validity in maintaining C++ compatibility. Essentially,
a missing malloc cast is by far the most important (often the only) thing
keeping a valid C program from also being a valid C++ program. Many disagreed,
arguing that I wouldn't try to compile my C program using a Fortran
compiler either. Upon which I responded that actually, I would, if this
could help me to catch errors (which is not the case, as it is with C++).

The second, more philosophical argument, is that I use casts to improve
or correct the compiler's knowledge of the type of an expression. Given
the loose expression malloc(1000 * sizeof(double)), this ought to be of
type double* instead of void*, so I cast. The fact that it is usually
assigned to a variable of type double* immediately after, invoking
implicit conversion, is irrelevant as far as I am concerned; the
expression itself ought to be of type double*, which allows the compiler
to catch errors of the form:

int *x = (double *)malloc(1000 * sizeof(double))

That would go unnoticed on

int *x = malloc(1000 * sizeof(double))

Although many (most here, at least) would prefer

int *x = malloc(1000 * sizeof *x)

Which doesn't suffer from any problem, except that the RHS expression is
of type void*, and needs an implicit conversion due to assignment
(which I don't like).

This basically summarizes the discussion pretty even-handedly, I hope.
The only thing I want to argue is that it's still up for discussion. No
killer arguments have been made from either side of the fence, I think.
There is no such thing as "implicit casting". A cast is, by definition,
an explicit conversion. An implicit conversion is not a cast.

I'll try to be more careful in the present and future.
As for your warning sign analogy, that's fine. But why would you want to
flag a perfectly safe malloc call with such a warning? Warning signs are
only useful when they are accurate.

Having a block of a thousand doubles floating around on the heap and
pointing to it by a void* to me is a strange situation. I mend it with
the cast. Overriding the type inferred by the compiler is something that
merits a warning for me.
It doesn't "hurt" in the same sense that any unnecessary text (that
neither changes nor clarifies the meaning of the program) doesn't hurt.
You could cast every expression you type, throw in a few dozen
parenthesis for good measure, and have a haiku comment after every line
in your program if you wanted to, and it wouldn't "hurt", at least in
the sense that the program would still behave the same.

My line was a response to the statement that not including the cast
doesn't make a whole lot of difference to the compiler.

Haiku comments and superfluous parentheses, you (and I) don't do that.
However we probably both use indenting and a consistent bracing style.
There's a whole spectrum of things one can do to source code without
altering its meaning, everybody draws a line somewhere.
Of course, this only applies to casts as long as you haven't made a
mistake. Unnecessary casts are harmful (for normal people) mainly
because they can hide mistakes. If you never make a mistake in your
code, then you can certainly cast all you want without any harm.

Incorrect casts are harmful almost by definition. Whether a cast is
"unnecessary" depends a lot, in my opinion, on one's definition of the word.

I have stated my criterion for using casts several times now, which is
overriding the type inferred by the compiler for an expression when I,
the programmer, know better. That's a pretty clear criterion. It's also
a defensible criterion, I think. Following this criterion, malloc
casts are no longer unnecessary, but an integral part of my particular
coding style.
What makes you think a pointer cast doesn't generate code?

Experience with modern hardware. Surely the standard allows for
architectures that have type-tagged memory and perhaps there are some
architectures even today that do need to execute code on a pointer type
conversion (either implicit or by cast), but I think it is fair to say
that this is rare. Especially compared to float-to-int conversions, that
must generate code.
If you don't like implicit floating point to integer conversions, you
are free to not use them. Since such a conversion could actually be a
problem (unlike the implicit conversion of malloc's return value), using
a cast as a "warning sign" like you mentioned before may be appropriate.

I think it is, yes.

Best regards,

Sidney
 

Chris Torek

Sidney Cadot wrote:
[Some feel that]
...Like, changing the type of something, because I, the programmer, know
that the type inferred by the compiler is not correct? That's my rule
for determining whether a cast is warranted.

I think it is worth noting here that the phrase "changing the type
of something" has implications that are false. A cast from double
to int, for instance, does not change the type of the double value
being cast:

double x;
...
printf("(int)x = %d\n", (int)x);

but rather simply produces a new value, of the type given in the
cast, as defined by C's conversion rules for casts, which are a
superset of those for ordinary assignment. (The "superset" occurs
only because casts permit conversions that would otherwise be
forbidden, e.g., "int *" to "double *".)

Of course, since (as you mentioned earlier) you dislike the automatic
conversions -- e.g.:

int i;
...
i = x;

automatically truncates values like 3.14159265358979323846 (as
stored in x) to just 3 (then stored in i) -- then presumably you
will dislike the automatic conversions for "void *" (and apparently
you do):


Note that these are not "casts" but rather "conversions", because in
C, "cast" refers specifically and only to the syntax in which a
type-name is enclosed in parentheses.

You say they "go one step further" in generating actual code, but
in fact, perhaps they do not. For instance, the Data General
Eclipse has two pointer formats, so that pointer conversions
sometimes generate code; and hypothetically, one could have a C
implementation in which "int"s are actually stored in the same
floating-point form as "double"s, but are simply forced to be
integral and within the range that avoids stepwise increases (e.g.,
where nextafter(x) is x+2.0 instead of x+1.0 or less).

(The latter -- storing "int"s in FP bitpatterns -- might be used
on a Burroughs A-series machine, as I understand it. These machines
would certainly make implementing C "interesting", since they were
designed to produce only correct answers even if it took longer
than getting a fast-but-wrong answer. They did not survive
commercially. This pattern repeated later with the Intel 432.)
There are many things about C that make it the better choice for my kind
of work. My software needs to be distributed and compiled by users, for
one thing; I can only safely assume that they'll have a C compiler.

In that case, write in Ada, and point them at GNAT. :)
 

Dan Pop

August Derleth said:


I can't be held accountable for all of the crap the Standard implicitly
condones. ;)

The fact is that casting the return value of malloc (and all of the 'alloc
clan) is rather stupid and is a habit that should be broken in all C
programmers. It hides another error and implies a fundamental
misunderstanding of the whole point of having a pointer to void type.

Nevertheless, it is still technically correct, therefore you're wrong
calling it incorrect. It's OK to call it stupid or a bad habit, but you
have no grounds for calling it incorrect.

Dan
 

Dan Pop

Simon Biber said:
Assuming:
SIZE_MAX == 65535
nAllocSize == 30000
sizeof *pShort == 4
Then:
nAllocSize * sizeof *pShort == 120000
120000 % 65536 == 54464
54464 > nAllocSize
54464 > sizeof *pShort

Or are you relying on the assumption that sizeof *pShort is 2? This
is not good code then...

I've merely used the rule for detecting addition overflow, instead of the
one for multiplication overflow:

if (size / nAllocSize != sizeof *pShort) { ...
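
Spelled out as a full check (a sketch only; 'alloc_words' is a hypothetical helper name, and the guard against nAllocSize == 0 avoids dividing by zero, which the one-liner above glosses over):

#include <stdlib.h>

short *alloc_words(size_t nAllocSize)
{
    size_t size = nAllocSize * sizeof(short);

    if (nAllocSize != 0 && size / nAllocSize != sizeof(short))
        return NULL;  /* the multiplication wrapped around size_t */
    return malloc(size);
}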

Dan
 

Sidney Cadot

Chris said:
I think it is worth noting here that the phrase "changing the type
of something" has implications that are false. A cast from double
to int, for instance, does not change the type of the double value
being cast:

double x;
...
printf("(int)x = %d\n", (int)x);

but rather simply produces a new value, of the type given in the
cast, as defined by C's conversion rules for casts, which are a
superset of those for ordinary assignment. (The "superset" occurs
only because casts permit conversions that would otherwise be
forbidden, e.g., "int *" to "double *".)

You are right. My terminology was imprecise.
[snipped a bit...]

Note that these are not "casts" but rather "conversions", because in
C, "cast" refers specifically and only to the syntax in which a
type-name is enclosed in parentheses.

Yes. Again, I stand corrected and will try my best to be more precise
next time.
You say they "go one step further" in generating actual code, but
in fact, perhaps they do not. For instance, the Data General
Eclipse has two pointer formats, so that pointer conversions
sometimes generate code;

I realize there are architectures where pointer casts would generate
code (although it is fair to say that nowadays they are quite rare, I
think).
and hypothetically, one could have a C
implementation in which "int"s are actually stored in the same
floating-point form as "double"s, but are simply forced to be
integral and within the range that avoids stepwise increases (e.g.,
where nextafter(x) is x+2.0 instead of x+1.0 or less).

That is an interesting possibility that I overlooked. Now that I think
of it: yes, it would be rather possible to use e.g. IEEE-754 doubles (or
even singles) as a valid integer type in C, by defining the exponent
part as padding bits. I think that could just be brought in line with C
integer semantics. Funny.
(The latter -- storing "int"s in FP bitpatterns -- might be used
on a Burroughs A-series machine, as I understand it. These machines
would certainly make implementing C "interesting", since they were
designed to produce only correct answers even if it took longer
than getting a fast-but-wrong answer. They did not survive
commercially. This pattern repeated later with the Intel 432.)


In that case, write in Ada, and point them at GNAT. :)

There are more reasons. For starters, I don't know Ada :)

Best regards,

Sidney
 
