May the size argument of operator new overflow?


Ian Collins

James said:
His question concerned operator new. Not unsigned integral
arithmetic.
He asked:

S* allocate(std::size_t size)
{
    return new S[size]; // How many bytes of memory must the new operator
                        // allocate if size equals std::numeric_limits<size_t>::max()?
}

Which boils down to: what is N*std::numeric_limits<size_t>::max()?
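
A minimal sketch (not from the original posts) of the arithmetic Ian is
pointing at, assuming the OP's 64-byte S: in size_t, i.e. modular,
arithmetic the product simply wraps, so the value handed to the
allocation function bears little relation to the number of bytes
actually needed.

    #include <cstddef>
    #include <iostream>
    #include <limits>

    struct S { char a[64]; };

    int main()
    {
        const std::size_t n   = std::numeric_limits<std::size_t>::max();
        const std::size_t raw = sizeof(S) * n; // unsigned arithmetic: wraps modulo 2^N

        // The mathematically correct byte count, 64 * n, is not representable
        // in size_t; the wrapped result is n - 63, a very different value.
        std::cout << "wrapped byte count: " << raw << '\n';
    }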
 

James Kanze

James said:
James Kanze wrote:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
    char a[64];
};
S* allocate(int size)
{
    return new S[size]; // What happens here?
}
int main()
{
    allocate(0x7FFFFFFF);
}
Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.
I think you are missing a twist that the OP has hidden within
his posting: the size of S is at least 64. The number of S
objects that he requests is close to
numeric_limits<size_t>::max().
It's not on the systems I usually use, but that's not the point.
So when new S[size] is translated into raw memory allocation,
the number of bytes (not the number of S objects) requested
might exceed numeric_limits<size_t>::max().
And? That's the implementation's problem, not mine. I don't
see anything in the standard which authorizes special behavior
in this case.
The question is what behavior is "special". I do not see which
behavior the standard requires in this case.

I agree that it's not as clear as it could be, but the standard
says that "A new-expression passes the amount of space requested
to the allocation function as the first argument of type std::
size_t." That's clear enough (and doesn't talk about
arithmetic; how the compiler knows how much to allocate is an
implementation detail, as long as it gets it right). The
problem is what happens when the "amount of space" cannot be
represented in a size_t; the standard seems to ignore this case,
but since it is clear that the requested allocation can't be
honored, the only reasonable interpretation is that the code
behave as if the requested allocation can't be honored: throw a
bad_alloc, unless the operator new function is nothrow, in which
case return a null pointer.
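
As an illustration of the two conforming outcomes described here (a
sketch only; the sizes are deliberately extreme, and, as noted above,
real implementations may well misbehave on this case):

    #include <cstddef>
    #include <iostream>
    #include <limits>
    #include <new>

    struct S { char a[64]; };

    int main()
    {
        const std::size_t n = std::numeric_limits<std::size_t>::max();

        // Throwing form: under this reading, the only permitted outcomes
        // are a successful allocation or a std::bad_alloc exception.
        try {
            S* p = new S[n];
            delete[] p;
        } catch (std::bad_alloc const&) {
            std::cout << "bad_alloc thrown\n";
        }

        // Nothrow form: failure is reported as a null pointer instead.
        S* q = new (std::nothrow) S[n];
        if (q == 0)
            std::cout << "null pointer returned\n";
        delete[] q;
    }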
I think (based on my understanding of [5.3.4/12]) that in such
a case, the unsigned arithmetic will just silently overflow
and you end up allocating a probably unexpected amount of
memory.
Could you please point to something in §5.3.4/12 (or elsewhere)
that says anything about "unsigned arithmetic".
I qualified my statement by "I think" simply because the
standard is vague to me. However, it says for instance
new T[5] results in a call of operator new[](sizeof(T)*5+x),
and operator new takes its argument as std::size_t. Now,
whenever any arithmetic type is converted to std::size_t, I
would expect [4.7/2] to apply, since size_t is unsigned. When
the standard does not say that usual conversion rules do not
apply in the evaluation of the expression

Note that code is part of a non-normative example, designed to
show one particular aspect, and not to be used as a normative
implementation.
sizeof(T)*5+x
what am I to conclude?

That the example is concerned with showing that the
requested space may be larger than simply sizeof(T)*5, and
doesn't bother with other issues:).
It gives the formula above. It does not really matter whether
you interpret
sizeof(T)*5+x

as unsigned arithmetic or as plain math. A conversion to
std::size_t has to happen at some point because of the
signature of the allocation function. If [4.7/2] is not meant
to apply to that conversion, the standard should say that
somewhere.
(It is a bit vague, I'll admit, since it says "A
new-expression passes the amount of space requested to the
allocation function as the first argument of type std::
size_t." It doesn't really say what happens if the "amount
of space" isn't representable in a size_t.
So you see: taken literally, the standard guarantees
something impossible to happen.

More or less. And since the compiler can't honor impossible
requests, the request must fail somehow. The question is how:
undefined behavior or something defined? In the case of
operator new, the language has specified a defined behavior for
cases where the request fails.

There are two ways to interpret this: at least one school claims
that if the system cannot honor your request, you've exceeded
its resource limit, and so undefined behavior ensues. While the
standard says you must get a bad_alloc, it's not really required
because of this undefined behavior. This logic has often been
presented as a justification of lazy commit. (Note that from
the user point of view, the results of overflow here or lazy
commit are pretty much the same: you get an apparently valid
pointer back, and then core dump when you try to access the
allocated memory.)

Note that the problem is more general. Given something like:

    struct S { char c[ SIZE_MAX / 4 ] ; } ;
    std::vector< S > v( 2 ) ;
    v.at( 4 ) ;

am I guaranteed to get an exception? (Supposing that I didn't
get a bad_alloc in the constructor of v.)
Hm, that is a mixture of common sense and wishful thinking :)

Maybe:). I think that the wording of the standard here is
vague enough that you have to use common sense to interpret it.

In some ways, the problem is similar to that of what happens to
the allocated memory if the constructor in a new expression
throws. The ARM didn't specify clearly, but "common sense" says
that the compiler must free it. Most implementations ignored
common sense, but when put to the point, the committee clarified
the issue in the direction of common sense.
I agree that a bad_alloc is clearly what I would _want_ to
get. I do not see, however, how to argue from the wording of
the standard that I _will_ get that.

The absence of any specific liberty to do otherwise?
 

Greg Herlihy

James said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
    char a[64];
};
S* allocate(int size)
{
    return new S[size]; // What happens here?
}
int main()
{
    allocate(0x7FFFFFFF);
}
Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch).  That's according
to the standard; a lot of implementations seem to have bugs
here.
I think you are missing a twist that the OP has hidden within
his posting: the size of S is at least 64. The number of S
objects that he requests is close to
numeric_limits<size_t>::max().

It's not on the systems I usually use, but that's not the point.
So when new S[size] is translated into raw memory allocation,
the number of bytes (not the number of S objects) requested
might exceed numeric_limits<size_t>::max().

And?  That's the implementation's problem, not mine.  I don't
see anything in the standard which authorizes special behavior
in this case.
I think (based on my understanding of [5.3.4/12]) that in such
a case, the unsigned arithmetic will just silently overflow
and you end up allocating a probably unexpected amount of
memory.

Could you please point to something in §5.3.4/12 (or elsewhere)
that says anything about "unsigned arithmetic".  I only have a
recent draft here, but it doesn't say anything about using
unsigned arithmetic, or that the rules of unsigned arithmetic
apply for this calculation, or even that there is a calculation.

The problem in this case is that the calculated size of the array,
sizeof(T) * N, wraps around if the result of the multiplication
overflows. The wraparound is silent - because size_t is required to be
an unsigned integral type, so the overflow is well-defined modular
arithmetic rather than an error.

So it can well be the case that the size of the memory request, as
passed to the allocation function, winds up being small enough to be
allocated (due to the overflow), even though the amount of memory
actually needed is much larger. So the behavior of a program that
attempts to allocate an array of N objects of type T (when
N * sizeof(T) overflows) is undefined.
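
A small illustration of this "winds up being small" case, using
uint32_t to stand in for a 32-bit size_t (the element size and count
are hypothetical, chosen to make the wrap dramatic):

    #include <cstdint>
    #include <iostream>

    int main()
    {
        std::uint32_t element_size = 4;            // e.g. sizeof(int) on a 32-bit system
        std::uint32_t count        = 0x40000001u;  // just over 2^30 elements

        // Modular (wrapping) multiplication: exactly what an unchecked
        // new-expression would compute for the raw byte count.
        std::uint32_t bytes = element_size * count;

        std::cout << "requested bytes after wrap: " << bytes << '\n'; // prints 4
    }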

Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:

"Each implementation is required to document the maximum size of an
object (Annex B [implimits]). It is not difficult for a program to
check array allocations to ensure that they are smaller than this
quantity. Implementations can provide a mechanism in which users
concerned with this problem can request extra checking before array
allocations, just as some implementations provide checking for array
index and pointer validity. However, it would not be appropriate to
require this overhead for every array allocation in every program."

See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2506.html#256

This same issue has since been reopened (#624) with the proposed
additional wording:

"If the value of the expression is such that the size of the allocated
object would exceed the implementation-defined limit, an exception of
type std::bad_alloc is thrown and no storage is obtained."

See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2504.html#624

But until and unless Issue #624 is adopted, the behavior of a program
that makes an oversized allocation request - is undefined.

Greg
 

James Kanze

[...]
I think (based on my understanding of [5.3.4/12]) that in such
a case, the unsigned arithmetic will just silently overflow
and you end up allocating a probably unexpected amount of
memory.
Could you please point to something in §5.3.4/12 (or elsewhere)
that says anything about "unsigned arithmetic". I only have a
recent draft here, but it doesn't say anything about using
unsigned arithmetic, or that the rules of unsigned arithmetic
apply for this calculation, or even that there is a calculation.
The problem in this case is that the calculated size of the array,
sizeof(T) * N, wraps around if the result of the multiplication
overflows. The wraparound is silent - because size_t is required to be
an unsigned integral type, so the overflow is well-defined modular
arithmetic rather than an error.

As I said, that's the implementation's problem, not mine:).
So it can well be the case that the size of the memory request, as
passed to the allocation function, winds up being small enough to be
allocated (due to the overflow), even though the amount of memory
actually needed is much larger. So the behavior of a program that
attempts to allocate an array of N objects of type T (when
N * sizeof(T) overflows) is undefined.
Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:
"Each implementation is required to document the maximum size of an
object (Annex B [implimits]). It is not difficult for a program to
check array allocations to ensure that they are smaller than this
quantity. Implementations can provide a mechanism in which users
concerned with this problem can request extra checking before array
allocations, just as some implementations provide checking for array
index and pointer validity. However, it would not be appropriate to
require this overhead for every array allocation in every program."

I thought that there was a DR about this, but I couldn't
remember exactly. Thanks for the reference.

Regretfully, the rationale is technically incorrect; the user
hasn't the slightest way of knowing whether the required
arithmetic will overflow. (Remember, the equation is
n*sizeof(T)+e, where e is unspecified, and may even vary between
invocations of new. And since you can't know e, you're screwed
unless the compiler---which does know e---does something about
it.)
This same issue has since been reopened (#624) with the proposed
additional wording:
"If the value of the expression is such that the size of the allocated
object would exceed the implementation-defined limit, an exception of
type std::bad_alloc is thrown and no storage is obtained."

But until and unless Issue #624 is adopted, the behavior of a
program that makes an oversized allocation request - is
undefined.

In other words:

    struct S { char c[2] ; } ;
    new S[2] ;

is undefined, since e could be something outrageously large.

Also, while an implementation is required to document the
implementation-defined limit of the size of an object (lots of
luck finding that documentation), it doesn't make this value
available in any standard form within the code, so you can't
write any portable checks against it. (Of course, you can write
portable checks against std::numeric_limits<size_t>::max(),
which would be sufficient if there wasn't that e.)
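
For concreteness, a minimal sketch of the portable check being
discussed (checked_new_array is a made-up name, not something from the
standard or the thread); it guards the multiplication, but, as pointed
out above, it cannot account for the unknown overhead e:

    #include <cstddef>
    #include <limits>
    #include <new>

    template <typename T>
    T* checked_new_array(std::size_t n)
    {
        // Reject counts whose byte total alone would exceed size_t's range.
        if (n > std::numeric_limits<std::size_t>::max() / sizeof(T))
            throw std::bad_alloc();
        return new T[n]; // e (the array bookkeeping overhead) is still unchecked
    }

Usage would be something like S* p = checked_new_array<S>(count);.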
 

peter koch

[snip]
Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:
"Each implementation is required to document the maximum size of an
object (Annex B [implimits]). It is not difficult for a program to
check array allocations to ensure that they are smaller than this
quantity. Implementations can provide a mechanism in which users
concerned with this problem can request extra checking before array
allocations, just as some implementations provide checking for array
index and pointer validity. However, it would not be appropriate to
require this overhead for every array allocation in every program."
See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2506.html#256

I thought that there was a DR about this, but I couldn't
remember exactly.  Thanks for the reference.

Regretfully, the rationale is technically incorrect; the user
hasn't the slightest way of knowing whether the required
arithmetic will overflow.  (Remember, the equation is
n*sizeof(T)+e, where e is unspecified, and may even vary between
invocations of new.  And since you can't know e, you're screwed
unless the compiler---which does know e---does something about
it.)

I believe that turning off error detection here is the wrong
direction. C++ does not need one more situation where an entirely
reasonable error check is left to the compiler's discretion. Also,
how expensive is the check? I cannot imagine any program where
checking for overflow would lead to either bloated code or a
performance degradation that is at all perceptible.
I know that well-written programs rarely (if ever) need new[], but the
check should be made precisely for the weaker programmers, who might
risk passing a negative value as the size.
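
To make the cost argument concrete, here is a rough sketch of the kind
of check a compiler could emit before an array new. Both elem_size and
overhead are constants known to the compiler (overhead stands in for
the unspecified e), so the check boils down to one comparison against
a compile-time constant:

    #include <cstddef>
    #include <limits>
    #include <new>

    inline std::size_t checked_array_bytes(std::size_t n,
                                           std::size_t elem_size,
                                           std::size_t overhead)
    {
        const std::size_t max = std::numeric_limits<std::size_t>::max();
        if (n > (max - overhead) / elem_size)
            throw std::bad_alloc();      // the request cannot be represented
        return n * elem_size + overhead; // safe: cannot wrap
    }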

/Peter
This same issue has since been reopened (#624) with the proposed
additional wording:
"If the value of the expression is such that the size of the allocated
object would exceed the implementation-defined limit, an exception of
type std::bad_alloc is thrown and no storage is obtained."
See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2504.html#624
But until and unless Issue #624 is adopted, the behavior of a
program that makes an oversized allocation request - is
undefined.

In other words:

    struct S { char c[2] ; } ;
    new S[2] ;

is undefined, since e could be something outrageously large.

Also, while an implementation is required to document the
implementation-defined limit of the size of an object (lots of
luck finding that documentation), it doesn't make this value
available in any standard form within the code, so you can't
write any portable checks against it.  (Of course, you can write
portable checks against std::numeric_limits<size_t>::max(),
which would be sufficient if there wasn't that e.)

Right - but why should you bother in the first place?

/Peter
 

Greg Herlihy

Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:
"Each implementation is required to document the maximum size of an
object (Annex B [implimits]). It is not difficult for a program to
check array allocations to ensure that they are smaller than this
quantity. Implementations can provide a mechanism in which users
concerned with this problem can request extra checking before array
allocations, just as some implementations provide checking for array
index and pointer validity. However, it would not be appropriate to
require this overhead for every array allocation in every program."
See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2506.html#256

I thought that there was a DR about this, but I couldn't
remember exactly.  Thanks for the reference.

Actually, you deserve credit for filing Issue #256 (back in 2000), and
thereby first bringing this problem to the Committee's attention.
Regretfully, the rationale is technically incorrect; the user
hasn't the slightest way of knowing whether the required
arithmetic will overflow.  (Remember, the equation is
n*sizeof(T)+e, where e is unspecified, and may even vary between
invocations of new.  And since you can't know e, you're screwed
unless the compiler---which does know e---does something about
it.)

The rationale provided is unsatisfactory on any number of levels.
Perhaps the most obvious shortcoming with the Committee's solution is
that a sizable number of C++ programmers (if the responses on this
thread are any indication) believe that this problem does not - or
could not - exist. (In fact, I was not aware of its existence either -
before I read this thread).
This same issue has since been reopened (#624) with the proposed
additional wording:
"If the value of the expression is such that the size of the allocated
object would exceed the implementation-defined limit, an exception of
type std::bad_alloc is thrown and no storage is obtained."
See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2504.html#624
But until and unless Issue #624 is adopted, the behavior of a
program that makes an oversized allocation request - is
undefined.

In other words:

    struct S { char c[2] ; } ;
    new S[2] ;

is undefined, since e could be something outrageously large.

In theory, yes. In practice, almost certainly not. The default
allocators supplied with g++ and Visual C++ do throw a std::bad_alloc
for any outsized memory allocation request - even when the size of the
requested allocation has overflowed. So the rationale provided by the
Committee seems not only out of touch with most C++ programmers'
expectations, but out of touch even with current C++ compiler
implementations.

Greg
 

James Kanze

Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:
"Each implementation is required to document the maximum size of an
object (Annex B [implimits]). It is not difficult for a program to
check array allocations to ensure that they are smaller than this
quantity. Implementations can provide a mechanism in which users
concerned with this problem can request extra checking before array
allocations, just as some implementations provide checking for array
index and pointer validity. However, it would not be appropriate to
require this overhead for every array allocation in every program."
See: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2506.html#256
Regretfully, the rationale is technically incorrect; the user
hasn't the slightest way of knowing whether the required
arithmetic will overflow. (Remember, the equation is
n*sizeof(T)+e, where e is unspecified, and may even vary between
invocations of new. And since you can't know e, you're screwed
unless the compiler---which does know e---does something about
it.)
I think that, properly read, it's right. The object being
allocated is the array, and the size of the array is the size
of an element times the number of elements. That's the value
that has to be compared to the maximum size of an object. Any
internal overhead is part of the allocation, but not part of
the object. The implementation has to allow for internal
overhead when it specifies the maximum size of an object.

In other words (if I understand you correctly), an
implementation isn't required to check for overflow on the
multiplication, but it is required to check on the following
addition?
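
A sketch of the reading James is asking about, with hypothetical
max_object_size and overhead parameters standing in for quantities the
implementation knows but the standard leaves unspecified: the
multiplication is validated against the documented object-size limit,
after which the addition of the overhead cannot wrap (provided the
limit plus the overhead fits in size_t):

    #include <cstddef>
    #include <new>

    inline std::size_t array_allocation_size(std::size_t n,
                                             std::size_t elem_size,
                                             std::size_t max_object_size,
                                             std::size_t overhead)
    {
        // Step 1: does the array object itself exceed the documented limit?
        if (n > max_object_size / elem_size)
            throw std::bad_alloc();
        // Step 2: the allocation is the object plus overhead; given that
        // max_object_size + overhead fits in size_t, this cannot wrap.
        return n * elem_size + overhead;
    }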
 

Angel Tsankov

Moreover, the C++ Standards Committee agrees with this interpretation
- but has (so far) decided not to require that std::bad_alloc be
thrown in this situation. They reasoned:
"Each implementation is required to document the maximum size of an
object (Annex B [implimits]).

In what units must the maximum size of arrays be specified: bytes or
elements? If it is in bytes, does the specified amount include padding,
alignment and the like?
 
