May the size argument of operator new overflow?


Angel Tsankov

Hello!

Does the C++ standard define what happens when the size argument of void*
operator new(size_t size) cannot represent the total number of bytes to be
allocated? For example:

struct S
{
    char a[64];
};

S* allocate(int size)
{
    return new S[size]; // What happens here?
}

int main()
{
    allocate(0x7FFFFFFF);
}
 

Ian Collins

Angel said:
Hello!

Does the C++ standard define what happens when the size argument of void*
operator new(size_t size) cannot represent the total number of bytes to be
allocated? For example:
size_t will always be wide enough to represent the maximum memory range
on a given system.

If the system can't supply the requested size, new throws std::bad_alloc.
 

joseph cook

Hello!

Does the C++ standard define what happens when the size argument of void*
operator new(size_t size) cannot represent the total number of bytes to be
allocated? For example:

Yes. You cannot exceed numeric_limits<size_t>::max(). The same is
true for array size.
 

Angel Tsankov

Hello!
Does the C++ standard define what happens when the size argument of void*
operator new(size_t size) cannot represent the total number of bytes to be
allocated? For example:

Yes. You cannot exceed numeric_limits<size_t>::max(). The same is
true for array size.

OK, but what happens in the example that you have cut off?
 

Angel Tsankov

Hello!
size_t will always be wide enough to represent the maximum memory range
on a given system.

If the system can't supply the requested size, new throws std::bad_alloc.

This is not an answer to the question what happens in the example you have
cut off.
 

Daniel T.

This is not an answer to the question what happens in the example you have
cut off.

Either the system will supply the requested size, or std::bad_alloc
will be thrown. That is what happens in the example that was cut off.
 

James Kanze

Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}

Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.
 

Kai-Uwe Bux

James said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}

Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.

I think, you are missing a twist that the OP has hidden within his posting:
the size of S is at least 64. The number of S objects that he requests is
close to numeric_limits<size_t>::max(). So when new S[size] is translated
into raw memory allocation, the number of bytes (not the number of S
objects) requested might exceed numeric_limits<size_t>::max().

I think (based on my understanding of [5.3.4/12]) that in such a case, the
unsigned arithmetic will just silently overflow and you end up allocating a
probably unexpected amount of memory.


Best

Kai-Uwe Bux
 

Bo Persson

Kai-Uwe Bux said:
James said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}

Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.

I think, you are missing a twist that the OP has hidden within his
posting: the size of S is at least 64. The number of S objects that
he requests is close to numeric_limits<size_t>::max(). So when new
S[size] is translated into raw memory allocation, the number of
bytes (not the number of S objects) requested might exceed
numeric_limits<size_t>::max().

I think (based on my understanding of [5.3.4/12]) that in such a
case, the unsigned arithmetic will just silently overflow and you
end up allocating a probably unexpected amount of memory.

Here is what one compiler does - catch the overflow and wrap it back
to numeric_limits<size_t>::max().

int main()
{
allocate(0x7FFFFFFF);
00401000 xor ecx,ecx
00401002 mov eax,7FFFFFFFh
00401007 mov edx,40h
0040100C mul eax,edx
0040100E seto cl
00401011 neg ecx
00401013 or ecx,eax
00401015 push ecx
00401016 call operator new[] (401021h)
0040101B add esp,4
}
0040101E xor eax,eax
00401020 ret


Bo Persson
 

Jerry Coffin

Hello!

Does the C++ standard define what happens when the size argument of void*
operator new(size_t size) cannot represent the total number of bytes to be
allocated? For example:

struct S
{
char a[64];
};

S* allocate(int size)
{
return new S[size]; // What happens here?
}

int main()
{
allocate(0x7FFFFFFF);
}

Chances are pretty good that at some point, you get something like:

void *block = ::operator new[](0x7FFFFFFF * 64);

On an implementation with a 32-bit size_t, that'll wrap around, and it'll
attempt to allocate 0xffffffc0 bytes instead of 0x1fffffffc0 bytes.
Chances are that allocation will immediately fail, since that number is
_barely_ short of 4 gigabytes, and no 32-bit system I know of will have
that much contiguous address space available.

If, OTOH, you picked numbers where the wraparound produced a relatively
small number, chances are that the allocation would succeed, but when
you attempted to access what appeared to be successfully allocated
memory, you'd quickly go past the end of the real allocation, and get
undefined behavior.
 

Angel Tsankov

Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?

For example:

struct S
{
char a[64];
};

S* allocate(int size)
{
return new S[size]; // What happens here?
}

int main()
{
allocate(0x7FFFFFFF);
}

Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.

I think, you are missing a twist that the OP has hidden within his
posting: the size of S is at least 64. The number of S objects that
he requests is close to numeric_limits<size_t>::max(). So when new
S[size] is translated into raw memory allocation, the number of
bytes (not the number of S objects) requested might exceed
numeric_limits<size_t>::max().

Thanks for pointing this out; I thought it would be obvious to everyone.
The following example might be a little bit less confusing:

struct S
{
    char a[64]; // Any size greater than 1 would do.
};

S* allocate(std::size_t size)
{
    return new S[size]; // How many bytes of memory must the new operator allocate?
I think (based on my understanding of [5.3.4/12]) that in such a
case, the unsigned arithmetic will just silently overflow and you
end up allocating a probably unexpected amount of memory.

Here is what one compiler does - catch the overflow and wrap it back to
numeric_limits<size_t>::max().

int main()
{
allocate(0x7FFFFFFF);
00401000 xor ecx,ecx
00401002 mov eax,7FFFFFFFh
00401007 mov edx,40h
0040100C mul eax,edx
0040100E seto cl
00401011 neg ecx
00401013 or ecx,eax
00401015 push ecx
00401016 call operator new[] (401021h)
0040101B add esp,4
}
0040101E xor eax,eax
00401020 ret

Yes, the size requested is rounded to the maximum allocatable size, but is
this standard-compliant behavior? And if it is, how is client code notified
of the rounding?
 

Ian Collins

Angel said:
This is not an answer to the question what happens in the example you have
cut off.
What more is there to say other than "If the system can't supply the
requested size, new throws std::bad_alloc"? If the system had 128GB
free, new would succeed, otherwise it would fail.
 

Ian Collins

Angel Tsankov wrote:

[please don't snip attributions]
Bo Persson wrote:
Here is what one compiler does - catch the overflow and wrap it back to
numeric_limits<size_t>::max().

int main()
{
allocate(0x7FFFFFFF);
00401000 xor ecx,ecx
00401002 mov eax,7FFFFFFFh
00401007 mov edx,40h
0040100C mul eax,edx
0040100E seto cl
00401011 neg ecx
00401013 or ecx,eax
00401015 push ecx
00401016 call operator new[] (401021h)
0040101B add esp,4
}
0040101E xor eax,eax
00401020 ret

Yes, the size requested is rounded to the maximum allocatable size, but is
this standard-compliant behavior? And if it is, how is client code notified
of the rounding?
Your question has nothing to do with operator new() and everything to do
with integer overflow.

The reason some of us answered the way we did is probably because we are
used to systems where sizeof(int) == 4 and sizeof(size_t) == 8, so your
original code would simply have requested 128GB, not a lot on some systems.
 

Bo Persson

Angel said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?

For example:

struct S
{
char a[64];
};

S* allocate(int size)
{
return new S[size]; // What happens here?
}

int main()
{
allocate(0x7FFFFFFF);
}

Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.

I think, you are missing a twist that the OP has hidden within his
posting: the size of S is at least 64. The number of S objects
that he requests is close to numeric_limits<size_t>::max(). So
when new S[size] is translated into raw memory allocation, the
number of bytes (not the number of S objects) requested might
exceed numeric_limits<size_t>::max().

Thanks for pointing this out; I thought it would be obvious to
everyone. The following example might be a little bit less
confusing:
struct S
{
char a[64]; // Any size greater than 1 would do.
};

S* allocate(std::size_t size)
{
return new S[size]; // How many bytes of memory must the new operator allocate?
I think (based on my understanding of [5.3.4/12]) that in such a
case, the unsigned arithmetic will just silently overflow and you
end up allocating a probably unexpected amount of memory.

Here is what one compiler does - catch the overflow and wrap it
back to numeric_limits<size_t>::max().

int main()
{
allocate(0x7FFFFFFF);
00401000 xor ecx,ecx
00401002 mov eax,7FFFFFFFh
00401007 mov edx,40h
0040100C mul eax,edx
0040100E seto cl
00401011 neg ecx
00401013 or ecx,eax
00401015 push ecx
00401016 call operator new[] (401021h)
0040101B add esp,4
}
0040101E xor eax,eax
00401020 ret

Yes, the size requested is rounded to the maximum allocatable size,
but is this standard-compliant behavior? And if it is, how is
client code notified of the rounding?

Requesting a numeric_limits<size_t>::max() allocation size is pretty
much assured to fail with a std::bad_alloc exception.


Bo Persson
 

James Kanze

James said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}
Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.
I think, you are missing a twist that the OP has hidden within
his posting: the size of S is at least 64. The number of S
objects that he requests is close to
numeric_limits<size_t>::max().

It's not on the systems I usually use, but that's not the point.
So when new S[size] is translated into raw memory allocation,
the number of bytes (not the number of S objects) requested
might exceed numeric_limits<size_t>::max().

And? That's the implementation's problem, not mine. I don't
see anything in the standard which authorizes special behavior
in this case.
I think (based on my understanding of [5.3.4/12]) that in such
a case, the unsigned arithmetic will just silently overflow
and you end up allocating a probably unexpected amount of
memory.

Could you please point to something in §5.3.4/12 (or elsewhere)
that says anything about "unsigned arithmetic". I only have a
recent draft here, but it doesn't say anything about using
unsigned arithmetic, or that the rules of unsigned arithmetic
apply for this calculation, or even that there is a calculation.
(It is a bit vague, I'll admit, since it says "A new-expression
passes the amount of space requested to the allocation function
as the first argument of type std::size_t." It doesn't really say
what happens if the "amount of space" isn't representable in a
size_t. But since it's clear that the request can't be honored,
the only reasonable interpretation is that you get a bad_alloc.)
 

James Kanze

Angel Tsankov wrote:
Bo said:
Here is what one compiler does - catch the overflow and
wrap it back to numeric_limits<size_t>::max().
int main()
{
allocate(0x7FFFFFFF);
00401000 xor ecx,ecx
00401002 mov eax,7FFFFFFFh
00401007 mov edx,40h
0040100C mul eax,edx
0040100E seto cl
00401011 neg ecx
00401013 or ecx,eax
00401015 push ecx
00401016 call operator new[] (401021h)
0040101B add esp,4
}
0040101E xor eax,eax
00401020 ret
Yes, the size requested is rounded to the maximum
allocatable size, but is this standard-compliant behavior?

If the implementation can be sure that the call to operator
new[] will fail, it's probably the best solution. (This would
be the case, for example, if it really was impossible to
allocate that much memory.)

It doesn't have to be.
Your question has nothing to do with operator new() and
everything to do with integer overflow.

His question concerned operator new. Not unsigned integral
arithmetic.
The reason some of us answered the way we did is probably
because we are used to systems where sizeof(int) == 4 and
sizeof(size_t) == 8, so your original code would simply have
requested 32GB, not a lot on some systems.

Or because we take the standard literally.
 

James Kanze

Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated? For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}
Chances are pretty good that at some point, you get something
like:
void *block = ::operator new[](0x7FFFFFFF * 64);

There are a lot of implementations that do that. Luckily,
there's nothing in the standard which allows it.
 

James Kanze

Jerry Coffin <[email protected]> wrote:

[...]
The standard says that for too-large allocations
std::bad_alloc must be thrown. In the user code there is no
unsigned arithmetic done, thus no wraparound can occur. I
would say that if the implementation does not check for the
overflow and silently wraps the result, the implementation
does not conform to the standard. It is irrelevant whether the
implementation uses unsigned arithmetic internally, or e.g.
double.
I have not studied the standard in detail, so this is just my
opinion of how it should work.

I have studied the standard in some detail, and your analysis is
clearly correct. Whether this is actually what the authors
meant to say is another question, but it is clearly what the
standard says. It is also obviously how it should work, from a
quality of implementation point of view. Anything else more or
less makes array new unusable. (On the other hand: who cares?
In close to twenty years of C++ programming, I've yet to find a
use for array new.)
 

Kai-Uwe Bux

James said:
James said:
Does the C++ standard define what happens when the size
argument of void* operator new(size_t size) cannot represent
the total number of bytes to be allocated?
For example:
struct S
{
char a[64];
};
S* allocate(int size)
{
return new S[size]; // What happens here?
}
int main()
{
allocate(0x7FFFFFFF);
}
Supposing that all values in an int can be represented in a
size_t (i.e. that size_t is unsigned int or larger---very, very
probably), then you should either get the memory, or get a
bad_alloc exception (which you don't catch). That's according
to the standard; a lot of implementations seem to have bugs
here.
I think, you are missing a twist that the OP has hidden within
his posting: the size of S is at least 64. The number of S
objects that he requests is close to
numeric_limits<size_t>::max().

It's not on the systems I usually use, but that's not the point.
So when new S[size] is translated into raw memory allocation,
the number of bytes (not the number of S objects) requested
might exceed numeric_limits<size_t>::max().

And? That's the implementation's problem, not mine. I don't
see anything in the standard which authorizes special behavior
in this case.

The question is what behavior is "special". I do not see which behavior the
standard requires in this case.

I think (based on my understanding of [5.3.4/12]) that in such
a case, the unsigned arithmetic will just silently overflow
and you end up allocating a probably unexpected amount of
memory.

Could you please point to something in §5.3.4/12 (or elsewhere)
that says anything about "unsigned arithmetic".

I qualified my statement by "I think" simply because the standard is vague
to me. However, it says for instance

new T[5] results in a call of operator new[](sizeof(T)*5+x),

and operator new takes its argument as std::size_t. Now, whenever any
arithmetic type is converted to std::size_t, I would expect [4.7/2] to
apply, since size_t is unsigned. When the standard does not say that the
usual conversion rules do not apply in the evaluation of the expression

sizeof(T)*5+x

what am I to conclude?
I only have a
recent draft here, but it doesn't say anything about using
unsigned arithmetic, or that the rules of unsigned arithmetic
apply for this calculation, or even that there is a calculation.

It gives the formula above. It does not really matter whether you interpret

sizeof(T)*5+x

as unsigned arithmetic or as plain math. A conversion to std::size_t has to
happen at some point because of the signature of the allocation function.
If [4.7/2] is not meant to apply to that conversion, the standard should
say that somewhere.
(It is
a bit vague, I'll admit, since it says "A new-expression passes
the amount of space requested to the allocation function as the
first argument of type std::size_t." It doesn't really say
what happens if the "amount of space" isn't representable in a
size_t.

So you see: taken literally, the standard guarantees something that
cannot happen.
But since it's clear that the request can't be honored,
the only reasonable interpretation is that you get a bad_alloc.)

Hm, that is a mixture of common sense and wishful thinking :)

I agree that a bad_alloc is clearly what I would _want_ to get. I do not
see, however, how to argue from the wording of the standard that I _will_
get that.


Best

Kai-Uwe Bux
 

Jerry Coffin

[ ... ]
The standard says that for too-large allocations std::bad_alloc must be
thrown. In the user code there is no unsigned arithmetic done, thus no
wraparound can occur. I would say that if the implementation does not
check for the overflow and silently wraps the result, the implementation
does not conform to the standard. It is irrelevant whether the
implementation uses unsigned arithmetic internally, or e.g. double.

I have not studied the standard in detail, so this is just my opinion
of how it should work.

Though it's in a non-normative note, the standard says (§5.3.4/12):

new T[5] results in a call of operator new[](sizeof(T)*5+x)

Even though that's a note, I think it's going to be hard to say it's
_wrong_ for an implementation to do exactly what that says -- and if
sizeof(T) is the maximum value for size_t, the expression above will
clearly wrap around...
 
