May the size argument of operator new overflow?

Discussion in 'C++' started by Angel Tsankov, Jun 18, 2008.

  1. Hello!

    Does the C++ standard define what happens when the size argument of void*
    operator new(size_t size) cannot represent the total number of bytes to be
    allocated? For example:

    struct S
    {
        char a[64];
    };

    S* allocate(int size)
    {
        return new S[size]; // What happens here?
    }

    int main()
    {
        allocate(0x7FFFFFFF);
    }
    Angel Tsankov, Jun 18, 2008
    #1

  2. Angel Tsankov

    Ian Collins Guest

    Angel Tsankov wrote:
    > Hello!
    >
    > Does the C++ standard define what happens when the size argument of void*
    > operator new(size_t size) cannot represent the total number of bytes to be
    > allocated? For example:
    >

    size_t will always be wide enough to represent the maximum memory range
    on a given system.

    If the system can't supply the requested size, new throws std::bad_alloc.

    --
    Ian Collins.
    Ian Collins, Jun 18, 2008
    #2

  3. Angel Tsankov

    joseph cook Guest

    On Jun 18, 5:44 am, "Angel Tsankov" <-sofia.bg> wrote:
    > Hello!
    >
    > Does the C++ standard define what happens when the size argument of void*
    > operator new(size_t size) cannot represent the total number of bytes to be
    > allocated? For example:


    Yes. You cannot exceed numeric_limits<size_t>::max(). The same is
    true for array size.
    joseph cook, Jun 18, 2008
    #3
  4. > Hello!
    >
    > Does the C++ standard define what happens when the size argument of void*
    > operator new(size_t size) cannot represent the total number of bytes to be
    > allocated? For example:


    Yes. You cannot exceed numeric_limits<size_t>::max(). The same is
    true for array size.

    OK, but what happens in the example that you have cut off?
    Angel Tsankov, Jun 18, 2008
    #4
  5. >> Hello!
    >>
    >> Does the C++ standard define what happens when the size argument of void*
    >> operator new(size_t size) cannot represent the total number of bytes to
    >> be
    >> allocated? For example:
    >>

    > size_t will always be wide enough to represent the maximum memory range
    > on a given system.
    >
    > If the system can't supply the requested size, new throws std::bad_alloc.


    This does not answer the question of what happens in the example you
    cut off.
    Angel Tsankov, Jun 18, 2008
    #5
  6. Angel Tsankov

    Daniel T. Guest

    On Jun 18, 8:44 am, "Angel Tsankov" <-sofia.bg> wrote:

    > > If the system can't supply the requested size, new throws std::bad_alloc.

    >
    > This is not an answer to the question what happens in the example you have
    > cut off.


    Either the system will supply the requested size, or std::bad_alloc
    will be thrown. That is what happens in the example that was cut off.
    Daniel T., Jun 18, 2008
    #6
  7. Angel Tsankov

    James Kanze Guest

    On Jun 18, 11:44 am, "Angel Tsankov" <-sofia.bg> wrote:
    > Does the C++ standard define what happens when the size
    > argument of void* operator new(size_t size) cannot represent
    > the total number of bytes to be allocated?


    > For example:


    > struct S
    > {
    > char a[64];
    > };


    > S* allocate(int size)
    > {
    > return new S[size]; // What happens here?
    > }


    > int main()
    > {
    > allocate(0x7FFFFFFF);
    > }


    Supposing that all values in an int can be represented in a
    size_t (i.e. that size_t is unsigned int or larger---very, very
    probably), then you should either get the memory, or get a
    bad_alloc exception (which you don't catch). That's according
    to the standard; a lot of implementations seem to have bugs
    here.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, Jun 18, 2008
    #7
  8. Angel Tsankov

    Kai-Uwe Bux Guest

    James Kanze wrote:

    > On Jun 18, 11:44 am, "Angel Tsankov" <-sofia.bg> wrote:
    >> Does the C++ standard define what happens when the size
    >> argument of void* operator new(size_t size) cannot represent
    >> the total number of bytes to be allocated?

    >
    >> For example:

    >
    >> struct S
    >> {
    >> char a[64];
    >> };

    >
    >> S* allocate(int size)
    >> {
    >> return new S[size]; // What happens here?
    >> }

    >
    >> int main()
    >> {
    >> allocate(0x7FFFFFFF);
    >> }

    >
    > Supposing that all values in an int can be represented in a
    > size_t (i.e. that size_t is unsigned int or larger---very, very
    > probably), then you should either get the memory, or get a
    > bad_alloc exception (which you don't catch). That's according
    > to the standard; a lot of implementations seem to have bugs
    > here.


    I think you are missing a twist that the OP has hidden within his posting:
    the size of S is at least 64. The number of S objects that he requests is
    close to numeric_limits<size_t>::max(). So when new S[size] is translated
    into raw memory allocation, the number of bytes (not the number of S
    objects) requested might exceed numeric_limits<size_t>::max().

    I think (based on my understanding of [5.3.4/12]) that in such a case, the
    unsigned arithmetic will just silently overflow and you end up allocating a
    probably unexpected amount of memory.
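    The silent overflow is easy to reproduce in miniature. A sketch, with
    uint32_t standing in for a 32-bit size_t so the demo runs anywhere
    (the helper name is invented for illustration, not from the thread):

```cpp
#include <cstdint>

// Model the raw byte-count computation behind `new S[count]` when
// sizeof(S) == 64 on a hypothetical 32-bit size_t. The unsigned
// multiplication is silently reduced modulo 2^32.
std::uint32_t bytes_requested_32bit(std::uint32_t count) {
    return static_cast<std::uint32_t>(count * 64u);
}
```

    For the OP's count of 0x7FFFFFFF, the exact product 0x1FFFFFFFC0 does
    not fit in 32 bits, and the wrapped result is 0xFFFFFFC0.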


    Best

    Kai-Uwe Bux
    Kai-Uwe Bux, Jun 18, 2008
    #8
  9. Angel Tsankov

    Bo Persson Guest

    Kai-Uwe Bux wrote:
    > James Kanze wrote:
    >
    >> On Jun 18, 11:44 am, "Angel Tsankov" <-sofia.bg>
    >> wrote:
    >>> Does the C++ standard define what happens when the size
    >>> argument of void* operator new(size_t size) cannot represent
    >>> the total number of bytes to be allocated?

    >>
    >>> For example:

    >>
    >>> struct S
    >>> {
    >>> char a[64];
    >>> };

    >>
    >>> S* allocate(int size)
    >>> {
    >>> return new S[size]; // What happens here?
    >>> }

    >>
    >>> int main()
    >>> {
    >>> allocate(0x7FFFFFFF);
    >>> }

    >>
    >> Supposing that all values in an int can be represented in a
    >> size_t (i.e. that size_t is unsigned int or larger---very, very
    >> probably), then you should either get the memory, or get a
    >> bad_alloc exception (which you don't catch). That's according
    >> to the standard; a lot of implementations seem to have bugs
    >> here.

    >
    > I think, you are missing a twist that the OP has hidden within his
    > posting: the size of S is at least 64. The number of S objects that
    > he requests is close to numeric_limits<size_t>::max(). So when new
    > S[size] is translated into raw memory allocation, the number of
    > bytes (not the number of S objects) requested might exceed
    > numeric_limits<size_t>::max().
    >
    > I think (based on my understanding of [5.3.4/12]) that in such a
    > case, the unsigned arithmetic will just silently overflow and you
    > end up allocating a probably unexpected amount of memory.


    Here is what one compiler does - it catches the overflow and saturates
    the result to numeric_limits<size_t>::max().

    int main()
    {
    allocate(0x7FFFFFFF);
    00401000 xor ecx,ecx
    00401002 mov eax,7FFFFFFFh
    00401007 mov edx,40h
    0040100C mul eax,edx
    0040100E seto cl
    00401011 neg ecx
    00401013 or ecx,eax
    00401015 push ecx
    00401016 call operator new[] (401021h)
    0040101B add esp,4
    }
    0040101E xor eax,eax
    00401020 ret


    Bo Persson
    Bo Persson, Jun 18, 2008
    #9
  10. Angel Tsankov

    Jerry Coffin Guest

    In article <g3alej$tb$>, -sofia.bg says...
    > Hello!
    >
    > Does the C++ standard define what happens when the size argument of void*
    > operator new(size_t size) cannot represent the total number of bytes to be
    > allocated? For example:
    >
    > struct S
    > {
    > char a[64];
    > };
    >
    > S* allocate(int size)
    > {
    > return new S[size]; // What happens here?
    > }
    >
    > int main()
    > {
    > allocate(0x7FFFFFFF);
    > }


    Chances are pretty good that at some point, you get something like:

    void *block = ::operator new[](0x7FFFFFFF * 64);

    On an implementation with a 32-bit size_t, that'll wrap around, and
    it'll attempt to allocate 0xffffffc0 bytes instead of 0x1fffffffc0
    bytes. Chances are that allocation will immediately fail, since that
    number is _barely_ short of 4 gigabytes, and no 32-bit system I know
    of will have that much contiguous address space available.

    If, OTOH, you picked numbers where the wraparound produced a relatively
    small number, chances are that the allocation would succeed, but when
    you attempted to access what appeared to be successfully allocated
    memory, you'd quickly go past the end of the real allocation, and get
    undefined behavior.
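    A caller can defend against exactly that by checking the
    multiplication before it happens. A minimal sketch under the thread's
    assumptions (sizeof(S) == 64); checked_allocate is a hypothetical
    helper, not a standard facility:

```cpp
#include <cstddef>
#include <limits>
#include <new>

struct S { char a[64]; };

// Hypothetical guard: refuse counts whose byte total cannot be
// represented in size_t, instead of letting the multiplication wrap.
S* checked_allocate(std::size_t count) {
    if (count > std::numeric_limits<std::size_t>::max() / sizeof(S))
        throw std::bad_alloc();  // request is unsatisfiable anyway
    return new S[count];
}
```

    With this guard, an unsatisfiable request turns into a bad_alloc up
    front rather than a wrapped size and a too-small allocation.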

    --
    Later,
    Jerry.

    The universe is a figment of its own imagination.
    Jerry Coffin, Jun 18, 2008
    #10
  11. >>>> Does the C++ standard define what happens when the size
    >>>> argument of void* operator new(size_t size) cannot represent
    >>>> the total number of bytes to be allocated?
    >>>
    >>>> For example:
    >>>
    >>>> struct S
    >>>> {
    >>>> char a[64];
    >>>> };
    >>>
    >>>> S* allocate(int size)
    >>>> {
    >>>> return new S[size]; // What happens here?
    >>>> }
    >>>
    >>>> int main()
    >>>> {
    >>>> allocate(0x7FFFFFFF);
    >>>> }
    >>>
    >>> Supposing that all values in an int can be represented in a
    >>> size_t (i.e. that size_t is unsigned int or larger---very, very
    >>> probably), then you should either get the memory, or get a
    >>> bad_alloc exception (which you don't catch). That's according
    >>> to the standard; a lot of implementations seem to have bugs
    >>> here.

    >>
    >> I think, you are missing a twist that the OP has hidden within his
    >> posting: the size of S is at least 64. The number of S objects that
    >> he requests is close to numeric_limits<size_t>::max(). So when new
    >> S[size] is translated into raw memory allocation, the number of
    >> bytes (not the number of S objects) requested might exceed
    >> numeric_limits<size_t>::max().


    Thanks for pointing this out; I thought it would be obvious to everyone.
    The following example might be a little bit less confusing:

    struct S
    {
        char a[64]; // Any size greater than 1 would do.
    };

    S* allocate(std::size_t size)
    {
        // How many bytes of memory must operator new[] allocate
        // if size equals std::numeric_limits<size_t>::max()?
        return new S[size];
    }

    >> I think (based on my understanding of [5.3.4/12]) that in such a
    >> case, the unsigned arithmetic will just silently overflow and you
    >> end up allocating a probably unexpected amount of memory.

    >
    > Here is what one compiler does - catch the overflow and wrap it back to
    > numeric_limits<size_t>::max().
    >
    > int main()
    > {
    > allocate(0x7FFFFFFF);
    > 00401000 xor ecx,ecx
    > 00401002 mov eax,7FFFFFFFh
    > 00401007 mov edx,40h
    > 0040100C mul eax,edx
    > 0040100E seto cl
    > 00401011 neg ecx
    > 00401013 or ecx,eax
    > 00401015 push ecx
    > 00401016 call operator new[] (401021h)
    > 0040101B add esp,4
    > }
    > 0040101E xor eax,eax
    > 00401020 ret


    Yes, the size requested is rounded to the maximum allocatable size, but is
    this standard-compliant behavior? And if it is, how is client code notified
    of the rounding?
    Angel Tsankov, Jun 18, 2008
    #11
  12. Angel Tsankov

    Ian Collins Guest

    Angel Tsankov wrote:
    >>> Hello!
    >>>
    >>> Does the C++ standard define what happens when the size argument of void*
    >>> operator new(size_t size) cannot represent the total number of bytes to
    >>> be
    >>> allocated? For example:
    >>>

    >> size_t will always be wide enough to represent the maximum memory range
    >> on a given system.
    >>
    >> If the system can't supply the requested size, new throws std::bad_alloc.

    >
    > This is not an answer to the question what happens in the example you have
    > cut off.
    >

    What more is there to say other than "If the system can't supply the
    requested size, new throws std::bad_alloc"? If the system had 128GB
    free, new would succeed, otherwise it would fail.

    --
    Ian Collins.
    Ian Collins, Jun 18, 2008
    #12
  13. Angel Tsankov

    Ian Collins Guest

    Angel Tsankov wrote:

    [please don't snip attributions]

    > Bo Persson wrote:


    >> Here is what one compiler does - catch the overflow and wrap it back to
    >> numeric_limits<size_t>::max().
    >>
    >> int main()
    >> {
    >> allocate(0x7FFFFFFF);
    >> 00401000 xor ecx,ecx
    >> 00401002 mov eax,7FFFFFFFh
    >> 00401007 mov edx,40h
    >> 0040100C mul eax,edx
    >> 0040100E seto cl
    >> 00401011 neg ecx
    >> 00401013 or ecx,eax
    >> 00401015 push ecx
    >> 00401016 call operator new[] (401021h)
    >> 0040101B add esp,4
    >> }
    >> 0040101E xor eax,eax
    >> 00401020 ret

    >
    > Yes, the size requested is rounded to the maximum allocatable size, but is
    > this standard-compliant behavior? And if it is, how is client code notified
    > of the rounding?
    >

    Your question has nothing to do with operator new() and everything to do
    with integer overflow.

    The reason some of us answered the way we did is probably because we are
    used to systems where sizeof(int) == 4 and sizeof(size_t) == 8, so your
    original code would simply have requested 128GB, not a lot on some systems.

    --
    Ian Collins.
    Ian Collins, Jun 18, 2008
    #13
  14. Angel Tsankov

    Bo Persson Guest

    Angel Tsankov wrote:
    >>>>> Does the C++ standard define what happens when the size
    >>>>> argument of void* operator new(size_t size) cannot represent
    >>>>> the total number of bytes to be allocated?
    >>>>
    >>>>> For example:
    >>>>
    >>>>> struct S
    >>>>> {
    >>>>> char a[64];
    >>>>> };
    >>>>
    >>>>> S* allocate(int size)
    >>>>> {
    >>>>> return new S[size]; // What happens here?
    >>>>> }
    >>>>
    >>>>> int main()
    >>>>> {
    >>>>> allocate(0x7FFFFFFF);
    >>>>> }
    >>>>
    >>>> Supposing that all values in an int can be represented in a
    >>>> size_t (i.e. that size_t is unsigned int or larger---very, very
    >>>> probably), then you should either get the memory, or get a
    >>>> bad_alloc exception (which you don't catch). That's according
    >>>> to the standard; a lot of implementations seem to have bugs
    >>>> here.
    >>>
    >>> I think, you are missing a twist that the OP has hidden within his
    >>> posting: the size of S is at least 64. The number of S objects
    >>> that he requests is close to numeric_limits<size_t>::max(). So
    >>> when new S[size] is translated into raw memory allocation, the
    >>> number of bytes (not the number of S objects) requested might
    >>> exceed numeric_limits<size_t>::max().

    >
    > Thanks for pointing this out, I thought it would be
    > everyone. The following example might be a little bit less
    > confusing:
    > struct S
    > {
    > char a[64]; // Any size greater than 1 would do.
    > };
    >
    > S* allocate(std::size_t size)
    > {
    > return new S[size]; // How many bytes of memory must the new
    > operator allocate if size equals std::numeric_limits<size_t>::max()?
    > }
    >
    >>> I think (based on my understanding of [5.3.4/12]) that in such a
    >>> case, the unsigned arithmetic will just silently overflow and you
    >>> end up allocating a probably unexpected amount of memory.

    >>
    >> Here is what one compiler does - catch the overflow and wrap it
    >> back to numeric_limits<size_t>::max().
    >>
    >> int main()
    >> {
    >> allocate(0x7FFFFFFF);
    >> 00401000 xor ecx,ecx
    >> 00401002 mov eax,7FFFFFFFh
    >> 00401007 mov edx,40h
    >> 0040100C mul eax,edx
    >> 0040100E seto cl
    >> 00401011 neg ecx
    >> 00401013 or ecx,eax
    >> 00401015 push ecx
    >> 00401016 call operator new[] (401021h)
    >> 0040101B add esp,4
    >> }
    >> 0040101E xor eax,eax
    >> 00401020 ret

    >
    > Yes, the size requested is rounded to the maximum allocatable size,
    > but is this standard-compliant behavior? And if it is, how is
    > client code notified of the rounding?


    Requesting a numeric_limits<size_t>::max() allocation size is pretty
    much assured to fail with a std::bad_alloc exception.


    Bo Persson
    Bo Persson, Jun 18, 2008
    #14
  15. Angel Tsankov

    James Kanze Guest

    On Jun 18, 5:40 pm, Kai-Uwe Bux <> wrote:
    > James Kanze wrote:
    > > On Jun 18, 11:44 am, "Angel Tsankov" <-sofia.bg> wrote:
    > >> Does the C++ standard define what happens when the size
    > >> argument of void* operator new(size_t size) cannot represent
    > >> the total number of bytes to be allocated?


    > >> For example:


    > >> struct S
    > >> {
    > >> char a[64];
    > >> };


    > >> S* allocate(int size)
    > >> {
    > >> return new S[size]; // What happens here?
    > >> }


    > >> int main()
    > >> {
    > >> allocate(0x7FFFFFFF);
    > >> }


    > > Supposing that all values in an int can be represented in a
    > > size_t (i.e. that size_t is unsigned int or larger---very, very
    > > probably), then you should either get the memory, or get a
    > > bad_alloc exception (which you don't catch). That's according
    > > to the standard; a lot of implementations seem to have bugs
    > > here.


    > I think, you are missing a twist that the OP has hidden within
    > his posting: the size of S is at least 64. The number of S
    > objects that he requests is close to
    > numeric_limits<size_t>::max().


    It's not on the systems I usually use, but that's not the point.

    > So when new S[size] is translated into raw memory allocation,
    > the number of bytes (not the number of S objects) requested
    > might exceed numeric_limits<size_t>::max().


    And? That's the implementation's problem, not mine. I don't
    see anything in the standard which authorizes special behavior
    in this case.

    > I think (based on my understanding of [5.3.4/12]) that in such
    > a case, the unsigned arithmetic will just silently overflow
    > and you end up allocating a probably unexpected amount of
    > memory.


    Could you please point to something in §5.3.4/12 (or elsewhere)
    that says anything about "unsigned arithmetic". I only have a
    recent draft here, but it doesn't say anything about using
    unsigned arithmetic, or that the rules of unsigned arithmetic
    apply for this calculation, or even that there is a calculation.
    (It is a bit vague, I'll admit, since it says "A new-expression
    passes the amount of space requested to the allocation function
    as the first argument of type std::size_t." It doesn't really say
    what happens if the "amount of space" isn't representable in a
    size_t. But since it's clear that the request can't be honored,
    the only reasonable interpretation is that you get a bad_alloc.)

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, Jun 18, 2008
    #15
  16. Angel Tsankov

    James Kanze Guest

    On Jun 18, 9:16 pm, Ian Collins <> wrote:
    > Angel Tsankov wrote:


    > > Bo Persson wrote:
    > >> Here is what one compiler does - catch the overflow and
    > >> wrap it back to numeric_limits<size_t>::max().


    > >> int main()
    > >> {
    > >> allocate(0x7FFFFFFF);
    > >> 00401000 xor ecx,ecx
    > >> 00401002 mov eax,7FFFFFFFh
    > >> 00401007 mov edx,40h
    > >> 0040100C mul eax,edx
    > >> 0040100E seto cl
    > >> 00401011 neg ecx
    > >> 00401013 or ecx,eax
    > >> 00401015 push ecx
    > >> 00401016 call operator new[] (401021h)
    > >> 0040101B add esp,4
    > >> }
    > >> 0040101E xor eax,eax
    > >> 00401020 ret


    > > Yes, the size requested is rounded to the maximum
    > > allocatable size, but is this standard-compliant behavior?


    If the implementation can be sure that the call to operator
    new[] will fail, it's probably the best solution. (This would
    be the case, for example, if it really was impossible to
    allocate that much memory.)

    > > And if it is, how is client code notified of the rounding?


    It doesn't have to be.

    > Your question has nothing to do with operator new() and
    > everything to do with integer overflow.


    His question concerned operator new. Not unsigned integral
    arithmetic.

    > The reason some of us answered the way we did is probably
    > because we are used to systems where sizeof(int) == 4 and
    > sizeof(size_t) == 8, so your original code would simply have
    > requested 32GB, not a lot on some systems.


    Or because we take the standard literally.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, Jun 18, 2008
    #16
  17. Angel Tsankov

    James Kanze Guest

    On Jun 18, 7:53 pm, Jerry Coffin <> wrote:
    > In article <g3alej$>, -sofia.bg says...
    > > Does the C++ standard define what happens when the size
    > > argument of void* operator new(size_t size) cannot represent
    > > the total number of bytes to be allocated? For example:


    > > struct S
    > > {
    > > char a[64];
    > > };


    > > S* allocate(int size)
    > > {
    > > return new S[size]; // What happens here?
    > > }


    > > int main()
    > > {
    > > allocate(0x7FFFFFFF);
    > > }


    > Chances are pretty good that at some point, you get something
    > like:


    > void *block = ::operator new[](0x7FFFFFFF*64);


    There are a lot of implementations that do that. Luckily,
    there's nothing in the standard which allows it.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, Jun 18, 2008
    #17
  18. Angel Tsankov

    James Kanze Guest

    On Jun 18, 9:24 pm, Paavo Helde <> wrote:
    > Jerry Coffin <> kirjutas:


    [...]
    > The standard says that for too large allocations
    > std::bad_alloc must be thrown. In the user code there is no
    > unsigned arithmetic done, thus no wraparound can occur. I
    > would say that if the implementation does not check for the
    > overflow and silently wraps the result, the implementation
    > does not conform to the standard. It is irrelevant if the
    > implementation uses unsigned arithmetics inside, or e.g.
    > double.


    > I have not studied the standard in detail, so this is just my
    > opinion how it should work.


    I have studied the standard in some detail, and your analysis is
    clearly correct. Whether this is actually what the authors
    meant to say is another question, but it is clearly what the
    standard says. It is also obviously how it should work, from a
    quality of implementation point of view. Anything else more or
    less makes array new unusable. (On the other hand: who cares?
    In close to twenty years of C++ programming, I've yet to find a
    use for array new.)

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, Jun 18, 2008
    #18
  19. Angel Tsankov

    Kai-Uwe Bux Guest

    James Kanze wrote:

    > On Jun 18, 5:40 pm, Kai-Uwe Bux <> wrote:
    >> James Kanze wrote:
    >> > On Jun 18, 11:44 am, "Angel Tsankov" <-sofia.bg> wrote:
    >> >> Does the C++ standard define what happens when the size
    >> >> argument of void* operator new(size_t size) cannot represent
    >> >> the total number of bytes to be allocated?

    >
    >> >> For example:

    >
    >> >> struct S
    >> >> {
    >> >> char a[64];
    >> >> };

    >
    >> >> S* allocate(int size)
    >> >> {
    >> >> return new S[size]; // What happens here?
    >> >> }

    >
    >> >> int main()
    >> >> {
    >> >> allocate(0x7FFFFFFF);
    >> >> }

    >
    >> > Supposing that all values in an int can be represented in a
    >> > size_t (i.e. that size_t is unsigned int or larger---very, very
    >> > probably), then you should either get the memory, or get a
    >> > bad_alloc exception (which you don't catch). That's according
    >> > to the standard; a lot of implementations seem to have bugs
    >> > here.

    >
    >> I think, you are missing a twist that the OP has hidden within
    >> his posting: the size of S is at least 64. The number of S
    >> objects that he requests is close to
    >> numeric_limits<size_t>::max().

    >
    > It's not on the systems I usually use, but that's not the point.
    >
    >> So when new S[size] is translated into raw memory allocation,
    >> the number of bytes (not the number of S objects) requested
    >> might exceed numeric_limits<size_t>::max().

    >
    > And? That's the implementation's problem, not mine. I don't
    > see anything in the standard which authorizes special behavior
    > in this case.


    The question is what behavior is "special". I do not see which behavior the
    standard requires in this case.


    >> I think (based on my understanding of [5.3.4/12]) that in such
    >> a case, the unsigned arithmetic will just silently overflow
    >> and you end up allocating a probably unexpected amount of
    >> memory.

    >
    > Could you please point to something in §5.3.4/12 (or elsewhere)
    > that says anything about "unsigned arithmetic".


    I qualified my statement by "I think" simply because the standard is vague
    to me. However, it says for instance

    new T[5] results in a call of operator new[](sizeof(T)*5+x),

    and operator new takes its argument as std::size_t. Now, whenever any
    arithmetic type is converted to std::size_t, I would expect [4.7/2] to
    apply since size_t is unsigned. When the standard does not say that usual
    conversion rules do not apply in the evaluation of the expression

    sizeof(T)*5+x

    what am I to conclude?

    > I only have a
    > recent draft here, but it doesn't say anything about using
    > unsigned arithmetic, or that the rules of unsigned arithmetic
    > apply for this calcule, or even that there is a calcule.


    It gives the formula above. It does not really matter whether you interpret

    sizeof(T)*5+x

    as unsigned arithmetic or as plain math. A conversion to std::size_t has to
    happen at some point because of the signature of the allocation function.
    If [4.7/2] is not meant to apply to that conversion, the standard should
    say that somewhere.
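    That conversion rule is directly observable: per [4.7/2], converting
    to an unsigned type keeps the value modulo 2^N. A sketch, again
    modelling a 32-bit size_t with uint32_t (to_size_t_32 is an
    illustrative name, not a real API):

```cpp
#include <cstdint>

// [4.7/2] in action: conversion to an unsigned type is reduction
// modulo 2^N, so a byte count too large for a 32-bit size_t keeps
// only its low 32 bits.
std::uint32_t to_size_t_32(unsigned long long exact_bytes) {
    return static_cast<std::uint32_t>(exact_bytes);
}
```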

    > (It is
    > a bit vague, I'll admit, since it says "A new-expression passes
    > the amount of space requested to the allocation function as the
    > first argument of type std::size_t." It doesn't really say
    > what happens if the "amount of space" isn't representable in a
    > size_t.


    So you see: taken literally, the standard guarantees something that
    cannot happen.

    > But since it's clear that the request can't be honored,
    > the only reasonable interpretation is that you get a bad_alloc.)


    Hm, that is a mixture of common sense and wishful thinking :)

    I agree that a bad_alloc is clearly what I would _want_ to get. I do not
    see, however, how to argue from the wording of the standard that I _will_
    get that.


    Best

    Kai-Uwe Bux
    Kai-Uwe Bux, Jun 18, 2008
    #19
  20. Angel Tsankov

    Jerry Coffin Guest

    In article <Xns9AC1E3EAF7668nobodyebiee@216.196.97.131>,
    says...

    [ ... ]

    > The standard says that for too large allocations std::bad_alloc must be
    > thrown. In the user code there is no unsigned arithmetic done, thus no
    > wraparound can occur. I would say that if the implementation does not
    > check for the overflow and silently wraps the result, the implementation
    > does not conform to the standard. It is irrelevant if the implementation
    > uses unsigned arithmetics inside, or e.g. double.
    >
    > I have not studied the standard in detail, so this is just my opinion how
    > it should work.


    Though it's in a non-normative note, the standard says (§5.3.4/12):

    new T[5] results in a call of operator new[](sizeof(T)*5+x)

    Even though that's a note, I think it's going to be hard to say it's
    _wrong_ for an implementation to do exactly what that says -- and if
    sizeof(T) is the maximum value for size_t, the expression above will
    clearly wrap around...

    --
    Later,
    Jerry.

    The universe is a figment of its own imagination.
    Jerry Coffin, Jun 19, 2008
    #20