Re: Subtracting two big unsigned numbers

Discussion in 'C Programming' started by Thomas Richter, Jun 9, 2011.

  1. On 09.06.2011 22:10, pozz wrote:
    > What happens when I sum or subtract two unsigned integers?


    The result is computed modulo 2^N.

    > I think the
    > operands are considered unsigned, because they are of the same type.


    Yes.

    > When I sum two unsigned integers the only problem could be an overflow.


    Not in the sense that the result is undefined. The result will wrap
    around in a well-defined way: unsigned integers implement modulo
    arithmetic. Overflow happens with signed arithmetic, and what happens
    there is up to the implementation (including possibly a trap).

    > When I subtract a lower from a bigger number, I won't have any problem.
    >
    > The problem is when I subtract a bigger from a lower number. If they are
    > "small" numbers (lower or equal than INT_MAX), I can cast to (signed
    > int). What happens when I subtract INT_MAX+1 from INT_MAX+2?


    INT_MAX is not an unsigned number. You can first convert it to an
    unsigned number, and then subtract. In that case, the result is +1,
    always, and defined to be +1 for any conforming compiler. Note that this
    is not guaranteed for signed integers.

    > unsigned int x = INT_MAX + 2U;
    > unsigned int y = INT_MAX + 1U;
    > int diff = y - x;


    This is not what you wrote above. Here you subtract x, which is
    INT_MAX converted to unsigned plus 2, from y, which is INT_MAX plus 1.
    The result is the largest possible unsigned number, UINT_MAX. You then
    convert it to a signed type that cannot represent this value. What
    happens then is not defined by the ANSI standard, but rather
    implementation defined. In your specific case, with your specific
    compiler, two's complement arithmetic is used and the result is
    interpreted as a negative number modulo a power of two. But this is,
    as said, not guaranteed by ANSI C, only (one hopes) by your compiler.

    > printf("diff=%d\n", diff);
    >
    > I tried with one compiler and I obtain the correct result: diff=-1.
    > Is this a standard behaviour or is it implementation defined/undefined
    > behaviour?


    The conversion of y - x to int is implementation defined. That
    x - y == 1U and y - x == UINT_MAX is defined by the standard.

    > (y - x) is a binary expression between two unsigned variables, so they
    > shouldn't be changed into signed variables.


    So why do you convert it to a signed variable in the first place? (-;
    You are quite correct: the difference is indeed an unsigned int.

    > So the result should be
    > always unsigned... so what happens to -1?


    A conversion of a valid unsigned int value that is not representable
    by signed int. It is up to the implementation to define what happens
    then. On a machine using sign-magnitude representation for ints, the
    result might well be the smallest (largest in absolute value) negative
    number representable by int, not -1.

    unsigned int diff = y - x;

    gives the right result, namely -1 + 2^N for a suitable value of N,
    where N is, of course, defined by the implementation. However, *that*
    you have modulo arithmetic here is indeed ensured by ANSI C.

    HTH,
    Thomas
    Thomas Richter, Jun 9, 2011
    #1

  2. James Kuyper (Guest)

    On 06/09/2011 05:07 PM, pozz wrote:
    ....
    > This problem happened when I tried to write a simple timer driver for an
    > embedded platform. I have a volatile unsigned int variable that
    > increments every clock tick (inside an interrupt):
    >
    > volatile unsigned int ticks;
    >
    > ticks is a global variable that goes from 0x0000 to 0xFFFF and
    > wrap-around again to 0x0000 (on 16 bit platforms).
    > If I want to wait for 1000 clock ticks, starting from now, I can do this:
    >
    > extern volatile unsigned int ticks;
    > unsigned int timer = ticks + 1000;
    > while (ticks < timer) {
    > /* wait... */
    > }
    >
    > Apparently this works, but it doesn't. Suppose ticks is 65000, so timer
    > is 66000 - 65536 = 464. Then even immediately ticks is greater than
    > timer and the wait isn't performed as desired.
    >
    > The solution could be:
    >
    > while ((int)(ticks - timer) < 0) {
    > /* wait... */
    > }
    >
    > Considering the example above, 65000(ticks) - 464(timer) = 64536. This
    > value is converted in an implementation defined way to a signed int, but
    > usually it will be 64536 - 65536 = -1000. This value is negative and the
    > wait is performed as desired.
    > As soon as ticks will be 464, the result of the difference will be 0 and
    > the while test condition will be false (the waiting period will be expired).
    >
    > So this approach solves my initial problem, but it is implementation
    > defined (even if I think it'll work on several machines).
    > Do you suggest a better approach without implementation defined code?
    >
    > Of course, the maximum allowable waiting period will be 32767.
    >
    > I'm thinking about this:
    >
    > while (ticks - timer > (unsigned)INT_MAX) {
    > /* wait... */
    > }
    >
    > What do you think?


    This is an improvement, in that (unsigned)INT_MAX always has a
    well-defined value for any given implementation; but that value could,
    in principle, differ from one implementation to another. In practice, of
    course, if UINT_MAX is 0xFFFF, then INT_MAX is almost certainly going to
    be 0x7FFF. [1] However, what you want to do has nothing to do with the
    properties of signed int, so you shouldn't be expressing it in terms of
    INT_MAX, even if that connection were perfectly guaranteed.

    The key point to consider is why the timer wraps around at 0xFFFF. That
    happens to be UINT_MAX on the 16-bit implementation you're describing.
    Would it be inappropriate to port this code to an implementation where
    UINT_MAX was larger than that? Then I'd recommend adding the following:

    #if UINT_MAX > 0xFFFF
    #error Your explanation of why this code should not be ported.
    #endif

    If it would be appropriate to port this code to an implementation where
    UINT_MAX > 0xFFFF, do you still want the timer to wrap around at 0xFFFF?
    In that case, you should use

    while (((ticks - timer) & 0xFFFF) > 0x7FFF)

    (Note the parentheses around the mask: & binds more loosely than > in
    C, so without them the expression would parse as
    (ticks - timer) & (0xFFFF > 0x7FFF).) That may look wasteful, but any
    decent compiler will optimize the mask operation away on platforms
    where UINT_MAX == 0xFFFF.

    Alternatively, would you want the calculation to wrap around at
    UINT_MAX, even if UINT_MAX > 0xFFFF? In that case, you should use

    while((ticks - timer) > UINT_MAX/2)

    [1] In principle, 'int' could be a 17-bit type, and the sign bit of
    'int' could be a padding bit for unsigned int, but it's exceedingly
    unlikely that such an implementation would ever be created for any
    reason other than to prove that principle.
    --
    James Kuyper
    James Kuyper, Jun 10, 2011
    #2

  3. James Kuyper (Guest)

    On 06/10/2011 04:01 PM, pozz wrote:
    > Il 10/06/2011 13:04, James Kuyper ha scritto:

    ....
    >> then you should use
    >>
    >> while((ticks - timer)> UINT_MAX/2)

    >
    > This is somewhat similar to my solution, except you use UINT_MAX/2 while
    > I have used INT_MAX. I thought they were the same value.


    They usually are, but they're not required to be. The only relevant
    connection between those two values that is imposed by the C standard is
    that UINT_MAX >= INT_MAX.

    How would you want your code to behave if compiled by an implementation
    where UINT_MAX/2 and INT_MAX are not the same? You don't have to worry
    about that question if you don't want to; it's an extremely implausible
    possibility. But if that possibility were to come up, I think that
    UINT_MAX/2 would be more appropriate than INT_MAX.
    --
    James Kuyper
    James Kuyper, Jun 11, 2011
    #3
