All C guarantees here is "at least" 32 bits, but here it does
not really matter.
You could write these more easily this way:
i = 65535u + 3u;
Actually, this is potentially quite different.
If ANSI/ISO C used the *correct* rules (according to me)
it would be precisely the same, but we are stuck with quite
bogus widening rules due to a mistaken decision in the 1980s:
"when a narrow unsigned integer type widens, the resulting
type is signed if all the unsigned values fit, otherwise
it is unsigned".
In this particular case, unsigned short widens to either
unsigned int or signed int. Which one we get depends on the
properties of the implementation. This is a really dumb idea,
made in an attempt to be "less surprising" than the "right"
way ("narrow unsigned widens to unsigned"), that actually
turns out to be *more* surprising. But again we are stuck
with the wrong decision -- so let me define it.
What you must do is look in <limits.h> (perhaps by writing a
small C program, since the header may not exist) and compare
the values of USHRT_MAX and INT_MAX. One of the following two
cases will necessarily hold (a small checking program appears
after the two cases):
a) USHRT_MAX > INT_MAX.
This occurs on, e.g., the 16-bit PDP-11 and old 16-bit
MS-DOS C compilers. Here USHRT_MAX is 65535 while INT_MAX
is 32767.
b) USHRT_MAX <= INT_MAX.
This occurs on, e.g., today's 16-bit-short 32-bit-int C
compilers. Here USHRT_MAX is 65535 while INT_MAX is
2147483647.
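Here is a minimal sketch of such a checking program, assuming an
ANSI compiler that actually provides <limits.h> (on an older
compiler you may have to find the values some other way):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("USHRT_MAX = %lu, INT_MAX = %d\n",
        (unsigned long)USHRT_MAX, INT_MAX);
    if (USHRT_MAX > INT_MAX)
        printf("case (a): unsigned short widens to unsigned int\n");
    else
        printf("case (b): unsigned short widens to signed int\n");
    return 0;
}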
In case (a), an unsigned short expression -- no matter what its
actual value is -- that appears in an arithmetic expression is
widened to unsigned int. Thus (unsigned short)65535 is
identical to (unsigned int)65535 or 65535U.
In case (b), however, an unsigned short -- no matter what its actual
value is -- is widened to a *signed* int. Thus (unsigned short)65535
is identical to (int)65535 or 65535.
If we have two "unsigned short"s, with values 65535 and 3
respectively, and go to add them, we again have cases (a) and (b).
In case (a), the sum is 65535U + 3U, which has type unsigned int
and value 2. In case (b), the sum is 65535 + 3, which has type
signed int and value 65538.
In either case, when the final value is stored back into an unsigned
short, it is reduced mod (USHRT_MAX+1), so that i becomes 2. The
place where this becomes a problem is not when we stuff the result
back into an unsigned variable, but rather when we compare it in
what the original 1989 C rationale called a "questionably signed"
expression.
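Here is a short demonstration (the comments note which intermediate
value each case produces):

#include <stdio.h>

int main(void)
{
    unsigned short us1 = 65535, us2 = 3;
    unsigned short i;

    /* case (a): 65535U + 3U is unsigned int, already wrapped to 2;
       case (b): 65535 + 3 is signed int 65538, reduced mod 65536
       when stored into the unsigned short */
    i = us1 + us2;
    printf("i = %u\n", (unsigned int)i);    /* i = 2 either way */
    return 0;
}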
Suppose we have the following code fragment:
unsigned short us = 65535;
int i = -1;

if (us > i)
    printf("65535 > -1\n");
else
    printf("65535 <= -1\n");
According to ANSI C's ridiculous rules (which we must obey anyway),
we decide whether this comparison uses "unsigned int" or "signed
int" based on whether USHRT_MAX exceeds INT_MAX. Once again, we
have the two cases:
case (a), USHRT_MAX > INT_MAX (PDP-11): "us" widens to an
unsigned int, value 65535U; i is converted to an unsigned int,
value 65535U. 65535U > 65535U is false and we print
"65535 <= -1". This is, supposedly, "surprising" -- but
it happens!
case (b), USHRT_MAX <= INT_MAX (VAX, etc.): "us" widens to
a signed int, value 65535; i remains signed int, value
-1. 65535 > -1 is true and we print "65535 > -1". This
is supposedly "not surprising" (which is probably true),
but in fact it is only SOMETIMES true.
As far as I am concerned, it is *much* better to be "predictably
surprising" than "unpredictably surprising based on the relative
values of USHRT_MAX and INT_MAX". The reason is that, while C
programmers do get surprised, they get surprised *once*, the *first*
time they mix signed and unsigned this way. This gives them the
opportunity to learn that the results are surprising; from then
on, they have no excuse to be surprised. Moreover, the logic
is trivial to follow: "unsigned widens to unsigned" means "put
an unsigned into an expression and it takes over."
Instead, we have a language where the code "works as expected" --
until it is moved to a machine where case (a) holds instead of case
(b). Programmers learn that mixing signed and unsigned is harmless
and "never surprises", only to find someday that, no, the language
is considerably more perverse than that. The logic is difficult
as well: "unsigned takes over except when it doesn't, based on the
relative values of the corresponding MAXes."
> It's difficult to produce undefined behavior with unsigned values.

As long as you stick with unsigned int or unsigned long, anyway,
so that the broken widening rules do not trick you into accidentally
using signed values.
> (Unless you divide by zero, maybe. I don't actually know what the
> standard says about that, which surprises me.)
Division by zero produces undefined behavior, even for 1U / 0U and
the like.
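So if the divisor can be zero, you must test it yourself. A trivial
defensive sketch:

#include <stdio.h>

int main(void)
{
    unsigned int num = 1u, den = 0u;
    unsigned int quot;

    /* 1U / 0U (and 1U % 0U) is undefined behavior, so test
       the divisor before dividing */
    if (den != 0u)
        quot = num / den;
    else
        quot = 0u;  /* or whatever fallback makes sense */
    printf("quot = %u\n", quot);
    return 0;
}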
> Overflow doesn't occur with unsigned types, but it's possible for
> unsigned types that are narrower than int to be promoted to (signed)
> int, which may allow overflow (and undefined behavior) to occur.
Yes. I claim that this rule is a terrible one; but I note that we
are stuck with it. The best approach is to avoid it -- make sure
you explicitly widen your narrow unsigned types to wider unsigned
types if the result (overflow or result of "questionably signed"
comparison) can matter. This kind of code is undeniably ugly, but
then, working around broken portions of any language (not just C)
is usually ugly.
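For instance, here is a sketch of the kind of (ugly) explicit
widening I mean, using unsigned long since it is guaranteed to be
at least 32 bits:

#include <stdio.h>

int main(void)
{
    unsigned short us1 = 65535, us2 = 3;
    unsigned short us = 65535;
    int i = -1;
    unsigned long sum;

    /* Arithmetic: widen to unsigned long by hand, so that no
       machine's promotion rules can sneak a signed int in behind
       your back. */
    sum = (unsigned long)us1 + (unsigned long)us2;
    printf("sum = %lu\n", sum);     /* 65538, everywhere */

    /* Comparison: force both sides unsigned. (unsigned long)-1 is
       ULONG_MAX, so the test is false -- predictably "surprising" --
       on every machine, case (a) or case (b). */
    if ((unsigned long)us > (unsigned long)i)
        printf("65535 > -1\n");
    else
        printf("65535 <= -1\n");    /* this prints, everywhere */
    return 0;
}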