CBFalconer said:
For example, a 2's complement 16 bit machine could set MIN_INT to
(INT_MIN, as RH pointed out)
-32767 and reserve the bit pattern 0x8000 as a trap representation
for uninitialized memory. Any int access that yielded that value
would cause a system trap.
[...]
Yes, that's one example, but the system trap is not necessary for it
to be considered a trap representation. In fact, the phrase "trap
representation" is a bit misleading. Attempting to access a trap
representation just causes undefined behavior.
For example, assuming 16-bit two's complement int, you could simply
define INT_MIN as (-32767) and INT_MAX as 32767, without changing
anything else. All operations (except those that refer to the INT_MIN
macro) will happen to work exactly as they would if INT_MIN were
defined as (-32768). But -32768 would magically become a trap
representation, and an attempt to access an int object with that value
would invoke undefined behavior -- not because it would cause any kind
of trap (it wouldn't), but simply because the implementation chooses
not to define the behavior.
(In the above, I misuse the term "value". I'm too lazy to fix the
wording, but you get the idea.)