That won't work; IP addresses can be 128 bits, so that should be
"typedef uint128_t ipaddr_t;".
Kleuske must have a system with CHAR_BIT>=32, which is the only way his
ipaddr_t can hold any IP address _and_ sizeof(unsigned int)==4 can be true.
If you incorrectly assumed an IP address was always 32 bits, though,
this shows yet another benefit to using typedefs to reflect the logical
data type: it helps you find (and correct) all the places in your code
that only work with IPv4.
"All" is over-optimistic, I think. A difficulty is that
typedef aliases are in fact just aliases, not types in their own
right. Given `typedef int Foo;' and `typedef int Bar;', I still
have only the single type `int', not three types. For example,
I can pass a `Foo*' argument to a function's `Bar*' parameter.
Add C's promiscuity about silent conversion between numeric
types, and a typedef-aliased scalar turns out to be a rather weak
enforcer of purity. IP addresses may start out in `ipaddr_t' places,
but they won't stay there: by the time the code base gets to version
2.0 (maybe even 1.5), you will inevitably find "leakage" into types
not so helpfully named. If the compiler doesn't forbid something,
some programmer will do it.
Structs (and unions), by contrast, offer more in the way of
enforceability. If you make the `ipaddr_t' some kind of struct,
the compiler will object to any attempts at smuggling it around as
some other struct, even a struct with the same element types and
layout. It's still not 100% bulletproof because `void*' is nearly
as promiscuous as numeric types are, but you'll get considerably
better "coverage" against type leaks.