On a hosted environment, the OS largely dictates the sizes of the
basic types. In order to work with the system's headers and
libraries, compilers have to follow the platform's conventions.
An OS, especially one not written in C in the first place (this
applies even to the first version of UNIX, which was originally
written in assembly language), may come with *NO* system headers
(at least none in C) and *NO* C-callable libraries; it might instead
use assembly-language macros or routines written in some other
language. In that case, it's up to the C implementor to supply
those headers (possibly translated from another language) and
libraries (possibly just "glue" interface routines). If there's
no popular C API to the OS, the implementor might have to design
one.
The implementation can still take guidance from the OS (although the
more important guidance usually comes from the CPU architecture).
Some systems offer several different choices on the same OS,
particularly with respect to pointer sizes, and provide a different
set of libraries for each.
See, for example, the different MS-DOS memory models, which have
different combinations of 16-bit and 32-bit data and function
pointers. I think some compilers also provided a 32-bit mode with
32-bit ints for MS-DOS. You can't mix compiled code or standard
library code from different models (effectively different
implementations): it all has to come from the same one, although
non-standard features involving the "near" and "far" pseudo-keywords
made some mixing possible.