But how does an OS like Linux or Windows know that it's installed
on a computer with a 32- or 64-bit processor?
The OS is part of the implementation, and so is allowed to do
things that C leaves undefined or unspecified.
When an OS is compiled, it already has certain hardware assumptions
built into it -- assumptions such as what the machine language of
the processor looks like. OSes may have access to processor status
registers that C does not define; the status registers may even
require special instructions to access. Since the OS knows what
kind of processor (generally) it is running on, it knows how to
interpret the status registers to determine processor capabilities
(such as whether hardware audio instructions are supported).
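For example, on x86 the "special instruction" is CPUID. Here is
a minimal sketch of reading capability bits -- assuming a GCC or
Clang toolchain, which supply the <cpuid.h> helper (the leaf
numbers and bit positions come from the published Intel/AMD
documentation):

    /* x86-only capability probe; won't compile elsewhere. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 1: basic feature flags.  EDX bit 26 is SSE2, a
         * SIMD extension (standing in for the "audio
         * instructions" example above). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("SSE2: %s\n",
                   (edx & (1u << 26)) ? "yes" : "no");

        /* Extended leaf 0x80000001: EDX bit 29 is "long mode",
         * i.e. whether the chip can run 64 bit code at all. */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            printf("64 bit long mode: %s\n",
                   (edx & (1u << 29)) ? "yes" : "no");
        return 0;
    }

Note that this is tied to one processor family: it is exactly the
kind of thing an OS does internally that portable C cannot.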
Some people define "32 or 64 bit processor" according to the
instructions that are supported. That's not the best of definitions,
though, because you also need to know things like how many data
bits may be transferred at a time on the data bus: a particular
processor model might support several different data bus widths
and hide all the details from the user. A processor might
have an instruction to multiply two 32 bit numbers and produce
a 64 bit number, but it might transfer those 64 bits to memory
16 bits at a time. Do you define "32 or 64 bit" by the instruction
set, or do you define it by what the hardware actually does?
Note: generally, the processor determines bus widths by sampling
CPU pins that are hardwired on the motherboard.
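In C source, by the way, that 32-by-32-to-64 multiply is spelled
the same way no matter what the hardware does underneath -- a
sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t a = 0xFFFFFFFFu, b = 0xFFFFFFFFu;

        /* Cast one operand up so the multiply happens in 64
         * bits.  Whether the compiler emits a single widening
         * multiply instruction, or the result crosses the bus
         * 16 bits at a time, is invisible at this level. */
        uint64_t product = (uint64_t)a * b;

        printf("%llu\n", (unsigned long long)product);
        return 0;
    }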
The C language itself provides no mechanisms to access hardware
directly, and places almost no restrictions on how the
hardware operates. The C language standard provides facilities
for writing -portable- code, and leaves the details of those
facilities to the implementation. If you have something that
depends on whether the processor is 32 or 64 bit (whatever that
might mean), then you have something that is almost certainly
not portable C.
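What portable C -can- tell you is the shape of its own types,
using only standard headers -- no hardware access, and no claims
about the processor itself:

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>

    int main(void)
    {
        /* These report choices made by the C implementation,
         * not properties of the bus or instruction set. */
        printf("bits per char:  %d\n", CHAR_BIT);
        printf("sizeof(void *): %zu bytes\n", sizeof(void *));
        printf("sizeof(long):   %zu bytes\n", sizeof(long));
    #ifdef UINTPTR_MAX
        printf("UINTPTR_MAX:    %ju\n", (uintmax_t)UINTPTR_MAX);
    #endif
        return 0;
    }

Even here, a 4 byte pointer does not prove a "32 bit processor";
it only tells you what this particular implementation chose.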