Tim Rentsch said:
Depending on how this statement is meant, the proposition is
either trivially true or wrong.
Clearly a C implementation can use BCD to encode C values
using the simple trick of having one BCD digit per bit.
One octet (two nibbles) would hold two C "bits", so a char
would occupy four octets (with CHAR_BIT == 8).
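(A rough sketch of that trick, purely illustrative, with invented
names, assuming 8-bit host octets:)

#include <stdint.h>

/* Encode one 8-bit char as eight BCD digits, each digit 0 or 1,
   packed two digits per octet: four octets per char. */
static void bcd_encode_char(uint8_t value, uint8_t out[4])
{
    for (int i = 0; i < 4; i++) {
        unsigned hi = (value >> (7 - 2 * i)) & 1;  /* one C bit per */
        unsigned lo = (value >> (6 - 2 * i)) & 1;  /* BCD digit     */
        out[i] = (uint8_t)((hi << 4) | lo);
    }
}

static uint8_t bcd_decode_char(const uint8_t in[4])
{
    uint8_t value = 0;
    for (int i = 0; i < 4; i++) {
        value = (uint8_t)((value << 1) | ((in[i] >> 4) & 1));
        value = (uint8_t)((value << 1) | (in[i] & 1));
    }
    return value;
}

Every nibble holds a valid BCD digit (0 or 1), so the stored image
is legitimately BCD even though it encodes a binary bit string.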
That's not what I had in mind.
However, if "representation" is meant in the sense that
the Standard uses the term (as in "object representation",
for example), integers cannot be represented using
BCD, as that is not consistent with other requirements
for representation of integers.
What I was thinking of was an implementation that uses BCD for integer
representation (so that a stored bit pattern of 0001 0101 represents
the value 15), but that goes through substantial gyrations to make it
*appear* to be using a pure binary representation. If you examined
the underlying representations of objects using, say, a debugger,
you'd see the BCD, but the BCD representation wouldn't leak out to
become visible in the behavior of any program that avoids undefined
behavior. The implementation would satisfy the "least requirements"
of C99 5.1.2.3p5.
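The gyrations would amount to routing every object access through a
conversion layer, something like this (purely illustrative, with
invented names; this toy version handles values of up to eight
decimal digits):

#include <stdint.h>

/* binary value -> stored BCD image, e.g. 15 -> 0x15 (0001 0101) */
static uint32_t bcd_store(uint32_t value)
{
    uint32_t image = 0;
    for (int shift = 0; shift < 32 && value != 0; shift += 4) {
        image |= (value % 10) << shift;  /* one decimal digit per nibble */
        value /= 10;
    }
    return image;
}

/* stored BCD image -> binary value, e.g. 0x15 (0001 0101) -> 15 */
static uint32_t bcd_load(uint32_t image)
{
    uint32_t value = 0;
    for (int shift = 28; shift >= 0; shift -= 4)
        value = value * 10 + ((image >> shift) & 0xF);
    return value;
}

Only the conversion layer ever sees the decimal image; the program
itself observes nothing but binary values.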
This would be a very difficult and silly thing to do, but I'm not
convinced it would violate the standard.
The counterargument is that the representation requirements of 6.2.6.1
must be met by the underlying representation, and that satisfying
5.1.2.3p5 is insufficient.
To put it another way, it should be possible to implement C on top of
a binary virtual machine running on BCD hardware.