Keith Thompson said:
[...]
I would argue that alternatively we could eliminate this mixed message
of void both implicitly having a size and not having a size by changing
the C standard to give void a size, i.e., one: sizeof(void) == 1.
Nothing would break -- this would just add to the things that can be
expressed, in a way that is consistent with how library functions use
void * arguments. (The void type would still not be allowed by itself,
and so function(void) would still mean no arguments.) In fact, in the
evil gcc, n = sizeof(void); sets n to 1, and it seems to work fine.
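(For reference, a minimal sketch of the gcc behavior being described.
Both sizeof(void) and arithmetic on void * are GNU extensions, not
standard C; gcc -std=c99 -pedantic will complain about both:)

    #include <stdio.h>

    int main(void)
    {
        void *p = "abc";

        /* GNU extension: sizeof(void) is 1, so arithmetic on a void *
           steps one byte at a time, as if it were a char *. */
        printf("sizeof(void) = %zu\n", sizeof(void));
        printf("p + 1 - p    = %d\n", (int)((char *)(p + 1) - (char *)p));

        return 0;
    }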
It works, but it's dumbing things down -- it's encouraging sloppy
thinking
about pointer types, and that ain't good.
it makes sense to the CPU, and in the end, that is what matters.
What matters more is what makes sense to the programmer and to the
reader of the code. What matters is constructing a consistent model
for computation, and letting the compiler map that model onto some
particular hardware.
If we only cared about what makes sense to the CPU, we wouldn't have
more than a handful of types, all defined by the system, and portable
code would be a pipe dream.
granted, but these are still secondary, since if the CPU could not
understand the code or the data, the program would not run.
no matter how elegant the conceptual model, it would be pointless to have SW
which doesn't run...
Sure, but the particular abstractions we're talking about, such as
having void* be a raw pointer type that doesn't let you access what it
points to without first converting the pointer, don't prevent the
software from running. Implementers and programmers are entirely
capable of writing working software using the abstractions defined by
the C language.
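(To make that concrete, a minimal sketch of the abstraction in
question: a void * carries an address but no element type, so the
compiler rejects any dereference until the pointer is converted back
to a typed pointer:)

    #include <stdio.h>

    int main(void)
    {
        int n = 42;
        void *p = &n;         /* any object pointer converts implicitly
                                 to void * */

        /* printf("%d\n", *p);   constraint violation: a void * cannot
                                 be dereferenced without a conversion */

        int *ip = p;          /* in C the conversion back is implicit,
                                 too; C++ would require a cast */
        printf("%d\n", *ip);  /* prints 42 */

        return 0;
    }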
the CPU is inescapable, as shown by how people may still end up needing to
fall back to assembler in many situations. just the same, it is sometimes
necessary to exploit the low-level details of how data is represented on
particular HW to have any real hope of making the app work effectively...
I don't think that's as common as you imply. I don't remember the
last time I needed to resort to assembler. (For a lot of my own
programming, I don't even resort to C, but that's another story.)
universal portability for non-trivial apps IS a "pipe dream", except that,
luckily, most HW tends to represent most things in similar enough ways that
one can gloss over the details in the majority of cases (and fall back to
good old "#ifdef" for most of the rest...).
and so, abstraction is a tower built on top of the HW, and not the other way
around.
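(A sketch of the "#ifdef fallback" style mentioned above, here for byte
order. The __BYTE_ORDER__ family of macros is a gcc/clang predefine,
not standard C, so its availability is an assumption about the
toolchain:)

    #include <stdint.h>

    static int is_little_endian(void)
    {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        return 1;
    #elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        return 0;
    #else
        /* portable fallback: inspect the first byte of a known value */
        const uint16_t one = 1;
        return *(const unsigned char *)&one == 1;
    #endif
    }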
The abstraction level provided by standard C seems to be just about
right for a lot of purposes, and I don't find that it gets in the way
of writing efficient code in most cases. And in cases where it does,
you can often write non-portable code, making additional
implementation-specific assumptions, without leaving the language.
I really wouldn't want to make C any lower level than it already is.
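(One example of that kind of in-language, implementation-specific code:
tagging the low bits of a pointer. This sketch assumes that uintptr_t
exists and round-trips object pointers, and that allocations are
aligned to at least 4 bytes so the low two bits are free -- true on
mainstream platforms, but neither assumption is guaranteed by ISO C:)

    #include <stdint.h>
    #include <assert.h>

    #define TAG_MASK ((uintptr_t)0x3)

    /* pack a 2-bit tag into the (assumed-free) low bits of a pointer */
    static void *tag_ptr(void *p, unsigned tag)
    {
        assert(((uintptr_t)p & TAG_MASK) == 0 && tag <= TAG_MASK);
        return (void *)((uintptr_t)p | tag);
    }

    static void *untag_ptr(void *p)
    {
        return (void *)((uintptr_t)p & ~TAG_MASK);
    }

    static unsigned ptr_tag(void *p)
    {
        return (unsigned)((uintptr_t)p & TAG_MASK);
    }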
none of this really changes the bottom line: in the end, it is the CPU
which matters...
if what were important were appeasing the mind of the reader of the code,
then people would be authors, not programmers, and the end result would be a
novella, rather than a codebase...
For the software I work on, I spend a lot more time maintaining it
than running it. It's not a work of literature, but clarity and
legibility are vitally important if I'm going to finish the job
quickly enough for performance on the CPU to matter.