Programmers are enamoured with streams of bytes. Is that abstraction
(i.e., that generality) too rigidly modeled? I say not all IO (I/O?) is
alike, so why model it as if it is? What say you (y'all)?
Keywords and key phrases (why is "keyword" a compound word and "key
phrase" two words?): stream of bytes, record IO, formatted IO, ASCII
unit/record/group separator, compromise, designed for all cases but good
for none (nothing),... other.
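For anyone who hasn't run into them, the ASCII unit/record separators mentioned above were put in the character set for exactly this kind of record IO layered on a plain byte stream. A minimal sketch in Python (the field names and values are made up):

```python
US = b"\x1f"  # ASCII unit separator: between fields
RS = b"\x1e"  # ASCII record separator: between records

def encode(records):
    """Join each record's fields with US, and records with RS."""
    return RS.join(US.join(fields) for fields in records)

def decode(data):
    """Split a byte blob back into records of fields."""
    return [rec.split(US) for rec in data.split(RS)]

records = [[b"alice", b"42"], [b"bob", b"7"]]
blob = encode(records)
print(blob)                     # b'alice\x1f42\x1ebob\x1f7'
print(decode(blob) == records)  # True
```

Note the compromise baked in: the separators work only as long as they never appear inside a field, which is exactly the sort of case-specific rule the "streams of bytes" model refuses to take a position on.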
collections of bytes (globs/streams/arrays/...) are generally the most
generic bottom-level representation for data.
other data representations (text, XML, arrays, structures, ...),
although often much more closely related to the data in question, are
nowhere near as general-purpose, and so are ill-suited as a
"fundamental representation".
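as a sketch of that mapping: a typed structure packed down to raw bytes and recovered, where the byte layer carries it without knowing or caring what it represents (the "<if" layout chosen here is just one arbitrary convention):

```python
import struct

# pack a small "structure" (a 32-bit int and a 32-bit float) into bytes;
# "<if" means little-endian, int then float, no padding
packed = struct.pack("<if", 42, 1.5)
print(len(packed))  # 8 -- four bytes for each field

# the receiving side only needs to agree on the layout to recover it
n, x = struct.unpack("<if", packed)
print(n, x)         # 42 1.5
```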
this then is why file formats are often composed of a number of
"layers", each with its own level of abstraction, its own magic
numbers, ...
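for instance, the bottom layer of many real formats is nothing but a magic number in the first few bytes; a toy sniffer (the format table is obviously abridged):

```python
# leading magic numbers of a few well-known formats
MAGICS = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04":        "ZIP archive",
    b"\x7fELF":           "ELF executable",
}

def sniff(data):
    """Identify a byte blob by its leading magic number, if any."""
    for magic, name in MAGICS.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(sniff(b"PK\x03\x04...payload..."))  # ZIP archive
print(sniff(b"plain text"))               # unknown
```

everything above the magic number is layered interpretation; to the filesystem underneath, it's all just bytes.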
also, it makes sense, as the world would be far more complex (and not
nearly so functional) if all of the file-systems, network HW, memory
storage, ... each had to deal with a wide range of possible data
representations. since they only see bytes, their job is generally easier.
probably the only other plausible "fundamental" model would be
abstract reference-based high-level data types... however, this model
has a notably different problem: it doesn't map well to traditionally
statically-typed languages (such as C, C++, Java, ...). even then, it
doesn't matter much, since this other model (more common in languages
like Lisp/Scheme/Erlang/...) can still be mapped fairly effectively to
plain old bytes.
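e.g., a toy writer that flattens a Lisp-style nested structure down to s-expression bytes (it handles only symbols, ints, and lists, so purely illustrative):

```python
def to_bytes(v):
    """Serialize a nested reference-based structure as an s-expression."""
    if isinstance(v, list):
        return b"(" + b" ".join(to_bytes(x) for x in v) + b")"
    if isinstance(v, int):
        return str(v).encode()
    return v.encode()  # treat strings as symbols

# a nested, reference-based tree, flattened to plain bytes with nothing lost
tree = ["add", 1, ["mul", 2, 3]]
print(to_bytes(tree))  # b'(add 1 (mul 2 3))'
```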
so, seemingly, there is really nothing that doesn't map acceptably
well to bytes.