Not at all.
By that definition the concept of "record-based" vs. "not-record-based"
becomes completely meaningless.
It is pretty much meaningless unless you're referring to the way a
program handles data. Consider a file containing nothing but printable
characters:
- if a C or Java program reads the file byte by byte or parses it
by reading words separated by whitespace then line delimiters are
utterly meaningless and the program doesn't care whether the file
contains records or not.
- OTOH if a different program reads the same file a line at a time, e.g.
  C using fgets(), Java using BufferedReader.readLine(), then this is
  pure record-level access.
But most of us use "records" to mean a structure that involves out-of-
band boundaries of some sort.
Not necessarily. A CSV file is generally treated as containing a fixed
number of variable length fields with the last field terminated by a
newline. In this case, both commas and newlines are out-of-band (and so
are some quote marks if the implementation allows fields to contain
commas).
However, fixed length records made up of fixed length fields contain no
out-of-band structure. You want an example? How about the two magnetic
stripe tracks on a credit card: 40 bytes containing fields whose content
and meaning are defined purely by their position.
That's text plus file metadata.
Indeed it is. Technically it is made up of fixed-length fields with no
delimiters. Apart from the record description that forms part of every
file, and the member separators, the only metadata is similar to a UNIX
directory entry plus the i-node. OS/400 and Z/OS text files are closer to
a tar or zip file than to what a Unix or Windows user considers a text
file, because you can store many separate chunks of text in a single text
file.
What makes it not *quite* a legitimate text file is that the file's
actual content contains a line break that is distinct from 0x0A, 0x0D,
or any other in-band byte sequence.
No it doesn't. The editor won't let you put newlines into an OS/400 text
file - it automatically starts another text line record and assigns a
line number to it.
Database rows need an ID field so there's something you can uniquely key
on, and you said the system stores text in database rows, so there's
your explanation. The thing that makes no sense is it storing text in
database rows instead of as native text.
Nice guess, but that's not how it works. That role is taken by the line
number (which can be a decimal value: when you add lines between lines
0002 and 0003 they'll be numbered 0002.01, 0002.02 etc. until you ask the
editor to renumber the member). Unlike on Unix and Windows systems, the
line numbers in compilation errors aren't screwed up by editing the source.
The ID is a complete mystery - most people and programs don't use it and
IIRC it's not accessible via the editor, so you can't change it, though it
may be possible to ask the editor to maintain it.
Actually C is already broken here even on "normal" systems, because C
strings can't properly represent text containing NUL characters.
By definition they can't be included in 'text files', but they can be
handled perfectly well in files via the read() and write() functions.
Nope; see above. If everything you've told me is accurate then it is
possible to write an OS/400 "text file" that encodes some information
that will be destroyed in a copy made by simply reading it character by
character through a java.io.Reader and outputting it character by
character, unaltered, through a java.io.Writer.
Incorrect assumption because you can't put non-printable characters in an
OS/400 source file member - the editor and other programs won't let you.
The OS/400 is a database machine. There are no files that aren't
databases. Every file has defining metadata which is automatically
generated for standard file types, e.g. source files and compiled
binaries. The field types control what byte values can appear in every
field, so you might limit a text field to upper case. Violating these
rules generally causes an exception which, of course, can be caught and
acted on.