Files not writing, closing files, finalize()


Andreas Leitgeb

Wojtek said:
Lew wrote:
Stone carvings? I think that they are even more stable than paper.

You mean: carved with some *electronic* chiseling tool?

I don't think we've got enough really long-term (>1000 years)
experience with that yet.
 

Joshua Cranmer

Wojtek said:
Lew wrote:

Stone carvings? I think that they are even more stable than paper.

Depends on where you store them. I think there's an Egyptian obelisk in
NYC or somewhere similar whose markings became unreadable within a century.
 

Martin Gregorie

RAM and hard drives are not the only two ways to store data
even today,
Yes, of course. However, among the computer-readable media, punched
cards, paper tape and mag tape are so far the only forms to reliably
outlast the drives that wrote and read them.
I have yet to have a hard drive last one decade, much less multiple
decades, so that part of the argument leaves me cold.
I wasn't meaning to imply that they were, though I do own a 3.5" 120 MB
WD drive that is installed in a machine that has been run almost every
day since about 1992.

My point was that the interface implementations are long-lived and so far
it has always been possible to copy the data from an HDD to a newer HDD.

The media formats and recording standards for mag tape and optical media
have not been nearly so durable, with the result that irreplaceable data
has been lost from both media. NASA has tapes they can't read because
usable drives no longer exist and/or modern computers don't have
interfaces or drivers for them. Similarly, the BBC's Domesday Project
database was almost lost because of a lack of usable drives (and finally
lost because the bungling idiots at the National Archive have lost the
archived copies - twice [both the original and the recovered version]).
So far no electronic medium exceeds the read-only lifetime of ink on
paper.
Agreed, but I thought we were talking about fairly high-volume
computer-readable data.
 

Martin Gregorie

Wrong. Because tape's going to make a comeback - I GUARANTEE IT!
Maybe so, but we need a lot more attention paid to using thick enough
substrate material to minimize print-through for really permanent
recordings. The other requirement would seem to be to archive at least
one compatible tape drive plus paper copies of its specs and manual. So
far nobody seems to have considered that aspect of long-term storage of
material in formats that can't be read directly by humans.
 

Lew

Joshua said:
Predictions of obsolescence tend to be woefully underestimated. After
all, the magnetic tape has not died its death yet.

I predict that fifty years from now, some people will be using hard
disks, by necessity. It may evolve into a niche role (like magnetic
tape), but it will still have one nonetheless.

You are correct.
 

Lew

Joshua said:
Depends on where you store them. I think there's an Egyptian obelisk in
NYC or somewhere similar whose markings became unreadable within a century.

My buddy Jim used to stop in Northern New Jersey on his way up from points
south, step out of his car, take a deep sniff, and exclaim,
"Ahhh! Air you can see!" It was a vital homecoming ritual for him.

There are things that can damage paper, too. So the rule is, keep hard disks
away from powerful magnets, paper away from solvents and obelisks away from NYC.
 

Martin Gregorie

Paper counts there, too. It's not as compact, is all.
I can think of three candidates, of which cards are probably the most
durable[1]:
- cards
- paper tape
- printed paper (for re-input via OCR)

Out of curiosity, what form of paper were you thinking of?

[1] cards have been described as the most inefficient medium ever
invented on the grounds that the information is entirely recorded in the
holes, so the rest of the card is totally redundant.
 

Arne Vajhøj

Roedy said:
Looks ok. Try being more explicit where the file is written by
specifying drive and dir. Dump out the CWD, it may not be where you
expect. That is where test.txt will show up.

Does that relate to the question asked?

Arne
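
For reference, a minimal sketch of the CWD dump Roedy suggests - plain
java.io, with a purely illustrative class name - showing where a relative
test.txt would actually land:

    import java.io.File;

    public class ShowCwd {
        public static void main(String[] args) {
            // Both lines print the directory the JVM was started from,
            // which is where a relative "test.txt" gets created.
            System.out.println(System.getProperty("user.dir"));
            System.out.println(new File("test.txt").getAbsolutePath());
        }
    }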
 

Arne Vajhøj

J. Davidson said:
Joshua said:
Note that this feature - buffering output data - is actually relatively
common in high-level languages. Remember that something like disk
access is actually very expensive, so buffering is a tremendous boost
in speed.

Anyone want to know why there's two layers of buffering in Java?
It's not that Java doesn't trust the OS buffering. It's because each
trip through JNI to call an OS API routine is expensive.

So Java buffers because each JNI call is expensive. Then the OS buffers
because each disk write is expensive.

Another fifty years from now we'll probably have a big teetering tower
of abstractions and I/O will get buffered at six or seven layers instead
of just two.

Wait, make that three. I think most modern disk controllers do some
buffering of their own, because waiting for the right spot on a platter
to rotate under the write head is expensive, and waiting for the head to
move to a different cylinder is even more expensive.

Even today there could potentially be:
- Java class buffer
- C RTL buffer
- OS I/O buffer
- OS disk cache
- disk (controller) cache

Arne
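
For illustration, a minimal sketch of the two Java-side layers Jenny
describes, assuming a FileOutputStream at the bottom of the stack (class
and file names are just for the example):

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;

    public class BufferLayers {
        public static void main(String[] args) throws IOException {
            // The BufferedWriter batches characters inside the JVM so that
            // far fewer expensive JNI/OS write calls reach the stream below.
            try (BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(
                            new FileOutputStream("test.txt"),
                            StandardCharsets.UTF_8))) {
                for (int i = 0; i < 1000; i++) {
                    out.write("line " + i);
                    out.newLine();
                }
            } // closing the top layer flushes down through every layer beneath it
        }
    }

The OS and the disk controller then add their own caching underneath, as
in the list above.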
 

Arne Vajhøj

Martin said:
Maybe so, but we need a lot more attention paid to using thick enough
substrate material to minimize print-through for really permanent
recordings. The other requirement would seem to be to archive at least
one compatible tape drive plus paper copies of its specs and manual. So
far nobody seems to have considered that aspect of long-term storage of
material in formats that can't be read directly by humans.

That is a standard problem in archival contexts.

My understanding is that most places drop the idea of keeping the
original media and simply copy to new media every X years - and
thereby are always on a current media type.

Arne
 

Martin Gregorie

That is a standard problem in archival contexts.

My understanding is that most places drop the idea of keeping the
original media and simply copy to new media every X years - and thereby
are always on a current media type.
Sure. That's my usual approach. So far the only place I've almost caught
myself out is ss/sd and ds/dd floppies. As I still have a Flex9 box, I can
read the disks if I need to; so far I haven't needed to. However, moving
the contents anywhere else would require soldering a board together,
since that box doesn't have serial ports.
 

John W Kennedy

Arne said:
Does that relate to the question asked?

The question is awkwardly worded, and can easily be misread as "I can't
find the output file." (I still can't figure out why the original
questioner says the file "might" be truncated to zero bytes.)
 

J. Davidson

Joshua said:
So? There's already high-level API buffering, filesystem buffering, and
probably disk-level buffering as well. As long as they can be reasonably
guarded against concurrency issues, there's no problem.

Oh, I wasn't meaning to say there was a problem. Just that multiple
levels of buffering will become even more common, with even more levels,
and it will always be important to close the topmost stream in the
tower rather than one lower down. I've personally debugged Java code
with subtle bugs, like truncated output, caused by things like closing
an OutputStream that had a Writer attached instead of closing the
Writer, and I'm sure I'll be seeing more like that in the future.
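
A minimal sketch of that bug (all names hypothetical): closing the
underlying OutputStream instead of the Writer wrapped around it leaves
the Writer's buffer unflushed, so the output never reaches the file.

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;

    public class TruncatedOutput {
        public static void main(String[] args) throws IOException {
            FileOutputStream fos = new FileOutputStream("out.txt");
            BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(fos));

            writer.write("data that may never reach the disk");

            // Bug: closes the file descriptor but bypasses the Writer's
            // buffer, so the text written above is silently lost.
            fos.close();

            // Fix: close the topmost layer instead; writer.close() flushes
            // its buffer and then closes everything underneath it.
            // writer.close();
        }
    }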

On the other hand, too many layers of indirection is the one problem you
can't solve by adding a layer of indirection, or so I've heard. :)

- jenny
 
