File read error: Win9x vs. WinNT


dosworldguy

I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 & XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from another Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.
 

Nils O. Selåsdal

dosworldguy said:
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 & XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from another Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.
fread can cache data.
That means if you fread one record, the system might actually read 3 and
a half records, caching the remaining 2.5 records.

Flushing input streams is undefined behavior.

Your question sounds Windows-specific anyway; ask in a
Windows-related programming group.
 

Ancient_Hacker

dosworldguy said:
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption.


This will never work reliably. What if the app writes a record, but
the reader happens to read that area in the middle of the write? No
OS that I know of guarantees that disk I/O is "atomic". The OS is free
to reorder disk writes as it sees fit. For example, if a file is
spread out over disparate areas of the disk, many OSes have an
"elevator" algorithm: they reorder read/write operations to minimize
the distance the disk heads have to travel. It's called the "elevator"
algorithm because the intent is to reorder the operations so the disk
head sweeps back and forth with a minimum number of changes of direction.

Also, if you're accessing the file across the network, the disk blocks
may arrive out of order due to typical network protocols.

You need to implement some kind of record or file locking. For a C
program, see the "flock" or "lockf" library routines. If you're
doing this just for Windows, there are some non-portable Windows file
locking APIs that will probably give better granularity and
performance than the C library functions.
 

Nils O. Selåsdal

Ancient_Hacker said:
This will never work reliably. What if the app writes a record, but
the reader happens to read that area in the middle of the write? No
OS that I know of guarantees that disk I/O is "atomic". The OS is free
to reorder disk writes as it sees fit. For example, if a file is
spread out over disparate areas of the disk, many OSes have an
"elevator" algorithm: they reorder read/write operations to minimize
the distance the disk heads have to travel. It's called the "elevator"
algorithm because the intent is to reorder the operations so the disk
head sweeps back and forth with a minimum number of changes of direction.
An OS can have full knowledge of all reads/writes by all
applications, and often has a cache that helps keep
things consistent even though the actual disk I/O might be reordered.
Thus it can synchronize reads/writes to the same file.

You do of course need to synchronize readers and writers anyway, but
usually not for reasons that have to do with the actual I/O gymnastics
of the platform.
 

Ancient_Hacker

Nils said:
An OS can have full knowledge of all reads/writes by all
applications, and often has a cache that helps keep
things consistent even though the actual disk I/O might be reordered.
Thus it can synchronize reads/writes to the same file.

I realize this would be a Good Thing, but do we know for sure that all the
major OSes do in fact guarantee this? Your use of the word "can"
leaves a lot of wiggle room. I'm pretty sure this isn't guaranteed by
some very popular remote-disk protocols, like NFS. The poor OP is
probably looking for something dependable and standard.
 

Kenneth Brody

dosworldguy said:
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 & XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from another Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.

If you're using stream I/O (i.e., fopen/fread/fwrite/fclose, which are
the only ones discussed here), then you have the problem that the other
processes may have already read, and buffered, the old data before you
write the new data. (And flushing input streams is not defined.)

If you're using the POSIX open/read/write/close functions (which are
not discussed here), then you may need to ask a Windows group about
something called "opportunistic locking", which can cause the O/S
itself to locally cache data from a file on another server, causing
updates to be missed. (BTDTGTTS)

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 

dosworldguy

Thank you all for your thoughts.

To add:

Locking is implemented via "co-operative" means: when a client wants to
write, it requests permission from the server, gets the "go ahead",
performs the write, and hands the flag back to the server. No other client
can write during that time.

After one write, another client asks for the write token. Now when
this client reads a record written earlier, it gets to read 'dirty
data'.

This is not consistent. Also, open, read & write have been used, not
fopen etc.

I am looking for 'C' help. Instead of flushing, will it help to close the
files and re-open them?
 

Ancient_Hacker

dosworldguy said:
Thank you all for your thoughts.

To add:

Locking is implemented via "co-operative" means: when a client wants to
write, it requests permission from the server, gets the "go ahead",
performs the write, and hands the flag back to the server.

How do you ensure that the client write has happened, completed, and been
flushed to the server's disk? OSes, particularly over network
connections, do extensive file buffering. The client program may have
done a write(), but that just puts the data in a kernel buffer. Even
doing an explicit close() on the file descriptor does not guarantee
the data is at the server and consistent for other readers or writers.
And file packets can arrive out of order, so even an explicit close
might have to wait until all packets have arrived and been acknowledged
and reverse-acknowledged by the client.

There's nothing in standard C to help with this... You're going to have
to pore through the OS's network file server API; there's almost certainly
a
"ReallyReallyFlushThisDataToTheNetworkDiskAndEnsureAllTheDiskCachesGotFlushedAndThe
ToDiskAndTheWriteSuceededAndTheFileCloseWentOkayAndNeverGiveMeAPrematureAndOverlyOptimisticAllIsOkay(
"\\\\Server\\Path\\FileName.Ext" );
 

Chris Torek

Ancient_Hacker said:
This will never work reliably.

Not in general, no.
What if the app writes a record, but the reader happens to read
that area in the middle of the write? No OS that I know of
guarantees that disk I/O is "atomic".

HRFS, in vxWorks 6.x, does. (Well, the on-disk writes are not
atomic, but at the file I/O level, they *appear* to be, as they
are "transactionalized". The file system guarantees that, even if
the power fails in the middle of a write() call, the write() is
either not-started-at-all or completely-done when the file is
examined later. This does depend on certain [common] disk drive
characteristics; in particular the disk itself must not fail due
to the power-off.)
You need to implement some kind of record or file locking.

Or some other OS-specific method, certainly. So the solution is
off-topic in comp.lang.c, alas. :)
 
