Michael Malone
Hi All,
I have a setup where I write to a pipe in one process and read from it
in another. I'm closing the end each process doesn't use, etc., but I
have a new problem. I recently read that I was just getting lucky when
my IO#write calls completed in full each time (probably thanks to Ruby
1.8's green thread implementation and its 'nice' scheduling). Now that
the OS has taken over scheduling, it interrupts things whenever it damn
well feels like it, so I believe some of my write calls aren't
completing. I've therefore set up a loop like this:
begin
  # ... work that may raise ...
rescue Exception => error
  ex_string = Marshal.dump(error)
  ex_size = ex_string.bytesize
  bytes_written = 0
  while bytes_written < ex_size
    # write whatever is still unsent; IO#write returns the byte count.
    # (Using slice!(bytes_written) here was a bug: it removes and returns a
    # single character, and the offsets drift as the string shrinks.)
    bytes_written += write_end.write(ex_string.byteslice(bytes_written..-1))
  end
ensure
  write_end.close
end
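For what it's worth, short writes mostly show up with the nonblocking calls; here is a minimal sketch of a retry loop using IO#write_nonblock (a toy setup with IO.pipe and a background reader thread, not the code above):

```ruby
require 'io/wait'

r, w = IO.pipe
data = "x" * 200_000              # bigger than Linux's default 64 KiB pipe buffer
reader = Thread.new { r.read }    # drain the pipe in the background

written = 0
while written < data.bytesize
  begin
    # write_nonblock may accept only part of the remaining bytes
    written += w.write_nonblock(data.byteslice(written..-1))
  rescue IO::WaitWritable
    w.wait_writable               # pipe buffer full: wait until writable
  end
end
w.close

received = reader.value           # r.read returns once the write end closes
r.close
puts received.bytesize            # => 200000
```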
In the other process, I currently make just one call to
read_end.read(). So my question is: is that single call guaranteed to
get all of the bytes written through multiple calls to write, or do I
need to set up a similar loop on the read end? Something like:
string = ""
until read_end.eof?
  string += read_end.read
end
Or does read_end.read block until it finds EOF?
Can anyone tell me what happens when read_end.read is re-scheduled
partway through? Or is that guaranteed to finish?
I'm running this on Linux, so POSIX rules probably apply.
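A minimal sketch of the round trip I have in mind (assuming plain IO.pipe and fork, standing in for my actual setup): IO#read with no length argument blocks until EOF, so a single call should return everything, provided the writer closes its end:

```ruby
read_end, write_end = IO.pipe

pid = fork do
  read_end.close
  payload = Marshal.dump(StandardError.new("boom"))
  written = 0
  while written < payload.bytesize
    written += write_end.write(payload.byteslice(written..-1))
  end
  write_end.close
end

write_end.close
data = read_end.read              # blocks until the child closes its write end
read_end.close
Process.wait(pid)

error = Marshal.load(data)
puts error.message                # => boom
```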
Thanks in advance,
Michael
=======================================================================
This email, including any attachments, is only for the intended
addressee. It is subject to copyright, is confidential and may be
the subject of legal or other privilege, none of which is waived or
lost by reason of this transmission.
If the receiver is not the intended addressee, please accept our
apologies, notify us by return, delete all copies and perform no
other act on the email.
Unfortunately, we cannot warrant that the email has not been
altered or corrupted during transmission.
=======================================================================