java.net.SocketException: Insufficient buffer space


Jeff

I did post the same content last night, but did not include the error in the
subject line.

Does anyone have more detail on what this exception means? The JVM is
JRockit.

Our server experiences this error when its client sends a heavy load (7 or 8
megabits/sec). In netstat, we see the TCP receive buffer growing large,
implying our server is falling behind. That should cause TCP to close
the advertised window to the client. It shouldn't throw an error, so we're
not sure which buffer space is insufficient: TCP's or our
BufferedInputStream's.

Socket probeSocket = sock.accept();
probeSocket.setReceiveBufferSize(32768);

BufferedInputStream is = new BufferedInputStream(
        probeSocket.getInputStream(), 1024000); // one-megabyte read buffer

Anybody have some experience with this error?

Thanks
 

John C. Bollinger

Jeff said:
Our server experiences this error when its client sends a heavy load (7 or 8
megabits/sec). In netstat, we see the TCP receive buffer growing large,
implying our server is falling behind. That should cause TCP to close
the advertised window to the client. It shouldn't throw an error, so we're
not sure which buffer space is insufficient: TCP's or our
BufferedInputStream's.

The problem is not the BufferedInputStream. In the first place, it
cannot exhaust its buffer because it is a *read* buffer, wholly or
partially filled from the underlying input stream as needed. In the
second place, it would not generate a java.net.SocketException under any
circumstances, although it might allow such an exception thrown by the
wrapped InputStream to propagate outward. If you want to verify, it's
not that hard to replace a BufferedInputStream's buffering behavior with
hand-rolled buffering.

As I wrote elsewhere, a SocketException almost surely reflects an error
condition reported by the native network stack.


John Bollinger
(e-mail address removed)
 

Esmond Pitt

John said:
As I wrote elsewhere, a SocketException almost surely reflects an error
condition reported by the native network stack.

I agree with John's diagnosis. You haven't actually told us, but I
suspect the exception is being thrown by ServerSocket.accept() when the
underlying network stack is trying to allocate the accepted socket's
send/receive buffers. Make them smaller; and, as John said, your
BufferedInputStream buffer is much too large as well. There isn't much point
in making it any larger than the socket receive buffer: all you're doing
is fooling the sending TCP/IP stack into sending faster, or should I say
earlier, than really necessary.

EJP
 

Jeff

Esmond Pitt said:
I agree with John's diagnosis. You haven't actually told us, but I suspect
the exception is being thrown by ServerSocket.accept() when the underlying
network stack is trying to allocate the accepted socket's send/receive
buffers. Make them smaller; and, as John said, your BufferedInputStream
buffer is much too large as well. There isn't much point in making it any
larger than the socket receive buffer: all you're doing is fooling the
sending TCP/IP stack into sending faster, or should I say earlier, than
really necessary.

EJP
The exception was thrown by the BufferedInputStream read. The
ServerSocket.accept() completed and we are reading from the input stream
when the problem occurs.

We made the BufferedInputStream one meg to reduce the number of reads and
off-load message reassembly. Our proprietary messages can be up to one meg.
Rather than do multiple reads, then reassemble the packet, we let
BufferedInputStream assemble the packet. Faster and less to debug.

Once the TCP receive buffer fills, it should not accept more packets. That
should be communicated to the sender by reducing the size of the receive
window in the TCP header. This is an ancient, reliable mechanism.

I think the problem lies in how the JVM accesses the TCP receive buffer. I
hoped to find more information on the interaction of the JVM with the TCP
stack.

To answer the most recent post: we are running on SuSE Linux, and we use JRockit.

I appreciate the continued contribution by newsgroup members. These are
good questions that help us.
 

Thomas Weidenfeller

Jeff said:
To answer the most recent post: we are running on SuSE Linux, and we use JRockit.

Well, have you contacted BEA's support regarding the problem?

Can you replace the VM? If yes, have you tried with Sun's VM or
blackdown's VM?

/Thomas
 

John C. Bollinger

Jeff said:
The exception was thrown by the BufferedInputStream read. The
ServerSocket.accept() completed and we are reading from the input stream
when the problem occurs.

It is conceivable that it is the network stack's packet queue that is
filling up. The network stack may be signaling that or some other error
condition on the socket that is not directly related to the current read
attempt.

It is also conceivable that the native network stack does not signal a
failure to allocate the send and receive buffers until the first attempt
to read from the socket, or that the Java Socket implementation does not
pass on the error until a read attempt is made.
We made the BufferedInputStream one meg to reduce the number of reads and
off-load message reassembly. Our proprietary messages can be up to one meg.
Rather than do multiple reads, then reassemble the packet, we let
BufferedInputStream assemble the packet. Faster and less to debug.

But it doesn't work that way. The BufferedInputStream will never be
able to read more from the socket at one go than the capacity of the
socket's receive buffer. The BufferedInputStream will not request
additional bytes from the socket until it needs to satisfy a request for
more bytes than are waiting unread in its own internal buffer. Thus, as
Esmond said, it is of negligible value to give the BufferedInputStream a
buffer larger than the socket's receive buffer.

That doesn't mean you can't perform an efficient read without copying.
This is a perfect case for performing your own buffering instead of
using a BufferedInputStream. Here's an example that actually does what
you thought your BufferedInputStream was doing for you.

final static int BUF_SIZ = 1024000;

[...]

InputStream bytesIn = mySocket.getInputStream();
byte[] buffer = new byte[BUF_SIZ];
int total = 0;

for (int numRead = bytesIn.read(buffer, total, BUF_SIZ - total);
        numRead > 0;
        numRead = bytesIn.read(buffer, total, BUF_SIZ - total)) {
    total += numRead;
}

After buffering the whole message you can hand it off as the byte array
+ number of bytes, or wrap it up in a suitably configured
ByteArrayInputStream to package those into a single object. Do note
that if messages tend to be smaller than the maximum then you are
wasting memory. Unless you can determine the message size before
allocating the buffer, you have an unavoidable space / speed tradeoff
going on here.
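As a concrete sketch of that hand-off, wrapping the filled region of the byte
array in a ByteArrayInputStream costs no copying; the buffer contents below are
made up purely for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MessageHandoff {
    public static void main(String[] args) throws IOException {
        // Pretend "buffer" holds "total" bytes just read from the socket.
        byte[] buffer = new byte[1024];
        byte[] payload = "HELLO".getBytes("US-ASCII");
        System.arraycopy(payload, 0, buffer, 0, payload.length);
        int total = payload.length;

        // Wrap only the filled region; the underlying array is not copied.
        ByteArrayInputStream messageIn = new ByteArrayInputStream(buffer, 0, total);
        byte[] out = new byte[total];
        int n = messageIn.read(out);
        System.out.println(n + " " + new String(out, "US-ASCII"));
    }
}
```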
Once the TCP receive buffer fills, it should not accept more packets. That
should be communicated to the sender by reducing the size of the receive
window in the TCP header. This is an ancient, reliable mechanism.

Which is apparently not working.
I think the problem lies in how the JVM accesses the tcp receive buffer. I
hoped to find more information on the interaction of the JVM with the tcp
stack.

To answer the most recent post: we are running on SuSE Linux, and we use JRockit.

I can't speak specifically to JRockit, but it is highly unlikely to be
messing about with the TCP implementation. If it accesses the network
stack by anything other than the standard system API then I would
complain to the vendor. It is conceivable that the VM or the
ServerSocket implementation is setting TCP options that you did not
explicitly ask for, but before I spent much effort in that direction I
would try to reduce the probability that the problem is at system level.
It shouldn't be too hard to write a bit bucket TCP service in C
against which you could run your probe to see whether the network stack
behaves similarly.
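A bit-bucket service of the kind John describes, written in Java rather than C
so it is self-contained here, might look like the following sketch (it binds an
ephemeral port and discards everything it receives; the local client connection
simulates the probe's load):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BitBucket {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread sink = new Thread(() -> {
            try (Socket s = server.accept(); InputStream in = s.getInputStream()) {
                byte[] buf = new byte[8192];
                long total = 0;
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n; // discard everything, just count it
                }
                System.out.println("discarded " + total + " bytes");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        sink.start();

        // Stand-in for the real probe: pump bytes at the sink.
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
             OutputStream out = client.getOutputStream()) {
            out.write(new byte[100_000]);
        }
        sink.join();
        server.close();
    }
}
```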


John Bollinger
(e-mail address removed)
 

John C. Bollinger

I said:
wasting memory. Unless you can determine the message size before
allocating the buffer, you have an unavoidable space / speed tradeoff
going on here.

Which is true as far as it goes, but upon further reflection I realize
that if the typical message is considerably smaller than the largest
possible message, and if there are a lot of messages, then the time to
allocate (and later GC) a large amount of unused buffer space many times
may trump the time it takes to copy bytes around in a scheme that uses a
smaller buffer by default and expands it as necessary for large
messages. Such an adaptation scheme could be incorporated into my
example code, but I'll spare you the details.
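The adaptation scheme John alludes to could be sketched as follows; the
INITIAL_SIZE value is a made-up starting point to be tuned by testing, and
doubling on demand is only one of several possible growth policies:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class AdaptiveReader {
    static final int INITIAL_SIZE = 8192; // hypothetical default; tune by testing

    // Reads until EOF, doubling the buffer each time it fills.
    static byte[] readAll(InputStream in) throws IOException {
        byte[] buf = new byte[INITIAL_SIZE];
        int total = 0;
        int n;
        while ((n = in.read(buf, total, buf.length - total)) != -1) {
            total += n;
            if (total == buf.length) {
                buf = Arrays.copyOf(buf, buf.length * 2); // grow only when needed
            }
        }
        return Arrays.copyOf(buf, total); // trim to the bytes actually read
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = new byte[20_000]; // larger than INITIAL_SIZE, forces growth
        msg[0] = 42;
        byte[] result = readAll(new ByteArrayInputStream(msg));
        System.out.println(result.length + " " + result[0]);
    }
}
```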

This all comes back around to the point that you need to _test_ to
determine where your performance problems are, and you need to _test_ to
determine whether any hand optimization you come up with actually
improves performance.


John Bollinger
(e-mail address removed)
 

Jeff

John C. Bollinger said:
Which is true as far as it goes, but upon further reflection I realize
that if the typical message is considerably smaller than the largest
possible message, and if there are a lot of messages, then the time to
allocate (and later GC) a large amount of unused buffer space many times
may trump the time it takes to copy bytes around in a scheme that uses a
smaller buffer by default and expands it as necessary for large messages.
Such an adaptation scheme could be incorporated into my example code, but
I'll spare you the details.

This all comes back around to the point that you need to _test_ to
determine where your performance problems are, and you need to _test_ to
determine whether any hand optimization you come up with actually improves
performance.


John Bollinger
(e-mail address removed)

We deal with the variable message size by sending/reading a fixed-length
header which contains the message size. Then we allocate a buffer of that
size and read the rest of the message.
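That framing scheme is commonly implemented with DataInputStream.readFully,
which loops internally until the whole body has arrived. A sketch, assuming a
4-byte big-endian length header (the wire format here is an assumption, and
the in-memory streams stand in for the real socket):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FramedMessage {
    // Reads one length-prefixed message: a 4-byte length header, then the body.
    static byte[] readMessage(DataInputStream in) throws IOException {
        int msgLength = in.readInt();           // fixed-length header
        byte[] msgBuffer = new byte[msgLength]; // allocate exactly the message size
        in.readFully(msgBuffer);                // blocks until all bytes arrive
        return msgBuffer;
    }

    public static void main(String[] args) throws IOException {
        // Build a framed message in memory to stand in for the socket stream.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        byte[] body = "PROBE-DATA".getBytes("US-ASCII");
        out.writeInt(body.length);
        out.write(body);

        byte[] msg = readMessage(
                new DataInputStream(new ByteArrayInputStream(wire.toByteArray())));
        System.out.println(new String(msg, "US-ASCII"));
    }
}
```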

We did get rid of BufferedInputStream based on this newsgroup's input. The
application runs much faster. Using one of our basic load tests, the
application runs 3x faster. Clearly the BufferedInputStream was a
bottleneck. It also seems to have eliminated the bug that started this
thread, but we need more testing to verify that.

John's read loop was much better than my old one, so I swapped his in, with a
few adaptations. Here it is.

// assume we've read the message header and determined the msgLength
msgBuffer = new byte[msgLength];
bytesRead = 0;

int numRead;
while (bytesRead < msgLength
        && (numRead = inputStream.read(msgBuffer, bytesRead,
                msgBuffer.length - bytesRead)) != -1) {
    bytesRead += numRead;
}
if (bytesRead < msgLength) {
    // I need to call a clean-up method if the probe sends -1
    finalize("bytes Read = -1");
}

For profiling, we use both OptimizeIt and JRockit's JRA. However,
it's hard to identify where code waits for input.

Thanks for the help!
 

John C. Bollinger

Jeff said:
We deal with the variable message size by sending/reading a fixed-length
header which contains the message size. Then we allocate a buffer of that
size and read the rest of the message.

We did get rid of BufferedInputStream based on this newsgroup's input. The
application runs much faster. Using one of our basic load tests, the
application runs 3x faster. Clearly the BufferedInputStream was a
bottleneck. It also seems to have eliminated the bug that started this
thread, but we need more testing to verify that.

I'm glad things are working better now. I posit, however, that the
underlying problem has not been solved, but rather that the application
is now fast enough that the underlying problem does not manifest. That
may be sufficient for your purposes.
John's read loop was much better than my old one, so I swapped his in, with a
few adaptations. Here it is.

// assume we've read the message header and determined the msgLength
msgBuffer = new byte[msgLength];
bytesRead = 0;

int numRead;
while (bytesRead < msgLength
        && (numRead = inputStream.read(msgBuffer, bytesRead,
                msgBuffer.length - bytesRead)) != -1) {
    bytesRead += numRead;
}
if (bytesRead < msgLength) {
    // I need to call a clean-up method if the probe sends -1
    finalize("bytes Read = -1");
}

Looks like a pretty straightforward adaptation. One style comment: I
dislike the use of the name "finalize" for your cleanup method, as it
might cause confusion (to a human) with the no-arg finalize method
inherited from Object. That method has special semantics with regard to
GC, and although the VM will not have a problem distinguishing between
the two, future maintainers of the code might.


John Bollinger
(e-mail address removed)
 
