Trouble with Connector/J and ResultSet

lord0

Hello,

I've got some trouble which I hope you can shed some light on.

I am using MySQL's Connector/J version 3.0.10-stable (the latest) to allow
Tomcat and MySQL to "talk" to each other - all works well.

However... when I perform "select" queries that return a large(ish) set of
data, the servlet goes belly up. Perusing the error logs I can see:

Ajp13Processor[8009][0] process: IOException Broken pipe // Tomcat log
Error ajp13:is_input_event: error during select [0] // mod_jk log

After googling and surfing for a bit the only things I could find were:
"My first guess would be that the size of your
ResultSet is over the default driver size (1027 KB
if I remember correctly);
look at your driver specification to modify the size of
the stream."

http://archives.real-time.com/pipermail/tomcat-users/2000-September/009064.html

Which sounds right, but I don't know how to "modify the size of the
stream"... ooo-er

and:
"By default, ResultSets are completely retrieved and stored in memory. In
most cases this is the most efficient way to operate, and due to the design
of the MySQL network protocol is easier to implement. If you are working
with ResultSets that have a large number of rows or large values, and can
not allocate heap space in your JVM for the memory required, you can tell
the driver to 'stream' the results back one row at a time.
To enable this functionality, you need to create a Statement instance in
the following manner:

stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                            java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);

The combination of a forward-only, read-only result set, with a fetch size
of Integer.MIN_VALUE serves as a signal to the driver to 'stream' result
sets row-by-row. After this any result sets created with the statement will
be retrieved row-by-row.
There are some caveats with this approach. You will have to read all of the
rows in the result set (or close it) before you can issue any other queries
on the connection, or an exception will be thrown. Also, any tables
referenced by the query that created the streaming result will be locked
until all of the results have been read or the connection closed."

http://www.mysql.com/documentation/connector-j/index.html#id2800171

Which didn't work, and to me it seems a bit convoluted for only a couple of
K of data.
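For what it's worth, the driver only streams when all three of those settings hold at once. A DB-free sketch of that rule, using the real java.sql.ResultSet constants (the helper wouldStream is a made-up name for illustration, not a Connector/J API):

```java
import java.sql.ResultSet;

public class StreamingCheck {
    // Connector/J streams row-by-row only when the statement is
    // forward-only AND read-only AND the fetch size is Integer.MIN_VALUE;
    // anything else buffers the whole result set in memory.
    static boolean wouldStream(int type, int concurrency, int fetchSize) {
        return type == ResultSet.TYPE_FORWARD_ONLY
            && concurrency == ResultSet.CONCUR_READ_ONLY
            && fetchSize == Integer.MIN_VALUE;
    }

    public static void main(String[] args) {
        // Default statement settings: fully buffered, not streamed.
        System.out.println(wouldStream(ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY, 0));                 // false
        // The documented combination: streamed row-by-row.
        System.out.println(wouldStream(ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY, Integer.MIN_VALUE)); // true
    }
}
```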

Any help would be appreciated.

Lord0
 
Tony Morris

lord0 said:
[original post quoted in full - snipped]

Post a source code test case that demonstrates the problem.
Not all drivers and ResultSet types return the data as an in-memory cache
copy.

--
Tony Morris
(BInfTech, Cert 3 I.T., SCJP[1.4], SCJD)
Software Engineer
IBM Australia - Tivoli Security Software
(2003 VTR1000F)
 
lord0

Tony Morris said:
Post a source code test case that demonstrates the problem.
Not all drivers and ResultSet types return the data as an in-memory cache
copy.

### Okay, some source ###

// To get connection

public void init() throws ServletException {
    try {
        Context ctx = new InitialContext();
        if (ctx == null)
            throw new Exception("Boom - No Context");

        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");

        if (ds != null)
            conn = ds.getConnection();
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}

// To perform query and get result set
// this works fine as long as the ResultSet is not over a couple of K
// ie "SELECT * FROM sometable LIMIT 10" - cool
// "SELECT * FROM sometable" - boom!!!

private ResultSet doSelect(String sqlString) throws SQLException {
    Statement stmt = conn.createStatement();
    ResultSet rst = stmt.executeQuery(sqlString);
    return rst;
}
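One thing that jumps out, though it isn't the cause of the broken pipe: doSelect() returns the ResultSet while its Statement goes out of scope without close(), so every call leaks a Statement until the connection dies. A DB-free sketch of the difference, using a stand-in FakeStatement class (made up for illustration, not part of JDBC):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Stand-in for a JDBC Statement: just records whether close() ran.
    static class FakeStatement {
        boolean closed = false;
        void close() { closed = true; }
    }

    static List<FakeStatement> opened = new ArrayList<FakeStatement>();

    // Mirrors the posted doSelect(): the Statement reference goes out
    // of scope without close(), so the statement leaks.
    static void leakyQuery() {
        FakeStatement stmt = new FakeStatement();
        opened.add(stmt);
        // ... executeQuery(), return the ResultSet, stmt never closed
    }

    // Safer shape: consume the rows inside the method, then close in
    // a finally block so it runs even if the query throws.
    static void safeQuery() {
        FakeStatement stmt = new FakeStatement();
        opened.add(stmt);
        try {
            // ... executeQuery(), copy the rows out, return the copy
        } finally {
            stmt.close();
        }
    }

    public static void main(String[] args) {
        leakyQuery();
        safeQuery();
        System.out.println(opened.get(0).closed); // prints false - leaked
        System.out.println(opened.get(1).closed); // prints true  - closed
    }
}
```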
 
Berlin Brown

lord0 said:
[previous posts quoted in full - snipped]


How is your connection pool or datasource configured through Tomcat, i.e.
in your .xml file? It's probably limiting the queries.
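For reference, in case the pool itself is the issue: a Tomcat 5.x-era DBCP DataSource for the java:comp/env/jdbc/TestDB lookup above is typically declared along these lines (a sketch; hostnames, credentials and limits are placeholders):

```xml
<!-- context.xml (or the app's <Context> element in server.xml) -->
<Context>
  <Resource name="jdbc/TestDB" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/test"
            username="dbuser" password="dbpass"
            maxActive="20" maxIdle="10" maxWait="10000"/>
</Context>
```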
 
lord0

How is your connection pool or datasource configured through Tomcat, i.e.
in your .xml file? It's probably limiting the queries.

You are absolutely correct - the workers.properties file for ajp13 was set
with a tiny timeout, so it was losing the connection during the query.
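For anyone who lands here with the same symptom: the relevant knob lives in mod_jk's workers.properties. A sketch (property names vary between jk releases, so check the docs for your version; the worker name "ajp13" and the values are examples):

```properties
# workers.properties (mod_jk)
worker.list=ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
# Socket timeout in seconds; 0 means no timeout. A very small value
# here can drop the AJP connection while a long query is still running.
worker.ajp13.socket_timeout=300
```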