Debian MySQL Perl DBI - connection terminates unexpectedly after 100 secs.


John

Hi

I'm updating a MySQL table with about 2 million records. After about 100
seconds the program ends with no error message. If I use LIMIT 800000,50000
the part where it stopped completes correctly, so the UPDATE statement itself is OK.

Is there a timeout parameter? Or am I running out of memory? Can the
memory be increased?

Regards
John
 

J. Gleixner

John said:
Hi

I'm updating a MySQL table with about 2 million records. After about 100
seconds the program ends with no error message. If I use LIMIT 800000,50000
the part where it stopped completes correctly, so the UPDATE statement itself is OK.

Is there a timeout parameter? Or am I running out of memory? Can the
memory be increased?

Check line 24.
 

John

J. Gleixner said:
Check line 24.

Line 24 says:-

our $old_fh = select(STDOUT); $| = 1; select($old_fh); # Make standard output socket hot

Can't see how that can affect it.

Regards
John
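As an aside, the select()/$| idiom on that line can be written more readably with the IO::Handle module; a minimal equivalent sketch:

```perl
use strict;
use warnings;
use IO::Handle;        # provides the autoflush() method on filehandles

# Equivalent to: our $old_fh = select(STDOUT); $| = 1; select($old_fh);
STDOUT->autoflush(1);  # unbuffer STDOUT so progress output appears immediately
print "working...\n";
```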
 

Jens Thoms Toerring

Line 24 says:-
our $old_fh = select(STDOUT); $| = 1; select($old_fh); # Make standard output socket hot
Can't see how that can affect it.

I rather guess it was a gentle hint that your problem description
is too vague to allow anyone to deduce what's going wrong. But your
subject line may contain a hint: it looks as if the update is done
from within a CGI script or something similar, running on a web
server. Normally there's a CPU time limit set for such scripts to
catch e.g. scripts that run in an endless loop. That's what I would
check first (note: if it's not set, an OS default limit may apply!).
If you are using Apache it might be a RLimitCPU directive. But there
could also be a limit on the memory consumption of such scripts
(RLimitMEM with Apache2)... Perhaps a look at the log files of the
web server will give you some more hints.

Regards, Jens
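For illustration, such limits typically appear in the Apache configuration like this; the numeric values below are hypothetical (RLimitCPU takes CPU seconds and RLimitMEM takes bytes, each as a soft limit with an optional hard limit):

```apache
# Hypothetical httpd.conf fragment: resource caps for CGI children.
RLimitCPU 120 300                    # soft 120s, hard 300s of CPU time
RLimitMEM 268435456 536870912        # soft 256 MB, hard 512 MB per process
```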
 

Tad J McClellan

John said:
Line 24 says:-

our $old_fh=select(STDOUT); $|=1; select($old_fh); # Make standard output
socket hot

Can't see how that can affect it.


Check line 17.
 

Xho Jingleheimerschmidt

John said:
Hi

I'm updating a MySQL table with about 2 million records. After about 100
seconds the program ends with no error message.

Are you sure? If there were an error message, do you know where it
would go and do you know you could find it?
If I use LIMIT 800000,50000
the part where it stopped is OK so the UPDATE part is OK.

Is there a time out parameter?

I don't think there is one in DBD::mysql. The program that calls your
Perl script could have a timeout, or your MySQL server might have a
timeout, but neither of those is a Perl issue.

Or am I running out of memory. Can the
memory be increased?

Unlike a select, there is no reason that a large update should consume
much memory in the Perl client. I don't use Debian specifically,
but on other Linux systems running out of memory does produce an error
somewhere, though not always in the first place one might think to look.

Xho
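The chunking John already stumbled on with LIMIT 800000,50000 can be made systematic, so that no single statement runs long enough to trip an external CPU or wall-clock limit. A sketch, relying on MySQL's (non-standard) support for LIMIT on UPDATE; the DSN, credentials, table, and column names are placeholders:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details; RaiseError turns DBI errors into
# exceptions, so a silent death is less likely to go unexplained.
my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

my $chunk = 50_000;
while (1) {
    # do() returns the number of rows affected; for zero rows it
    # returns the sentinel '0E0', which is numerically 0 but still true.
    my $rows = $dbh->do(
        "UPDATE mytable SET status = 'done' WHERE status = 'pending' LIMIT $chunk"
    );
    last if $rows == 0;   # nothing left to update
}
$dbh->disconnect;
```

Each slice commits on its own, so a kill partway through loses at most one chunk of work.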
 

John

Jens Thoms Toerring said:
I rather guess it was a gentle hint that your problem description
is too vague to allow anyone to deduce what's going wrong. But your
subject line may contain a hint: it looks as if the update is done
from within a CGI script or something similar, running on a web
server. Normally there's a CPU time limit set for such scripts to
catch e.g. scripts that run in an endless loop. That's what I would
check first (note: if it's not set, an OS default limit may apply!).
If you are using Apache it might be a RLimitCPU directive. But there
could also be a limit on the memory consumption of such scripts
(RLimitMEM with Apache2)... Perhaps a look at the log files of the
web server will give you some more hints.

Regards, Jens

Many thanks. That was very helpful. I have maxed out both RLimitCPU and
RLimitMEM in Apache2 and that has led to better performance. There are
severe problems in using MySQL when you start hitting more than 2 million
rows, irrespective of how well you tweak your settings.

Regards
John
 

John

Xho Jingleheimerschmidt said:
Are you sure? If there were an error message, do you know where it would
go and do you know you could find it?

I don't think there is one in DBD::mysql. The program that calls your
Perl script could have a timeout, or your MySQL server might have a
timeout, but neither of those is a Perl issue.



Unlike a select, there is no reason that a large update should consume much
memory in the Perl client. I don't use Debian specifically, but on
other Linux systems running out of memory does produce an error somewhere,
though not always in the first place one might think to look.

Xho

Yes, I have now found that error log. There is a special log for slow
queries, so evidently MySQL are aware that some constructs do take a long
time. I am not certain whether it is a Perl problem. My feeling now is
that it is a MySQL problem and that they are aware of similar issues.

Regards
John
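For reference, the slow query log is usually enabled in my.cnf; the option names below are the MySQL 5.1+ spellings (older releases used log-slow-queries instead), so treat the fragment as a sketch:

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2   ; seconds; statements slower than this are logged
```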
 

sln

John said:
Many thanks. That was very helpful. I have maxed out both RLimitCPU and
RLimitMEM in Apache2 and that has led to better performance. There are
severe problems in using MySQL when you start hitting more than 2 million
rows, irrespective of how well you tweak your settings.

Regards
John
I am for hire. I've never done your fancy server-name stuff,
but I can assure you I am an expert at databases and Perl.
The learning curve is like the next day.

Pay me and you get quality software! Prices are negotiable.
I'm looking for full-time at 140K but will work at $120/hr for
minimal time.

-sln
 

Xho Jingleheimerschmidt

John said:
Many thanks. That was very helpful. I have maxed out both RLimitCPU and
RLimitMEM in Apache2 and that has led to better performance. There are
severe problems in using MySQL when you start hitting more than 2 million
rows, irrespective of how well you tweak your settings.

I have MySQL tables with 150 times that many rows. Obviously I don't
try to run unrestricted update DML on them. It isn't how many rows you
have, it is what you are trying to do with them.

Xho
 

J. Gleixner

John said:
[...]


Yes, I have now found that error log. There is a special log for slow
queries, so evidently MySQL are aware that some constructs do take a long
time.

I'm confused... Are you doing a query or an update?

A long-running query should be noted so it can possibly be fixed
by adding indexes, temp tables, etc.
I am not certain whether it is a Perl problem. My feeling now is
that it is a MySQL problem and that they are aware of similar issues.

It's not a Perl problem or a MySQL 'problem'. They both can handle
running for a looooong time.

If you're going to run a loooooong update, and have it kicked off by
an Apache process (which, BTW, you didn't mention in the beginning),
then don't mess with Apache; simply fork the MySQL update process
so Apache doesn't have to keep the connection open.
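A minimal sketch of that fork-and-detach idea; the DSN, credentials, and the UPDATE itself are placeholders, and error handling is kept to the basics:

```perl
use strict;
use warnings;
use POSIX qw(setsid);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: detach from Apache so the parent can answer the browser.
    setsid();                                  # leave Apache's session
    open STDOUT, '>', '/dev/null' or die $!;   # release Apache's pipes
    open STDERR, '>', '/dev/null' or die $!;
    require DBI;
    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });
    $dbh->do("UPDATE mytable SET status = 'done' WHERE status = 'pending'");
    $dbh->disconnect;
    exit 0;
}

# Parent: return immediately; the child carries on with the UPDATE.
print "Content-Type: text/plain\r\n\r\nUpdate started in background.\n";
```

Note the child must reconnect to MySQL itself; sharing a DBI handle across a fork is unsafe.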
 