skeldoy
Hey! I am working on an application that features a huge database
(MySQL) and some CGI (Perl) for listing, sorting, searching,
dupe-checking and more. The MySQL configuration looks pretty much
spot on: most of the data is cached, so the mysqld process isn't
really causing any bottlenecks. I believe the Perl code is the
bottleneck here. I have turned off output buffering completely and I
render pretty much only the things that are needed. Still, it can
take up to a minute to print (as HTML) a query that returns in a
second on the mysql console.
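For reference, this is roughly how I turned buffering off at the top
of the script (simplified; the rest of the headers are the usual CGI
ones):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Enable autoflush on STDOUT, i.e. turn off output buffering,
    # so every print goes straight to the client.
    $| = 1;

    print "Content-type: text/html\n\n";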
The output from the CGI is around 15 MB for every operation the user
does, so I see the potential for slowness right there, in the sheer
amount of data that has to be produced and transmitted over the net.
But I still don't really understand what I have done to make the CGI
so slow.
The CGI mostly takes a parameter like $query, does a "select * from
db where value like '%$query%'", and returns the rows in pretty
<td>$_</td> form. That seems to work reasonably fast. But when I do a
plain "select * from db", things get really slow once I'm dealing
with 15,000+ entries (even though MySQL has it all cached and spits
it out in a split second). The CGI process just sits there, spitting
out HTML to the client, using 95% of one core's CPU time and around
50 MB of memory, and I have no idea what it is doing. I have replaced
most of the double quotes with single quotes, and I can't really see
that I am doing anything that needn't be done. Is there an issue with
creating multiple database connections (DBD::mysql) that I should be
aware of?
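To make it concrete, here is a stripped-down version of what the CGI
does. The table and column names are invented for illustration, but
the shape is the same:

    use strict;
    use warnings;
    use CGI;
    use DBI;

    my $cgi   = CGI->new;
    my $query = $cgi->param('query') // '';

    # One connection per request (RaiseError makes DBI die on failure).
    my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    # Placeholder instead of interpolating $query into the SQL string.
    my $sth = $dbh->prepare('SELECT value FROM db WHERE value LIKE ?');
    $sth->execute('%' . $query . '%');

    print "Content-type: text/html\n\n<table>\n";
    while ( my ($value) = $sth->fetchrow_array ) {
        print "<tr><td>$value</td></tr>\n";    # one print per row
    }
    print "</table>\n";

    $sth->finish;
    $dbh->disconnect;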
If somebody has experience working with huge databases from Perl, can
you please give me some pointers? Is this a code issue, a network
issue, or a browser issue? Any tips would be appreciated.