mod_perl - DBI - DBD::Pg performance test, old vs new


Pablo S

Hi there mod_perl/Pg folks,

I have 2 systems. One is OLD:
(Linux 2.4 running postgresql-7.2.1-5, with a Perl web DB app on
Apache/1.3.23, mod_perl 1.26, and DBI 1.37, with persistent db connections
via Apache::DBI.)

The other system is NEW, which I am demoing:
(running the same web app on Linux 2.6, postgresql-7.4.2-1, with
Apache/2.0.49, mod_perl/1.99_12, and Apache::DBI; this one is Fedora Core.)

Both systems have almost identical hardware, and have had the same
tweaks made to pg (at least all I can dredge out of my brain):

<pg stuff>
their SYSV shared mem increased to 128mb
shared_buffers = 15200
sort_mem = 32168
effective_cache_size = 4000
</pg stuff>

I have one page where I run the same query 155 times. (Whew! I know
it sucks, but it is what raises the question.) On the old server, it
loads in 1.5 seconds. On the new server, with the same data, it takes
6. I am trying to find out why.

I think this has something to do with DBI, Apache, or
mod_perl (is that all?), as it is a pretty simple loop: the DB handle is
there the whole time, I just prepare a new sth and execute it, read
back the rows, rinse, repeat. It is the same code on both servers.
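For what it's worth, since the loop re-prepares the same SQL 155 times, one thing worth trying on the new box is caching the statement handle with prepare_cached(); newer DBD::Pg releases can also do server-side prepares, which changes what each prepare() costs. A minimal sketch of the loop — connection parameters, loop bounds, and bind values are hypothetical, only the table/column names come from the query above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection parameters; under mod_perl, Apache::DBI
# transparently returns its cached $dbh from this same connect() call.
my $dbh = DBI->connect('dbi:Pg:dbname=tsdb', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 1 });

# If your DBD::Pg is recent enough to support server-side prepares,
# you can toggle them off to compare timings (this attribute only
# exists in later DBD::Pg releases):
# $dbh->{pg_server_prepare} = 0;

my ($emp_id, $from, $to) = (1, '2004-01-01', '2004-01-04');

for my $i (1 .. 155) {    # stand-in for the ~155 queries on the page
    # prepare_cached() hands back the same $sth on every pass instead
    # of re-preparing the SQL each time through the loop.
    my $sth = $dbh->prepare_cached(
        'SELECT ts.start, ts.emp_id, ts.finish, ts.so_id
           FROM ts
          WHERE emp_id = ? AND start BETWEEN ? AND ?');
    $sth->execute($emp_id, $from, $to);
    while (my $row = $sth->fetchrow_hashref) {
        # ... rinse, repeat ...
    }
}
$dbh->disconnect;
```

If the timing gap shrinks with prepare_cached(), the overhead is in the repeated prepares rather than in the query itself.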

I know pg performance tuning better than mod_perl, so here's that info:

It is your standard join query:

explain select ts.start, to_hours(ts.hours) as hours,
ts.emp_id, ts.finish, ts.so_id, ts.lowdown, so.customer, so.summary,
so.status, ts.status as ts_status from ts inner join so on so.id = ts.so_id
where emp_id = ? and start between ? and ?

My table is ~10MB, so I am expecting pg should be able to get it all
buffered in shared mem. If I EXPLAIN the query, both versions are doing
the same thing: using an index on each table.

7.2:

Nested Loop (cost=0.00..12.04 rows=1 width=176)
  -> Index Scan using ts_empid_start_idx on ts (cost=0.00..6.02 rows=1 width=76)
  -> Index Scan using so_id_key on so (cost=0.00..6.01 rows=1

7.4:

Nested Loop (cost=0.00..12.09 rows=2 width=172)
  -> Index Scan using ts_empid_start_idx on ts (cost=0.00..6.02 rows=1 width=76)
       Index Cond: ((emp_id = 1) AND ("start" >= '2004-01-01 00:00:00'::timestamp with time zone) AND ("start" <= '2004-01-04 00:00:00'::timestamp with time zone))
  -> Index Scan using so_id_key on so (cost=0.00..6.01 rows=1
       Index Cond: ( = "outer".so_id)

Watching processes, pg is doing a lot of work on the new box; apache
also seems to be burning more cycles, but it is 70% pg. So I am
looking for experts to hopefully point me in the right direction as to
why this may be happening. One thing I did notice: my old (faster)
server has pg processes using 37MB of RAM, each. On the new server,
they are only using 8MB. The table is larger than 8MB, but I thought I
read that table caching is done in IPC shared memory, not process memory.
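One sanity check that might explain the 8MB processes: confirm the new cluster actually picked up the postgresql.conf tweaks after its restart. A sketch that asks the server what it is really running with — the DSN is hypothetical, and note that SHOW only returns a result row on 7.3+ (on the 7.2 box it reports via a NOTICE instead):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN: point this at the new (7.4) server to verify that
# the postgresql.conf settings were actually applied after restart.
my $dbh = DBI->connect('dbi:Pg:dbname=tsdb', 'user', 'pass',
                       { RaiseError => 1 });

for my $setting (qw(shared_buffers sort_mem effective_cache_size)) {
    # SHOW returns the live value the backend is using, which may
    # differ from the config file if the restart never reread it.
    my ($value) = $dbh->selectrow_array("SHOW $setting");
    print "$setting = $value\n";
}
$dbh->disconnect;
```

If shared_buffers comes back at the old default rather than 15200, the config change never took effect, which would match the small per-process memory you are seeing.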

Thank you for your interest!! Any comments welcome.
