ruby-mysql issues and persistence


Andres Salomon

I have some code that triggers a race somewhere in mysql. Basically, what
happens is this (all in ruby code):

1) using ruby-dbi w/ the mysql DBD, a connection is made to a database.
2) fork() is called.
3) the child process redirects std{in,out,err} to /dev/null, and then runs
a SQL query: SET AUTOCOMMIT=1.
4) the child process runs another SQL query; this one is an INSERT.
5) the parent process exits. This exit calls all finalizers on objects in
the parent's memory, including free_mysql() (from mysql-ruby; free_mysql
is set as a callback inside init() via Data_Make_Struct()). free_mysql()
closes the mysql connection.
6) if timed correctly, the INSERT from step 4 hangs indefinitely.
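
In code, the sequence is roughly this (a sketch; database name, table, and
credentials are placeholders):

    require 'dbi'

    # Parent connects; the child will inherit this handle across fork().
    dbh = DBI.connect('DBI:Mysql:testdb:localhost', 'user', 'pass')

    fork do
      # Child: detach from the terminal, then use the *inherited* handle.
      STDIN.reopen('/dev/null')
      STDOUT.reopen('/dev/null', 'w')
      STDERR.reopen('/dev/null', 'w')
      dbh.do('SET AUTOCOMMIT=1')
      dbh.do("INSERT INTO t (col) VALUES ('x')")  # step 4: this can hang
    end

    # Parent exits.  Finalizers run, free_mysql() calls mysql_close() on
    # the shared connection, and the child's INSERT may never get a reply.
    exit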

The race itself is another issue (a bug in libmysqlclient or innodb,
presumably). However, the reason the race occurs is something that
I'd like to solve. Are there any suggestions for what I should do?
Basically, I need to force the mysql object to not be destroyed in the
parent process. Simply closing and reopening the database handle in the
child doesn't work, as there's a chance that the parent may want to do
some sql calls (and if the child closes the connection... well, the problem
just moves there instead).

So, I need a way to basically call undefine_finalizer for the callback
supplied to Data_Make_Struct(); or, I need a way to properly clone/dup the
mysql connection in the child, such that I destroy the original connection
to the database (the one shared w/ the parent) w/out sending a QUIT to
the mysqld, and then replace it w/ a new connection to the db.
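
One blunt possibility for the first option (untested): have the parent skip
finalization entirely. Kernel#exit! maps to _exit(2), so free_mysql() never
runs and no QUIT is written to the shared socket:

    # Parent, after fork(): finish any remaining queries, then leave via
    # exit! rather than exit.  No at_exit handlers and no end-of-process
    # GC sweep run, so free_mysql() is never invoked and no COM_QUIT is
    # sent on the shared socket.  The child's copy of the fd keeps the
    # server-side session alive.
    exit!

The obvious cost is that *no* cleanup runs in the parent, not just
free_mysql().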

A generic solution would be great. A persistence framework would solve
this problem for me, as well; something that allows me to open one DBI
connection, and share that amongst multiple ruby scripts and processes.
The DBI object should be reconstituted from memory/file/socket
somewhere, w/ the idea that destruction of the object is disabled (unless
a timeout occurs). A timeout should be set somewhere, so that if the DBI
connection is not used for 600 seconds or so, the object is destroyed, and
any script needing it after that has it automatically re-created. Does
anything like this exist? Being able to use it for other things would be
useful as well; for example, I currently load XSL templates from
multiple files and form an XSL object. If I could keep that in memory,
via persistence, that would be ideal.
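
The closest building block I'm aware of is DRb from the standard library: a
small broker process could own the DBI handle and serve queries to any
number of scripts. A rough sketch of what I have in mind (the URI, port,
class names, and reaper interval are all made up for illustration):

    # broker.rb -- a long-lived process that owns the single DBI handle
    require 'dbi'
    require 'drb'
    require 'thread'

    class DbBroker
      IDLE_TIMEOUT = 600  # seconds of disuse before the handle is dropped

      def initialize(dsn, user, pass)
        @dsn, @user, @pass = dsn, user, pass
        @dbh = nil
        @last_used = Time.now
        @lock = Mutex.new   # DRb serves each request in its own thread
      end

      def do(sql, *binds)
        @lock.synchronize { handle.do(sql, *binds) }
      end

      def select_all(sql, *binds)
        # Convert rows to plain arrays so they marshal cleanly over DRb.
        @lock.synchronize { handle.select_all(sql, *binds).collect { |r| r.to_a } }
      end

      # Called periodically: disconnect if the handle has sat idle too long.
      def reap
        @lock.synchronize do
          if @dbh && Time.now - @last_used > IDLE_TIMEOUT
            @dbh.disconnect
            @dbh = nil
          end
        end
      end

      private

      # Lazily (re)connect, stamping the last-used time.
      def handle
        @last_used = Time.now
        @dbh ||= DBI.connect(@dsn, @user, @pass)
      end
    end

    broker = DbBroker.new('DBI:Mysql:testdb:localhost', 'user', 'pass')
    Thread.new { loop { sleep 60; broker.reap } }
    DRb.start_service('druby://localhost:9192', broker)
    DRb.thread.join

Any script that wants the shared handle would then do:

    require 'drb'
    DRb.start_service
    db = DRb::DRbObject.new_with_uri('druby://localhost:9192')
    db.do('SET AUTOCOMMIT=1')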

Note that this is being called via an interpreter inside of an apache
module; mod_ruby isn't really an option, nor would I want to add this
functionality to my apache module if I can get around it.
 

Florian G. Pflug

Andres said:
I have some code that triggers a race somewhere in mysql. Basically, what
happens is this (all in ruby code):

1) using ruby-dbi w/ the mysql DBD, a connection is made to a database.
2) fork() is called.
3) the child process redirects std{in,out,err} to /dev/null, and then runs
a SQL query: SET AUTOCOMMIT=1.
4) the child process runs another SQL query; this one is an INSERT.
5) the parent process exits. This exit calls all finalizers on objects in
the parent's memory, including free_mysql() (from mysql-ruby; free_mysql
is set as a callback inside init() via Data_Make_Struct()). free_mysql()
closes the mysql connection.
6) if timed correctly, the INSERT from step 4 hangs indefinitely.

I don't think it's a good idea to use the same database connection from
two different processes. Either create a new connection for the child,
or (if it absolutely _has_ to be the same connection) have both processes
connect to some query-multiplexer.
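
For the first option, something like this in the child, say (a sketch with
placeholder connection details; the exit! matters, since it keeps the child
from running the finalizer of the handle it inherited from the parent):

    fork do
      # Child: leave the inherited handle alone; open a private connection.
      child_dbh = DBI.connect('DBI:Mysql:testdb:localhost', 'user', 'pass')
      child_dbh.do('SET AUTOCOMMIT=1')
      child_dbh.do("INSERT INTO t (col) VALUES ('x')")
      child_dbh.disconnect
      exit!  # _exit(2): the inherited handle's free_mysql() never runs
    end
    # Parent keeps using its own dbh as before.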

I guess what happens is that when the first process shuts down, it
signals the mysql backend "end of connection".
The backend thus believes the connection to be lost, and never reports a
status back to the second process - thus it hangs forever, waiting
for a success or failure message concerning the insert.

greetings, Florian Pflug
 

Andres Salomon

I don't think it's a good idea to use the same database connection from
two different processes. Either create a new connection for the child,
or (if it absolutely _has_ to be the same connection) have both processes
connect to some query-multiplexer.

Right; the problem is, how does one handle that post-fork? I don't think
closing the Mysql connection before the fork and then opening two
connections to the Mysql server after the fork is the optimal solution.
And if the connection is left open across the fork, whichever one of the
processes exits first will destroy the connection for the other process.
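
I.e., the pattern I'd rather avoid looks like this (sketch, placeholder
credentials):

    dbh.disconnect     # close cleanly *before* forking; nothing is shared
    if fork
      dbh = DBI.connect('DBI:Mysql:testdb:localhost', 'user', 'pass')  # parent: fresh connection
    else
      dbh = DBI.connect('DBI:Mysql:testdb:localhost', 'user', 'pass')  # child: its own connection
      # ... child work ...
      exit
    end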

I guess what happens is that when the first process shuts down, it
signals the mysql backend "end of connection". The backend thus believes
the connection to be lost, and never reports a status back to the second
process - thus it hangs forever, waiting for a success or failure message
concerning the insert.


It's actually inside mysql that the connection hangs, though. Doing a
'show processlist;' on the database shows the active connection (0 seconds
idle), so mysqld thinks it's doing *something*.
 

Heikki Tuuri

Hi!

Hmm... I assume you are only running queries on the connection in the child
process. That is ok. But all kinds of bad things can happen if the parent
process closes the connection while the child is running queries on it:
data structures disappear out from under the child.

Best regards,

Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/
 
