MySQL and SQLite question

nephish

Hello there, all.

I have been giving some thought to moving from MySQL to SQLite. We are
moving our site from PHP to Rails for what we call "the big rewrite",
and these changes got me thinking. I really like that SQLite is
stored in flat files that can easily be moved around, like from one
computer to a backup computer. I also like that it sounds like it is
faster. I know that its SQL does not support all of the same abilities
that MySQL does... but is there much difference with regard to how it
is handled by ActiveRecord? I was just curious how they may
compare from a Rails point of view, or if there are some gotchas out
there I may need to be aware of.

thanks for any tips
sk
 
nephish

Paul said:
FWIW, MySQL can also create portable flat files, if that is how you have set
up your database. I have a rather large set of databases for various
purposes, and I can easily export them all into a convenient, plain-text,
flat-file format. This depends on what sort of database you have and whether
it exploits advanced database features.

Also, MySQL's native database format is quite portable, without taking
any conversion steps.

This is only one observation, it doesn't address any of your other
discussion points.

I have used the export feature to create some portable files, but they
take so long to import into another db. One of my tables is 1.5 GB.
I have also had some trouble moving the three files per table in one
database to another computer and having them recognized as a legit table.
Maybe I was doing something wrong. I don't know of another way for
MySQL to create a flat file that I could transfer to another computer,
or back up, or whatever.

Please let me know if I am missing something, and thanks for your
time, Paul.
sk
 
David Vallner

> I know that the SQL does not support all of the same abilities
> that MySQL does...

From my own experience, some 80% of MySQL users are stuck at versions
3.x in their heads and don't use fancy features like transactions or
referential integrity enforcement anyway.

> but, is there much difference with regard to how it
> is handled by ActiveRecord?

ActiveRecord has very basic requirements on the underlying database
engine. Either should be sufficient.
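To make that concrete: under Rails the only structural change when you swap engines is the adapter section in config/database.yml. A sketch (names, paths, and credentials here are just placeholders):

```yaml
# Hypothetical config/database.yml fragments -- the adapter line is
# essentially the only difference between the two setups.
development:
  adapter: sqlite3
  database: db/development.sqlite3

production:
  adapter: mysql
  database: myapp_production
  host: localhost
  username: rails
  password: secret
```

Your models and migrations stay the same either way, as long as you stick to what ActiveRecord abstracts.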
> I was just curious as to how they may
> compare from a rails point of view, or if there are some gotchas out
> there I may need to be aware of.

ActiveRecord has a very strongly programmer-centric approach to
persistence, so it might not be all too gentle to the database. Using
SQLite will remove the network communication overhead, but I -think-
SQLite only uses rather coarse transaction isolation, table-level, or
even file-level locking. This is very bad for scalability with
write-access users and could literally blow up performance.

Either way, Rails lets you switch underlying databases with reckless
abandon. Do the obvious sensible thing, do the benchmark against
anything you feel comfortable with (I'd go SQLite for development
because it's simpler to get running, YMMV), and then do some load
testing with both the database engines, and in the case of MySQL, with
memcached or another caching mechanism to mitigate the network
connection overhead. (That shouldn't be too noticeable with a local
connection.)
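Even a crude script with Ruby's stdlib Benchmark will tell you more than any general claim here. A minimal sketch: the timed work below is a stand-in method (you'd replace it with real ActiveRecord calls like Model.create, then run the same script once per adapter):

```ruby
require 'benchmark'

N = 1_000
rows = []

# Stand-in for the real work -- swap in ActiveRecord calls
# (e.g. Model.create(:name => 'test')) when running against a real DB.
def fake_insert(rows)
  rows << { :name => 'test' }
end

# Benchmark.measure returns a Tms object; .real is wall-clock seconds.
time = Benchmark.measure do
  N.times { fake_insert(rows) }
end

puts "#{N} inserts took #{'%.4f' % time.real}s"
```

Run it against each database.yml configuration and compare the wall-clock numbers; the relative difference matters more than the absolute ones.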

Also, if you have a lot of write-access, you should benchmark both
MyISAM tables and InnoDB tables. The former should be overall faster on
grounds of supporting fewer SQL features (which you don't need with AR
anyway unless you drop out to SQL often; in that case, ActiveRecord
probably isn't the cleanest choice anyway and you might want to consider
RBatis or something similar). However, the latter supports row-level
locking, which should scale better with higher amounts of concurrent
write-access transactions. Either way, a benchmark is what can tell you
what will really happen.
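If you want to try both engines for the benchmark, the conversion itself is a one-liner in MySQL. The table name below is a placeholder, and you'd want to do this on a copy of the data, not the live table:

```sql
-- convert a table to InnoDB for the benchmark run
ALTER TABLE messages ENGINE=InnoDB;

-- and back, if MyISAM wins
ALTER TABLE messages ENGINE=MyISAM;
```

Note the rewrite copies the whole table, so it can take a while on something like that 1.5 GB table.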

David Vallner


 
nephish

David said:
<snip>

Wow, thanks for all of this guys,
I will do a benchmark, but just to check performance. I am using MyISAM
tables, and will not be migrating to anything else. Our SQL server and
HTTP server are the same machine. The backup server that I want to
back everything up to is going to be identical in everything except that
the hardware is not as powerful. Hopefully never used at all, but if we
blow a disk...
The SQL server is MySQL ver 5. I have read that it's much faster than 3. I have
been looking at other pros of staying with it, because I would not
have to rewrite as much stuff on the data input side (mostly Python,
and the two are different).
So, having said all that, I appreciate the help. Being able to move the
database from one place to another without the hassle is what I was
after.

thanks much
sk
 
jzakiya

David said:
<snip>


This may answer many of your questions. I was just browsing
through it at a Borders bookstore today.

"The Definitive Guide to SQLite" by Mike Owens, Apress 2006

http://www.apress.com/book/bookDisplay.html?bID=10130
http://www.apress.com/book/supplementDownload.html?bID=10130&sID=3520
 
David Vallner

> The SQL server is MySQL ver 5. I have read that it's much faster than 3.

If you make use of the new features over 3, it can also be notably
slower. However, databases are fickle things: there are multiple ways
they can pose as a bottleneck in an application, and the pure
hypothetical query-engine execution speed might not help much if
transaction isolation keeps most of your users waiting on a lock that
wouldn't occur with "slower" SQL code / table type. Some servers also
scale differently with different dataset volumes; e.g. I think Oracle is
known for being comparatively sluggish under 2 GB, where other servers
might tend to start choking at that DB size, etc.

The only way to determine which database is fastest for you is
benchmarking and load testing. There's just no way around this; usage
patterns of the database affect performance too much for any
intrinsic quality of the product to matter regardless of circumstances.


Of course, if you have an existing setup with a recent MySQL server
that's been deemed Fast Enough, odds are there's no need to fret around
with it unless you need to improve performance. A Rails switch shouldn't
affect performance detrimentally in any way tricky enough that the
confirmation benchmark you mentioned couldn't catch it.

David Vallner


 
