A memcached-like server in Ruby - feasible?


Tom Machinski

Hi group,

I'm running a very high-load website done in Rails.

The number and duration of queries per page are killing us. So we're
thinking of using a caching layer like memcached. Except we'd like
something more sophisticated than memcached.

Allow me to explain.

memcached is like an object, with a very limited API: basically
#get_value_by_key and #set_value_by_key.
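
In Ruby terms, that's roughly this (a sketch assuming the memcache-client
gem; the key name is just illustrative):

require 'rubygems'
require 'memcache'

# Plain memcached: a flat key/value store, nothing more.
CACHE = MemCache.new('localhost:11211')

CACHE.set('foo:42', { :bar => 23, :created_on => Time.now })  # value gets marshalled
obj = CACHE.get('foo:42')                                     # nil on a cache miss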

One thing we need, that isn't supported by memcached, is to be able to
store a large set of very large objects, and then retrieve only a few
of them by certain parameters. For example, we may want to store 100K
Foo instances, and retrieve only the first 20 - sorted by their
#created_on attribute - whose #bar attribute equals 23.

We could store all those 100K Foo instances normally on the memcached
server, and let the Rails process retrieve them on each request. Then
the process could perform the filtering itself. Problem is that it's
very suboptimal, because we'd have to transfer a lot of data to each
process on each request, and very little of that data is actually
needed after the processing. I.e. we would pass 100K large objects,
while the process only really needs 20 of them.

Ideally, we could call:

memcached_improved.fetch_newest(:attributes => { :bar => 23 }, :limit => 20)

and have the memcached_improved server filter and return only the
required 20 objects by itself.

Now the question is:

How expensive would it be to write memcached_improved?

On the surface, this might seem easy to do with something like
Daemons[1] in Ruby (as most of our programmers are Rubyists). Just
write a simple class, have it run a TCP server and respond to
requests. Yet I'm sure it's not that simple, otherwise memcached would
have been trivial to write. There are probably stability issues for
multiple concurrent clients, multiple simultaneous read/write requests
(race conditions etc.) and heavy loads.
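
For concreteness, the naive version I have in mind is roughly this
(thread-per-client sketch; the one-line protocol is made up):

require 'socket'
require 'thread'

# Naive cache server: a global hash, one thread per client, a single mutex.
# Protocol (invented for illustration): "SET key value" / "GET key", one per line.
store  = {}
lock   = Mutex.new
server = TCPServer.new('0.0.0.0', 11222)

loop do
  Thread.new(server.accept) do |client|
    while line = client.gets
      cmd, key, value = line.chomp.split(' ', 3)
      case cmd
      when 'SET' then lock.synchronize { store[key] = value }; client.puts 'STORED'
      when 'GET' then client.puts(lock.synchronize { store[key] }.to_s)
      else            client.puts 'ERROR'
      end
    end
    client.close
  end
end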

So, what do you think:

1) How would you approach the development of memcached_improved?

2) Is this task doable in Ruby? Or maybe only a Ruby + X combination
(X probably being C)?

3) How much time / effort / people / expertise should such a task
require? Is it feasible for a smallish team (~4 programmers) to put
together as a side-project over a couple of weeks?

Thanks,
-Tom
 

Lionel Bouton

Tom said:
Hi group,

I'm running a very high-load website done in Rails.

The number and duration of queries per page are killing us. So we're
thinking of using a caching layer like memcached. Except we'd like
something more sophisticated than memcached.

Allow me to explain.

memcached is like an object, with a very limited API: basically
#get_value_by_key and #set_value_by_key.

One thing we need, that isn't supported by memcached, is to be able to
store a large set of very large objects, and then retrieve only a few
of them by certain parameters. For example, we may want to store 100K
Foo instances, and retrieve only the first 20 - sorted by their
#created_on attribute - whose #bar attribute equals 23.

It looks like the job a database would do for you. Retrieving 20 large
objects with such conditions should be a piece of cake for any properly
tuned database. Did you try this with PostgreSQL or MySQL with indexes
on created_on and bar? How much memory did you give your database to
play with? If the size of the objects is so large that it takes too much
time to extract them from the DB (or the traffic is too much for the DB to
use its own disk cache efficiently) you could retrieve only the ids in the
first pass with hand-crafted SQL and then fetch the whole objects through
memcached (and only go to the DB if memcached doesn't have the object you
are looking for).
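
Roughly like this (just a sketch; the SQL, the cache key scheme and the
CACHE client instance are illustrative, not something you have already):

# First pass: let the DB do the filtering and sorting, but fetch ids only.
ids = Foo.connection.select_values(
  "SELECT id FROM foos WHERE bar = 23 ORDER BY created_on DESC LIMIT 20")

# Second pass: hydrate from memcached, hitting the DB only on cache misses.
foos = ids.map do |id|
  CACHE.get("Foo:#{id}") || begin
    foo = Foo.find(id)
    CACHE.set("Foo:#{id}", foo)
    foo
  end
end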

Lionel.
 

Tom Machinski

It looks like the job a database would do for you. Retrieving 20 large
objects with such conditions should be a piece of cake for any properly
tuned database. Did you try this with PostgreSQL or MySQL with indexes
on created_on and bar?

Yes, I'm using MySQL 5, and all query columns are indexed.
How much memory did you give your database to
play with?

Not sure right now, I'll ask my admin and reply.
If the size of the objects is so large that it takes too much time
to extract them from the DB (or the traffic is too much for the DB to use
its own disk cache efficiently) you could retrieve only the ids in the
first pass with hand-crafted SQL and then fetch the whole objects through
memcached (and only go to the DB if memcached doesn't have the object you
are looking for).

Might be a good idea.

Long term, my goal is to minimize the number of queries that hit the
database. Some of the queries are more complex than the relatively
simple example I've given here. And I don't think I could optimize
them much beyond 0.01 secs per query.

I was hoping to alleviate with memcached_improved some of the pains
associated with database scaling, e.g. building a replicating cluster
etc. Basically what memcached does for you, except as demonstrated,
memcached by itself seems insufficient for our needs.

Thanks,
-Tom
 

Yohanes Santoso

Tom Machinski said:
Long term, my goal is to minimize the number of queries that hit the
database. Some of the queries are more complex than the relatively
simple example I've given here. And I don't think I could optimize
them much beyond 0.01 secs per query.

I was hoping to alleviate with memcached_improved some of the pains
associated with database scaling, e.g. building a replicating cluster
etc. Basically what memcached does for you, except as demonstrated,
memcached by itself seems insufficient for our needs.

The other thing you can play with is using sqlite as the local (one
per app server) cache engine.
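
Something along these lines (a sketch with the sqlite3-ruby gem; the
schema, file path and names are only illustrative):

require 'rubygems'
require 'sqlite3'

# Per-app-server cache: hot attributes as indexed columns,
# the full object stored as a marshalled blob.
db = SQLite3::Database.new('/tmp/foo_cache.db')
db.execute <<-SQL
  CREATE TABLE IF NOT EXISTS foos_cache (
    id INTEGER PRIMARY KEY,
    bar INTEGER,
    created_on TEXT,
    blob BLOB
  )
SQL
db.execute 'CREATE INDEX IF NOT EXISTS idx_bar_created ON foos_cache (bar, created_on)'

# Writing (assuming `foo` is a loaded Foo instance):
db.execute('INSERT OR REPLACE INTO foos_cache (id, bar, created_on, blob) VALUES (?, ?, ?, ?)',
           foo.id, foo.bar, foo.created_on.to_s, SQLite3::Blob.new(Marshal.dump(foo)))

# Reading the "newest 20 where bar = 23" slice locally:
rows = db.execute(
  'SELECT blob FROM foos_cache WHERE bar = ? ORDER BY created_on DESC LIMIT 20', 23)
foos = rows.map { |row| Marshal.load(row.first) }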


YS.
 

ara.t.howard

Hi group,

I'm running a very high-load website done in Rails.

The number and duration of queries per page are killing us. So we're
thinking of using a caching layer like memcached. Except we'd like
something more sophisticated than memcached.

Allow me to explain.

memcached is like an object, with a very limited API: basically
#get_value_by_key and #set_value_by_key.

One thing we need, that isn't supported by memcached, is to be able to
store a large set of very large objects, and then retrieve only a few
of them by certain parameters. For example, we may want to store 100K
Foo instances, and retrieve only the first 20 - sorted by their
#created_on attribute - whose #bar attribute equals 23.

We could store all those 100K Foo instances normally on the memcached
server, and let the Rails process retrieve them on each request. Then
the process could perform the filtering itself. Problem is that it's
very suboptimal, because we'd have to transfer a lot of data to each
process on each request, and very little of that data is actually
needed after the processing. I.e. we would pass 100K large objects,
while the process only really needs 20 of them.
<snip>

i'm reading this as

- need query
- need readonly
- need sorting
- need fast
- need server

and thinking: how isn't this a readonly slave database? i think that
mysql can either do this with a readonly slave *or* it cannot be done
with modest resources.
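
e.g. something like this (just a sketch, rails-style - the 'slave' entry
in database.yml and the model names are made up):

# point the hot, read-only query at a replica by giving a model class
# its own connection
class FooReadOnly < ActiveRecord::Base
  set_table_name 'foos'
  establish_connection :slave
end

newest = FooReadOnly.find(:all,
  :conditions => { :bar => 23 },
  :order      => 'created_on DESC',
  :limit      => 20)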

my 2cts.



a @ http://codeforpeople.com/
 

M. Edward (Ed) Borasky

ara.t.howard said:
<snip>

i'm reading this as

- need query
- need readonly
- need sorting
- need fast
- need server

and thinking: how isn't this a readonly slave database? i think that
mysql can either do this with a readonly slave *or* it cannot be done
with modest resources.

my 2cts.

Add "large set of very large (binary?) objects". So ... yes, at least
*one* database/server. This is exactly the sort of thing you *can* throw
hardware at. I guess I'd pick PostgreSQL over MySQL for something like
that, but unless you're a billionaire, I'd be doing it from disk and not
from RAM. RAM-based "databases" look really attractive on paper, but
they tend to look better than they really are for a lot of reasons:

1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
challenged with "memtest86" -- is not inexpensive. Let's say the objects
are "very large" -- how about a typical CD length of 700 MB? OK ... too
big -- how about a highly compressed three-minute video? How big are
those puppies? Let's assume a megabyte. 100K of those is 100 GB. Wanna
price 100 GB of *good* RAM? Even with compression, it doesn't take much
stuff to fill up a 160 GB iPod, right?

2. A good RDBMS design / query planner is amazingly intelligent, and you
can give it hints. It might take you a couple of weeks to build your
indexes but your queries will run fast afterwards.

3. RAID 10 is your friend. Mirroring preserves your data when a disk
dies, and striping makes it come into RAM quickly.

4. Enterprise-grade SANs have lots of buffering built in. And for that
stuff, you don't have to be a billionaire -- just a plain old millionaire.

"Premature optimization is the root of all evil?" Bullshit! :)
 

Stefan Schmiedl

1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
challenged with "memtest86" -- is not inexpensive.

A few weeks ago, I had two 1 GB RAM modules, which were fine with running
memtest86 over the weekend. But I could not get gcc 4.1.1 to compile
itself while they were present. The error message even hinted at
using defective hardware. After exchanging them, it worked. So
nowadays, I prefer compiling gcc to memtest86.

s.
 

Tom Machinski

The other thing you can play with is using sqlite as the local (one
per app server) cache engine.

Thanks, but if I'm already caching at the local process level, I might
as well cache to in-memory Ruby objects; the entire data-set isn't
that huge for a high-end server's RAM capacity: about 500 MB all in all.
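
E.g. a per-process structure as simple as this would cover the example
query (just a sketch; it assumes the whole set fits in RAM and gets
rebuilt or expired out of band):

# Keep all Foos in RAM, grouped by #bar and pre-sorted newest-first,
# so "newest 20 where bar == 23" is a cheap slice.
class FooCache
  def initialize(foos)
    @by_bar = Hash.new { |h, k| h[k] = [] }
    foos.each { |f| @by_bar[f.bar] << f }
    @by_bar.each_value { |list| list.sort! { |a, b| b.created_on <=> a.created_on } }
  end

  def newest(bar, limit = 20)
    @by_bar[bar].first(limit)
  end
end

FOO_CACHE    = FooCache.new(Foo.find(:all))
newest_foos  = FOO_CACHE.newest(23, 20)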

-Tom
 

Tom Machinski

i'm reading this as

- need query
- need readonly
- need sorting
- need fast
- need server

and thinking: how isn't this a readonly slave database? i think that
mysql can either do this with a readonly slave *or* it cannot be done
with modest resources.

The problem is that for a perfectly normalized database, those queries
are *heavy*.

We're using straight, direct SQL (no ActiveRecord calls) there, and
several DBAs have already looked into our query strategy. Bottom line
is that each query on the normalized database is non-trivial, and they
can't reduce it to less than 0.2 secs / query. As we have 5+ of these
queries per page, we'd need one MySQL server for every
request-per-second we want to serve. As we need at least 50 reqs/sec,
we'd need 50 MySQL servers (and probably something similar in terms of
web servers). We can't afford that.

We can only improve the queries' time-to-completion by replicating data
inside the database, i.e. de-normalizing it with internal caching at the
table level (basically, that amounts to replicating certain columns from
table `bars` into table `foos`, thus saving some very heavy JOINs).

But if we're already de-normalizing, caching and replicating data, we
might as well create another layer of de-normalized, processed data
between the database and the Rails servers. That way, we will need
fewer MySQL servers, serve requests faster (as the layer would hold
the data in an already processed state), and save much of the
replication / clustering overhead.
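
For the intra-database part, I mean something along these lines (a
hypothetical migration; `cached_baz` and the `bar_id` join column are
made-up names, just to show the shape of it):

# Copy the JOINed-in attribute from bars onto foos, and index the hot query.
class DenormalizeBazIntoFoos < ActiveRecord::Migration
  def self.up
    add_column :foos, :cached_baz, :integer
    add_index  :foos, [:bar, :created_on]
    execute "UPDATE foos, bars SET foos.cached_baz = bars.baz WHERE foos.bar_id = bars.id"
  end

  def self.down
    remove_column :foos, :cached_baz
    remove_index  :foos, :column => [:bar, :created_on]
  end
end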

-Tom
 

Tom Machinski

I don't recommend you use this project; I haven't used it myself for quite a
while and it has a number of issues I haven't addressed. You may find it a
helpful base implementation should you decide to go the pure Ruby
route.

Thanks, Ian!

Would you mind sharing - here, or by linking a blog / text, or
privately if you prefer - some information about these issues?

I'm asking for two reasons:

1) To learn about possible pitfalls / complications / costs involved
in taking the pure Ruby route.

2) We may decide to adopt your project, try to address those issues,
and use the patched Boogaloo in production.

Thanks,
-Tom
 

Andreas S.

Tom said:
The problem is that for a perfectly normalized database, those queries
are *heavy*.

We're using straight, direct SQL (no ActiveRecord calls) there, and
several DBAs have already looked into our query strategy. Bottom line
is that each query on the normalized database is non-trivial, and they
can't reduce it to less than 0.2 secs / query.

Try enabling the MySQL query cache. For many applications even a few MB
can work wonders.
 

Tom Machinski

Try enabling the MySQL query cache. For many applications even a few MB
can work wonders.

Thanks, that's true, and we already do that. We have a very large
cache in fact (~500 MB) and it does improve performance, though not
enough.

-Tom
 

Stanislav Sedov

2) Is this task doable in Ruby? Or maybe only a Ruby + X combination
(X probably being C)?

I believe you can achieve high efficiency in server design by using an
event-driven design. There are some event libraries available for Ruby,
e.g. EventMachine. In this case the scalability of the server should be
comparable with a C version.

Threads will have a huge overhead with many clients.

BTW, the original memcached uses event-driven design too, IIRC.
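
A minimal sketch of what I mean (the wire protocol here is made up, and
real code would need to buffer partial lines, since receive_data hands
you raw chunks):

require 'rubygems'
require 'eventmachine'

STORE = {}

# One process, one event loop, no thread per client.
module CacheServer
  def receive_data(data)
    data.each_line do |line|
      cmd, key, value = line.chomp.split(' ', 3)
      case cmd
      when 'SET' then STORE[key] = value; send_data("STORED\r\n")
      when 'GET' then send_data("#{STORE[key]}\r\n")
      else            send_data("ERROR\r\n")
      end
    end
  end
end

EventMachine.run do
  EventMachine.start_server '0.0.0.0', 11222, CacheServer
end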
 

Tom Machinski

I believe you can achieve high efficiency in server design by using an
event-driven design. There are some event libraries available for Ruby,
e.g. EventMachine. In this case the scalability of the server should be
comparable with a C version.

Threads will have a huge overhead with many clients.

BTW, the original memcached uses event-driven design too, IIRC.

Yes, memcached (including the latest version) uses libevent.

I'm not completely sure whether a production-grade server of this sort
is feasible in Ruby. Many people, both here and elsewhere, seem to
think it should be done in C for better stability / efficiency /
resource consumption.

Thanks,
-Tom
 

Tom Machinski

Add "large set of very large (binary?) objects". So ... yes, at least
*one* database/server. This is exactly the sort of thing you *can* throw
hardware at. I guess I'd pick PostgreSQL over MySQL for something like
that, but unless you're a billionaire, I'd be doing it from disk and not
from RAM. RAM-based "databases" look really attractive on paper, but
they tend to look better than they really are for a lot of reasons:

1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
challenged with "memtest86" -- is not inexpensive. Let's say the objects
are "very large" -- how about a typical CD length of 700 MB? OK ... too
big -- how about a three minute video highly compressed. How big are
those puppies? Let's assume a megabyte. 100K of those is 100 GB. Wanna
price 100 GB of *good* RAM? Even with compression, it doesn't take much
stuff to fill up a 160 GB iPod, right?

I might have given you a somewhat inflated view of how large
our data-set is :)

We have about 100K objects at roughly 5 KB per object. So all in
all, the total weight of our dataset is no more than 500 MB. We might
grow to maybe twice that in the next 2 years. But that's it.

So it's very feasible to keep the entire data-set in *good* RAM for a
reasonable cost.
2. A good RDBMS design / query planner is amazingly intelligent, and you
can give it hints. It might take you a couple of weeks to build your
indexes but your queries will run fast afterwards.

Good point. Unfortunately, MySQL 5 doesn't appear to be able to take
hints. We've analyzed our queries, and there are some strategies we
could definitely improve with manual hinting, but alas we'd need to
switch to an RDBMS that supports them.
3. RAID 10 is your friend. Mirroring preserves your data when a disk
dies, and striping makes it come into RAM quickly.

4. Enterprise-grade SANs have lots of buffering built in. And for that
stuff, you don't have to be a billionaire -- just a plain old millionaire.

We had some bad experience with a poor SAN setup, though we might have
been victims of improper installation.

Thanks,
-Tom
 

Tom Machinski

So alter memcached to accept a 'query' in the form of arbitrary ruby (or
perhaps a pre-defined ruby) that a peer-daemon is to execute over the set of
results a particular memcached node contains.

Yeah, I thought of writing a Ruby daemon that "wraps" memcached.

But then the wrapper would have to deal with all the performance
challenges that a full replacement for memcached has to deal with,
namely: handling multiple concurrent clients, multiple simultaneous
read/write requests (race conditions etc.) and heavy loads.

A naive implementation of memcached itself would be trivial to write;
memcached's real merits are not its rather limited featureset, but
its performance, stability, and robustness - i.e., its capability to
overcome the above challenges.

The only way I could use memcached to do complex queries is by
patching memcached to accept and handle them. Such a patch
wouldn't have anything to do with Ruby itself, would probably be very
non-trivial, and would have to significantly extend memcached's
architecture. I doubt I have the time to roll out something like that.

-Tom
 

marc spitzer

Thanks, but if I'm already caching at the local process level, I might
as well cache to in-memory Ruby objects; the entire data-set isn't
that huge for a high-end server RAM capacity: about 500 MB all in all.

What would happen if you used two stages of mysql databases? What I mean
is that you keep your production db, with all its nice clean structure, for
writing new data to and as the master source for your horrible denormalized
db; then you have a job that pushes changes every N minutes to the evil,
ugly, read-only db. It is a new step in production, but it does allow you
to stick with the same tech mix you are using now.

marc
 

aemadrid

What I would try is using a slave to replicate just the tables you
need (actually the indexes if that were possible) and memcached to
keep copies of all those objects. I've been using memcached for years
and I can swear by it. But keeping indexes in memcached is not easy or
reliable to do, and mysql would do a better job. So you would query the
slave DB with the conditions you need, but only to return the ids, and
then ask memcached for those objects. I've been doing something similar
in my CMS and it has worked great for me. Here is an article that might
explain better where I'm coming from [1]. And if mysql clusters make you
feel a little dizzy, simple slave replication and mysql-proxy [2] might
help out too.

Hope it helps,


Adrian Madrid

[1] http://blog.methodmissing.com/2007/...iverecord-instantiation-when-using-memcached/
[2] http://forge.mysql.com/wiki/MySQL_Proxy

 
