A memcached-like server in Ruby - feasible?

Discussion in 'Ruby' started by Tom Machinski, Oct 27, 2007.

  1. Hi group,

    I'm running a very high-load website done in Rails.

    The number and duration of queries per page are killing us. So we're
    thinking of using a caching layer like memcached. Except we'd like
    something more sophisticated than memcached.

    Allow me to explain.

    memcached is like an object, with a very limited API: basically
    #get_value_by_key and #set_value_by_key.

    One thing we need that memcached doesn't support is the ability to
    store a large set of very large objects and then retrieve only a few
    of them by certain parameters. For example, we may want to store 100K
    Foo instances, and retrieve only the first 20 - sorted by their
    #created_on attribute - whose #bar attribute equals 23.

    We could store all those 100K Foo instances normally on the memcached
    server, and let the Rails process retrieve them on each request. Then
    the process could perform the filtering itself. The problem is that
    this is very suboptimal: we'd have to transfer a lot of data to each
    process on each request, and very little of that data is actually
    needed after processing. I.e., we would pass 100K large objects
    while the process only really needs 20 of them.

    Ideally, we could call:

    memcached_improved.fetch_newest(:attributes => { :bar => 23 },
                                    :limit => 20)

    and have the memcached_improved server filter and return only the
    required 20 objects by itself.
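
    Purely to illustrate the semantics (names are made up, and this
    ignores networking, concurrency and eviction entirely), the
    server-side work would boil down to something like:

      # Hypothetical server-side handling of fetch_newest. `store` is
      # assumed to be the Hash of key => Foo instances the server keeps
      # in RAM.
      def fetch_newest(store, options)
        attributes = options[:attributes] || {}
        limit      = options[:limit]      || 20
        matches = store.values.select do |obj|
          attributes.all? { |name, value| obj.send(name) == value }
        end
        # newest first, then cut the result down to the requested size
        matches.sort_by { |obj| obj.created_on }.reverse.first(limit)
      end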

    Now the question is:

    How expensive would it be to write memcached_improved?

    On the surface, this might seem easy to do with something like
    Daemons[1] in Ruby (as most of our programmers are Rubyists). Just
    write a simple class, have it run a TCP server and respond to
    requests. Yet I'm sure it's not that simple, otherwise memcached would
    have been trivial to write. There are probably stability issues with
    multiple concurrent clients, multiple simultaneous read/write requests
    (race conditions etc.), and heavy loads.
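
    To make that concrete, the naive version I have in mind is about
    this much code (untested sketch: one client served at a time,
    Marshal as the wire format, an arbitrary port, and none of the
    robustness that makes memcached hard):

      require 'socket'

      store  = {}
      server = TCPServer.new(11222)
      loop do
        client = server.accept
        # each request is a Marshal'd [command, key, value] triple
        command, key, value = Marshal.load(client)
        case command
        when :set then store[key] = value
        when :get then Marshal.dump(store[key], client)
        end
        client.close
      end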

    So, what do you think:

    1) How would you approach the development of memcached_improved?

    2) Is this task doable in Ruby? Or maybe only a Ruby + X combination
    (X probably being C)?

    3) How much time / effort / people / expertise should such a task
    require? Is it feasible for a smallish team (~4 programmers) to put
    together as a side-project over a couple of weeks?

    Thanks,
    -Tom
    --
    [1] http://daemons.rubyforge.org/
     
    Tom Machinski, Oct 27, 2007
    #1

  2. Tom Machinski wrote:
    > Hi group,
    >
    > I'm running a very high-load website done in Rails.
    >
    > The number and duration of queries per page are killing us. So we're
    > thinking of using a caching layer like memcached. Except we'd like
    > something more sophisticated than memcached.
    >
    > <snip>
    >
    > One thing we need that memcached doesn't support is the ability to
    > store a large set of very large objects and then retrieve only a few
    > of them by certain parameters. For example, we may want to store 100K
    > Foo instances, and retrieve only the first 20 - sorted by their
    > #created_on attribute - whose #bar attribute equals 23.


    It looks like the kind of job a database would do for you. Retrieving
    20 large objects with such conditions should be a piece of cake for
    any properly tuned database. Did you try this with PostgreSQL or
    MySQL, with indexes on created_on and bar? How much memory did you
    give your database to play with? If the objects are so large that
    they take too much time to extract from the DB (or the traffic is too
    much for the DB to use its own disk cache efficiently), you could
    retrieve only the ids in the first pass with hand-crafted SQL, and
    then fetch the whole objects from memcache (and only go to the DB if
    memcache doesn't have the object you are looking for).
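
    In Rails terms, that second approach could look roughly like this
    (sketch only; CACHE is assumed to be a memcache-client connection
    and Foo an ActiveRecord model):

      # First pass: a cheap, index-friendly, id-only query.
      ids = Foo.connection.select_values(
        "SELECT id FROM foos WHERE bar = 23
         ORDER BY created_on DESC LIMIT 20")

      # Second pass: pull the heavy objects from memcache, falling back
      # to the DB (and repopulating the cache) on a miss.
      foos = ids.map do |id|
        CACHE.get("foo:#{id}") || begin
          foo = Foo.find(id)
          CACHE.set("foo:#{id}", foo)
          foo
        end
      end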

    Lionel.
     
    Lionel Bouton, Oct 27, 2007
    #2

  3. On 10/28/07, Lionel Bouton <> wrote:
    > It looks like the job a database would do for you. Retrieving 20 large
    > objects with such conditions should be a piece of cake for any properly
    > tuned database. Did you try this with PostgreSQL or MySQL with indexes
    > on created_on and bar?


    Yes, I'm using MySQL 5, and all query columns are indexed.

    > How much memory did you give your database to
    > play with?


    Not sure right now, I'll ask my admin and reply.

    > If the size of the objects is so bad it takes too much time
    > to extract from the DB (or the trafic is too much for the DB to use its
    > own disk cache efficiently) you could only retrieve the ids in the first
    > pass with hand-crafted SQL and then fetch the whole objects using
    > memcache (and only go to the DB if memcache doesn't have the object you
    > are looking for).


    Might be a good idea.

    > Lionel.


    Long term, my goal is to minimize the number of queries that hit the
    database. Some of the queries are more complex than the relatively
    simple example I've given here, and I don't think I could optimize
    them much beyond 0.01 secs per query.

    I was hoping memcached_improved would alleviate some of the pains
    associated with database scaling, e.g. building a replicating
    cluster. Basically, it would do what memcached does for you, except
    that, as demonstrated, memcached by itself seems insufficient for
    our needs.

    Thanks,
    -Tom
     
    Tom Machinski, Oct 28, 2007
    #3
  4. "Tom Machinski" <> writes:

    > Long term, my goal is to minimize the number of queries that hit the
    > database. Some of the queries are more complex than the relatively
    > simple example I've given here, and I don't think I could optimize
    > them much beyond 0.01 secs per query.
    >
    > I was hoping memcached_improved would alleviate some of the pains
    > associated with database scaling, e.g. building a replicating
    > cluster. Basically, it would do what memcached does for you, except
    > that, as demonstrated, memcached by itself seems insufficient for
    > our needs.


    The other thing you can play with is using sqlite as the local (one
    per app server) cache engine.
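
    A sketch of what I mean, using the sqlite3-ruby bindings (schema,
    refresh policy and serialization are all up to you):

      require 'sqlite3'

      # One small, indexed SQLite file per app server, holding
      # serialized Foos, queryable by the attributes you filter on.
      db = SQLite3::Database.new("foo_cache.db")
      db.execute <<-SQL
        CREATE TABLE IF NOT EXISTS foos (
          id INTEGER PRIMARY KEY, bar INTEGER,
          created_on TEXT, data BLOB)
      SQL
      db.execute "CREATE INDEX IF NOT EXISTS by_bar
                  ON foos (bar, created_on)"

      rows = db.execute("SELECT data FROM foos WHERE bar = ?
                         ORDER BY created_on DESC LIMIT 20", 23)
      newest = rows.map { |(blob)| Marshal.load(blob) }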


    YS.
     
    Yohanes Santoso, Oct 28, 2007
    #4
  5. On Oct 27, 2007, at 4:31 PM, Tom Machinski wrote:

    > Hi group,
    >
    > I'm running a very high-load website done in Rails.
    >
    > The number and duration of queries per page are killing us. So we're
    > thinking of using a caching layer like memcached. Except we'd like
    > something more sophisticated than memcached.
    >
    > One thing we need that memcached doesn't support is the ability to
    > store a large set of very large objects and then retrieve only a few
    > of them by certain parameters. For example, we may want to store 100K
    > Foo instances, and retrieve only the first 20 - sorted by their
    > #created_on attribute - whose #bar attribute equals 23.

    <snip>

    i'm reading this as

    - need query
    - need readonly
    - need sorting
    - need fast
    - need server

    and thinking: how isn't this a readonly slave database? i think that
    mysql can either do this with a readonly slave *or* it cannot be done
    with modest resources.

    my 2cts.



    a @ http://codeforpeople.com/
    --
    it is not enough to be compassionate. you must act.
    h.h. the 14th dalai lama
     
    ara.t.howard, Oct 28, 2007
    #5
  6. ara.t.howard wrote:
    >
    > On Oct 27, 2007, at 4:31 PM, Tom Machinski wrote:
    >
    > <snip>
    >
    > i'm reading this as
    >
    > - need query
    > - need readonly
    > - need sorting
    > - need fast
    > - need server
    >
    > and thinking: how isn't this a readonly slave database? i think that
    > mysql can either do this with a readonly slave *or* it cannot be done
    > with modest resources.
    >
    > my 2cts.


    Add "large set of very large (binary?) objects". So ... yes, at least
    *one* database/server. This is exactly the sort of thing you *can* throw
    hardware at. I guess I'd pick PostgreSQL over MySQL for something like
    that, but unless you're a billionaire, I'd be doing it from disk and not
    from RAM. RAM-based "databases" look really attractive on paper, but
    they tend to look better than they really are for a lot of reasons:

    1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
    challenged with "memtest86" -- is not inexpensive. Let's say the objects
    are "very large" -- how about a typical CD length of 700 MB? OK ... too
    big -- how about a three minute video highly compressed. How big are
    those puppies? Let's assume a megabyte. 100K of those is 100 GB. Wanna
    price 100 GB of *good* RAM? Even with compression, it doesn't take much
    stuff to fill up a 160 GB iPod, right?

    2. A good RDBMS design / query planner is amazingly intelligent, and you
    can give it hints. It might take you a couple of weeks to build your
    indexes but your queries will run fast afterwards.

    3. RAID 10 is your friend. Mirroring preserves your data when a disk
    dies, and striping makes it come into RAM quickly.

    4. Enterprise-grade SANs have lots of buffering built in. And for that
    stuff, you don't have to be a billionaire -- just a plain old millionaire.

    "Premature optimization is the root of all evil?" Bullshit! :)
     
    M. Edward (Ed) Borasky, Oct 28, 2007
    #6
  7. [OT] Re: A memcached-like server in Ruby - feasible?

    On Oct 28, 2007, at 12:48 AM, M. Edward (Ed) Borasky wrote:

    > it doesn't take much stuff to fill up a 160 GB iPod, right?


    http://drawohara.tumblr.com/post/17471102

    couldn't resist...

    cheers.

    a @ http://codeforpeople.com/
    --
    share your knowledge. it's a way to achieve immortality.
    h.h. the 14th dalai lama
     
    ara.t.howard, Oct 28, 2007
    #7
  8. [OT] good RAM, was Re: A memcached-like server in Ruby - feasible?

    On Sun, 28 Oct 2007 15:48:11 +0900
    "M. Edward (Ed) Borasky" <> wrote:

    > 1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
    > challenged with "memtest86" -- is not inexpensive.


    A few weeks ago, I had two 1 GB RAM modules which were fine running
    memtest86 over the weekend. But I could not get gcc 4.1.1 to compile
    itself while they were installed. The error message even hinted at
    defective hardware. After exchanging them, it worked. So nowadays, I
    prefer compiling gcc over memtest86.

    s.
     
    Stefan Schmiedl, Oct 28, 2007
    #8
  9. Re: [OT] Re: A memcached-like server in Ruby - feasible?

    From: "ara.t.howard" <>
    >
    > On Oct 28, 2007, at 12:48 AM, M. Edward (Ed) Borasky wrote:
    >
    >> it doesn't take much stuff to fill up a 160 GB iPod, right?

    >
    > http://drawohara.tumblr.com/post/17471102


    BTW, my wife and I were only able to fit about 3/5ths of our CD
    collection on our 40 gig iPod. (I rip at 320kbps mp3 admittedly.)

    So while a 160 GB iPod would be slightly overkill for us, it
    wouldn't be outrageously so.


    Regards,

    Bill
     
    Bill Kelly, Oct 28, 2007
    #9
  10. On 10/28/07, Yohanes Santoso <-a-geek.org> wrote:
    > The other thing you can play with is using sqlite as the local (one
    > per app server) cache engine.


    Thanks, but if I'm already caching at the local process level, I might
    as well cache to in-memory Ruby objects; the entire data-set isn't
    that huge for a high-end server's RAM capacity: about 500 MB all in all.
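
    i.e. something as small as this per-process cache (sketch;
    invalidation / refresh is omitted, and the Mutex only matters if the
    app server dispatches from multiple threads):

      require 'thread'

      class InProcessCache
        def initialize
          @lock  = Mutex.new
          @store = {}
        end

        # compute once per process, memoize thereafter
        def fetch(key)
          @lock.synchronize { @store[key] ||= yield }
        end
      end

      FOO_CACHE = InProcessCache.new
      # `load_newest_foos` stands in for whatever really loads the data
      newest = FOO_CACHE.fetch("newest-bar-23") { load_newest_foos }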

    > YS.


    -Tom
     
    Tom Machinski, Oct 28, 2007
    #10
  11. On 10/28/07, ara.t.howard <> wrote:
    > i'm reading this as
    >
    > - need query
    > - need readonly
    > - need sorting
    > - need fast
    > - need server
    >
    > and thinking: how isn't this a readonly slave database? i think that
    > mysql can either do this with a readonly slave *or* it cannot be done
    > with modest resources.


    The problem is that for a perfectly normalized database, those queries
    are *heavy*.

    We're using straight, direct SQL (no ActiveRecord calls) there, and
    several DBAs have already looked into our query strategy. The bottom
    line is that each query on the normalized database is non-trivial,
    and they can't reduce it to less than 0.2 secs / query. As we have
    5+ of these queries per page, we'd need one MySQL server for every
    request-per-second we want to serve. As we need at least 50
    reqs/sec, we'd need 50 MySQL servers (and probably something similar
    in terms of web servers). We can't afford that.

    We can only improve the queries' time-to-completion by replicating
    data inside the database, i.e. de-normalizing it with internal
    caching at the table level (basically, that amounts to replicating
    certain columns from table `bars` in table `foos`, thus saving some
    very heavy JOINs).
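
    In migration terms, the kind of de-normalization we're doing looks
    roughly like this (hypothetical table and column names):

      # Copy the bar column we filter on into foos, so the hot query
      # can be answered from one indexed table with no JOIN.
      class DenormalizeBarIntoFoos < ActiveRecord::Migration
        def self.up
          add_column :foos, :bar_value, :integer
          add_index  :foos, [:bar_value, :created_on]
          execute "UPDATE foos, bars SET foos.bar_value = bars.value
                   WHERE foos.bar_id = bars.id"
        end

        def self.down
          remove_column :foos, :bar_value
        end
      end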

    But if we're already de-normalizing, caching and replicating data, we
    might as well create another layer of de-normalized, processed data
    between the database and the Rails servers. That way, we would need
    fewer MySQL servers, serve requests faster (as the layer would hold
    the data in an already processed state), and save much of the
    replication / clustering overhead.

    -Tom
     
    Tom Machinski, Oct 28, 2007
    #11
  12. On 10/28/07, Ian Leitch <> wrote:
    > I don't recommend you use this project; I haven't used it myself for
    > quite a while and it has a number of issues I haven't addressed. You
    > may find it a helpful base implementation should you decide to go
    > the pure Ruby route.


    Thanks, Ian!

    Would you mind sharing - here, or by linking a blog / text, or
    privately if you prefer - some information about these issues?

    I'm asking for two reasons:

    1) To learn about possible pitfalls / complications / costs involved
    in taking the pure Ruby route.

    2) We may decide to adopt your project and try to address those issues,
    so we can use the patched Boogaloo in production.

    Thanks,
    -Tom
     
    Tom Machinski, Oct 28, 2007
    #12
  13. Tom Machinski wrote:
    > On 10/28/07, ara.t.howard <> wrote:
    >> with modest resources.

    > The problem is that for a perfectly normalized database, those queries
    > are *heavy*.
    >
    > We're using straight, direct SQL (no ActiveRecord calls) there, and
    > several DBAs have already looked into our query strategy. Bottom line
    > is that each query on the normalized database is non-trivial, and they
    > can't reduce it to less than 0.2 secs / query.


    Try enabling the MySQL query cache. For many applications even a few MB
    can work wonders.
     
    Andreas S., Oct 28, 2007
    #13
  14. On 10/28/07, Andreas S. <> wrote:
    > Try enabling the MySQL query cache. For many applications even a few MB
    > can work wonders.


    Thanks, that's true, and we already do that. We have a very large
    cache in fact (~500 MB) and it does improve performance, though not
    enough.

    -Tom
     
    Tom Machinski, Oct 28, 2007
    #14
  15. On Sun, Oct 28, 2007 at 07:31:30AM +0900 Tom Machinski mentioned:
    >
    > 2) Is this task doable in Ruby? Or maybe only a Ruby + X combination
    > (X probably being C)?
    >


    I believe you can achieve high efficiency in server design by using
    an event-driven design. There are some event libraries available for
    Ruby, e.g. EventMachine. In that case the scalability of the server
    should be comparable with a C version.

    Threads would have a huge overhead with many clients.

    BTW, the original memcached uses an event-driven design too, IIRC.
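
    The skeleton of such a server is tiny. A sketch (toy line protocol,
    no framing of partial reads, no eviction -- a real server needs
    both):

      require 'rubygems'
      require 'eventmachine'

      # One single-threaded reactor; each connection is driven by
      # callbacks instead of by a thread.
      module CacheHandler
        STORE = {}

        def receive_data(data)
          # "set <key> <value>" or "get <key>"
          command, key, value = data.strip.split(" ", 3)
          case command
          when "set" then STORE[key] = value; send_data("STORED\n")
          when "get" then send_data("#{STORE[key]}\n")
          end
        end
      end

      EventMachine::run do
        EventMachine::start_server("0.0.0.0", 11222, CacheHandler)
      end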

    --
    Stanislav Sedov
    ST4096-RIPE
     
    Stanislav Sedov, Oct 28, 2007
    #15
  16. On 10/28/07, Stanislav Sedov <> wrote:
    > I believe you can achieve high efficiency in server design by using
    > an event-driven design. There are some event libraries available for
    > Ruby, e.g. EventMachine. In that case the scalability of the server
    > should be comparable with a C version.
    >
    > Threads would have a huge overhead with many clients.
    >
    > BTW, the original memcached uses an event-driven design too, IIRC.


    Yes, memcached (including the latest version) uses libevent.

    I'm not completely sure whether a production-grade server of this sort
    is feasible in Ruby. Many people, both here and elsewhere, seem to
    think it should be done in C for better stability / efficiency /
    resource consumption.

    Thanks,
    -Tom

    > --
    > Stanislav Sedov
    > ST4096-RIPE
     
    Tom Machinski, Oct 28, 2007
    #16
  17. On 10/28/07, M. Edward (Ed) Borasky <> wrote:
    > Add "large set of very large (binary?) objects". So ... yes, at least
    > *one* database/server. This is exactly the sort of thing you *can* throw
    > hardware at. I guess I'd pick PostgreSQL over MySQL for something like
    > that, but unless you're a billionaire, I'd be doing it from disk and not
    > from RAM. RAM-based "databases" look really attractive on paper, but
    > they tend to look better than they really are for a lot of reasons:
    >
    > 1. *Good* RAM -- the kind that doesn't fall over in a ragged heap when
    > challenged with "memtest86" -- is not inexpensive. Let's say the objects
    > are "very large" -- how about a typical CD length of 700 MB? OK ... too
    > big -- how about a three minute video highly compressed. How big are
    > those puppies? Let's assume a megabyte. 100K of those is 100 GB. Wanna
    > price 100 GB of *good* RAM? Even with compression, it doesn't take much
    > stuff to fill up a 160 GB iPod, right?


    I might have given you a somewhat inflated impression of how large
    our data-set is :)

    We have about 100K objects, occupying ~5 KB per object on average. So
    all in all, the total size of our dataset is no more than 500 MB. We
    might grow to maybe twice that in the next 2 years. But that's it.

    So it's very feasible to keep the entire data-set in *good* RAM for a
    reasonable cost.

    > 2. A good RDBMS design / query planner is amazingly intelligent, and you
    > can give it hints. It might take you a couple of weeks to build your
    > indexes but your queries will run fast afterwards.


    Good point. Unfortunately, MySQL 5 doesn't appear to be able to take
    hints. We've analyzed our queries, and there are some plans we could
    definitely improve by manual hinting, but alas we'd need to switch to
    an RDBMS that supports it.

    > 3. RAID 10 is your friend. Mirroring preserves your data when a disk
    > dies, and striping makes it come into RAM quickly.
    >
    > 4. Enterprise-grade SANs have lots of buffering built in. And for that
    > stuff, you don't have to be a billionaire -- just a plain old millionaire.


    We had some bad experience with a poor SAN setup, though we might have
    been victims of improper installation.

    Thanks,
    -Tom
     
    Tom Machinski, Oct 28, 2007
    #17
  18. On 10/28/07, Jacob Burkhart <> wrote:
    > So alter memcached to accept a 'query' in the form of arbitrary ruby (or
    > perhaps a pre-defined ruby) that a peer-daemon is to execute over the set of
    > results a particular memcached node contains.


    Yeah, I thought of writing a Ruby daemon that "wraps" memcached.

    But then the wrapper would have to deal with all the performance
    challenges that a full replacement for memcached has to deal with,
    namely: handling multiple concurrent clients, multiple simultaneous
    read/write requests (race conditions etc.), and heavy loads.

    A naive implementation of memcached itself would be trivial to write;
    memcached's real merits are not its rather limited feature set, but
    its performance, stability, and robustness - i.e., its ability to
    overcome the above challenges.

    The only way I could use memcached to do complex queries is by
    patching memcached to accept and handle them. Such a patch wouldn't
    have anything to do with Ruby itself, would probably be very
    non-trivial, and would have to significantly extend memcached's
    architecture. I doubt I have the time to roll out something like that.

    -Tom
     
    Tom Machinski, Oct 28, 2007
    #18
  19. On 2007-10-28, Tom Machinski <> wrote:
    > On 10/28/07, Yohanes Santoso <-a-geek.org> wrote:
    >> The other thing you can play with is using sqlite as the local (one
    >> per app server) cache engine.

    >
    > Thanks, but if I'm already caching at the local process level, I might
    > as well cache to in-memory Ruby objects; the entire data-set isn't
    > that huge for a high-end server RAM capacity: about 500 MB all in all.
    >


    What would happen if you used two stages of mysql databases? What I
    mean is that you have your production db, with all your nice clean
    structure, for writing new data to, and as the master source for your
    horribly denormalized db. Then you have a job that pushes changes
    every N minutes to the evil, ugly, read-only db. It is a new step in
    production, but it does allow you to stick with the same tech mix you
    are using now.
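
    Concretely, the push job can be dumb as rocks -- something like
    this, run from cron every N minutes (sketch; host, table and column
    names are made up, and values are assumed numeric/safe so quoting
    is skipped):

      require 'mysql'

      master = Mysql.connect("master-host", "app", "secret", "clean_db")
      ro     = Mysql.connect("ro-host",     "app", "secret", "ugly_db")

      # flatten the JOIN once here, so the web tier never pays for it
      rows = master.query(
        "SELECT f.id, f.created_on, b.value
         FROM foos f JOIN bars b ON b.id = f.bar_id")

      ro.query("BEGIN")
      ro.query("DELETE FROM flat_foos")
      rows.each do |id, created_on, bar|
        ro.query("INSERT INTO flat_foos (id, created_on, bar)
                  VALUES (#{id}, '#{created_on}', #{bar})")
      end
      ro.query("COMMIT")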

    marc
    --

    SDF Public Access UNIX System - http://sdf.lonestar.org
     
    marc spitzer, Oct 28, 2007
    #19
  20. What I would try is using a slave to replicate just the tables you
    need (actually just the indexes, if that were possible) and memcached
    to keep copies of all those objects. I've been using memcached for
    years and I can swear by it. But keeping indexes in memcached is not
    easy or reliable, and mysql would do a better job. So you would query
    the slave DB for the conditions you need, but only to return the ids,
    and then you would ask memcached for those objects. I've been doing
    something similar in my CMS and it has worked great for me. Here is
    an article that might explain better where I'm coming from [1]. And
    if mysql clusters make you feel a little dizzy, simple slave
    replication and mysql-proxy [2] might help out too.
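
    In code, the pattern is roughly this (sketch; CACHE is assumed to be
    a memcache-client connection, and the id-only query goes against the
    slave):

      ids  = Foo.connection.select_values(
        "SELECT id FROM foos WHERE bar = 23
         ORDER BY created_on DESC LIMIT 20")
      keys = ids.map { |id| "foo:#{id}" }
      foos = CACHE.get_multi(*keys)   # one round-trip for all 20 objects
      # anything missing from `foos` gets loaded from the DB and written
      # back to memcached, as in Lionel's two-pass suggestion above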

    Hope it helps,


    Adrian Madrid

    [1] http://blog.methodmissing.com/2007/...iverecord-instantiation-when-using-memcached/
    [2] http://forge.mysql.com/wiki/MySQL_Proxy

    On Oct 27, 4:31 pm, Tom Machinski <> wrote:
    > Hi group,
    >
    > I'm running a very high-load website done in Rails.
    >
    > <snip>
     
    , Oct 28, 2007
    #20
