OT: why are LAMP sites slow?


M.E.Farmer

Paul said:
Yes, good point about html tables, though I'm concerned primarily
about server response. (It's off-topic, but how do you use CSS to get
the effect of tables?)
To emulate a table you use the div and span tags.
(really you can do just about anything with div and span)
Div is a block-level tag and span is an inline tag.
You can also group them together and nest them.
div and span have little meaning on their own, but they can be styled.
In the CSS declarations we create styles and then just use them later as
needed.
Try to always focus on layout first and actual style later. (CSS is all
about separation of concerns.)
You will find the verbosity of CSS is not as annoying as you might
expect; it is quite easy to learn and well worth the effort.
Ok here is a bad example ( do a search on Google for CSS tables ).
( this was torn out of a webapp I am working on )
..<div class='panel'>
.. <div class="archivehead"><strong>
.. <span class="leftspace">Archive</span>
.. <span class="rightspace">Entry Date</span>
.. </strong>
.. </div>
.. <div class="archivelite">
.. <span class="bold">%s</span><span>
.. <strong>%s</strong>
.. </span><span class="right">%s</span>
.. </div>
.. <div class="archivedark">
.. &nbsp; &nbsp; &nbsp; &nbsp; posted by: %s
.. <span class="right">
.. <strong>text </strong>
.. <strong> %s</strong>
.. </span>
.. </div>
..</div>
And here is some of the CSS
( these are classes; the dot in front of the name tells you that. When
combined with div or span just about anything is possible. )
.. .panel {
.. border: solid thin #666677;
.. margin: 2em 4em 2em 4em;}
.. .leftspace {
.. letter-spacing:.5em;
.. text-decoration:none;
.. color:#EEEEff;}
.. .rightspace {
.. letter-spacing:.5em;
.. position: absolute;
.. right: 5em;
.. text-decoration: none;
.. color:#EEEEff;}
..
.. .archivehead {
.. text-indent:.5em;
.. background-color:#333333;}
.. .archivelite {
.. color:#777777;
.. text-indent:.5em;
.. background-color:#222222;}
.. .archivedark {
.. color:#777777; text-indent:.5em;
.. background-color:#111111;}
.. .archivelite a{
.. color:#BBBBDD;
.. text-decoration:none;
.. background-color:#222222;}
.. .archivedark a{
.. color:#BBBBDD;
.. text-decoration:none;
.. background-color:#111111;}
hth,
M.E.Farmer
 

aurora

aurora said:
Hmm, as mentioned, I'm not sure what the commercial sites do that's
different. I take the view that the free software world is capable of
anything that the commercial world is capable of, so I'm not awed just
because a site is commercial. And sites like Slashdot have pretty big
budgets by hobbyist standards.


I wouldn't say that. I don't think Apache is a bottleneck compared
with other web servers. Similarly I don't see an inherent reason for
Python (or whatever) to be seriously slower than Java servlets. I
have heard that MySQL doesn't handle concurrent updates nearly as well
as DB2 or Oracle, or for that matter PostgreSQL, so I wonder if busier
LAMP sites might benefit from switching to PostgreSQL (LAMP => LAPP?).

I'm lost. So what do you compare against when you say LAMP is slow? What
is the reference point? Is it just a general observation that slashdot is
slower than we would like it to be?

If you are talking about slashdot, there are many ideas to make it faster.
For example, they could send all 600 comments to the client and let the user
do the querying using DHTML on the client side. This leaves the server serving
mostly static files and would certainly boost performance tremendously.

If you mean MySQL or SQL databases in general are slow, there is truth in
it. The best things about a SQL database are concurrent access, transactional
semantics and versatile querying. It turns out a lot of applications can
really live without that. If you can rearchitect the application using
flat files instead of a database it can often be a big boon.

A lot of this is just implementation. Find the right tool and the right
design for the job. I still don't see a case that a LAMP-based solution is
inherently slow.
 

Paul Rubin

M.E.Farmer said:
To emulate a table you use the div and span tag.
(really you can do just about anything with div and span)

Hmm, that's pretty interesting, I didn't realize you could specify
widths with CSS. Thanks. http://glish.com/css/9.asp shows a
2-column example.

I don't know that the browser necessarily renders that faster than it
renders a table, but there's surely less crap in the HTML, which is
always good. I may start using that method.

I remember seeing something a long time ago where someone used normal
html tags to create something like tab stops, so he could just place
things where he wanted them without the browser having to read the
whole page to automatically size the columns. But I haven't been able
to figure out what tags he used for that, and don't remember where I
saw it.

As you can probably tell, I'm ok with basic HTML but am no wizard.
I'm more interested in backend implementation.
 

Paul Rubin

aurora said:
I'm lost. So what do you compare against when you say LAMP is slow?
What is the reference point? Is it just a general observation that
slashdot is slower than we would like it to be?

Yes, that's the basic observation, not specifically Slashdot; lots
of LAMP sites (some PHPBB sites are other examples) have the same
behavior. You send a URL and the server has to grind for quite a
while coming up with the page, even though it's pretty obvious what
kinds of dynamic stuff it needs to find. Just taking a naive approach
with no databases, doing everything with in-memory structures
(better not ever crash!), would make me expect a radically faster site.
For a site like Slashdot, which gets maybe 10 MB of comments a day,
keeping them all in RAM isn't excessive. (You'd also dump them
serially to a log file, no seeking or index overhead as this happened.
On server restart you'd just read the log file back into RAM.)
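
To make that concrete, here's a rough sketch of the sort of thing I mean
(the names, the log format, and all the details are invented for
illustration; a real site would at least need locking and log rotation):

import json, os

COMMENT_LOG = 'comments.log'   # made-up file name for the append-only log
comments = {}                  # story id -> list of comment records, all in RAM

def replay_log():
    # On restart, rebuild the in-memory store by replaying the serial log.
    if os.path.exists(COMMENT_LOG):
        for line in open(COMMENT_LOG):
            rec = json.loads(line)
            comments.setdefault(rec['story'], []).append(rec)

def post_comment(story, user, text):
    # Append to RAM and to the log; no seeks, no index maintenance.
    rec = {'story': story, 'user': user, 'text': text}
    comments.setdefault(story, []).append(rec)
    log = open(COMMENT_LOG, 'a')
    log.write(json.dumps(rec) + '\n')
    log.close()

def get_comments(story):
    # Page generation is then just a dictionary lookup.
    return comments.get(story, [])
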
If you mean MySQL or SQL databases in general are slow, there is truth
in it. The best things about a SQL database are concurrent access,
transactional semantics and versatile querying. It turns out a lot of
applications can really live without that. If you can rearchitect the
application using flat files instead of a database it can often be a
big boon.

This is the kind of answer I had in mind.
A lot of this is just implementation. Find the right tool and the
right design for the job. I still don't see a case that a LAMP-based
solution is inherently slow.

I don't mean LAMP is inherently slow, I just mean that a lot of
existing LAMP sites are observably slow.
 

Tim Daneliuk

Paul said:
Yeah, I've been interested for a while in learning a little bit about
how TPF worked. Does Gray's book that you mention say much about it?

I honestly do not recall. TPF/PARS is an odd critter unto itself
that may not be covered by much of anything other than IBM docs.
I think that the raw hardware of today's laptops dwarfs the old big
iron. An S/360 channel controller may have basically been a mainframe
in its own right, but even a mainframe in those days was just a few
MIPS. The I/O systems and memory are lots faster too, though not by
nearly the ratio by which storage capacity and CPU speed have
increased. E.g., a laptop disk these days has around 10 msec latency
and 20 MB/sec native transfer speed, vs 50+ msec and a few MB/sec for
a 3330-level drive (does that sound about right?).

Again, I don't know. The stuff we had was much newer (and faster) than
that.
Today I think most seeks can be eliminated by just using ram or SSD
(solid state disks) instead of rotating disks. But yeah, you wouldn't
do that on a laptop.

But that still does not solve the latency problem of session
establishment/teardown over network fabric, which is the Achilles heel of
the web and web services.
For a good overview of TP design, see Jim Gray's book, "Transaction
Processing: Concepts and Techniques".


Thanks, I'll look for this book. Gray of course is one of the
all-time database gurus and that book is probably something every
serious nerd should read. I've probably been a bad boy just for
having not gotten around to it years ago.

P.S. AFAIK the first CRS systems of any note came into being in the 1970s not
the 1960s, but I may be incorrect in the matter.


From <http://en.wikipedia.org/wiki/Sabre_(computer_system)>:

The system [SABRE] was created by American Airlines and IBM in the
1950s, after AA president C. R. Smith sat next to an IBM sales
representative on a transcontinental flight in 1953. Sabre's first
mainframe in Briarcliff Manor, New York went online in 1960. By
1964, Sabre was the largest private data processing system in the
world. Its mainframe was moved to an underground location in
Tulsa, Oklahoma in 1972.

Originally used only by American, the system was expanded to travel
agents in 1976. It is currently used by a number of companies,
including Eurostar, SNCF, and US Airways. The Travelocity website is
owned by Sabre and serves as a consumer interface to the system.

I stand (sit) corrected ;)
 

Paul Rubin

Tim Daneliuk said:
[other good stuff from Tim snipped]
Today I think most seeks can be eliminated by just using ram or SSD
(solid state disks) instead of rotating disks. But yeah, you wouldn't
do that on a laptop.

But that still does not solve the latency problem of session
establishment/teardown over network fabric, which is the Achilles
heel of the web and web services.

Well, HTTP 1.1 keepalives take care of some of that, but really,
really, most of this problem is server side, like when you browse a
Wikipedia page it might take a few seconds, which isn't good, but when
you update it, it takes most of a minute, which is awful. The
difference is that editing means server-side web cache misses followed
by a database update that affects numerous indices.

Wikipedia keeps having these fundraising drives to buy more server CPU
iron (I've donated a few times) but I wonder if they'd be better off
spending it on solid state disks and/or software reorganization.
 

EP

This has a lot to do with the latency and speed of the connecting
network. Sites like Ebay, Google, and Amazon are connected
to internet backbone nodes (for speed) and are cached throughout
the network using things like Akamai (to reduce latency)...


Akamai for services, or better yet, caching hardware such as NetCache.
Frequently requested data doesn't even have to come from the server
disks/database - it's sent from the NetCache.
 

Tim Daneliuk

Paul said:
Tim Daneliuk said:
[other good stuff from Tim snipped]
Today I think most seeks can be eliminated by just using ram or SSD
(solid state disks) instead of rotating disks. But yeah, you wouldn't
do that on a laptop.

But that still does not solve the latency problem of session
establishment/teardown over network fabric, which is the Achilles
heel of the web and web services.


Well, HTTP 1.1 keepalives take care of some of that, but really,
really, most of this problem is server side, like when you browse a
Wikipedia page it might take a few seconds, which isn't good, but when
you update it, it takes most of a minute, which is awful. The
difference is that editing means server-side web cache misses followed
by a database update that affects numerous indices.

Noted and agreed. However, also note that establishing/killing
sessions over a high-latency architecture is generally problematic.
The latency can come from any number of sources, including servers
starving for memory/cache misses, as well as the network itself exhibiting
latency problems. One of the reasons the older CRS systems were so fast
was that, although the connection *speeds* were low, the networks were
private, polled fabric with very predictable performance characteristics.
Every terminal was guaranteed to be serviced at a regular interval.
Better customers (who were guaranteed better service levels) just got
their terminals serviced more frequently.

When I left Apollo in the mid-
1990s, there were customers running on this private network who were
guaranteed 3 second or better *round-trip* time (from Enter to transaction
results displayed). This was in an environment with nearly 100,000
users and the core system peaking at something like 22,000 TPC-A
transactions per second ... with 2 or 3 minutes of scheduled downtime per year.
This kind of performance comes from looking at the architecture as a whole,
not just pieces and parts. One of the troublesome things about the web
is that this kind of systemic thinking about performance and reliability
seems to be rather rare. For instance, the whole design of SOAP/RPC seems to be
oblivious to the 40+ years of history on these issues that preceded
the web.

With the advent of public/dynamic networks where a lot of discovery and
variable load exists, it is much harder to nail up a latency and
throughput guarantee. This is part of the motivation for adding QOS
facilities to IPv6. However, as a practical matter, I think the explosion
of last-mile broadband, as well as generally more lit-up fiber in the backbones,
may make QOS less necessary - we'll just throw bigger pipes at the problem.
It is well within the realm of possibility that we'll see OC3-class bandwidth
to the premises for reasonable cost in the foreseeable future.
Wikipedia keeps having these fundraising drives to buy more server CPU
iron (I've donated a few times) but I wonder if they'd be better off
spending it on solid state disks and/or software reorganization.

Since I do not use Wikipedia, I have no meaningful comment. As someone
pointed out here, PHP is also suspect when seeing performance issues.

<OB Python Comment>

One of the things I love about Python is its facility for allowing you to do
things that are time-insensitive in a VHLL and then dive into C or even assembler
if needed for the time-critical stuff. This fixes the language part of the
architecture problem, but does not, in and of itself, ameliorate the larger
systems architecture questions (nor would I expect it to)...
 

JanC

Paul Rubin schreef:
I don't know that the browser necessarily renders that faster than it
renders a table,

Simple tables aren't slow, but tables-in-tables-in-tables-in-tables-in-
tables are.
 

Steve Holden

Paul said:
I certainly agree about the money and hardware resource comparison,
which is why I thought the comparison with 1960's mainframes was
possibly more interesting. You could not get anywhere near the
performance of today's servers back then, no matter how much money you
spent. Re connectivity, I wonder what kind of network speed is
available to sites like Ebay that's not available to Jane Webmaster
with a colo rack at some random big ISP. Also, you and Tim Daneliuk
both mentioned caching in the network (e.g. Akamai). I'd be
interested to know exactly how that works and how much difference it
makes.
It works by distributing content across end-nodes distributed throughout
the infrastructure. I don't think Akamai make any secret of their
architecture, so Google :)-) can help you there.

Of course it makes a huge difference, otherwise Google wouldn't have
registered their domain name as a CNAME for an Akamai node set.

[OB PyCon] Jeremy Hylton, a Google employee and formerly a PythonLabs
employee, will be at PyCon. Why don't you come along and ask *him*?
But the problems I'm thinking of are really obviously with the server
itself. This is clear when you try to load a page and your browser
immediately gets the static text on the page, followed by a pause while
the server waits for the dynamic stuff to come back from the database.
Serving a Slashdotting-level load of pure static pages on a small box
with Apache isn't too terrible ("Slashdotting" = the spike in hits
that a web site gets when Slashdot's front page links to it). Doing
that with dynamic pages seems to be much harder. Something is just
bugging me about this. SQL servers provide a lot of capability (ACID
for complicated queries and transactions, etc.) that most web sites
don't really need. They pay the price in performance anyway.
Well there's nothing wrong with caching dynamic content when the
underlying database isn't terribly volatile and there is no critical
requirement for the absolute latest data. Last I heard Google weren't
promising that their searches are up-to-the-microsecond in terms of
information content.
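
Even a toy in-process cache illustrates the point (build_front_page and
the 60-second lifetime below are invented for the example; memcached or a
front-end HTTP cache does the same job at scale):

import time

_cache = {}  # page key -> (expiry timestamp, rendered page)

def cached(key, ttl, compute):
    # Serve a recent copy instead of hitting the database on every request.
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]
    value = compute()            # the expensive part: queries, templating, ...
    _cache[key] = (now + ttl, value)
    return value

# e.g. html = cached('front_page', 60, build_front_page)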

In terms of a network like Google's, talking about "the server" doesn't
really make sense: as Sun Microsystems have been saying for about twenty
years now, "the network is the computer". There isn't "a server", it's
"a distributed service with multiple centers of functionality".
It's at least 100,000 and probably several times that ;-). I've heard
that every search query does billions of CPU operations and crunches
through hundreds of megabytes of data (search on "apple banana" and there
are hundreds of millions of pages with each word, so two lists of that
size must be intersected). 100,000 was the published number of
servers several years ago, and there were reasons to believe that they
were purposely understating the real number.

So what's all this about "the server", then? ;-)

regards
Steve
 

Kartic

Paul Rubin said the following on 2/3/2005 10:00 PM:
I have the idea that the Wikipedia implementers know what they're doing,
but I haven't looked into it super closely.

Absolutely, hence my disclaimer about not viewing the Mediawiki code.
Hmm, I wasn't aware that Apache 2.x gave any significant speedups
over 1.3 except under Windows. Am I missing something?

Architectural differences. Apache 1.3 spawns a new process for every
request and before you know it, it brings your resources to their knees. I
don't know what effect Apache 1.3-Win32 has on Windows. Apache 2.x, from
what little I have used it, is pretty stable on Windows and resource
friendly. (I use FreeBSD for serious work.)
Hmm, I'm not familiar with Nevow. Twisted is pretty neat, though
confusing. I don't see how to scale it to multiple servers though.

I'm asking this question mainly as it relates to midrange or larger
sites, not the very simplest ones (e.g. on my personal site I just use
Python cgi's, which are fine for the few dozen or so hits a week that
they get). So, the complexity of twisted is acceptable.


True - That is why I can't wait for the twisted sessions during PyCon
'05 :)

Yes, good point about html tables, though I'm concerned primarily
about server response. (It's off-topic, but how do you use CSS to get
the effect of tables?)

My thinking is that all these pieces must fit in well. For example, your
server response might have been good, but the table-driven site could be
slowing your browser down. The best way to test whether it is the server
response that is slow or the table-driven page is to load that page in
Lynx.

The CSS way is using <div> placement of the elements. Actually <div>
gives better control over placement than with HTML tables. And with CSS,
since you style the various HTML tags, you can create different "skins"
for your site too. This is definitely OT, like you said, but if you are
interested, please contact me directly. I don't pretend to be a CSS
expert but I can help you as much as I can.

Thanks,
--Kartic
 

Al Dykes

Paul Rubin wrote:



I worked for an Airline computer reservation system (CRS) for almost a
decade. There is nothing about today's laptops that remotely comes close
to the power of those CRS systems, even the old ones. CRS systems are
optimized for extremely high performance I/O and use an operating system
(TPF) specifically designed for high-performance transaction processing.

Web servers are very sessions oriented: make a connection-pass the unit
of work-drop the connection. This is inherently slow (and not how high
performance TP is done). Moreover, really high perfomance requires a
very fine level of I/O tuning on the server - at the CRS I worked for,
the performance people actually only populated part of the hard drives
to minimize head seeks.

The point is that *everything* has to be tuned for high performance
TP - the OS, the language constructs (we used assembler for most things),
the protocols, and the overall architecture. This is why, IMHO,
things like SOAP are laughable - RPC is a poor foundation for reliable,
durable, and high-performance TP. It might be fine for sending an
order or invoice now and then, but sustained throughput of the sort
I think of as "high" performance is likely never going to be accomplished
with session-oriented architectures.

For a good overview of TP design, see Jim Gray's book, "Transaction Processing:
Concepts and Techniques".

P.S. AFAIK the first CRS systems of any note came into being in the 1970s not
the 1960s, but I may be incorrect in the matter.




My recollection is that online reservations were in use ca. 1970, and I
know that the operating system was called ACP, later renamed to TPF.
Googling for that finds that online reservation systems started in the
'50s and ran on 7000-series gear in the '60s.

http://www.blackbeard.com/tpf/Sabre_off_TPF/some_highlights_from_sabre_history.htm

I was in banking in the '80s. I recall that circa 1990 hitting 1000 DB
trans/sec was the holy grail on a million-dollar mainframe. My bank bought
what was called "the last TPF sale" about 1991. It was used as a
"message router" to connect transactions from thousands of ATMs and
teller stations to the right backend system necessary to make a bank
merger work.
 

Dave Brueck

Steve said:
It works by distributing content across end-nodes distributed throughout
the infrastructure. I don't think Akamai make any secret of their
architecture, so Google :)-) can help you there.

They definitely didn't make it a secret - they patented it. The gist of their
approach was to put web caches all over the place and then have their DNS
servers resolve based on where the request was coming from - when your browser
asks their DNS server where whatever.akamai.com is, they try to respond with a
web cache that is topologically close.
Of course it makes a huge difference, otherwise Google wouldn't have
registered their domain name as a CNAME for an Akamai node set.

Yes and no - web caching can be very beneficial. Web caching with Akamai may or
may not be worth the price; their business was originally centered around the
idea that quality bandwidth is expensive - while still true to a degree, prices
have fallen a ton in the last few years and continue to fall.

And who knows what sort of concessions they made to win the Google contract (not
saying that's bad, just realize that Akamai would probably even take a loss on
the Google contract because having Google as a customer makes people conclude
that their service must make a huge difference ;-) ).

-Dave
 

Fredrik Lundh

Tim said:
This is why, IMHO, things like SOAP are laughable - RPC is a poor
foundation for reliable, durable, and high-performance TP. It might be
fine for sending an order or invoice now and then, but sustained
throughput of the sort I think of as "high" performance is likely never
going to be accomplished with session-oriented architectures.

does 50 gigabytes per day, sustained, count as high performance in your book?

</F>
 

Al Dykes

I honestly do not recall. TPF/PARS is an odd critter unto itself
that may not be covered by much of anything other than IBM docs.

I've seen it covered in some textbook, possibly something by Tanenbaum.
I imagine it's in the ACM literature and the IBM Systems Journal.

If we move this thread to alt.folklore.computers we'll get lots of
good info.
 

Al Dykes

Two major problems I've noticed (I don't know if they are universal, but they
sure hurt performance):

1) Some sites have not put any thought into caching - i.e. the application
server is serving up images or every single page is dynamically generated even
though all (or most) of it is static such that most of the requests just aren't
cacheable.

2) Because a database is there, it gets used, even when it shouldn't, and it
often gets used poorly - bad or no connection pooling, many trips to the
database for each page generated, no table indices, bizarro database schemas.

Overall I'd say my first guess is that too much is being generated on the fly,
often because it's just easier not to worry about cacheability, but a good web
cache can provide orders of magnitude improvement in performance, so it's worth
some extra thought.

One project we had involved the users navigating through a big set of data,
narrowing down the set by making choices about different variables. At any point
it would display the choices that had been made, the remaining choices, and the
top few hits in the data set. We initially thought all the pages would have to
be dynamically generated, but then we realized that each page really represented
a distinct and finite state, so we went ahead and built the site with Zope +
Postgres, but made it so that the URLs were the input to Zope and told it what
state to generate.

The upshot of all this is that we then just ran a web spider against Zope any
time the data changed (once a week or so), and so the site ended up "feeling"
pretty dynamic to a user but pretty much everything came straight out of a cache.
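
The spidering step itself is nothing clever; roughly something like this,
with the URL scheme, the state list, and the paths all invented for the
example:

import os
import urllib.request

BASE = 'http://localhost:8080/browse?state='         # made-up Zope URL scheme
STATES = ['', 'color=red', 'color=red+size=large']   # the finite set of states
CACHE_DIR = 'static_cache'

def respider():
    # Re-run whenever the data changes (weekly or so): fetch each state once
    # and dump it as a plain file the web server can serve with no work.
    os.makedirs(CACHE_DIR, exist_ok=True)
    for state in STATES:
        html = urllib.request.urlopen(BASE + state).read()
        name = (state or 'index').replace('=', '-').replace('+', '_') + '.html'
        with open(os.path.join(CACHE_DIR, name), 'wb') as f:
            f.write(html)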

-Dave


A couple years ago the Tomshardware.com website was reengineered to
cache everything possible, with great performance improvement. They wrote
a nice article about the project, which I assume is still online. I don't
 

Dave Brueck

Fredrik said:
Tim Daneliuk wrote:




does 50 gigabytes per day, sustained, count as high performance in your book?

In and of itself, no. But really, the number is meaningless in transaction
processing without some idea of the size and content of the messages (i.e. if
it's binary data that has to be base64 encoded just to make it work with SOAP)
because what's most interesting is the number of transactions that can be handled.

If 50 GB/day is just a measurement of the throughput of the data transport
layer, then that's fairly low - less than 5 Mbps!
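
Back-of-the-envelope, for anyone who wants to check the arithmetic:

>>> print(round(50e9 * 8 / 86400 / 1e6, 2), "Mbit/s")  # 50 GB/day as a bit rate
4.63 Mbit/s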

-Dave
 

Jeffrey Froman

M.E.Farmer said:
Div is a block-level tag and span is an inline tag.
You can also group them together and nest them.

One caveat here -- I don't believe you can (should) nest a <div> inside a
<span>, or for that matter, nest any block-level element inside an inline
element.

Jeffrey
 

Jeffrey Froman

Paul said:
I don't know that the browser necessarily renders that faster than it
renders a table, but there's surely less crap in the HTML, which is
always good.  I may start using that method.

Using tables for layout is also a cardinal faux pas if you care about
accessibility, as such tables can really mess up things like screenreader
software for the sight-impaired.

Jeffrey
 

Jack Diederich

[reordered Paul's email a bit]
This is the kind of answer I had in mind.

*ding*ding*ding* The biggest mistake I've made most frequently is using
a database in applications. YAGNI. Using a database at all has its
own overhead. Using a database badly is deadly. Most sites would
benefit from ripping out the database and doing something simpler.
Refactoring a database on a live system is a giant pain in the ass;
simpler file-based approaches make incremental updates easier.

The Wikipedia example has been thrown around; I haven't looked at the
code either. Except for search, why would they need a database to
look up an individual WikiWord? Going to the database requires reading
an index, when pickle.load(open('words/W/WikiWord')) would seem sufficient.
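
Spelled out a little more (the helper names here are made up; the point is
that the directory tree already gives you a keyed lookup):

import os, pickle

WIKI_DIR = 'words'   # layout as above: words/W/WikiWord

def _path(word):
    return os.path.join(WIKI_DIR, word[0], word)

def save_page(word, page):
    os.makedirs(os.path.dirname(_path(word)), exist_ok=True)
    with open(_path(word), 'wb') as f:
        pickle.dump(page, f)

def load_page(word):
    # One directory lookup and one read; the filesystem is the "index".
    with open(_path(word), 'rb') as f:
        return pickle.load(f)
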
Yes, that's the basic observation, not specifically Slashdot; lots
of LAMP sites (some PHPBB sites are other examples) have the same
behavior. You send a URL and the server has to grind for quite a
while coming up with the page, even though it's pretty obvious what
kinds of dynamic stuff it needs to find. Just taking a naive approach
with no databases, doing everything with in-memory structures
(better not ever crash!), would make me expect a radically faster site.
For a site like Slashdot, which gets maybe 10 MB of comments a day,
keeping them all in RAM isn't excessive. (You'd also dump them
serially to a log file, no seeking or index overhead as this happened.
On server restart you'd just read the log file back into RAM.)

You're preaching to the choir. I don't use any of the fancy stuff in
Twisted, but the single-threaded nature means I can keep everything in
RAM and just serialize changes to disk (to survive a restart).
This allows you to do very naive things and pay no penalty. My homespun
blogging software isn't as full featured as PyBlosxom but it is a few
hundred times(!) faster. PyBlosxom pays a high price in file stats
because it allows running under CGI. Mine would too as a CGI, but it
isn't one, so *shrug*.
I don't mean LAMP is inherently slow, I just mean that a lot of
existing LAMP sites are observably slow.

A lot of this is just implementation. Going the dumb non-DB way won't
prevent you from making bad choices, but if a lot of bad choices are made
simply because of the DB (my assertion), dropping the DB would avoid
some bad choices. I think Sourceforge has one table for all projects'
bugs & patches. That means a never-used project's bugs take up space
in the index and slow down access to the popular projects. Would a
naive file-based implementation have been just as bad? Maybe.

If there is interest I'll follow up with some details on my own LAMP
software which does live reports on gigs of data and - you guessed it -
I regret it is database backed. That story also involves why I started
using Python (the prototype was in PHP).

-Jack
 
