OT: why are LAMP sites slow?

Jeremy Bowers

Hmm, I'm not familiar with Nevow. Twisted is pretty neat, though
confusing. I don't see how to scale it to multiple servers though.

The same way you'd scale any webserver: load balance in hardware, store all
user state in a database, and tell the load balancer to try to "persist" a
user's connection to one machine, so Twisted doesn't even have to go back to
the database server on most requests?
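To make that concrete, here is a minimal sketch of the state-in-a-database
idea (the table layout and session-id handling are invented for
illustration, with sqlite3 standing in for the shared database):

    # Any app server can handle any request, because session state lives
    # in a shared database rather than in one process's memory.
    import sqlite3

    db = sqlite3.connect("sessions.db")
    db.execute("CREATE TABLE IF NOT EXISTS sessions"
               " (sid TEXT PRIMARY KEY, state TEXT)")

    def load_state(sid):
        # Called at the start of a request, on whichever server got it.
        row = db.execute("SELECT state FROM sessions WHERE sid = ?",
                         (sid,)).fetchone()
        if row:
            return row[0]
        return ""

    def save_state(sid, state):
        # Called before the response goes out.
        db.execute("INSERT OR REPLACE INTO sessions (sid, state)"
                   " VALUES (?, ?)", (sid, state))
        db.commit()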

I'm going to assume you know about this if you deal with large websites
professionally, so I'm curious why this is inadequate for your needs.
(If it's too involved, no answer required, but if it's quick to explain
I'm intrigued.)
 
Paul Rubin

LAMP = Linux/Apache/MySQL/P{ython,erl,HP}. Refers to the general
class of database-backed web sites built using those components. This
being c.l.py, if you want, you can limit your interest to the case where
the P stands for Python.

I notice that lots of the medium-largish sites (from hobbyist BBS's to
sites like Slashdot, Wikipedia, etc.) built using this approach are
painfully slow even using seriously powerful server hardware. Yet
compared to a really large site like Ebay or Hotmail (to say nothing
of Google), the traffic levels on those sites are just chickenfeed.

I wonder what the webheads here see as the bottlenecks. Is it the
application code? Disk bandwidth on the database side, which could be
cured with more RAM caches or solid-state disks? Is SQL just inherently
slow?

I've only worked on one serious site of this type and it was "SAJO"
(Solaris Apache Java Oracle) rather than LAMP, but the concepts are
the same. I just feel like something bogus has to be going on. I
think even sites like Slashdot handle fewer TPS than a 1960's airline
reservation system that ran on hardware with a fraction of the power
of one of today's laptops.

How would you go about building such a site? Is LAMP really the right
approach?
 
Aahz

I've only worked on one serious site of this type and it was "SAJO"
(Solaris Apache Java Oracle) rather than LAMP, but the concepts are
the same. I just feel like something bogus has to be going on. I
think even sites like Slashdot handle fewer TPS than a 1960's airline
reservation system that ran on hardware with a fraction of the power
of one of today's laptops.

Something I saw recently was that XML has inherently horrid performance
for searching precisely because it isn't fixed-length records. Yes, the
old platforms handled more TPS, but they also handled much less data of
a form much more amenable to indexing.
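The fixed-length point in a nutshell: record n lives at byte n*reclen, so
a lookup is one seek with no parsing. A toy sketch (the record length is
arbitrary):

    # With fixed-length records, fetching record n is pure arithmetic
    # plus one seek. Variable-length data like XML forces a parse or an
    # external index instead.
    RECLEN = 80

    def get_record(f, n):
        f.seek(n * RECLEN)          # jump straight to the record
        return f.read(RECLEN)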
--
Aahz ([email protected]) <*> http://www.pythoncraft.com/

"The joy of coding Python should be in seeing short, concise, readable
classes that express a lot of action in a small amount of clear code --
not in reams of trivial code that bores the reader to death." --GvR
 
Simon Wittber

I notice that lots of the medium-largish sites (from hobbyist BBS's to
sites like Slashdot, Wikipedia, etc.) built using this approach are
painfully slow even using seriously powerful server hardware.

Slow is such an ambiguous term. Do you mean the pages are slow to
render in a browser, or slow to be fetched from the server, or the
server is slow to respond to requests? What is slow?
 
Paul Rubin

Simon Wittber said:
Slow is such an ambiguous term. Do you mean the pages are slow to
render in a browser, or slow to be fetched from the server, or the
server is slow to respond to requests? What is slow?

The server is slow to respond to requests. Browser rendering is
independent of the server architecture and "slow to be fetched from
the server" sounds like it means low network speed. I'm talking about
the very familiar experience of clicking a link and then waiting,
waiting, waiting for the page to load. You rarely see that happen
with Ebay or Google. It happens all the time with Wikipedia.
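To put a number on "slow to respond", time-to-first-byte is roughly the
right measure, since once the connection is up it is dominated by the
server's own processing. A quick sketch:

    # Quick sketch: measure time-to-first-byte for a page.
    import time
    import urllib.request

    t0 = time.time()
    resp = urllib.request.urlopen("http://www.wikipedia.org/")
    resp.read(1)                    # wait for the first byte of the body
    print("time to first byte: %.3f sec" % (time.time() - t0))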
 
Jeremy Bowers

I understood the Twisted suggestion as meaning avoiding database
traffic by keeping both user and server state resident in the
application. Yes, if you use a database for that, you get multiple
app servers instead of a heavily loaded centralized one. But you now
have a heavily loaded centralized database server instead. You
haven't really solved your scaling problem, you've just moved it back
a layer.

True, but my understanding is that there are load balancing solutions for
database servers too, so in this case moving the problem back one level
actually can be progress.
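A sketch of what I mean (the replica names and round-robin policy are
made up, with sqlite3 standing in for a real client/server database):

    # Moving the bottleneck back a layer: writes go to one primary,
    # reads are spread round-robin over replicas.
    import itertools
    import sqlite3

    primary = sqlite3.connect("primary.db")
    replicas = [sqlite3.connect("replica1.db"),
                sqlite3.connect("replica2.db")]
    _cycle = itertools.cycle(replicas)

    def read_conn():
        # Any replica can serve a read.
        return next(_cycle)

    def write_conn():
        # Writes must hit the primary; replication fans them out.
        return primary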

But I have no experience with this, so I have no idea how well it works.
 
Tim Daneliuk

Paul Rubin wrote:

I've only worked on one serious site of this type and it was "SAJO"
(Solaris Apache Java Oracle) rather than LAMP, but the concepts are
the same. I just feel like something bogus has to be going on. I
think even sites like Slashdot handle fewer TPS than a 1960's airline
reservation that ran on hardware with a fraction of the power of one
of today's laptops.

I worked for an Airline computer reservation system (CRS) for almost a
decade. There is nothing about today's laptops that remotely comes close
to the power of those CRS systems, even the old ones. CRS systems are
optimized for extremely high performance I/O and use an operating system
(TPF) specifically designed for high-performance transaction processing.

Web servers are very session oriented: make a connection, pass the unit
of work, drop the connection. This is inherently slow (and not how high
performance TP is done). Moreover, really high performance requires a
very fine level of I/O tuning on the server - at the CRS I worked for,
the performance people actually only populated part of the hard drives
to minimize head seeks.

The point is that *everything* has to be tuned for high performance
TP - the OS, the language constructs (we used assembler for most things),
the protocols, and the overall architecture. This is why, IMHO, things
like SOAP are laughable - RPC is a poor foundation for reliable, durable,
and high-performance TP. It might be fine for sending an order or invoice
now and then, but sustained throughput of the sort I think of as "high"
performance is likely never going to be accomplished with
session-oriented architectures.

For a good overview of TP design, see Jim Gray's book, "Transaction Processing:
Concepts and Techniques".

P.S. AFAIK the first CRS systems of any note came into being in the 1970s not
the 1960s, but I may be incorrect in the matter.
 
Dave Brueck

Paul said:
How would you go about building such a site? Is LAMP really the right
approach?

Two major problems I've noticed - I don't know if they are universal, but
they sure hurt performance:

1) Some sites have not put any thought into caching - e.g. the application
server is serving up images, or every single page is dynamically generated
even though all (or most) of it is static, so most of the requests just
aren't cacheable.

2) Because a database is there, it gets used, even when it shouldn't, and it
often gets used poorly - bad or no connection pooling, many trips to the
database for each page generated, no table indices, bizarro database schemas.
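(Pooling, at least, is cheap to get right. A minimal sketch, with sqlite3
standing in for the real database driver:)

    # Reuse a handful of open connections instead of reconnecting to the
    # database on every page view.
    import queue
    import sqlite3

    POOL_SIZE = 5
    _pool = queue.Queue()
    for _ in range(POOL_SIZE):
        _pool.put(sqlite3.connect("app.db", check_same_thread=False))

    def with_connection(work):
        # Borrow a connection, run work(conn), always return it.
        conn = _pool.get()          # blocks if all connections are busy
        try:
            return work(conn)
        finally:
            _pool.put(conn)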

Overall I'd say my first guess is that too much is being generated on the fly,
often because it's just easier not to worry about cacheability, but a good web
cache can provide orders of magnitude improvement in performance, so it's worth
some extra thought.
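For instance, merely declaring a page cacheable lets a front-end cache or
the browser absorb repeat hits. A sketch, with an arbitrary one-hour
lifetime:

    # Mark a generated page as cacheable so a front-end cache (or the
    # browser) can serve repeat requests without touching the app server.
    import time
    from wsgiref.handlers import format_date_time
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        body = b"<html><body>mostly static content</body></html>"
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            ("Cache-Control", "public, max-age=3600"),         # one hour
            ("Expires", format_date_time(time.time() + 3600)),
        ])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()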

One project we had involved the users navigating through a big set of data,
narrowing down the set by making choices about different variables. At any
point it would display the choices that had been made, the remaining choices,
and the top few hits in the data set. We initially thought all the pages
would have to be dynamically generated, but then we realized that each page
really represented a distinct and finite state, so we went ahead and built
the site with Zope + Postgres, but made it so that the URLs were input to
Zope and told it what state to generate.

The upshot of all this is that we then just ran a web spider against Zope any
time the data changed (once a week or so), and so the site ended up "feeling"
pretty dynamic to a user but pretty much everything came straight out of a cache.
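The spider itself can be trivial. A rough sketch (the host name and
state-URL scheme here are invented):

    # Re-spider the dynamic site into a static cache after each update.
    import os
    import urllib.request

    SITE = "http://zope.internal.example.com"
    STATES = ["state/%04d" % n for n in range(100)]  # every reachable state

    os.makedirs("cache", exist_ok=True)
    for path in STATES:
        html = urllib.request.urlopen("%s/%s" % (SITE, path)).read()
        name = os.path.join("cache", path.replace("/", "_") + ".html")
        with open(name, "wb") as f:
            f.write(html)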

-Dave
 
Tim Daneliuk

Paul said:
The server is slow to respond to requests. Browser rendering is
independent of the server architecture and "slow to be fetched from
the server" sounds like it means low network speed. I'm talking about
the very familiar experience of clicking a link and then waiting,
waiting, waiting for the page to load. You rarely see that happen
with Ebay or Google. It happens all the time with Wikipedia.

This has a lot to do with the latency and speed of the connecting
network. Sites like Ebay, Google, and Amazon are connected
to internet backbone nodes (for speed) and are cached throughout
the network using things like Akamai (to reduce latency)...
 
Kartic

Paul Rubin said the following on 2/3/2005 7:20 PM:
LAMP = Linux/Apache/MySQL/P{ython,erl,HP}. Refers to the general
class of database-backed web sites built using those components. This
being c.l.py, if you want, you can limit your interest to the case where
the P stands for Python.

I notice that lots of the medium-largish sites (from hobbyist BBS's to
sites like Slashdot, Wikipedia, etc.) built using this approach are
painfully slow even using seriously powerful server hardware. Yet
compared to a really large site like Ebay or Hotmail (to say nothing
of Google), the traffic levels on those sites are just chickenfeed.

If you are talking about Wikipedia as a prime example, I agree with you
that it is *painfully* slow.

And the reason for that is probably the way the language is used (PHP)
(this is a shot in the dark as I have not looked into the Mediawiki
code), compounded by a probably unoptimized database. I don't want to
start flame wars here about PHP; I use PHP to build client sites and
like it for the "easy building of dynamic sites", but the downside is
that there is no "memory"... every page is compiled each time a request
is made. I doubt the Wikipedia site uses an optimizer (like Zend) or
caching mechanisms. Optimizers and/or PHP caches make a huge performance
difference.

Also, PHP has put dynamic sites within easy reach of many programmers
who slap together sites in no time. These sites may have spaghetti code,
and even the best infrastructure is not enough to support shabby design
(code, db setup and even server tuning). I have seen people become
programmers overnight! There are also LAMP sites that use Apache 1.3,
which is a resource hog; I guess production sites do not want to upgrade
to the Apache 2.x/PHP combo!

Coming to python, to be honest, I have not seen many LAMPy sites. I use
blogspot.com frequently and it is pretty reliable; I hear that it is
written in Python but I have no idea about the server and database software.

The way to go is to build around application servers, IMHO. I like the
Twisted.web/Nevow methodology, though for simple situations it may be
overkill. I like the Webware with Apache-thru-Webware-Adapter setup -
that is what I am considering implementing for my organization. (Even in
the App Server arena, I have seen Websphere with JSP/Servlets sites be
soooo slow that I could finish my breakfast and still be waiting for the
page.)

From my experience it is an overall tuning thing. Usual culprits are
untuned Linux boxes, unoptimized database design, poorly designed
queries and/or poor application logic...and ah! table-driven pages.
Pages that use tables for layout kill the browser. CSS is the way to go.

Thanks,
-Kartic
 
Paul Rubin

Kartic said:
And the reason for that is probably the way the language is used (PHP)
(this is a shot in the dark as I have not looked into the Mediawiki
code), compounded by a probably unoptimized database.

I have the idea that the Wikipedia implementers know what they're doing,
but I haven't looked into it super closely.
I don't want to start flame wars here about PHP; I use PHP to build
client sites and like it for the "easy building of dynamic sites",
but the downside is that there is no "memory"... every page is
compiled each time a request is made. I doubt the Wikipedia site uses
an optimizer (like Zend) or caching mechanisms. Optimizers and/or
PHP caches make a huge performance difference.

They do use an optimizer similar to Zend. They also use Squid as a
static cache.
Also, PHP has put dynamic sites within easy reach of many programmers
who slap together sites in no time. These sites may have spaghetti
code, and even the best infrastructure is not enough to support shabby
design (code, db setup and even server tuning). I have seen people
become programmers overnight! There are also LAMP sites that use
Apache 1.3, which is a resource hog; I guess production sites do not
want to upgrade to the Apache 2.x/PHP combo!

Hmm, I wasn't aware that Apache 2.x gave any significant speedups
over 1.3 except under Windows. Am I missing something?
The way to go is to build around application servers, IMHO. I like the
Twisted.web/Nevow methodology, though for simple situations it may be
overkill.

Hmm, I'm not familiar with Nevow. Twisted is pretty neat, though
confusing. I don't see how to scale it to multiple servers though.

I'm asking this question mainly as it relates to midrange or larger
sites, not the very simplest ones (e.g. on my personal site I just use
Python cgi's, which are fine for the few dozen or so hits a week that
they get). So, the complexity of twisted is acceptable.
From my experience it is an overall tuning thing. Usual culprits are
untuned Linux boxes, unoptimized database design, poorly designed
queries and/or poor application logic...and ah! table-driven pages.
Pages that use tables for layout kill the browser. CSS is the way
to go.

Yes, good point about html tables, though I'm concerned primarily
about server response. (It's off-topic, but how do you use CSS to get
the effect of tables?)
 
aurora

Slow compared to what? For a large commercial site with a bigger budget,
better infrastructure, and a better implementation, it is not surprising
that it comes out ahead of hobbyist sites.

Putting implementation aside, does LAMP inherently perform worse than
commercial alternatives like IIS, ColdFusion, Sun ONE or DB2? Sounds like
that's your proposition.

I don't know if there are any numbers to support this proposition. Note
that many of the largest sites have open source components in them.
Google, Amazon, Yahoo all run on unix variants. Ebay is the notable
exception; it uses IIS. Can you really say Ebay is performing better
than Amazon (or vice versa)?

I think the chief factor in a site performing poorly is the
implementation. It is really easy to throw big money into expensive
software and hardware and come out with a performance dog. Google's
infrastructure relies on a large distributed network of commodity
hardware, not a few expensive boxes. LAMP-based infrastructure, if used
right, can support the most demanding applications.
 
Paul Rubin

Tim Daneliuk said:
I worked for an Airline computer reservation system (CRS) for almost a
decade. There is nothing about today's laptops that remotely comes close
to the power of those CRS systems, even the old ones. CRS systems are
optimized for extremely high performance I/O and use an operating system
(TPF) specifically designed for high-performance transaction processing.

Yeah, I've been interested for a while in learning a little bit about
how TPF worked. Does Gray's book that you mention say much about it?

I think that the raw hardware of today's laptops dwarfs the old big
iron. An S/360 channel controller may have basically been a mainframe
in its own right, but even a mainframe in those days was just a few
MIPS. The I/O systems and memory are lots faster too, though not by
nearly the ratio by which storage capacity and CPU speed have
increased. E.g., a laptop disk these days has around 10 msec latency
and 20 MB/sec native transfer speed, vs 50+ msec and a few MB/sec for
a 3330-level drive (does that sound about right?).
Web servers are very session oriented: make a connection, pass the
unit of work, drop the connection. This is inherently slow (and not
how high performance TP is done). Moreover, really high performance
requires a very fine level of I/O tuning on the server - at the CRS
I worked for, the performance people actually only populated part
of the hard drives to minimize head seeks.

Today I think most seeks can be eliminated by just using ram or SSD
(solid state disks) instead of rotating disks. But yeah, you wouldn't
do that on a laptop.
For a good overview of TP design, see Jim Gray's book, "Transaction
Processing: Concepts and Techniques".

Thanks, I'll look for this book. Gray of course is one of the
all-time database gurus and that book is probably something every
serious nerd should read. I've probably been a bad boy just for
having not gotten around to it years ago.
P.S. AFAIK the first CRS systems of any note came into being in the 1970s not
the 1960s, but I may be incorrect in the matter.

From <http://en.wikipedia.org/wiki/Sabre_(computer_system)>:

The system [SABRE] was created by American Airlines and IBM in the
1950s, after AA president C. R. Smith sat next to an IBM sales
representative on a transcontinental flight in 1953. Sabre's first
mainframe in Briarcliff Manor, New York went online in 1960. By
1964, Sabre was the largest private data processing system in the
world. Its mainframe was moved to an underground location in
Tulsa, Oklahoma in 1972.

Originally used only by American, the system was expanded to travel
agents in 1976. It is currently used by a number of companies,
including Eurostar, SNCF, and US Airways. The Travelocity website is
owned by Sabre and serves as a consumer interface to the system.
 
Paul Rubin

aurora said:
Slow compared to what? For a large commercial site with a bigger budget,
better infrastructure, and a better implementation, it is not surprising
that it comes out ahead of hobbyist sites.

Hmm, as mentioned, I'm not sure what the commercial sites do that's
different. I take the view that the free software world is capable of
anything that the commercial world is capable of, so I'm not awed just
because a site is commercial. And sites like Slashdot have pretty big
budgets by hobbyist standards.
Putting implementation aside, does LAMP inherently perform worse than
commercial alternatives like IIS, ColdFusion, Sun ONE or DB2? Sounds
like that's your proposition.

I wouldn't say that. I don't think Apache is a bottleneck compared
with other web servers. Similarly I don't see an inherent reason for
Python (or whatever) to be seriously slower than Java servlets. I
have heard that MySQL doesn't handle concurrent updates nearly as well
as DB2 or Oracle, or for that matter PostgreSQL, so I wonder if busier
LAMP sites might benefit from switching to PostgreSQL (LAMP => LAPP?).
I don't know if there are any numbers to support this proposition. Note
that many of the largest sites have open source components in them.
Google, Amazon, Yahoo all run on unix variants. Ebay is the notable
exception; it uses IIS. Can you really say Ebay is performing better
than Amazon (or vice versa)?

I don't know how much the OS matters. I don't know how much the web
server matters. My suspicion is that the big resource sink is the SQL
server. But I'm wondering what people more experienced than I am say
about this. Google certainly doesn't use SQL for its web search index.
I think the chief factor in a site performing poorly is the
implementation. It is really easy to throw big money into expensive
software and hardware and come out with a performance dog. Google's
infrastructure relies on a large distributed network of commodity
hardware, not a few expensive boxes. LAMP-based infrastructure, if
used right, can support the most demanding applications.

Google sure doesn't use LAMP! I've heard that when you enter a Google
query, about sixty different computers work on it. The search index
is distributed all over the place and they use a supercomputer-like
interconnect strategy (but based on commodity ethernet switches) to
move stuff around between the processors.
 
Skip Montanaro

Paul> I'm talking about the very familiar experience of clicking a link
Paul> and then waiting, waiting, waiting for the page to load. You
Paul> rarely see that happen with Ebay or Google. It happens all the
Paul> time with Wikipedia.

It's more than a bit unfair to compare Wikipedia with Ebay or Google. Even
though Wikipedia may be running on high-performance hardware, it's unlikely
that they have anything like the underlying network structure (replication,
connection speed, etc), total number of cpus or monetary resources to throw
at the problem that both Ebay and Google have. I suspect money trumps LAMP
every time.

Just as a quick comparison, I executed

host www.wikipedia.org
host www.google.com

on two different machines, my laptop here on Comcast's network in Chicago,
and at Mojam's co-lo server in Colorado Springs. I got the same results for
Wikipedia:

www.wikipedia.org has address 207.142.131.203
www.wikipedia.org has address 207.142.131.204
www.wikipedia.org has address 207.142.131.205
www.wikipedia.org has address 207.142.131.202

but different results for Google. Laptop/Chicago:

www.google.com is a nickname for www.google.akadns.net
www.google.akadns.net has address 64.233.161.104
www.google.akadns.net has address 64.233.161.99
www.google.akadns.net has address 64.233.161.147

Co-Lo server/Colorado Springs:

www.google.com is an alias for www.google.akadns.net.
www.google.akadns.net has address 64.233.187.99
www.google.akadns.net has address 64.233.187.104

We also know Google has thousands of CPUs (I heard 5,000 at one point and
that was a couple years ago). I doubt Wikipedia has more than a handful of
CPUs and they are probably all located in the same facility. Google's
front-end web servers are clearly distributed around the Internet. I
wouldn't be surprised if their back-end servers were widely distributed as
well.

Here's a link to an IEEE Micro article about Google's cluster architecture:

http://www.search3w.com/Siteresources/data/MediaArchive/files/Google 15000 servers secrest.pdf

It was published in 2003 and gives a figure of 15,000 commodity PCs.

Here's one quote from the beginning of the article:

To provide sufficient capacity to handle query traffic, our service
consists of multiple clusters distributed worldwide. Each cluster has
around a few thousand machines, and the geographically distributed setup
protects us against catastrophic data center failures (like those
arising from earthquakes and large-scale power failures).

Skip
 
Paul Rubin

Skip Montanaro said:
It's more than a bit unfair to compare Wikipedia with Ebay or
Google. Even though Wikipedia may be running on high-performance
hardware, it's unlikely that they have anything like the underlying
network structure (replication, connection speed, etc), total number
of cpus or monetary resources to throw at the problem that both Ebay
and Google have. I suspect money trumps LAMP every time.

I certainly agree about the money and hardware resource comparison,
which is why I thought the comparison with 1960's mainframes was
possibly more interesting. You could not get anywhere near the
performance of today's servers back then, no matter how much money you
spent. Re connectivity, I wonder what kind of network speed is
available to sites like Ebay that's not available to Jane Webmaster
with a colo rack at some random big ISP. Also, you and Tim Daneliuk
both mentioned caching in the network (e.g. Akamai). I'd be
interested to know exactly how that works and how much difference it
makes.

But the problems I'm thinking of are really obviously with the server
itself. This is clear when you try to load a page and your browser
immediately gets the static text on the page, followed by a pause while
the server waits for the dynamic stuff to come back from the database.
Serving a Slashdotting-level load of pure static pages on a small box
with Apache isn't too terrible ("Slashdotting" = the spike in hits
that a web site gets when Slashdot's front page links to it). Doing
that with dynamic pages seems to be much harder. Something is just
bugging me about this. SQL servers provide a lot of capability (ACID,
complicated queries and transactions, etc.) that most web sites don't
really need. They pay the price in performance anyway.
We also know Google has thousands of CPUs (I heard 5,000 at one point and
that was a couple years ago).

It's at least 100,000 and probably several times that ;-). I've heard
that every search query does billions of CPU operations and crunches
through hundreds of megabytes of data (search on "apple banana" and
there are hundreds of millions of pages with each word, so two lists of
that size must be intersected). 100,000 was the published number of
servers several years ago, and there were reasons to believe that they
were purposely understating the real number.
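The core operation is easy to picture: intersect two sorted posting
lists of document ids. A toy sketch (the hard part is doing this over
lists of hundreds of millions of ids, spread across many machines, in a
fraction of a second):

    # Intersect two sorted posting lists, the basic step behind a
    # two-word query like "apple banana".
    def intersect(a, b):
        i = j = 0
        hits = []
        while i < len(a) and j < len(b):
            if a[i] == b[j]:
                hits.append(a[i])
                i += 1
                j += 1
            elif a[i] < b[j]:
                i += 1
            else:
                j += 1
        return hits

    print(intersect([1, 3, 7, 9], [3, 4, 9, 20]))   # -> [3, 9]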
 
Paul Rubin

Jeremy Bowers said:
The same way you'd scale any webserver: load balance in hardware, store all
user state in a database, and tell the load balancer to try to "persist" a
user's connection to one machine, so Twisted doesn't even have to go back to
the database server on most requests?

I understood the Twisted suggestion as meaning avoiding database
traffic by keeping both user and server state resident in the
application. Yes, if you use a database for that, you get multiple
app servers instead of a heavily loaded centralized one. But you now
have a heavily loaded centralized database server instead. You
haven't really solved your scaling problem, you've just moved it back
a layer.
 
M.E.Farmer

Paul said:
Yes, good point about html tables, though I'm concerned primarily
about server response. (It's off-topic, but how do you use CSS to get
the effect of tables?)
To emulate a table you use the div and span tags (really, you can do
just about anything with div and span). Div is a block-level tag and
span isn't. You can also group them together and nest them. Div and
span have little meaning by themselves, but they can be styled. In the
CSS declaration we create styles and then just use them later as
needed. Try to always focus on layout first and actual style later
(CSS is all about separation of concerns). You will find the verbosity
of CSS is not as annoying as you might think; it is quite easy to learn
and well worth the effort. OK, here is a bad example (do a search on
Google for CSS tables). This was torn out of a webapp I am working on:
    <div class="panel">
      <div class="archivehead"><strong>
        <span class="leftspace">Archive</span>
        <span class="rightspace">Entry Date</span>
      </strong>
      </div>
      <div class="archivelite">
        <span class="bold">%s</span><span>
        <strong>%s</strong>
        </span><span class="right">%s</span>
      </div>
      <div class="archivedark">
        &nbsp;&nbsp;&nbsp;&nbsp; posted by: %s
        <span class="right">
          <strong>text </strong>
          <strong> %s</strong>
        </span>
      </div>
    </div>
And here is some of the CSS (these are classes - the dot in front of
the name tells you that; combined with div or span, just about anything
is possible):
    .panel {
      border: solid thin #666677;
      margin: 2em 4em 2em 4em;}
    .leftspace {
      letter-spacing: .5em;
      text-decoration: none;
      color: #EEEEff;}
    .rightspace {
      letter-spacing: .5em;
      position: absolute;
      right: 5em;
      text-decoration: none;
      color: #EEEEff;}

    .archivehead {
      text-indent: .5em;
      background-color: #333333;}
    .archivelite {
      color: #777777;
      text-indent: .5em;
      background-color: #222222;}
    .archivedark {
      color: #777777;
      text-indent: .5em;
      background-color: #111111;}
    .archivelite a {
      color: #BBBBDD;
      text-decoration: none;
      background-color: #222222;}
    .archivedark a {
      color: #BBBBDD;
      text-decoration: none;
      background-color: #111111;}
hth,
M.E.Farmer
 