So the user gets a fresh download of the site when revisiting . . .

rosco

A site with frequent additions that would be missed by personal, local or
regional caching of its URL . . .


1) . . . from the archives in Google Groups -> alt.html -> "cache clear":
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
In the head of your HTML doc. Forces the browser not to cache the page
when it first loads.

<<< . . . does this work? . . . >>>


2) . . . from the source code of the W3Schools home page
http://www.w3schools.com/default.asp:
<meta http-equiv="pragma" content="no-cache" />
and <meta http-equiv="cache-control" content="no-cache" />

<<< . . . the ' />' does not pass W3C markup validation in my hands. What
gives? An XHTML transitional thing, evidently. Will it work with the '>' tag
in my CSS/HTML strict document? . . . >>>


3) . . . from http://vancouver-webpages.com/META/metatags.detail.html
Pragma
Controls caching in HTTP/1.0. Value must be "no-cache". Issued
by browsers during a Reload request, and in a document prevents Netscape
Navigator caching a page locally.

. . . and . . .
Expires
Source: HTTP/1.1 (RFC2068)
The date and time after which the document should be considered expired.
Controls caching in HTTP/1.0. In Netscape Navigator, a request for a
document whose expires time has passed will generate a new network request
(possibly with If-Modified-Since). An illegal Expires date, e.g. "0", is
interpreted as "now". Setting Expires to 0 may thus be used to force a
modification check at each visit.

Web robots may delete expired documents from a search engine, or schedule a
revisit.
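(Spelled out as a meta tag, using the http-equiv form from points 1 and 2
above -- my guess at the syntax, not copied from the source:)

<meta http-equiv="expires" content="0">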

. . . and . . .
Cache-Control
Source: HTTP/1.1
Specifies the action of cache agents. Possible values:

- Public - may be cached in public shared caches
- Private - may only be cached in private cache
- no-cache - may not be cached
- no-store - may be cached but not archived
Note that browser action is undefined using these headers as META tags.
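(For what it's worth, as an actual HTTP header -- the form the note says is
actually defined -- it would presumably be sent as:)

Cache-Control: no-cache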

. . . and . . .
Robots
Source: Spidering
Controls Web robots on a per-page basis. E.g.

<META NAME="ROBOTS" CONTENT="NOINDEX,FOLLOW">
Robots may traverse this page but not index it.
Altavista supports:

- NOINDEX prevents anything on the page from being indexed.
- NOFOLLOW prevents the crawler from following the links on the page and
indexing the linked pages.
- NOIMAGEINDEX prevents the images on the page from being indexed but
the text on the page can still be indexed.
- NOIMAGECLICK prevents the use of links directly to the images, instead
there will only be a link to the page.
Google supports a NOARCHIVE extension to this scheme to request the Google
search engine from caching pages; see the Google FAQ
See also the /robots.txt exclusion method
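(If I read the NOARCHIVE bit right, the per-page form would presumably look
something like this -- my guess, not copied from the source:)

<meta name="robots" content="noarchive">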

<<< . . . So: Pragma good. Expires bad (some search engines exclude the
site). Cache-control good. Robots/NOARCHIVE . . . ??? That last sentence
with NOARCHIVE doesn't make grammatical sense (like I should talk). Any
thoughts? . . . >>>


4) . . . also, from http://vancouver-webpages.com/META/metatags.detail.html
HTTP-EQUIV tags
META tags with an HTTP-EQUIV attribute are equivalent to HTTP headers.
Typically, they control the action of browsers, and may be used to refine
the information provided by the actual headers. Tags using this form should
have an equivalent effect when specified as an HTTP header, and in some
servers may be translated to actual HTTP headers automatically or by a
pre-processing tool.
Note: While HTTP-EQUIV META tags appear to work properly with Netscape
Navigator, other browsers may ignore them, and they are ignored by Web
proxies, which are becoming more widespread. Use of the equivalent HTTP
header, as supported by e.g. the Apache server, is more reliable and is
recommended wherever possible.

<<< . . . If http-equiv meta tags are ignored by some browsers, or will be
one day, then what good is all this? Should I investigate whether there is
JavaScript that ensures my site is freshly loaded when revisited by a user?
. . . >>>

Thanks in Advance,

Rosco
 
Neal

<<< . . . the ' />' does not pass W3C markup validation in my hands. What
gives? An XHTML transitional thing, evidently. Will it work with the '>' tag
in my CSS/HTML strict document? . . . >>>

It's an XHTML empty-element tag. To use it in HTML, remove the space and the /.
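That is, for the cache-control example quoted above:

XHTML: <meta http-equiv="cache-control" content="no-cache" />
HTML:  <meta http-equiv="cache-control" content="no-cache">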
4) . . . also, from http://vancouver-webpages.com/META/metatags.detail.html
[...] Use of the equivalent HTTP header, as supported by e.g. the Apache
server, is more reliable and is recommended wherever possible.

<<< . . . If http-equiv meta tags are ignored by some browsers, or will be
one day, then what good is all this? Should I investigate whether there is
JavaScript that ensures my site is freshly loaded when revisited by a user?
. . . >>>

If you are on an Apache server and can use .htaccess, or if you run your
own server, you'll rarely need or want meta tags. Meta tags are currently
useful, however, for folks who cannot actually manipulate the HTTP
headers. What you do depends on what your situation is. One way is the
established protocol and is more dependable; the other is rather widely
supported but not the proper way to do it, so who knows what tomorrow
brings.
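For example, something along these lines in .htaccess (a rough sketch --
assumes Apache with mod_headers enabled, so check with your host first):

# send the caching headers directly, instead of relying on meta tags
<FilesMatch "\.html?$">
Header set Cache-Control "no-cache"
Header set Pragma "no-cache"
</FilesMatch>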
 
rosco

I just found this JS code link on the newsgroup comp.lang.javascript:
http://jibbering.com/faq/#FAQ4_1

"".17 How do I force a reload from the server/prevent caching?
To reload a page, location.reload() works, however this does depend on the
cache headers that your server sends, to change this you need to change your
server - a quick fix to this on the client side is to change the URI of the
page so it contains a unique element such as the current date.
location.replace(location.href+'?d='+new Date().valueOf()) of if the
location.href already contains a Query String
location.replace(location.href+'&d='+new Date().valueOf()) ""

http://www.mnot.net/cache_docs/
http://devedge.netscape.com/library/manuals/2000/javascript/1.3/reference/date.html
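(Spelling the FAQ one-liner out as a function -- my own untested
transcription; it would have to be called from a link or button rather than
run automatically on load, or the page would reload forever:)

function reloadFresh() {
  // tack a unique query element onto the URI so a fresh copy is fetched
  var sep = (location.search) ? '&' : '?';
  location.replace(location.href + sep + 'd=' + new Date().valueOf());
}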


<<< . . . I don't know any JavaScript. If the "forced reload" feature is
important to me, should I learn JS just for this and not rely on meta
http-equiv tags -- or do both? . . . >>>
Rosco
 
Neal

I just found this JS code link on the newsgroup comp.lang.javascript:
http://jibbering.com/faq/#FAQ4_1

"".17 How do I force a reload from the server/prevent caching?
To reload a page, location.reload() works... ""
<<< . . . I don't know any JavaScript. If the "forced reload" feature is
important to me, should I learn JS just for this and not rely on meta
http-equiv tags -- or do both? . . . >>>


Javascript is NOT necessarily part of the browser's package. You would do
well to assume it will not exist, and plan your site accordingly.
Certainly you may include "extras" in case it is enabled, but do not rely
on it being enabled for your ends to be met.
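One way to treat it as an "extra", sketched off the FAQ snippet quoted
earlier (hypothetical markup -- the plain link still works with Javascript
off; the cache-busting query string only kicks in when it is on):

<a href="index.html"
   onclick="location.replace('index.html?d=' + new Date().valueOf()); return false;">
reload this page</a>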
 
brucie

rosco said:
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
<<< . . . does this work? . . . >>>
no

<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="cache-control" content="no-cache" />
no

<<< . . . the ' />' does not pass W3C markup validation in my hands.
xhtml

Controls caching in HTTP/1.0. Value must be "no-cache". Issued
by browsers during a Reload request, and in a document prevents Netscape
Navigator caching a page locally.

no

[Expires . . .]

no

[Cache-Control . . .]

no

So. Pragma good.

no

it doesn't matter what you do; ultimately, whether goodies are cached or not
is up to the visitor's settings.

<p>please hit refresh as this page has frequent changes</p>

very simple and you don't annoy the crap out of people who are forced to
wait for the pages/images/etc to download *each and every time they view
them*. it's very bloody annoying, it wastes people's time, money and
bandwidth.

if for some reason you still insist on trying to be a control freak, at
least cache for a few hours so people can view your goodies without the
wait; then if they come back (doubtful) they have to download them
again.

this requires the host to have the mod_expires module enabled (most do).

stick something like this in a .htaccess file:

ExpiresActive On
ExpiresByType text/html "access plus 1 day"
ExpiresByType image/png "access plus 1 hour 3 minutes"
ExpiresByType image/jpeg "modification plus 2 hours"

Module mod_expires
http://httpd.apache.org/docs/mod/mod_expires.html

but it's still ultimately up to the visitor's settings whether your
suggestions are ignored or not.
 
e n | c k m a

headers. What you do depends on what your situation is. One way is the
established protocol and is more dependable; the other is rather widely
supported but not the proper way to do it,

I assume using .htaccess for this is considered more reliable?
 
KLB

It is my understanding that the majority of browsers in use today are
able to handle JavaScript -- better than 90%, in fact. This would seem
to make the aforementioned JS code a viable option to obtain the feature
desired.
 
Mark Parnell

It is my understanding that the majority of browsers in use today are
able to handle JavaScript

Yes, most of the browsers around _can_ handle Javascript, but many
people surf with Javascript disabled.
better than 90%, in fact.

There are no reliable figures, but most estimates say around 15% of
people have Javascript disabled/unavailable. Even assuming that it is
only 10% (i.e. 90% have Javascript), that's a fairly significant number.
But it's your choice - if you want to make your site impossible to use
for 1 in 10 potential customers, go ahead.

BTW: Please quote the relevant parts of the post you are replying to.
Thanks.
 
rosco

brucie said:
very simple and you don't annoy the crap out of people who are forced to
wait for the pages/images/etc to download *each and every time they view
them*. it's very bloody annoying, it wastes people's time, money and
bandwidth.

if for some reason you still insist on trying to be a control freak, at
least cache for a few hours so people can view your goodies without the
wait; then if they come back (doubtful) they have to download them
again.

. . . in my case, I would use such code only on my home page, which has no
images, no gifs, no JS currently, and just minimal text. The images, which
are the only thing anyone might want to revisit, are all on secondary pages,
and those I would not try to inhibit caching. The home page will serve,
with its minimal text, to indicate new links to new images, and would
therefore be worthless as a cached URL. And with just minimal text, there
would be very little difference in the time it takes to freshly download a
current version compared to a cached version -- unless, of course, a
paranoid clueless ghit is operating from a 56k connection and bungs up their
browser with bloated anti-virals and layers of porn filters.

And just what kind of pansy wanker calls himself 'brucie' anyway.
 
brucie

rosco said:
. . . in my case, I would use such code only on my home page, which has no
images, no gifs, no JS currently, and just minimal text.

you're still making them download the page again when you don't have to.
And with just minimal text, there would be very little difference in
the time it takes to freshly download a current version compared to a
cached version

there is a noticeable difference even if the browser is just checking
for a 304. i don't understand your insistence on not caching when it's
neither needed nor the best solution.
And just what kind of pansy wanker calls himself 'brucie' anyway.

it must be skool holidays.
 
Toby A Inkster

Mark said:
if you want to make your site impossible to use
for 1 in 10 potential customers, go ahead.

My favourite way to make my site impossible to use for 1 in 10 potential
customers is to power down the server on New Year's Day and leave it
switched off until early February.

That's a much easier way to alienate customers than having to go fiddling
with the DOM.
 
Whitecrest

There are no reliable figures, but most estimates say around 15% of
people have Javascript disabled/unavailable. Even assuming that it is
only 10% (i.e. 90% have Javascript), that's a fairly significant number.
But it's your choice - if you want to make your site impossible to use
for 1 in 10 potential customers, go ahead.

In addition to the percentage that has Javascript turned off, you also
have to look at the dynamics of the people that have it turned off.
Depending on the content of your site, it can make the fact that these
people have it off completely insignificant, OR it can make it much more
significant than 15%.
 
