Google caches pages, although some sites (like the Washington Post) charge for access to those pages after a certain time has passed since their original publication. How does Google work around that? If they make the page available for free, isn’t that a violation of copyright law? If they are selective, how could they possibly figure out which sites not to cache, given the volume they have to handle?
<meta name="robots" content="noarchive">
That’s the magic line that webmasters can insert into their pages to stop Google from caching them. (The documented value is “noarchive”, by the way, not “nocache”.) It’s common knowledge among web types (I should hope), so if a site forgets to put the tag on its subscriber-only pages, then I’d say that’s their lookout…
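To make that concrete, here’s a minimal sketch of where the tag goes; the title and body text are invented placeholders, and the tag has to appear in the <head> of every page you want kept out of the cache:

[code]
<!DOCTYPE html>
<html>
<head>
  <title>Subscriber-only article</title>
  <!-- "noarchive" tells Google (and other engines that honor the
       robots meta tag) not to serve a cached copy of this page.
       It does NOT stop indexing; add "noindex" as well, e.g.
       content="noarchive, noindex", to keep the page out of
       search results entirely. -->
  <meta name="robots" content="noarchive">
</head>
<body>
  <p>Paywalled story text here.</p>
</body>
</html>
[/code]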
Confirmation from Google’s own site:
[quote]
The “Cached” link will be missing for sites that have not been indexed, as well as for sites whose owners have requested we not cache their content.
[/quote]
Of course, another thing that’s common knowledge among web types is that the slash goes on the closing tag, not the opening one, which I managed to get backwards the first time around. :smack: