At 10:25 AM 12/6/99 +0100, Lars Marius Garshol wrote:

> First thought: this is fine for very simple uses, but for more complex
> uses something along the lines of the robots.txt file would be very
> nice. How about a variant PI that can point to a robots.rdf resource?

Two reasons, one based on keeping it very simple for authors, and one on
keeping it simple for robot implementors.

In our experience, the simple form covers almost all needs. We have 1000+
customers, and only three or four of them use our selective indexing
support. So I think of the robots meta tag as a proven solution that
doesn't need major improvement.

Secondly, fetching two or more entities for one document makes the robot
code much more complex. If the robots.rdf file gets a 404, what happens?
What about a 401 or a timeout? The robot may need separate last-modified
dates and revisit times for each entity. And after it is implemented and
tested, how do you explain all that to customers who just want search
results?

wunder
--
Walter R. Underwood
Senior Staff Engineer
Infoseek Software
GO Network, part of The Walt Disney Company
wunder@i...
http://software.infoseek.com/cce/ (my product)
http://www.best.com/~wunder/
1-408-543-6946

xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev@i...
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
and on CD-ROM/ISBN 981-02-3594-1
To unsubscribe, mailto:majordomo@i... the following message: unsubscribe xml-dev
To subscribe to the digests, mailto:majordomo@i... the following message: subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa@i...)
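[Archive note: the complexity argument above — that a separate robots.rdf fetch forces the robot to decide what a 404, a 401, or a timeout means, whereas a robots meta tag such as `<meta name="robots" content="noindex">` travels inside the document the robot already fetched — can be sketched as a decision table. This is a hypothetical illustration; the function name `policy_decision` and the specific fallback choices are invented, not part of any spec or of Infoseek's implementation.]

```python
def policy_decision(status):
    """Map the outcome of a hypothetical robots.rdf fetch to an
    indexing decision. `status` is an HTTP status code, or the
    string "timeout" for a network timeout."""
    if status == 200:
        return "apply-policy"    # parse the policy file and obey it
    if status == 404:
        return "index-normally"  # no policy file: assume defaults
    if status == 401:
        return "skip-document"   # can't read the policy: be conservative
    if status == "timeout":
        return "retry-later"     # needs its own revisit schedule
    return "skip-document"       # any other error: be conservative

# Every branch above is a choice the robot author must make, document,
# and explain to customers — a choice the meta tag never forces, since
# the tag arrives inside the one entity the robot already has.
```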




