The Google blog has a nice ongoing set of tutorials on how to use Robots Exclusion Protocol rules to control how and what search engines index on your site. The first part was [published last month][1] and this afternoon they [posted a sequel][2]. Most of the information in these little tutorials applies to all search engines that follow robots.txt, though a couple of things are specific to Google. And even if you think you already know everything about robots.txt, there still might be a few surprises for you in these tutorials. For instance, I never knew that it was possible to stop Google from displaying the little summary text snippets below the result links. I still can't think of a situation where that would be helpful, but it's good to know should the need arise. Today's post promises at least one more short tutorial detailing common exclusion problems and how to solve them, so stay tuned. Also worth checking out is Google's overall [guide to the Robots Exclusion Protocol][3] as well as the more search-engine-neutral [guidelines at robotstxt.org][4].

[1]: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html "Controlling how search engines access and index your website"
[2]: http://googleblog.blogspot.com/2007/02/robots-exclusion-protocol.html "The Robots Exclusion Protocol"
[3]: http://www.google.com/support/webmasters/bin/topic.py?topic=8843 "How Google crawls my site"
[4]: http://www.robotstxt.org/wc/exclusion.html "Robots Exclusion"
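If you've never written one, robots.txt is just a plain-text file of per-crawler rules served from your site root. Here's a minimal sketch (the `/private/` path is a made-up example):

```
# robots.txt — served from http://example.com/robots.txt
# The * rule applies to all well-behaved crawlers;
# it blocks the (hypothetical) /private/ directory and allows everything else.
User-agent: *
Disallow: /private/
```

The snippet suppression I mentioned isn't a robots.txt rule at all. As far as I can tell from Google's documentation, it's a per-page meta tag aimed at Googlebot:

```html
<!-- In a page's <head>: asks Google not to show a text snippet for this page -->
<meta name="googlebot" content="nosnippet">
```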