A plain-text file, placed at the root of a website, that gives instructions to search engine crawlers—the robots that visit and index web pages for search engines. A robots.txt file tells web crawlers which parts of the site they may include in their index and which they should ignore. Pages with no value in search results, such as form-results pages (the “thank you” pages shown after a user submits a form, for example), can be excluded this way. Note that compliance is voluntary: well-behaved crawlers honor robots.txt, but it is not an access-control mechanism.
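
As an illustration, a minimal robots.txt served from the site root (e.g. https://www.example.com/robots.txt) might look like the following; the paths shown are hypothetical placeholders, not required names:

```
# Apply these rules to all crawlers
User-agent: *

# Keep form-results ("thank you") pages out of search indexes
Disallow: /thank-you/
Disallow: /form-results/

# Everything else remains crawlable by default
```

`User-agent: *` addresses every crawler, and each `Disallow` line names a URL path prefix the crawler should skip; paths not listed remain crawlable.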