Robots.txt is a plain-text file, placed at the root of a website, that contains instructions on how to crawl it. The standard behind it is known as the Robots Exclusion Protocol, and sites use it to tell bots which parts of the site they may crawl. You can also specify the areas you don't want these crawlers to process, such as sections that contain duplicate content or are still under development. Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access-control mechanism.
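For illustration, here is a minimal robots.txt sketch; the paths and sitemap URL are hypothetical placeholders, and the file is assumed to live at the site root (e.g. https://example.com/robots.txt):

```
# Hypothetical robots.txt served from https://example.com/robots.txt
User-agent: *          # the rules below apply to all crawlers
Disallow: /drafts/     # placeholder path for a section still under development
Disallow: /print/      # placeholder path for duplicate printer-friendly pages
Sitemap: https://example.com/sitemap.xml   # optional pointer to the sitemap
```

Each `User-agent` line starts a group of rules for the named crawler (`*` matches all of them), and each `Disallow` line lists a path prefix that crawlers in that group should skip; everything not disallowed may be crawled.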