Before crawling a website, search engine crawlers consult its robots.txt file to learn which URLs they are allowed to request and which they should avoid. This simple text file lets webmasters open specific content to search engines and keep other content off-limits to crawlers. However, the primary purpose of robots.txt is not to prevent search engines from indexing web pages.
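As a minimal sketch, a robots.txt file placed at the root of a site might look like the following (the `/admin/` path, the `BadBot` user agent, and the example.com domain are illustrative, not prescriptions):

```
# Allow all crawlers everywhere except the /admin/ directory
User-agent: *
Disallow: /admin/

# Block one specific (hypothetical) crawler entirely
User-agent: BadBot
Disallow: /

# Point crawlers at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` group applies to the named crawler, and `Disallow`/`Allow` rules are matched against the URL path.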
According to Google,
“A robots.txt file tells search engine crawlers which pages or files the crawler can or can’t request from your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.”
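To see how a well-behaved crawler interprets these rules, Python's standard-library `urllib.robotparser` can be used as a rough sketch (the rules and URLs below are made up for illustration; a real crawler would fetch the live file from `https://example.com/robots.txt`):

```python
from urllib import robotparser

# Hypothetical robots.txt content for demonstration.
rules = """
User-agent: *
Disallow: /private/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler checks each URL before requesting it.
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
```

Note that this check is purely voluntary on the crawler's side, which is why, as Google says above, robots.txt is not a mechanism for keeping a page out of search results.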