In a recent article, I wrote about setting up Google Search Console. It helps you manage your website's coverage, URL indexing, and more.
The robots.txt file contains a few lines of directives that tell search engines which pages on your website to crawl and index. Search engines read your website's robots.txt file first and start indexing URLs accordingly.
In the robots.txt file, we can define which pages to crawl and which not to crawl. You can also block or restrict a page from web crawlers so that it does not appear in search results.
Here is a sample robots.txt file. You can change the robots.txt file as per your requirements.
User-agent: *
Disallow: /wp-admin/
Disallow: /recommended/
Disallow: /tag/
Sitemap: https://scrollbucks.com/post-sitemap.xml
Sitemap: https://scrollbucks.com/page-sitemap.xml
The asterisk after User-agent means the robots.txt file applies to all web robots.
Disallow tells web robots not to visit or crawl the listed pages.
Sitemap tells web robots where all your web pages are listed so that they can find and crawl them.
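You can also write rules for one crawler only or keep a single page out of the index. Here is a minimal sketch: Googlebot is the real name of Google's crawler, but /private-page/ is just a placeholder for whatever page you want to hide.

User-agent: Googlebot
Disallow: /private-page/

User-agent: *
Disallow:

Here, only Google's crawler is told to skip /private-page/, while the empty Disallow under User-agent: * leaves everything open to all other robots.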
Add robots.txt to your BlogSpot blog
Log in to your BlogSpot blog and navigate to Settings -> Search Preferences.
In the Crawlers and indexing section, enable Custom robots.txt and paste the above code. Make sure you have updated all the website details, such as your own domain and sitemap URLs, in the robots.txt file.
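After saving, you can check that your rules behave as expected. Below is a minimal sketch using Python's standard urllib.robotparser; the example.blogspot.com domain and the test URLs are placeholders, so swap in your own blog address.

# Check your live robots.txt rules with Python's standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Placeholder domain: replace with your own blog's address.
rp.set_url("https://example.blogspot.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# A normal post should be crawlable; a path under a Disallow rule should not be.
print(rp.can_fetch("*", "https://example.blogspot.com/2021/03/my-post.html"))
print(rp.can_fetch("*", "https://example.blogspot.com/tag/seo"))

If you pasted the sample above, the first check should print True and the second False, because /tag/ is disallowed.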
You’ll love to read
- How to configure Google Search Console for the website?
- How to add Google Analytics filters to sort the traffic?
- How to add Google Analytics to BlogSpot blog?
- How to check the page rank with Google Analytics?