
What are Crawl Errors in Google Webmaster Tools (GWT)?

Crawlers are the Google spiders that keep revisiting your website's pages and links at regular intervals, indexing them for search and matching their content against the most relevant ads.

When the crawler finds a link that does not work, that link is listed in the Crawl Errors report. These errors are not permanent, and the crawler keeps re-checking the results. But if your website has many non-working or broken links, it will score very poorly in the crawl report.

Google Crawlers and Bots

The report itself will not affect your account or content, but broken links drastically reduce your site's rating, since users who follow them land on a dead page.

Robots.txt
This file contains the 'don't' list: the content to be excluded from the crawler's reach. Such content may not comply with Google's policies and is therefore not suitable for indexing.
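As a rough sketch, a minimal robots.txt placed at the site root (the paths below are placeholders, not recommendations) could look like this:

```
# Applies to all crawlers
User-agent: *
# Keep these sections out of the crawler's reach
Disallow: /private/
Disallow: /tmp/
```

Each `Disallow` line tells compliant crawlers, including Googlebot, to skip that path.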

As the author of a website, you should be keen on fixing broken links, as they reflect badly on the site and harm its overall reputation.

At worst, as soon as a user clicks a bad link, the chance of them closing the site is high, which is a loss for the website. Poor web design and flaws may also cause crawl errors, and these too can be rectified in simple ways.

In Google Webmaster Tools, you can verify your site by uploading a verification file. Blogger users instead add a unique meta verification tag. You will also need to create an XML sitemap and submit its URL in GWT.
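For illustration, the meta verification tag goes inside the page's head section; the token value below is a placeholder that GWT generates for your site:

```html
<head>
  <!-- Google site verification (the content token is a placeholder) -->
  <meta name="google-site-verification" content="YOUR_TOKEN_HERE" />
</head>
```

Once the tag is live, GWT can confirm you own the site, and you can then submit your sitemap URL (e.g. a `sitemap.xml` at your site root).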

How to Modify SEO for Content
Just remember: if you link to any external page from your blog, the destination page will be crawled and indexed.

A 'nofollow' tag tells search engines not to follow a particular page or link. It can be added easily with a simple meta tag or link attribute.

There are several ways to control crawling and indexing:
If you don't want a web page to be crawled, you can block it in 'robots.txt'. This keeps crawlers out, although the URL can still appear in the index if other pages link to it.

Using a 'noindex' meta tag in the web page header, the page will be excluded from indexing.

A 'noindex, nofollow' meta tag tells Google not to show the page in search results and not to follow the links on it, while 'noindex, follow' excludes the page from the index but still lets Google follow its links.
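These directives go in the page's head section; a sketch of the common variants:

```html
<!-- Don't index this page and don't follow its links -->
<meta name="robots" content="noindex, nofollow" />

<!-- Don't index this page, but do follow its links -->
<meta name="robots" content="noindex, follow" />

<!-- Index this page, but don't follow its links -->
<meta name="robots" content="index, nofollow" />
```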

The 'rel=nofollow' attribute says, "Don't pass PageRank through this link or endorse this page to my viewers." E.g., if your website 'A' links to an external page 'B' with a rel=nofollow attribute on the link, that link will not pass PageRank to 'B'.
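In markup, the example above is a single attribute on the link from site 'A' to page 'B' (the URL below is a placeholder):

```html
<!-- Link to external page 'B' without passing PageRank -->
<a href="https://example.com/page-b" rel="nofollow">Page B</a>
```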

Different Types of Crawl Errors
Not found, URLs not followed, URLs restricted by robots.txt, URLs timed out, HTTP errors, URL unreachable, and soft 404s (pages that report "not found" content but return a success status code).