Crawl Errors in Google Search Console

Crawl errors in Google Search Console are among the most important technical issues an SEO & Digital Marketing Consultant needs to monitor and fix. When Google’s crawler (Googlebot) cannot properly access or understand your pages, it can limit how well your site is indexed and how often it appears in search results. Understanding what crawl errors are, how they appear in Google Search Console, and how to fix them is critical for any website owner, including professional service businesses in South Africa.

Below is a detailed, SEO‑optimised guide to crawl errors in Google Search Console, based entirely on verifiable external sources, primarily Google’s own documentation.


What are crawl errors in Google Search Console?

Google defines crawling as the process where automated software called Googlebot discovers pages on the web by following links and reading sitemaps. These pages are then processed and, where appropriate, added to the Google index. According to Google’s own documentation, crawling and indexing are foundational steps before a page can appear in search results, and crawl issues can prevent pages from being properly indexed (Google Search Central help on how search works).

Historically, Google Search Console had a dedicated Crawl Errors report. In 2018, Google announced that the older Crawl Errors report would be replaced with more focused reporting in the Index Coverage and URL Inspection tools. The goal was to show webmasters clearer information about what prevents URLs from being indexed, rather than listing all errors together (Google Webmaster Central Blog update on Search Console reports).

Today, what most site owners call “crawl errors” are surfaced in Google Search Console as:

  • Indexing issues in the Pages (Indexing) report.
  • URL‑specific crawl and index information in the URL Inspection tool.
  • Sitemap‑related errors in the Sitemaps report.
  • Security and manual actions under their own sections when relevant.

Common types of crawl‑related issues reported by Google Search Console

Although the interface has changed, the typical error types are well documented by Google.

1. 404 “Not found” errors

A 404 error means the page does not exist at the URL Google attempted to crawl. Google explains that 404s are normal for any website and do not harm your site overall; however, if important pages return 404, they cannot rank or drive traffic (Google Search Central help on 404 errors).

404‑related statuses often appear in the Pages (Indexing) report as:

  • “Not found (404)”
  • “Soft 404” (when the server returns 200 OK but the content looks like an error page)

Google recommends:

  • Allowing natural 404s for genuinely removed content.
  • Implementing redirects (301) when there is a clear replacement URL.
  • Ensuring internal links and sitemaps do not point to non‑existent URLs (Google’s guidance on correcting 404s).
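
To make the last point concrete, here is a minimal Python sketch (assuming the third-party requests library and a hypothetical list of URLs taken from your sitemap or a crawl export) that flags URLs returning 404 so you can decide whether to redirect them or update the links pointing at them.

    import requests

    # Hypothetical URLs exported from your sitemap or a site crawl.
    urls_to_check = [
        "https://www.example.com/services/",
        "https://www.example.com/old-page/",
    ]

    for url in urls_to_check:
        try:
            # HEAD keeps the check lightweight; some servers only respond correctly to GET.
            response = requests.head(url, allow_redirects=True, timeout=10)
        except requests.RequestException as exc:
            print(f"{url} -> request failed: {exc}")
            continue
        if response.status_code == 404:
            print(f"{url} -> 404: redirect (301) to a replacement or remove links to it")
        else:
            print(f"{url} -> {response.status_code}")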

2. Server (5xx) errors

Server errors (5xx) indicate that the server failed to respond correctly to Googlebot. Google notes that temporary server errors can cause pages to be crawled less efficiently and might prevent them from being indexed if the issue persists (Google Search Central on server errors).

Typical causes include:

  • Hosting outages or timeouts.
  • Overloaded servers.
  • Misconfigured firewall or security rules blocking Googlebot.

Google recommends:

  • Ensuring your site can handle Googlebot’s crawl rate.
  • Using server logs and hosting tools to diagnose repeated 5xx responses.
  • Whitelisting Googlebot if blocked inadvertently (Google’s guidance on server connectivity).
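
As an example of what “using server logs to diagnose repeated 5xx responses” can look like, the Python sketch below counts 5xx responses served to requests identifying as Googlebot; it assumes a combined-format access log at a hypothetical path and uses only the standard library.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; adjust for your server

    # Matches the request, status code and user agent in a combined-format log line.
    line_pattern = re.compile(
        r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3}) .* "(?P<agent>[^"]*)"$'
    )

    errors = Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = line_pattern.search(line)
            if not match:
                continue
            if match.group("status").startswith("5") and "Googlebot" in match.group("agent"):
                errors[(match.group("path"), match.group("status"))] += 1

    for (path, status), count in errors.most_common(20):
        print(f"{count:>5}  {status}  {path}")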

3. Redirect errors

When redirects are misconfigured, Google Search Console may report issues such as redirect loops, chains, or incorrect targets through the indexing reports and URL Inspection tool. Google’s documentation on redirects emphasises that excessive or broken redirects can waste crawl budget and may stop Google from successfully reaching the final destination URL.
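
To illustrate what a chain or loop looks like from a crawler’s point of view, the Python sketch below (requests library, hypothetical starting URL) follows redirects one hop at a time and reports when a chain grows too long or loops back on itself.

    import requests

    def trace_redirects(url, max_hops=10):
        """Follow redirects one hop at a time, recording each status and URL."""
        seen = set()
        hops = []
        while len(hops) < max_hops:
            if url in seen:
                hops.append(f"LOOP back to {url}")
                break
            seen.add(url)
            response = requests.get(url, allow_redirects=False, timeout=10)
            hops.append(f"{response.status_code} {url}")
            location = response.headers.get("Location")
            if response.status_code in (301, 302, 303, 307, 308) and location:
                # Resolve relative Location headers against the current URL.
                url = requests.compat.urljoin(url, location)
            else:
                break
        return hops

    for hop in trace_redirects("https://www.example.com/old-page"):  # hypothetical URL
        print(hop)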

4. Blocked by robots.txt

The robots.txt file tells crawlers which paths are disallowed. Google’s documentation clarifies that if a URL is blocked by robots.txt, Google generally will not crawl it, but the URL could still be indexed if it is discovered via external links; in that case only the URL, not its content, is known to Google (Google Search Central robots.txt specification).

In Google Search Console, URLs blocked by robots.txt can appear as:

  • “Blocked by robots.txt” in indexing‑related reporting.
  • Blocked status in URL Inspection.

Google recommends:

  • Checking whether the block is intentional (for example, admin areas or faceted search URLs).
  • Removing or adjusting the disallow rule if the page should be crawled and indexed.
  • Using a noindex directive rather than robots.txt when the goal is to keep an accessible page out of the index.
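
Before relying solely on the Search Console reports, you can double-check how a robots.txt rule affects Googlebot with Python’s standard-library robotparser; the sketch below uses a hypothetical domain and page URL.

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")  # hypothetical site
    parser.read()

    url = "https://www.example.com/services/seo-consulting/"  # hypothetical page
    if parser.can_fetch("Googlebot", url):
        print(f"Googlebot is allowed to crawl {url}")
    else:
        print(f"{url} is disallowed for Googlebot by robots.txt")
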
5. Access denied / 403 errors

A 403 Forbidden response, or another “Access denied” response, occurs when the server refuses Googlebot access. Google explains that such responses can be caused by:

  • Pages that require users to log in or are otherwise password protected.
  • Hosting, CMS or file-permission settings that deny access to anonymous visitors.
  • Firewalls, security plugins or CDN rules that block Googlebot’s user agent or IP addresses.

When critical pages are blocked this way, Google cannot crawl and index them.
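
One way to spot rules that block Googlebot specifically is to compare the response to a normal request with the response to a request identifying as Googlebot; the Python sketch below (requests library, hypothetical URL) does that, bearing in mind that systems which verify Googlebot by IP address will still treat this test request as a normal visitor.

    import requests

    URL = "https://www.example.com/services/"  # hypothetical page to test
    GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                    "+http://www.google.com/bot.html)")

    default_status = requests.get(URL, timeout=10).status_code
    googlebot_status = requests.get(
        URL, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10
    ).status_code

    print(f"Default user agent:   {default_status}")
    print(f"Googlebot user agent: {googlebot_status}")
    if default_status == 200 and googlebot_status in (401, 403):
        print("The server appears to deny access to requests identifying as Googlebot.")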

6. DNS errors

DNS (Domain Name System) errors arise when Googlebot cannot resolve your domain. According to Google’s support documentation, DNS issues can prevent the crawler from reaching your server at all, leading to large‑scale crawl failures (Google Search Central help on DNS issues).

Recommended actions include:

  • Checking DNS configuration with your domain registrar or hosting provider.
  • Monitoring for intermittent failures and fixing misconfigured name servers.
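
As a basic sanity check that the domain resolves at all, Python’s standard library is enough; the sketch below looks up a hypothetical hostname and prints the addresses returned (intermittent failures and misconfigured name servers still need to be investigated with your DNS provider).

    import socket

    HOSTNAME = "www.example.com"  # hypothetical domain

    try:
        results = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        print(f"DNS lookup for {HOSTNAME} failed: {exc}")
    else:
        addresses = sorted({result[4][0] for result in results})
        print(f"{HOSTNAME} resolves to: {', '.join(addresses)}")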

Where to see crawl‑related problems in Google Search Console

Although the legacy Crawl Errors report has been retired, Google describes several tools within Google Search Console that now surface crawl and indexing problems.

Pages (Indexing) report

Google’s Pages report (previously called “Coverage”) shows the indexing state of URLs Google has discovered for your property. According to Google’s official overview, this report contains:

  • A summary of which discovered URLs are indexed and which are not.
  • The reasons pages are not indexed, such as server errors, 404s, redirects, robots.txt blocks or noindex directives.
  • Lists of example URLs affected by each reason, so you can investigate and fix them.

This is where many of the modern “crawl error” signals appear.

URL Inspection tool

The URL Inspection tool lets you inspect a single URL. Google explains that this tool shows:

  • Whether the URL is in the Google index and, if not, why.
  • The last crawl date and the result of that crawl.
  • Whether crawling and indexing are allowed, and which canonical URL Google has selected.
  • Any enhancements detected on the page, such as structured data.

You can also request indexing after fixing problems, signalling to Google that a URL is ready to be crawled again.

Sitemaps report

Submitting a sitemap helps Google discover URLs more efficiently. The Sitemaps report in Google Search Console shows whether Google was able to fetch and process your sitemap successfully and may highlight errors like malformed XML or unreachable sitemap URLs (Google Search Central guide on sitemaps).

Well‑structured sitemaps reduce the chance that important URLs are missed or left un‑crawled.
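
For reference, a minimal valid sitemap is simple to produce; the Python sketch below writes one for a few hypothetical canonical URLs using only the standard library (most CMSs and SEO plugins generate this file automatically).

    import xml.etree.ElementTree as ET

    # Hypothetical canonical URLs you want Google to discover.
    urls = [
        "https://www.example.com/",
        "https://www.example.com/services/",
        "https://www.example.com/contact/",
    ]

    SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
    print(f"Wrote sitemap.xml with {len(urls)} URLs")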


How crawl issues can impact SEO performance

From Google’s own guidance on crawling and indexing, if Googlebot cannot access or correctly interpret your pages, they may:

  • Not be indexed at all.
  • Be indexed with outdated or incomplete information.
  • Appear less often, or not at all, in search results.

While some errors like 404s for genuinely removed pages are normal, unresolved systematic crawl problems can lead to:

  • Reduced organic traffic because key URLs are missing from the index.
  • Wasted crawl budget, especially on large sites.
  • Poor user experience if broken links result from site navigation issues.

For a service‑based website, this can mean critical pages (such as service descriptions, location pages or contact pages) are not visible in search results when potential customers are searching.


Best‑practice steps to find and fix crawl errors in Google Search Console

Based on Google’s documentation and recommendations, a methodical approach to handling crawl errors in Google Search Console typically includes these steps:

1. Review the Pages (Indexing) report regularly

Google recommends using the Pages report as the primary view of indexing issues, categorised by status and reason (Index coverage / Pages report guidance):

  • Filter for pages that are not indexed (the statuses the older Coverage report grouped under Error and Excluded).
  • Prioritise:
    • Server errors (5xx).
    • DNS and connectivity issues.
    • Unexpected 404s for important URLs.
    • Robots.txt blocks on pages that should be visible.

2. Use URL Inspection for critical URLs

For key pages (home, services, contact, important blog posts), Google suggests using the URL Inspection tool to:

  • Check the last crawl date and status.
  • See how Googlebot renders the page.
  • Confirm that the canonical URL and index status match your intentions (URL Inspection tool overview).

If an issue is found and then fixed, you can request indexing from the same tool to prompt Google to recrawl the URL.
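
If you need to check many key URLs, Search Console also exposes a URL Inspection API; the sketch below is a rough outline using the google-api-python-client and google-auth libraries, and assumes OAuth credentials with access to the property are already stored in a file (the file path, property URL and page URL are hypothetical).

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Hypothetical path to previously obtained OAuth tokens for the property owner.
    credentials = Credentials.from_authorized_user_file(
        "credentials.json",
        scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
    )

    service = build("searchconsole", "v1", credentials=credentials)
    response = service.urlInspection().index().inspect(
        body={
            "inspectionUrl": "https://www.example.com/services/",  # hypothetical page
            "siteUrl": "https://www.example.com/",                 # hypothetical property
        }
    ).execute()

    result = response.get("inspectionResult", {}).get("indexStatusResult", {})
    for field in ("coverageState", "lastCrawlTime", "robotsTxtState"):
        print(f"{field}: {result.get(field)}")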

3. Fix internal causes of 404s and soft 404s

Using Google’s advice on handling 404s (404 error guidance):

  • Update or remove internal links pointing to non‑existent pages.
  • Fix or regenerate sitemaps so they don’t reference deleted URLs.
  • When content has moved, implement a 301 redirect to the most relevant new URL (redirects documentation).
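
Once redirects are in place, you can verify them in bulk; the Python sketch below (requests library, hypothetical URL pairs) checks that each removed URL answers with a 301 pointing at its intended replacement.

    import requests

    # Hypothetical mapping of removed URLs to their replacements.
    redirect_map = {
        "https://www.example.com/old-services/": "https://www.example.com/services/",
        "https://www.example.com/old-contact/": "https://www.example.com/contact/",
    }

    for old_url, expected_target in redirect_map.items():
        response = requests.get(old_url, allow_redirects=False, timeout=10)
        location = response.headers.get("Location", "")
        resolved = requests.compat.urljoin(old_url, location) if location else ""
        if response.status_code == 301 and resolved == expected_target:
            print(f"OK   {old_url} -> {expected_target}")
        else:
            print(f"FIX  {old_url}: got {response.status_code} -> {resolved or 'no Location header'}")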

4. Resolve server and DNS issues with your host

According to Google’s help on server connectivity and DNS (server errors help, DNS issues help):

  • Work with your hosting provider to identify repeated 5xx errors.
  • Check and stabilise DNS configuration.
  • Ensure that security systems or firewalls are not blocking Googlebot’s user‑agents or IP ranges unintentionally.

5. Review robots.txt and meta robots settings

Google recommends:

  • Using robots.txt to control crawling of low‑value or resource‑heavy sections (e.g., faceted search URLs), not to hide content from the index (robots.txt introduction).
  • Using noindex via meta tags or HTTP headers when you want content to be accessible but not indexed (robots meta tag guidance).

Check that important pages are not accidentally blocked, and update rules if necessary.
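
To confirm that an important page is not accidentally excluded, it is worth checking both the robots meta tag and the X-Robots-Tag HTTP header for a noindex directive; the sketch below does a rough check for a hypothetical URL using the requests library (an HTML parser would be more robust than the string match used here).

    import requests

    URL = "https://www.example.com/services/"  # hypothetical page that should be indexable

    response = requests.get(URL, timeout=10)
    header_directives = response.headers.get("X-Robots-Tag", "").lower()
    html = response.text.lower()

    # Crude check: look for "noindex" shortly after a robots meta tag.
    meta_noindex = (
        '<meta name="robots"' in html
        and "noindex" in html.split('<meta name="robots"', 1)[1][:200]
    )

    if "noindex" in header_directives or meta_noindex:
        print(f"{URL} carries a noindex directive and will be kept out of the index")
    else:
        print(f"{URL} does not appear to send a noindex directive")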

6. Maintain and submit XML sitemaps

Google’s sitemap best practices include:

  • Listing canonical URLs.
  • Updating sitemaps when new content is added or removed.
  • Keeping them free of 404 or redirected URLs as much as possible (sitemaps overview).

After updating your sitemap, resubmit it through the Sitemaps report to help Google discover your clean URL set more efficiently.
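
To keep the sitemap free of 404s and redirected URLs, you can audit it periodically; the sketch below (requests plus the standard-library XML parser, hypothetical sitemap URL) fetches the sitemap and flags any entry that does not answer with a plain 200.

    import requests
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical sitemap
    SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)

    for loc in root.findall(".//sm:loc", SITEMAP_NS):
        url = (loc.text or "").strip()
        response = requests.head(url, allow_redirects=False, timeout=10)
        if response.status_code != 200:
            print(f"Remove or update: {url} returned {response.status_code}")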

7. Monitor changes and validate fixes

Google Search Console allows you to Validate Fix for certain error types in the Pages report. When you click this, Google re‑checks affected URLs to confirm whether the problem has been resolved (Index coverage / Pages report help).

This creates a feedback loop: identify → fix → validate → confirm status.


Why an SEO & Digital Marketing Consultant should prioritise crawl error management

Google’s own documentation on technical SEO repeatedly highlights that ensuring a site can be crawled and indexed is a prerequisite for search visibility (Google Search Central – SEO Starter Guide). Without a clean crawl profile:

  • Content strategy and link building efforts may not reach their full potential because Google cannot reliably access all targeted pages.
  • Signals like structured data, internal linking improvements, and on‑page optimisation might not be fully processed by Googlebot.
  • For local and professional service sites, some service or location pages might be invisible in search, reducing lead generation opportunities.

By using the tools and guidance provided in Google Search Console, and following the best practices documented by Google Search Central, site owners and consultants can:

  • Quickly identify crawl and index problems.
  • Systematically resolve the most impactful issues.
  • Improve the consistency and reach of their organic visibility.

Managing crawl errors in Google Search Console is ultimately about making it as easy as possible for Googlebot to discover, understand and trust your website. By regularly reviewing the Pages report, using the URL Inspection tool, maintaining clean sitemaps, and following Google’s technical recommendations, you lay a strong foundation for all other SEO and digital marketing work.