Technical SEO is an aspect of on-page SEO and refers to the process of optimising the technical side of a website to enhance crawlability, indexing and ranking in SERPs.
Robots.txt: A text file used to give directives to search engine bots and spiders on how to crawl and index a website.
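For illustration, a minimal robots.txt might look like the following sketch (the paths and sitemap URL are hypothetical examples, not recommendations for any particular site):

```text
# Applies to all crawlers
User-agent: *
# Block crawling of the admin area
Disallow: /admin/
# Point crawlers at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

The file lives at the root of the domain (e.g. example.com/robots.txt), which is where crawlers look for it.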
Meta robots directives: Directives and commands given to search engine robots to control the index coverage of a website.
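These directives are typically set either in a page's head via a meta robots tag or in an X-Robots-Tag HTTP response header. A quick sketch of both (noindex and nofollow are the standard directive names):

```html
<!-- In the page's <head>: tell robots not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```

The equivalent HTTP response header is `X-Robots-Tag: noindex, nofollow`, which is useful for non-HTML resources such as PDFs.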
404 error: An error message returned when a user or web browser attempts to visit a page but the server could not return any content, e.g. because the page no longer exists.
While ensuring your site has relevant and useful content should be an integral part of any SEO strategy, sometimes it’s easy to forget about the finer points of technical SEO.
Fetch and Render, now known as “Inspect any URL”, is a free Google SEO tool that you can use to streamline your SEO efforts and discover glitches that may be holding your site back.
With so many details to consider, technical SEO can seem overwhelmingly complex. It doesn’t have to be though.
At its core, technical SEO is all about ensuring your web pages are crawlable by Google search bots so that they appear when users search for them and rank well.
Find out how to use Fetch and Render and why it’s so important for boosting your pages’ rankings.
The “Inspect any URL” tool is located within Google Search Console and allows you to check how Google crawls a web page as well as how it will appear on both desktop and mobile devices.
Since the tool was first created, Google has made some upgrades to Search Console, and the platform now looks quite different.
Instead of using the terms Fetch and Render, the tool is now called ‘Inspect any URL’.
The tool will help you build a clear picture of how Google views your site. You can:

- Check whether a page has been crawled and indexed
- View a page’s code as Google sees it
- Test whether Google considers a page mobile-friendly
- Request indexing for a page
An essential part of optimising your technical SEO, it’s a highly useful tool for pinpointing the finer technical details that could be holding your site back from performing well in Google rankings.
While Search Console’s “Inspect any URL” tool is fairly straightforward once you’ve got your head around it, it’s still important to know how to use it to prevent any confusion further down the line.
1. Log in to Google Search Console and select URL Inspection in the left-hand navigation bar. Paste the URL you want to inspect into the search bar.
2. Check whether Google could fetch your URL, when the last crawl took place, and whether the page is indexed.
3. View the crawled page as Google sees the code. If your page doesn’t appear on Google in the way you want, it’s likely the answer lies in the code that the search bots crawl.
4. Check if Google considers your page to be mobile-friendly. Google now uses mobile-first indexing with priority given to pages that are easy to use from a mobile device. If your page isn’t mobile-friendly, it’s time to make some improvements so Google starts rewarding it with rankings.
5. Request indexing if you find that your page doesn’t show up in Google search results.
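For sites with many pages, the same inspection data is also available programmatically through Google's URL Inspection API. A minimal sketch using only the standard library (the endpoint and field names follow Google's published API, but the URLs are placeholders and a valid OAuth 2.0 token for a verified property is assumed):

```python
import json
import urllib.request

# Google's URL Inspection API endpoint (v1)
API_ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def build_inspection_request(page_url, property_url):
    """Build the JSON body for a URL Inspection API call.

    page_url is the page to inspect; property_url is your verified
    Search Console property. Both values here are placeholders.
    """
    return {
        "inspectionUrl": page_url,
        "siteUrl": property_url,
    }

def inspect_url(page_url, property_url, oauth_token):
    """POST the inspection request; requires a valid OAuth 2.0 token."""
    body = json.dumps(build_inspection_request(page_url, property_url)).encode()
    request = urllib.request.Request(
        API_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {oauth_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

The response mirrors what the Search Console interface shows, including index coverage and mobile-usability results, so it can be folded into a regular auditing script.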
Now you’ve gone through the process, you’re probably wondering what the different statuses mean and how you can use them to make active improvements to your site.
Complete: This is the ideal status that you want to obtain. Complete means that Google was able to connect with your site and any referenced resources.
Partial: Having a partial status means Google was able to connect with and fetch your site but the robots.txt file blocked a resource referenced by the page. By clicking on the status, the tool will show you which resource was blocked and how severe the blockage is. This is key information for figuring out which technical glitches need fixing.
Not Authorized: This is usually caused by a 403 error or any other type of restricted access.
Redirected: The tool only checks the exact URL requested. If the web page includes a redirect to another page, it will receive a redirect status.
Not Found: This status normally comes from a 404 error, but it can also appear when a site is hacked and Google doesn’t want to index the page.
Unreachable: This status is usually caused by a timeout issue.
Unreachable robots.txt: Google is unable to reach the robots.txt file on the host; you probably need to test and update the file.
DNS Not Found: Google couldn’t resolve your domain name. This could be down to a typo in the URL or site downtime.
Temporarily Unreachable: This status is usually down to a timeout error or too many fetch requests.
Error: This is the least common status. It is the result of an unknown error. If you keep having problems with an error status appearing, it’s best to post on the Google help forum.
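As a rough mental model, several of these statuses correspond to standard HTTP response codes. A simplified sketch of that mapping (this is an illustration for auditing your own server responses, not Google's actual logic):

```python
def classify_fetch_status(http_status):
    """Map an HTTP status code to the rough fetch status described above.

    A deliberately simplified illustration: the real tool also accounts
    for robots.txt blocks, timeouts and DNS failures, which have no
    single HTTP code.
    """
    if http_status == 200:
        return "Complete"
    if http_status == 403:
        return "Not Authorized"
    if http_status == 404:
        return "Not Found"
    if 300 <= http_status < 400:
        return "Redirected"
    if http_status == 429:  # too many requests, akin to too many fetches
        return "Temporarily Unreachable"
    return "Error"
```

Running your own server logs through a check like this can flag pages likely to come back with a problem status before you inspect them one by one.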
There are some specific situations when you may not want Google to crawl your web pages and rank them in the SERPs. This is particularly true if your site has a member’s area or requires a subscription to access specific content.
In these cases, you can use the tool to confirm that these pages aren’t being indexed and served to those just clicking on search results.
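To keep a members-only page out of search results, a noindex directive is the reliable signal; a robots.txt Disallow on its own only blocks crawling, and a disallowed URL can still be indexed if other pages link to it. A sketch of both (the /members/ path is a hypothetical example):

```text
<!-- On each members-only page: -->
<meta name="robots" content="noindex">

# Optionally, in robots.txt, to stop crawlers fetching the area at all:
User-agent: *
Disallow: /members/
```

Content behind a genuine login wall is also naturally protected, since search bots can’t authenticate.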
To ensure your pages can rank well, they first need to be crawlable by Google. It’s best to check your pages can be crawled by Google search bots every so often. If you make any changes or updates to your site, you should run these pages through Inspect any URL to verify that Google hasn’t detected any issues.
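When auditing pages after a site update, a quick script can catch an accidental noindex before Google does. A minimal sketch using only the standard library (the parsing is deliberately simple and checks the meta robots tag only, not HTTP headers or robots.txt):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name", "").lower() == "robots":
            self.directives.append(attributes.get("content", ""))

def is_indexable(html):
    """Return False if the page carries a noindex robots directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return not any("noindex" in d.lower() for d in finder.directives)
```

Feeding each updated page’s HTML through a check like this, then running the same URLs through Inspect any URL, gives you two views of the same question: can Google crawl and index this page?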
The more crawlable your web pages are and the easier you make life for Google, the more Google will reward your content.