What is Technical SEO: A Beginner’s Guide

04/01/2019

So, what is Technical SEO?

It’s no big surprise that the title of this article will probably scare away half of its potential readership. There’s just something about the word ‘technical’ that makes people want to scream.

But a good SEO strategy is a perfect blend of science and art. While content is most definitely king, and without good content your website doesn’t stand a chance of ranking well in SERPs, the same is equally true of the other side of our balanced art-and-science equation.

Technical SEO is all of the work that’s done to allow search engine spiders to crawl your website and index your content, and without it, your content marketing and keyword research efforts could all be in vain.

Why is Technical SEO important?

When search engines are trying to work out which websites, or more to the point, which webpages should rank for certain keywords, they employ a system of bots known as web crawlers, or more commonly, spiders. These spiders do exactly what you’re probably picturing: they traverse the web, moving from page to page and taking in the information they come across. This is why we say that search engines ‘crawl’ your website.

The job of technical SEO is to make sure that these spiders find the information you want them to in the easiest way possible, so the search engine that’s employing the spider can successfully index your site. It is then that your web page will rank well in that search engine and be returned as high as possible within the search results.

If the spiders don’t find certain pieces of content on your site, or worse, they end up getting stuck somewhere on your site, then you’re not going to rank for those pieces of quality content because the spiders never saw them.

How to help the spiders out

As with anything to do with SEO, technical SEO can sometimes seem like a bit of a black art with ever-changing requirements, which to some extent is true: Google’s algorithms for deciding what content ranks where are constantly evolving. However, that doesn’t mean that technical SEO has to be complicated.

Over the course of this blog post we'll take a look at some simple things that you can do to ensure that the spiders see the content you want them to:

  1. Page Speed
  2. Mobile Optimisation
  3. Security
  4. Sitemaps and Robots
  5. Structured Data
  6. Fix Content Duplication
  7. Meta Data and Hierarchy

Faster is always better

One of the most well-known ‘ranking signals’ used by Google is page and site speed, and with the advent of Mobile-First Indexing, it’s even more of a factor than it has been before.

Remember, the job of a search engine (and therefore its spiders) is to match up users with the content they’re searching for. Now, it shouldn’t be a huge surprise that users hate slow websites; recent studies have reported that as page load time goes from:

  • 1 second to 3 seconds, the bounce rate increases by 32%
  • 1 second to 5 seconds, the bounce rate increases by 90%
  • 1 second to 6 seconds, the bounce rate increases by 106%
  • 1 second to 10 seconds, the bounce rate increases by 123%

So, search engines will of course take this into account when deciding if the user is going to get to the content they want, and whether the user experience is seamless.

There are a whole host of things to look at when considering website and page load speed, from compressing pages with GZIP and minifying CSS and JavaScript files to ensuring that images are compressed. Google’s PageSpeed Insights tool can help to identify the optimisations that can be carried out and, in the case of compression and minification of assets, can even help you with these operations.
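As an illustration, here’s a minimal sketch of what enabling compression might look like, assuming an nginx web server (the directives will differ on Apache or other servers):

    # Enable GZIP compression for text-based assets (nginx example)
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1024;  # skip very small responses where compression gains little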

Be mobile-friendly

We all do it more and more these days: browse the web from our smartphones. In fact, outside work I rarely use a computer to browse webpages. Search engines are smart to this fact, which is why mobile-friendliness is key.

One of the easiest and most important ways you can make your site mobile-friendly is to make sure that you have a responsive website. Responsive sites adapt themselves to the device they’re being viewed on, and so will display the same content in a user-friendly way.
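By way of illustration, the snippet below sketches the two basic building blocks of a responsive page: a viewport meta tag and a CSS media query (the .sidebar class and the 600px breakpoint are just placeholders):

    <!-- Tell mobile browsers to use the device width rather than a zoomed-out desktop layout -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <style>
      /* Example breakpoint: stack the sidebar below the main content on narrow screens */
      @media (max-width: 600px) {
        .sidebar { float: none; width: 100%; }
      }
    </style>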

Even when armed with a responsive site, you should still pay attention to page speeds when tested on a 3G connection; that’s mobile browsing away from a Wi-Fi connection, which will be slower anyway.

Another way to be mobile-friendly is to consider Accelerated Mobile Pages (AMP). This is a relatively new concept introduced by Google, which essentially provides a super lightweight and fast version of your site specifically for mobile.

Google’s Lighthouse tool can help to identify quality issues in websites, and is a good first port of call for identifying where particular pages might cause issues on mobile and/or on a 3G connection.

Secure your site

Back in 2014, Google announced that it wanted to see HTTPS everywhere. Google believed that every website should be using SSL: a security technology which encrypts the information passed between a website’s backend and the user’s browser.

Since then, Google has been giving ranking preference to sites using SSL over those without it. So, where possible, you should ensure that your site is secure and is using an SSL certificate.
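If you do move to HTTPS, make sure any traffic still arriving over plain HTTP is redirected to the secure version of each page. As a rough sketch, assuming an nginx server and a placeholder domain, that might look like:

    # Redirect all plain HTTP requests to HTTPS (nginx example; example.com is a placeholder)
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }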

Maps, robots, and consoles

There are two files which every website should have. Their purpose is to do nothing but help spiders out.

The first is an XML sitemap. Essentially, this is a roadmap of the pages on your site which you want spiders to interrogate so that they understand your site structure. It also supplies the spider with a few useful pieces of information, such as when each page was last modified, how frequently it’s updated, and what priority it has on your site.
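A minimal sitemap with a single entry looks something like the sketch below (the URL and values are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/blog/what-is-technical-seo</loc>
        <lastmod>2019-04-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>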

The second is the robots.txt file. This file tells the spiders how to crawl the pages of the site by giving them specific instructions on which parts of the website can and can’t be crawled.

This is particularly important if you have sections of your site (say a CMS or an API) which you don’t want spiders to crawl because they add no benefit to the end user coming from an SERP.
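For example, a simple robots.txt might look something like this sketch (the disallowed paths and sitemap URL are placeholders):

    # Applies to all crawlers
    User-agent: *
    # Keep spiders out of areas that add no value to users arriving from a SERP
    Disallow: /cms/
    Disallow: /api/
    # Point crawlers at the sitemap
    Sitemap: https://www.example.com/sitemap.xml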

You can use Google Search Console to submit your website for indexing and, moreover, to test and check your sitemap and robots.txt files.

Structured Data

You’ll no doubt have seen the ‘rich snippets’ in Google which add information such as star ratings or images for particular products. These immediately bring more information to the end user on the SERP, which increases click-through rate (CTR) and, ultimately, your conversion rate.

These ‘rich snippets’ are enabled by something called structured data markup, which is essentially additional code that sits within your pages.

These pieces of code define a schema that spiders can use to pull additional information about the content of the page, be that star ratings and images for a product page, the details of a company for the home page, or a whole host of additional data.
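As a rough sketch, product markup using the schema.org vocabulary in JSON-LD format might look like this (the product name, image URL and rating figures are placeholders):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "image": "https://www.example.com/images/widget.jpg",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "132"
      }
    }
    </script>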

Duplicate content

Back in the old days of the internet (before search engines started to get smart), certain websites would duplicate their content pages on purpose so that they could manipulate search rankings by looking like they had more content about a given topic.

In 2011, with the initial Panda update, Google started to punish websites that it identified as having duplicate content, meaning they got busted with lower rankings rather than higher ones.

To that end, you want to ensure that your website doesn’t have anything that Google might consider to be duplicate content, whether that’s the content itself being the same or very similar, or the titles or metadata being the same or similar.

If you do need to resolve duplicate content issues, you can either remove one of the pages in question or use canonical URL tags to link the pages. Of course, if you do choose to remove a page, don’t forget to put a 301 (permanent) redirect in place to the winning version of the page.
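A canonical tag is just a single line in the duplicate page’s head, pointing search engines at the version you want to rank (the URL here is a placeholder):

    <!-- On the duplicate page, point search engines at the preferred version -->
    <link rel="canonical" href="https://www.example.com/winning-page">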

Meta Data and Content Hierarchy

Much like the structured data that we spoke about earlier, there are several pieces of information that can be added to each webpage which help to describe the page and its purpose and topic to the spider. These elements are known as the metadata of the page.

Certain pieces of this metadata are then used by the search engine to form the result that is displayed in the SERP, which ultimately is what drives users to click through to your webpage.
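The two pieces most people will recognise are the title tag and the meta description, which between them make up the blue link and snippet on the results page (the wording below is purely illustrative):

    <title>What is Technical SEO: A Beginner's Guide | Example Blog</title>
    <meta name="description" content="A beginner-friendly look at the technical side of SEO: crawling, indexing, page speed, sitemaps and more.">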

Similarly, the content on the page can be marked up with header tags to ensure that a clear hierarchy is defined on a page. This allows the user to quickly see what content they’re about to get themselves into and allows spiders to carry out a very similar exercise.
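In practice that means a single h1 for the page’s main topic, with h2 and h3 tags nesting the sub-topics underneath it, along these lines (using this article’s own headings as an example):

    <h1>What is Technical SEO?</h1>
      <h2>Why is Technical SEO important?</h2>
      <h2>How to help the spiders out</h2>
        <h3>Faster is always better</h3>
        <h3>Be mobile-friendly</h3>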

Further down the rabbit hole

Hopefully, this has given you a ‘brief’ indication of what Technical SEO entails, but it’s far from an exhaustive list of everything you can and should be considering for your site; Google considers a huge number of ranking factors, with more being discovered or added all the time.

In fact, it’s probably fair to say that it barely scratches the surface…

Author: Leigh Chilcott, Data Scientist