
Technical SEO – The Definitive Guide (Part 1)

I started designing WordPress websites back in 2010.  Back then it was just for fun and really just to see if I could create one.  But now, years later I am creating websites professionally, for a living, and I have learnt that just having a pretty website up on the web isn’t enough.

If you want a successful website, where you have real visitors reading your content, engaging with you and buying your stuff, then you really need to know how to optimise your website so that people can actually find it online.

Optimising your website for Search is called Search Engine Optimisation (SEO for short).  There are many aspects of SEO, which I will cover at a later date.  But to kick things off I think it's better to get your head around the technical aspects of SEO first.

With my definitive guide to technical SEO I will help you understand what technical SEO is and what it is used for, and give you actionable steps to help you improve your own website so that you can boost it up in the search rankings.

This is just Part 1, and within Part 1 I am going to be focusing on the following:

  • What is Technical SEO?
  • Google Search Console
  • SEO friendly URLs
  • Preferred domain (www vs no www)
  • Crawling and indexing
  • Robots.txt

What is Technical SEO?

Technical SEO is an important step in the whole SEO Process.  If you have issues with your technical SEO then it is likely that you will find it hard to get your website up in the search results.  To have a successful website you must understand what technical SEO is and how to get it right.

The best thing about technical SEO is that once you have fixed potential issues on your site, you won't have to deal with them again.

This type of search engine optimisation is called 'technical' because it has nothing to do with the actual content on your website.  Instead, the goal of technical SEO is to optimise the techie parts of your site, such as its infrastructure.

SEO is built upon three main pillars: Technical SEO, On-page SEO and Off-page SEO.

On-page SEO looks at the content within your website and how you can make it more relevant to what the user is searching for.  Off-page SEO (also known as link building) is the process of getting mentions (links) from other websites to increase your website's trust during the search ranking process.

There are no clear boundaries between these pillars; they all have to work together for your website to be fully optimised.

Google Search Console

One of the best free tools you can get to help you see how well your website's technical SEO is performing is Google Search Console.  You can use Google's Search Console to:

  • Receive important messages or warnings about your website's health.
  • Find out how many pages are indexed by Google.
  • Find out how many links are pointing to your website (links that Google knows about).
  • Troubleshoot crawling and indexing issues on your website.
  • See detailed information on what keywords your website is associated with and where your website is ranking within the Google search results.

Learning how to take advantage of the data you get from Google Search Console will improve your website's SEO.  So if you don't have a free account, go and get one and verify your website.

SEO Friendly URLs

URL optimisation is one of the easiest SEO tasks to configure, but at the same time it's a necessary step to make sure that your website is SEO friendly.

If you are using WordPress or any other SEO friendly CMS, your URL structure is something you will set up at the beginning and probably never have to deal with again.  However, if you're not sure whether your website is set up correctly, I will go through how to optimise your URLs below.

What is a Friendly URL?

First, let's start with some basic terminology that will help you understand what we want to achieve.

What is a URL?  URL is the acronym for Uniform Resource Locator.  In simple terms, a URL specifies the web address of a page.

Every website on the Internet has a unique URL, which starts with the domain name, e.g. entrepreneursblog.co.uk (the home page of the website).  A page of the website is shown after the domain name, like this: /seo-definative-guide.

Together these make up the unique URL of a page.  So, the URL of a single page has two parts: the first part is the domain name, which is not configurable, and the second part is the page name (which is configurable).  No two pages within the same domain can have the same URL.

So how do you make your URL SEO friendly?  Well, a friendly URL is one that accurately describes the page using keywords from the page's content, and that is easy to read for both search engines and your visitors.

Here is an example of a friendly URL:

https://www.entrepreneursblog.co.uk/technical-seo

Here is an example of a non-friendly URL:

https://www.entrepreneursblog.co.uk/folder/P09349/009

Why do URLs matter for SEO?

#1 – Friendly URLs improve the user experience – SEO is all about making the user experience on your website better, and easy-to-understand URLs give both humans (your visitors) and search engines a good indication of what your page is all about.

A user can tell just by reading a friendly URL such as https://www.entrepreneursblog.co.uk/seo-definative-guide that the page they are about to visit has information that will guide them through SEO.  The URL is also shown in the search results, so a well-crafted URL will be more informative and can help attract visitors to your site, improving your website's CTR (Click Through Rate).

#2 – It's an SEO ranking factor – Google has become better and more efficient over the years when it comes to URL interpretation, and most modern CMS platforms like WordPress are able to show optimised URLs.  Yes, it might only be a minor ranking factor, but several SEO ranking studies have shown that the majority of the pages that appear on the first page of Google's search results have optimised URLs.

#3 – Links – some users may want to link to your website using the URL of the page as anchor text, so if your URL contains relevant keywords, this provides search engines with more information about your page.

So how do you optimise your URLs?

Domain Name

As I have mentioned above, the first part of a URL is the domain name and this is not configurable.  That's why it's better to choose a good domain name for your website right from the very beginning.  To pick a good domain name you need to think of the following:

  • A domain name that is short (2-3 words long)
  • Catchy
  • Easy to remember
  • Preferably a .com, .net or .org domain
  • For local businesses, it is an advantage to have a domain name registered in your country domain.  For example, for businesses in the UK it would be better to have a .co.uk domain.

A good domain name will help you establish user trust, but it does not have a direct effect on search rankings.  In the past, having a keyword based domain had some advantages, but not anymore (except if you're using a country specific domain name for local SEO purposes).

HTTPS URLs (SSL Certificates)

Another factor that helps optimise your URL is the security of your website, and in particular the use of an SSL certificate.

Installing an SSL certificate on your website helps in 3 major ways:

  1. It makes your URLs https, which is an additional way to gain users' trust.
  2. It makes your website more secure (any information submitted through your website is encrypted).
  3. It gives you a small search engine ranking boost.

Using keywords within your URL

Using keywords in your website's URL gives both users and search engines more information about what your web page is about.  For example, https://www.entrepreneursblog.co.uk/seo-friendly-url is a friendly URL that contains the keywords 'SEO', 'friendly' and 'URL', separated by dashes.

Compare the title of the post (What is an SEO Friendly URL Structure) with the URL, and you will notice that certain words like 'what', 'is' and 'an' have been removed from the URL.

But does it matter where you place the keywords within your URL?  When Google was asked this question, they officially said that it doesn't matter very much, but studies do show that it is better to have keywords at the beginning of the URL rather than in the middle or at the end.

But you must be careful not to cram as many relevant keywords as you can into your URL.  This is called keyword stuffing: injecting keywords into a URL in an unnatural way, for the sole purpose of optimising for search engines rather than your visitors.

To avoid keyword stuffing in your URLs, try not to repeat the same keyword more than once.  For example, DON'T DO THIS: https://www.entrepreneursblog.co.uk/services/webdesign/webdesignpackages/buyme/

Instead, use keywords without repetition.  Here is a better URL; https://www.entrepreneursblog.co.uk/services/web-design/

URL length

Although URLs have a character limit of 2048, it is better to keep your URL as short as possible.  Any characters that don't make sense to users and search engines are better avoided.

Lowercase, Uppercase and Spaces in your URL

Characters in a URL should always be lowercase.

Most CMS platforms like WordPress will allow you to have uppercase letters in a URL.  For example, www.mywebsite.com/My-URL would be a different URL from www.mywebsite.com/my-url.

Most search engines, including Google, will see these URLs as two different pages.  To avoid any duplicate content issues, make sure that all your URLs are lowercase.

White spaces in your URL

There are some cases where your CMS will automatically create URLs from filenames, especially images.

When there is a space in the filename this will be translated as %20 in the URL.  

For example, if you have an image with the name “SEO tips and tricks.png” and your CMS is not properly configured to use dashes '-' as separators, this will be shown as “SEO%20tips%20and%20tricks.png”, which is not friendly.

Use of dates within your URL

The use of dates within your URL doesn’t offer any benefits to users or search engines.  

Google uses other signals to identify when a post was published, and having the date as part of the URL adds unnecessary complexity.

Folder structure

It is good practice to keep your folder structure (that's the slash separators within the URL) to 2 levels maximum.

Consider the following;

URL that is linked directly to the domain;

https://www.entrepreneursblog.co.uk/genesis-framework/

URL in a folder that is 1 level down:

https://www.entrepreneursblog.co.uk/services/genesis-framework/

URL in folders 2 levels down:

https://www.entrepreneursblog.co.uk/services/web-design/genesis-framework/

Anything more than 2 levels down is best avoided.

Should you add the category name in the URL?

In WordPress, you can add your blog posts into categories and, depending on your WP settings, the category name will be shown within the URL.  For example, if you have a category called “WordPress Tutorials”, the URL will be https://www.entrepreneursblog.co.uk/wordpress-tutorials/genesis-themes/

This is OK as long as your category names are meaningful and are relevant to your visitors.

How to configure URLs in WordPress

Configuring your URLs in WordPress is very easy.  WordPress is an SEO friendly platform and all you have to do is go to SETTINGS -> PERMALINK SETTINGS and choose one of the common settings or write your own custom structure.

By choosing the Post name setting, your URLs will be directly linked to your domain without any folders or intermediaries.
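If you prefer to fill in the custom structure field yourself instead, the Post name option corresponds to the following permalink tag (WordPress fills this in for you when you select Post name):

/%postname%/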

When publishing a post or a page, WordPress will try and create the URL based on the title of the page.

What you can do to optimise your URL is to click the EDIT button that is above the post title, configure your URL and then click the SAVE DRAFT or UPDATE buttons.

Use 301 Redirects for any URL change

Whether you are doing a website redesign, migrating to https or just optimising your URLs, it is necessary to add 301 redirects to let Google know that you are changing the address of a page.

301 redirects will help you maintain your SEO rankings and improve the user experience since any URLs bookmarked by users, will still work.

There are a couple of ways to add 301 redirects in WordPress. You can make use of a plugin or write the code directly in your .htaccess file.

301 Redirects using a plugin

Install and activate the Simple 301 Redirects plugin.

From the Settings menu select 301 Redirects

Type the ‘old’ URL in the left side (Request column) and the new URL in the right side (Destination column).

Click the SAVE CHANGES button.

Open a new browser window and type the OLD URL in the address bar; if everything was done correctly, it should redirect to the new page.

301 Redirects using the .htaccess file

If you feel comfortable with making changes to your WordPress installation, edit .htaccess using FTP and add any redirections (at the top of the file) using the following format:

Redirect 301 /old URL (without the domain name) /new URL (including the domain name).

For example:

Redirect 301 /my-not-so-friendly-and-lengthy-url https://www.example.com/seo-friendly-url

Finish off by updating and resubmitting your XML Sitemap

In both cases (whether using a plugin or the manual way), you should update your XML sitemap and resubmit it to Google.

Preferred Domain

www vs no www

What is the technical difference between having www in your URL and not having www?

Let’s see a couple of examples:

URLs with www

  • https://www.example.com
  • http://www.example.com

URLs with no www

  • https://example.com
  • http://example.com

Domains with no www in front are also called naked domains. Domains with www can also act as a hostname which can sometimes be easier to manage when it comes to cookies, in cases where you have a number of subdomains assigned to the www domain.

This is a really technical thing which in reality won’t affect the majority of websites, so if you are having difficulties understanding what this means, just skip this explanation and move on to the big question.

Should I use www or no www in front of my domain?

The answer is simple. It depends on your personal preference. There is no SEO advantage from using the one format or the other.

Three things are important:

  1. First, to configure your website to ‘listen’ to only one of the variations (more on this below).
  2. Second, to let Google and other search engines know what your preferred choice is.
  3. Third, to be consistent and use the chosen variation when adding internal links to your content or when running link building campaigns.

Google considers http://example.com and http://www.example.com as two different websites

Why all the fuss about www and no www? Because Google considers these to be two different websites.

In other words, in the eyes of Google, http://example.com and http://www.example.com are two different websites, and if you don't specify which version you want to use, you will end up having SEO issues.

Let's see how to set the preferred domain for your website and how to communicate your decision to Google so that you avoid any crawling and indexing issues.

How to set your preferred domain in WordPress?

Login to your WordPress dashboard, select SETTINGS from the left menu and then click GENERAL.

In the WordPress Address (URL) and Site Address (URL) fields, set your preferred domain.


In my case, I have selected my preferred domain to have the www in front.  As explained above, there is no advantage in doing so; it's just a matter of personal preference.

How to test that your preferred domain is set correctly?

To test that WordPress can successfully redirect from one version of your domain to the other, perform the following test.

Open a browser window and type http://example.com.  If your preferred domain is set to http://www.example.com, then the page should automatically redirect to http://www.example.com.
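If the redirect does not happen automatically (for example, on a non-WordPress site), a common way to enforce it is with a 301 rule in .htaccess.  This is a minimal sketch, assuming an Apache server and a preference for the https www version; swap example.com for your own domain:

# Redirect the naked domain to the www version (illustrative example)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]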

How to set your preferred domain in Google?

For consistency purposes, you need to do the same in Google Search Console.

If you don't have an account, go to Google Webmaster Tools, register for a free account and ADD and VERIFY ALL variations of your website.

This means that if you have https already activated on your website, you need to add ALL four variations: http://example.com, https://example.com, http://www.example.com and https://www.example.com.


Then you need to go to SITE SETTINGS (click the gear icon from the top right).

Then make sure that your preferred domain has the same format as the one specified in the WordPress dashboard.

Follow the above procedure for ALL your website variations and ensure that all variations point to the same format.  That's it; now Google knows what your preferred domain is.

How to set the canonical URL of your domain?

The preferred domain is also known as the canonical domain.  A canonical URL, in general, is a piece of HTML code that tells search engines what the canonical (or preferred) version of a page is.

Best SEO practices suggest that you have the canonical URL set for each and every page of your website.

To check if your theme sets the canonical URL correctly, open your homepage in a new browser window and then go to VIEW SOURCE (right click anywhere on the page and select VIEW SOURCE).

Search for the word ‘canonical’ and you should see a line like this:
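<!-- illustrative example only; the href should be the page's own preferred URL -->
<link rel="canonical" href="https://www.example.com/" />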

This indicates the canonical URL for the page and it helps prevent duplicate content issues.

Repeat the above tests for all pages of your website.

If you cannot find the canonical declaration in your HTML code, it means that your theme is not SEO friendly and does not support it.

Don’t worry though, you can install the free version of Yoast SEO plugin and it will do this automatically for all your website posts and pages (including the homepage and archive pages).

What if I want to change my preferred domain and add or remove the www for an already established website?

This is highly NOT recommended.  If you already have an established website then there is no reason to change your preferred domain.

In case you do want to do this, you will have to add 301 redirects to redirect traffic and links from one version of the domain to the other (see the topic on SEO friendly URLs above).

Crawling and Indexing

If you accidentally block search engine crawlers from accessing your website (or parts of it), this can have a big impact on your SEO without you realizing it.

Blocked Resources

Login to your Google Search Console account and click BLOCKED RESOURCES under GOOGLE INDEX.

This report shows the resources (images, CSS, JavaScript, etc.) that Googlebot cannot access.  Next to each resource, Google will also tell you the number of pages affected.

Note: The list may contain both items that are part of your website (domain) and external resources (for example, scripts or images loaded from third-party domains).  For the first case, we will see below how to correct the problem, but for external resources there is not much you can do, so you can safely ignore those warnings.

Fetch as Google

Fetch as Google is found under CRAWL and it is one of the most useful functions of the Search Console.

You can use 'Fetch as Google' to check if Google can access your website correctly, to notify Google of important changes made to a page or pages, or in cases where you want to inform Google about a new page on your website (you can help them find it faster rather than waiting for the Google crawler to discover it).

You should only use the 'Submit to Index' function of 'Fetch as Google' when something important has changed on your website, and not for normal page updates or additions.

The first thing you need to do is click ‘Fetch and Render’. If you don’t type a URL in the box, Google will attempt to read your homepage. After a few seconds, you will see the results of the test.

In the status column, you will either have ‘unavailable’, ‘complete’ or ‘partial’. Unavailable means that Google was not able to find the website or page. ‘Partial’ means that Google could read the page but there are some issues and ‘complete’ means that everything was ok.

To get more details, click the status.  In the detailed results, notice the column called 'Severity'.  This can have the values of Low, Medium or High.

Any items that are marked as high or medium need immediate attention. This means that Google cannot access resources that are important for the crawling process and if this is the case, it negatively affects your SEO.

In the majority of cases, these can be fixed by making changes to robots.txt (see the next topic).  If you see resources that are external to your website, then most probably they will be marked as Low, and this is something you don't need to worry about.

Robots.txt

One of the first things you need to check and optimize when working on your technical SEO is the robots.txt file. A problem or misconfiguration in your robots.txt can cause critical SEO issues that can negatively impact your rankings and traffic.

If you are on WordPress, there is specific WordPress related information towards the end of this topic.

What is robots.txt?

A robots.txt is a text file that resides in the root directory of your website and gives search engine crawlers instructions as to which pages they can crawl and index.

During the crawling and indexing stage, search engines try to find pages available on the public web that they can include in their index.

When visiting a website, the first thing they do is look for and check the contents of the robots.txt file.  Depending on the rules specified in the file, they create a list of the URLs they can crawl and later index for the particular website.

What happens if you don’t have a robots.txt file? If a robots.txt file is missing, search engine crawlers assume that all publicly available pages of the particular website can be crawled and added to their index.

What happens if the robots.txt is not well formatted? It depends on the issue. If search engines cannot understand the contents of the file because it is misconfigured, they will still access the website and ignore whatever is in robots.txt.

What happens if I accidentally block search engines from accessing my website? That’s a big problem. For starters, they will not crawl and index pages from your website and gradually they will remove any pages that are already available in their index.

Do you need a robots.txt file?

Yes, you definitely need to have a robots.txt even if you don’t want to exclude any pages or directories of your website from appearing in search engine results.

Why use a robots.txt?

The most common use cases of robots.txt are the following:

#1 – To block search engines from accessing specific pages or directories of your website. For example, look at the robots.txt below and notice the disallow rules.
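For instance, a robots.txt with disallow rules might look something like this (the directory names here are purely illustrative):

User-agent: *
Disallow: /admin/
Disallow: /tmp/
Disallow: /downloads/*.pdf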

These statements instruct search engine crawlers not to index the specific directories. Notice that you can use an * as a wild card character.

#2 – When you have a big website, crawling and indexing can be a very resource intensive process. Crawlers from various search engines will be trying to crawl and index your whole site and this can create serious performance problems.

In this case, you can make use of the robots.txt to restrict access to certain parts of your website that are not important for SEO or rankings. This way, you not only reduce load on your server but it makes the whole indexing process faster.

#3 – When you decide to use URL cloaking for your affiliate links.  This is not the same as cloaking your content or URLs to trick users or search engines, but it's a valid process for making your affiliate links easier to manage.

Two Important things to know about robots.txt

The first thing is that any rules you add to the robots.txt are directives only. This means that it’s up to search engines to obey and follow the rules.

In most cases they do, but if you have content that you don't want to be included in their index, the best way is to password protect the particular directory or page.

The second thing is that even if you block a page or directory in robots.txt, it can still appear in the search results if it has links from other pages that are already indexed.  In other words, adding a page to the robots.txt does not guarantee that it will be removed from, or not appear in, the search results.

Besides password protecting the page or directory, another way is to use meta robots directives.  These are added to the head section of every page and they look like the example below:
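<!-- illustrative example: tells search engines not to index this page -->
<meta name="robots" content="noindex">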

How does robots.txt work?

The robots file has a very simple structure. There are some predefined keyword/value combinations you can use.

The most common are: User-agent, Disallow, Allow, Crawl-delay, Sitemap.

User-agent: Specifies which crawlers should take into account the directives. You can use an * to reference all crawlers or specify the name of a crawler, see examples below.

You can view all available names and values for the user-agent directive, here.

User-agent: * – includes all crawlers.

User-agent: Googlebot – instructions are for Google bot only.

Disallow: The directive that instructs a user-agent (specified above), not to crawl a URL or part of a website.

The value of disallow can be a specific file, URL or directory.  Look at the examples below.
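These are generic examples; the file and directory names are hypothetical:

# block a single file
Disallow: /private-file.html
# block a whole directory
Disallow: /calendar/
# block the entire website
Disallow: /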

Allow: The directive that explicitly tells crawlers which pages or subfolders can be accessed.  This is applicable for the Googlebot only.

You can use the allow directive to give access to a specific sub-folder on your website, even though the parent directory is disallowed.

For example, you can disallow access to your Photos directory but allow access to your BMW sub-folder which is located under Photos.


User-agent: *
Disallow: /photos
Allow: /photos/bmw/

Crawl-delay: You can specify a crawl-delay value to force search engine crawlers to wait for a specific amount of time before crawling the next page from your website.  The value you enter is in seconds.

It should be noted that the crawl-delay is not taken into account by Googlebot.
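For crawlers that do respect it (Bing, for example), a crawl-delay rule is simply another line under the relevant user-agent.  A minimal illustrative example:

User-agent: Bingbot
# ask the crawler to wait roughly 10 seconds between requests
Crawl-delay: 10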

You can use Google Search Console to control the crawl rate for Google (the option is found under Site Settings).

You can use the crawl rate in cases where you have a website with thousands of pages and you don't want to overload your server with continuous requests.

In the majority of cases, you shouldn’t make use of the crawl-delay directive.

Sitemap: The sitemap directive is supported by the major search engines including Google and it is used to specify the location of your XML Sitemap.

Even if you don’t specify the location of the XML sitemap in the robots, search engines are still able to find it.

How to create a robots.txt?

Creating a robots.txt file is easy.  All you need is a text editor (like Brackets or Notepad) and access to your website's files (via FTP or control panel).

Before getting into the process of creating a robots file, the first thing to do is to check if you already have one.

The easiest way to do this is to open a new browser window and navigate to https://www.yourdomain.com/robots.txt

If you see something similar to the example below, it means that you already have a robots.txt file and you can edit the existing file instead of creating a new one.

User-agent: *
Allow: /

How to edit your robots.txt

Use your favorite FTP client and connect to your website’s root directory.

Robots.txt is always located in the root folder (www or public_html, depending on your server).

Download the file to your PC and open it with a text editor.

Make the necessary changes and upload the file back to your server.

How to create a new robots.txt

If you don’t already have a robots.txt then create a new .txt file using a text editor, add your directives, save it and upload it to the root directory of your website.

Important: Make sure that your file name is robots.txt and not anything else. Also, have in mind that the file name is case-sensitive so it should be all lowercase.

Where do you put robots.txt? robots.txt should always reside in the root of your website and not in any folder.

Example of a robots.txt

In a typical scenario, your robots.txt file should have the following contents:

User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml

This allows all bots to access your website without blocking any pages or directories.  It also specifies the sitemap location to make it easier for search engines to locate it.

How to test and validate your robots.txt?

While you can view the contents of your robots.txt by navigating to the robots.txt URL, the best way to test and validate it, is through the robots.txt Tester option of Google Search Console.

Login to your Google Search Console Account.

Click on robots.txt Tester, found under Crawl options.

Click the Test button.

If everything is ok, the Test button will turn green and the label will change to ALLOWED. If there is a problem, the line that causes a disallow will be highlighted.

Robots.txt SEO Best Practices

Test your robots.txt and make sure that you are not blocking any parts of your website that you want to appear in search engines.

Do not block CSS or JS folders.  During the crawling and indexing process, Google is able to view a website like a real user, and if your pages need the JS and CSS to function properly, they should not be blocked.

If you are on WordPress, there is no need to block access to your wp-admin and wp-includes folders.  WordPress does a great job using the meta robots tag.

Don't try to specify different rules per search engine bot; it can get confusing and difficult to keep up to date.  It is better to use User-agent: * and provide one set of rules for all bots.

If you want to exclude pages from being indexed by search engines, it is better to do it using the meta robots tag in the head of each page and not through the robots.txt.

Conclusion

Technical SEO is a huge part of getting your website ready and optimised for search engines to ensure that it performs well.  It is also quite an in-depth topic to discuss.  This is why I have decided to split this post into 2 parts.

So I hope you have gained some knowledge of exactly what technical SEO is and that this article has given you some quick wins to try on your own website.

In the next part of this guide I will be digging into:

  • Search Visibility
  • Schema Markup and SEO
  • Breadcrumbs
  • Paging
  • Comments settings and much more

So if you haven't already signed up to our mailing list and downloaded your 3 FREE ebooks, I would suggest you do, as you will have the next part of this guide delivered directly to your inbox.
