5 Brief Notes on Technical SEO and Its Importance


What is Technical SEO?

Technical SEO is the practice of optimizing your website so that search engines can understand and index your pages. For beginners, technical SEO doesn’t need to be all that technical.

For that reason, this article will focus on the basics so you can perform regular maintenance on your site and ensure that your pages can be discovered and indexed by search engines.

Why Is It Important?

Basically, if search engines can’t properly access, read, understand, or index your pages, then you won’t rank, or even be found for that matter. That’s why it’s worth avoiding innocent mistakes like accidentally removing yourself from Google’s index or diluting a page’s backlinks.

Things to Focus On in Technical SEO

Noindex Tag

First is the noindex meta tag. By adding the following piece of code to a page, you’re telling search engines not to add it to their index, which is usually not something you want to do. This happens more often than you might think.

<meta name="robots" content="noindex" />

For example, let’s say you hire a web designer to create or redesign a website for you. During the development phase, they may build it on a subdomain of their own site, so it actually makes sense for them to noindex the site they’re working on.

What often happens, though, is that after you’ve approved the design, they migrate it over to your domain but forget to remove the noindex tag. As a result, your pages end up getting removed from Google’s search index, or never making it in at all.

Now, there are times when it actually makes sense to noindex certain pages. For example, our author pages are not indexed because, from an SEO perspective, they provide very little value to search engines. From a user experience standpoint, though, it can be argued that they belong on the site, since some people have favorite authors on a blog and want to read just their content.

Generally speaking, for small sites you won’t need to worry about noindexing specific pages. Just keep an eye out for noindex tags on your pages, especially after a redesign.
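A quick way to check, assuming you have a command line handy, is to fetch a page and search its source for the tag (the URL below is a placeholder for one of your own pages):

curl -s https://yourdomain.com/sample-page/ | grep -i noindex

If this prints a robots meta tag containing noindex, that page is telling search engines to stay out of the index. You can also simply view the page source in your browser and search for “noindex”.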

Robots.txt

The second point of discussion is robots.txt. Robots.txt is a file that usually lives on your root domain, and you should be able to access it at yourdomain.com/robots.txt. The file contains a set of rules for search engine crawlers that tells them where they can and cannot go on your site. It’s important to note that a website can have multiple robots.txt files if you’re using subdomains.

For example, if you have a blog on domain.com, you’d have a robots.txt file for the root domain, but you might also have an eCommerce store that lives on store.domain.com with its own robots.txt file. That means crawlers could be given two different sets of rules depending on the domain they’re trying to crawl.

The rules are created using something called directives, and while you probably don’t need to know what all of them are or what they do, there are two you should know about from an indexing standpoint. The first is user-agent, which defines the crawler the rules apply to; the value for this directive is the name of the crawler.

For example, Google’s user agent is named Googlebot. The second directive is disallow, which specifies a page or directory on your domain that you don’t want the user agent to crawl. If you set the user agent to Googlebot and the disallow value to a slash, you’re telling Google not to crawl any pages on your site, which is not good.

If you set the user agent to an asterisk (*), the rule applies to all crawlers. So if your robots.txt file looks something like the example below, it’s telling all crawlers not to crawl any pages on your site. While this might sound like something you would never use, there are times when it makes sense to block specific parts of your site or to block certain crawlers.
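For reference, here’s the sketch of a robots.txt file that blocks everything, using only the two directives covered above:

User-agent: *
Disallow: /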

For example, if you have a WordPress website and you don’t want your wp-admin folder to be crawled, you can set the user agent to all crawlers and the disallow value to /wp-admin/, as shown below. If you’re a beginner, I wouldn’t worry too much about your robots.txt file, but if you run into indexing issues that need troubleshooting, robots.txt is one of the first places I check.
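That WordPress example would look like this in robots.txt (the folder name follows WordPress’s standard structure):

User-agent: *
Disallow: /wp-admin/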

Sitemaps

Alright, the next thing to discuss is sitemaps. Sitemaps are usually XML files that list the important URLs on your website; these can be pages, images, videos, and other files. Sitemaps help search engines like Google crawl your site more intelligently.

Now, creating an XML file can be complicated if you don’t know how to code, and it’s almost impossible to maintain manually. But if you’re using a CMS like WordPress, plugins like Yoast and Rank Math will automatically generate sitemaps for you. To help search engines find your sitemap, you can use the Sitemap directive in your robots.txt file and also submit it in Google Search Console.
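For reference, a bare-bones sitemap entry might look like this (the domain and URL are placeholders, and in practice a plugin generates this file for you):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/best-cricket-balls/</loc>
  </url>
</urlset>

And the matching line in robots.txt would be:

Sitemap: https://yourdomain.com/sitemap.xml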

Proper Redirects

Next up are redirects. A redirect takes visitors and bots from one URL to another, and its purpose is to consolidate signals. For example, let’s say you have two pages on your website about the best cricket balls: an old one at domain.com/best-cricket-balls-2018 and another at domain.com/best-cricket-balls.

Seeing as these pages are highly relevant to one another, it would make sense to redirect the 2018 version to the current version. By consolidating them, you’re telling search engines to pass the signals from the redirected URL to the destination URL.
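How you set up the redirect depends on your server or CMS. As a rough sketch, on an Apache server a 301 (permanent) redirect in .htaccess could look like this, with both URLs being placeholders:

Redirect 301 /best-cricket-balls-2018 https://domain.com/best-cricket-balls

If you’re on WordPress, a redirect plugin can do the same thing without touching any server files.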

Use Canonical Tags

The last point I want to talk about is the canonical tag. A canonical tag is a snippet of HTML code that looks like the example below. Its purpose is to tell search engines the preferred URL for a page, which helps solve duplicate content issues.
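A minimal example of the tag, placed in the page’s head section (the URL is a placeholder):

<link rel="canonical" href="https://yourdomain.com/best-cricket-balls/" />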

For example, let’s say your website is accessible at both http://yourdomain.com and https://yourdomain.com, and for whatever reason you weren’t able to use a redirect. These pages would be exact duplicates, but by setting a canonical URL you’re telling search engines that there’s a preferred version of the page. As a result, they’ll pass signals such as links to the canonical URL so they’re not diluted across two different pages.

It’s important to note that Google may choose to ignore your canonical tag. Looking back at the previous example, if we set the canonical tag to the unsecured HTTP page, Google would probably select the secure HTTPS version instead. Now, if you’re running a simple WordPress site, you shouldn’t have to worry about this too much; CMSs are pretty good out of the box and will handle a lot of these basic technical issues for you.

So these are some of the foundational things to know when it comes to indexing, which is arguably the most important part of SEO, because again, if your pages aren’t getting indexed, nothing else really matters. We won’t dig much deeper into this here, since you’ll probably only have to worry about indexing issues if and when you run into problems. Instead, we’ve focused on technical SEO best practices that keep your website in good health.
