Everything You Need To Know About The X-Robots-Tag HTTP Header

SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.

But nearly every website is going to have pages that you don't want to include in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these are doing nothing to drive traffic to your site actively, and in a worst-case, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being using a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.

Some of the meta robots tags you might employ include: index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
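On a page, these directives live in a single meta tag in the page's <head>. A minimal example (the directive combination shown is just illustrative):

```html
<!-- Keep this page out of search results, but still let crawlers follow its links -->
<meta name="robots" content="noindex, follow">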

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And while using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While you can apply the same directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you'll want to use the X-Robots-Tag – the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to specify several directives in a single comma-separated list.

Maybe you don't want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
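Combined into a single response header, that might look like the following (the date shown is illustrative):

```
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2025 15:00:00 PST
```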

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.

To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here's a handy cheat sheet to explain:

Crawler Directives

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer Directives

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.

  • Nofollow – allows you to specify links that should not pass on authority or PageRank.

  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
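For example, a robots.txt rule like the one below (the directory path is illustrative) would stop compliant crawlers from ever fetching the files inside it – so any noindex X-Robots-Tag on those files would never be seen, and the URLs could still end up indexed via external links:

```
User-agent: *
Disallow: /private-files/
```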

How To Check For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on the site.
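If you'd rather check from the command line than a browser, a short script can fetch a URL's response headers and pull out any X-Robots-Tag values. Here is a minimal Python sketch using only the standard library; `parse_x_robots_tag` and `check_url` are hypothetical helper names, not part of any SEO tool.

```python
from urllib.request import Request, urlopen

def parse_x_robots_tag(headers):
    """Collect directives from all X-Robots-Tag headers (case-insensitive name match)."""
    directives = []
    for name, value in headers:
        if name.lower() == "x-robots-tag":
            directives.extend(part.strip() for part in value.split(","))
    return directives

def check_url(url):
    """Send a HEAD request and return the URL's X-Robots-Tag directives."""
    request = Request(url, method="HEAD")
    with urlopen(request) as response:
        return parse_x_robots_tag(response.getheaders())

# Example: check_url("https://example.com/report.pdf") might return ["noindex", "nofollow"]
```

A HEAD request is enough here because the directives ride on the response headers, not the body.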

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.


Another method that can be used at scale, in order to pinpoint issues on websites with a million pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report, X-Robots-Tag column, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its dangers. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.


Featured Image: Song_about_summer/Shutterstock
