7 PERSISTENT SEO MYTHS THAT WILL STILL BE AROUND IN 2022

By Rank Jacker on May 8, 2021


SEO myths are a dime a dozen. Some of them originate from an SEO tactic that has long since become obsolete.

Others are based on mere assumptions that have stuck in the minds of SEOs. Even in 2021, there are still deep-rooted SEO myths that are not true (or are no longer true).

Here are some of the most persistent SEO myths, coupled with official responses from Google reps like John Müller and Matt Cutts.

1. The Ideal Keyword Density Is XY …

  • For the page to rank, it must contain the keyword at least X times.
  • This widespread belief may have been current until the late 2000s – but it persists to this day. Search engines have dramatically improved their understanding of language over the past decade:
  • The Hummingbird update in 2013 gave Google a semantic understanding of language. Search queries could be understood better, and since then Google has been able to grasp the search intent behind a query. Voice search has also become more intuitive.
  • In 2015, artificial intelligence was introduced into the algorithm with RankBrain. RankBrain continuously learns more about search intent and uses user signals such as the return-to-SERP rate or the bounce rate to evaluate the relevance of a search result.
  • In 2019, BERT joined Hummingbird and RankBrain. Thanks to BERT, Google can understand search queries in a more differentiated way and identify the meaning of individual words in the context of the query. Google now understands that the search query “Can I register a car for someone” is looking for different information than the procedure for registering a car.

Mere keyword optimization of the page is no longer enough for a page to be considered relevant.

It is no longer necessary for a keyword to appear in a text at a certain percentage.

This is simply because Google recognizes synonyms and determines the context of the content.

Pages that are not explicitly optimized for a keyword but cover the topic in a highly relevant way can still rank for it.

Create Content That Is Useful For The User

Instead of focusing on keyword density, the user(s) should be the focus. Ask yourself:

  1. What is the search intention?
  2. What is the trigger for the search?
  3. Which problems or challenges should be solved?
  4. Which questions does the user ask themselves?
  5. What does the user need to cope with these challenges as easily as possible?
  6. Which content format is best?

You can answer all of these questions with the help of a SERP analysis. We have created a step-by-step guide for you that explains exactly how to analyze the search results to create really useful content.

2. Meta Keywords Must Be Specified

  • Google uses meta keywords for ranking.
  • Google ignores meta keywords.

Meta keywords are part of the metadata that every URL has. This data set consists of:

  1. Meta title
  2. Meta description
  3. Meta keywords


Important: Meta Title & Description

Meta title and meta description are very important parts of every URL. Together with the URL, they result in the so-called snippet, i.e. the search result in the search engines.

Your snippet is your calling card and has to convince people to come to your page from the search results. The meta title is also a ranking factor.

In order for your search result to be as convincing as possible, you should optimize the meta title and description.

 

Meta Keywords Are Ignored

… And that has been the case since at least 2009! Search engines’ language understanding has long been good enough to determine the subject and context of the content.

In the video, Matt Cutts says very clearly: “We don’t use the keywords meta tag for ranking … it was just too much spam. We do not use this information.”
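To make this concrete, here is a minimal, hypothetical <head> excerpt; the page topic, title, and description texts are invented for illustration:

```html
<head>
  <!-- Meta title: the clickable headline of your snippet, also a ranking factor -->
  <title>Registering a Car: Requirements, Costs & Checklist</title>

  <!-- Meta description: shown below the title; its job is to convince searchers to click -->
  <meta name="description" content="Everything you need to register a car: documents, costs and a step-by-step checklist.">

  <!-- Meta keywords: ignored by Google for ranking since at least 2009 - you can simply leave this out -->
  <meta name="keywords" content="register car, car registration, vehicle registration">
</head>
```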

3. Hidden Content Is Not Indexed

  • Google cannot see content that I hide (for example behind tabs or accordions) and thus cannot index it.
  • In most cases, Google can see such content. It depends on how the hidden content is technically implemented:
  1. If the content is loaded and hidden by CSS, Google’s crawlers can read and index the content.
  2. If the content is only loaded after an interaction (e.g. click, tap, or swipe), Google’s crawlers cannot read the content (it was never loaded) and thus cannot index it. After all, when crawling a page, Google does not click on every element to test whether unloaded content might be hidden behind it (see the sketch after this list).
  3. This is also confirmed by John Müller in the Google Webmaster Central Hangout on June 14, 2019.
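To illustrate the difference, here is a minimal, hypothetical accordion sketch; the IDs, the endpoint, and the example text are invented:

```html
<!-- Case 1: the text is in the HTML and merely hidden with CSS - crawlers can read and index it -->
<div id="faq-answer" style="display: none;">
  You can register a car for someone else if you bring a power of attorney.
</div>

<!-- Case 2: the panel starts empty and the text is only fetched and inserted by JavaScript
     after a click - the crawler never clicks, so this content is never loaded and cannot be indexed -->
<button onclick="loadAnswer()">Show answer</button>
<div id="lazy-answer"></div>
<script>
  function loadAnswer() {
    fetch('/faq/answer-123')                       // hypothetical endpoint
      .then(response => response.text())
      .then(text => {
        document.getElementById('lazy-answer').textContent = text;
      });
  }
</script>
```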

4. Penalties For Multiple H1 Headings

  • If I use several headings of the same type, for example, several H1 or H2 headings, then I risk a Google penalty.
  • A semantically well-structured page is generally recommended. This also means that a document only contains one H1 heading. The H1 heading is like the title of a book or film: there is only one. H2 headings are normal subheadings.

Headings structure an article, and subheadings can appear any number of times.

BUT: These are only recommendations. Google works with what it finds on the web. As John Müller explains in the video, the web is full of incorrectly used headings.

A highly relevant answer to a search query is not withheld from the user just because the page has several H1 headings. And you won’t get a Google penalty for it either.

Clean Headings For The User

You should (as with everything else) worry more about your users than about the search engine.

A meaningful page structure with neatly placed headings will make it easier for them to grasp and understand your content.

This is also why SEO tools flag multiple H1 headings, missing H2 headings, or incorrect nesting as errors.
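As a rough sketch of such a structure (the headings themselves are invented examples):

```html
<h1>7 Persistent SEO Myths</h1>                 <!-- one H1, like the title of a book -->

<h2>Myth 1: The ideal keyword density</h2>      <!-- normal subheadings -->
<h3>Create content that is useful for the user</h3>

<h2>Myth 2: Meta keywords must be specified</h2>
<h3>Meta title and description</h3>
<h3>Meta keywords are ignored</h3>
```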

5. Social Signals Are A Ranking Factor

  • Social signals are a ranking factor.
  • Social signals are not a direct ranking factor on Google. Matt Cutts confirmed this back in 2014 in a video from Google Search Central.


In the video, he says very clearly: “Don’t assume that just because there is a signal from Facebook or Twitter that Google has access to it. Pages can be blocked, there are no-follow links or the like.”

However, there are correlations between frequently shared content and good rankings:

Like no-follow links, social signals are anything but useless. After all, they also bring important traffic to your site. And: The more often your content is shared via social media, the more people will come to your site, interact with it, and generate positive user signals.

The attention that frequently shared content receives often earns it additional backlinks – which in turn are a ranking factor and can bring even more traffic to the site.

Content that gets shared frequently on social media is usually shared because it is particularly useful and/or entertaining.

6. There Is A Duplicate Content Penalty

  • If I have duplicate content, I run the risk of being penalized for my website.
  • There is no duplicate content penalty.

If Google finds identical content several times, it either picks one variant and ranks it in the search results, or the rankings fluctuate permanently so that no stable ranking can be established for any of the pages.

This is also referred to as keyword cannibalization, triggered by duplicate content.

You see, you don’t need to worry about a Google penalty, but you do about your rankings. It therefore always makes sense to identify and fix internal duplicate content.


The Causes Of Duplicate Content

Internal duplicate content can have many causes, for example:

  1. A page is reachable both with and without www.
  2. Online shops with product variants: each variant of the product (size, color, etc.) has its own URL, but the product descriptions are identical apart from the variant, which can result in duplicate content.
  3. Very similar landing pages (e.g. for local SEO). If these differ only in the name of the city and are otherwise identical, this can lead to problems.
  4. Indexed URL parameters that arise, for example, from session IDs, filters, or the internal search. The content on these URLs is then identical.
  5. An article is assigned to several categories of a blog or online magazine, and these categories appear in the URL path, so the same article is reachable under several URLs.

Prevent Duplicate Content With The Canonical Tag

You can prevent duplicate content in the search engines by using the canonical tag. The canonical tag contains the URL of the “original” and tells the search engines which page should actually appear in the search results.
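A minimal sketch with invented shop URLs: both color variants of a product declare the main product page as their canonical “original”:

```html
<!-- Placed in the <head> of https://www.example-shop.com/t-shirt?color=red
     and of https://www.example-shop.com/t-shirt?color=blue -->
<link rel="canonical" href="https://www.example-shop.com/t-shirt">
```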

Tips:

Here you can find a detailed article on Duplicate Content and Canonical Tags. You will also learn how to use the canonical tag correctly and what to look out for.

Report Copyright Infringement

If you find an exact copy of your article or your picture on someone else’s website without ever having given your permission, you don’t have to accept it.

Inform the webmaster(s) of the site and ask for immediate removal within a specified period. If they do not comply, you can ask Google to remove the content from its search results.

This will not remove your article from the other site, but at least it will no longer appear in the search results.

Plus, this is a clear copyright infringement and the law is on your side. If there is no reaction, you can call in a lawyer.

Of course, you should also not copy any content from third-party websites and thus make yourself liable for copyright infringement.

7. Frequent Crawling Leads To Better Rankings


  • Myth: If Google crawls my page more often, I’ll rank better.
  • Fact: Crawling does not lead to better rankings – crawling is the technical prerequisite for ranking.


For Google to be able to index and position a page, the search engine must first of all:

    1. see it,
    2. understand it, and
    3. classify it.

Crawling allows the search engine to see the website (or at least the parts of it that it is allowed to see). Which pages it may look at and which not is controlled by the robots.txt (a minimal example follows below).

The Googlebot looks at the individual sub-pages so that the content can be classified in the next step. So if a page can’t be crawled, it can’t rank either.
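As a small, hypothetical sketch of such a robots.txt; the domain and paths are invented:

```
# robots.txt - tells crawlers which areas they may and may not fetch
User-agent: *
Disallow: /internal-search/   # e.g. keep parameter-heavy internal search results out of the crawl
Disallow: /checkout/

Sitemap: https://www.example-shop.com/sitemap.xml
```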


What Is The Crawl Budget?

How often a website is crawled is determined by the crawl budget allocated to each website.

The aim is for all new content and changes to the website to be included in the search engine’s index as quickly as possible and then, in a second step, to achieve good or better rankings.

Crawl budget optimization is particularly relevant for large websites with many subpages, such as online shops.

The optimization of the crawl budget is not a ranking factor and does not lead to better rankings.

But it is a measure to get new and updated content in the index more quickly – and thus to achieve better rankings more quickly.

For Whom Is The Crawl Budget Important?

The crawl budget quickly becomes tight for large websites that make many changes every day or that have a large number of subpages that actually do not need to be crawled.

Online shops with thousands of products, product categories, filter options, search functions, etc. are particularly affected.

If you notice that new pages or updated content are taking a long time to appear in search results, it makes sense to look at your crawl budget.

Conclusion:

Don’t let SEO myths fool you! As long as your website is technically clean, has unique content, and does not contain spam, there should be no problems with Google. After that, it is all about inspiring the user and satisfying the search intent. Whether a word then sits in the H1 or the H3 is irrelevant.

Amit Kumar,
Marketing Lead & Co-Founder RankJacker SEO.

I’m an MBA and a former sales professional with a knack for experimenting with content and copywriting. I love to keep tabs on the latest updates in content marketing and SEO.
