Tag Archive | "Google’s"

SEOs Have Doubt With Google’s Disavow Link Tool

Lily Ray posted a Twitter poll some time ago asking about Google’s disavow link tool. The responses show that SEOs have serious doubts about how effective and useful the disavow link tool is.

Search Engine Roundtable

Posted in IM NewsComments Off

New SEO Experiments: A/B Split Testing Google’s UGC Attribute

Posted by Cyrus-Shepard

When Craig Bradford of Distilled reached out and asked if we’d like to run some SEO experiments on Moz using DistilledODN, our reply was an immediate “Yes please!”

If you’re not familiar with DistilledODN, it’s a sophisticated platform that allows you to do a number of cool things in the SEO space:

  1. Make almost any change to your website through the ODN dashboard. Since the ODN is a cloud platform that sits in front of your website (like a CDN) it doesn’t matter how your website is built or what CMS it uses. You can change a single page — or more likely — entire sections.
  2. The ODN allows you to A/B split test these changes and both measure and predict their impact on organic traffic. They also have a feature called full-funnel testing allowing you to measure impact on both SEO and CRO at the same time.

When you find something that works, you see a positive result like this:

DistilledODN Positive Result

SEO experimentation is great, but almost nobody does it right because it’s impossible to control for other factors. Yes, you updated your title tags, but did Google roll out an update today? Sure, you sped up your site, but did a bunch of spam just link to you?

A/B split testing solves this problem by applying your changes to only a portion of your pages — typically 50% — and measuring the difference between the two groups. Fortunately, the ODN can deploy these changes near-instantly, up to thousands of pages at a time.

It then crunches the numbers and tells you what’s working, or not.
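The split itself has to be deterministic so that each page consistently sees the same version across deploys. Here’s a minimal sketch of hash-based bucketing (a generic illustration, not the ODN’s actual implementation; the function name and URLs are made up):

```python
import hashlib

def split_bucket(url: str, variant_share: float = 0.5) -> str:
    """Deterministically assign a page URL to 'control' or 'variant'.

    Hashing the URL keeps the assignment stable across deploys, so a
    given page always sees the same version of the change.
    """
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
    # Map the first 8 hex chars to a float in [0, 1].
    score = int(digest[:8], 16) / 0xFFFFFFFF
    return "variant" if score < variant_share else "control"

pages = [f"https://example.com/blog/post-{i}" for i in range(1000)]
buckets = [split_bucket(p) for p in pages]
```

Hashing rather than picking pages at random means a redeploy or re-crawl never shuffles pages between groups, which would contaminate the measurement.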

Testing Google’s UGC link attribute

For our first test, we decided to tackle something simple and fast. Craig suggested looking at Google’s new link attributes, and we were off!

To summarize: Google recently introduced new link attributes for webmasters/SEOs to label links. Those attributes are:

  • rel=”sponsored” – For paid and sponsored links
  • rel=”ugc” – For links in user-generated content (UGC)
  • rel=”nofollow” – Remains a catch-all for all other nofollow links

On the Moz blog, all comments links are currently marked “nofollow” — following years of SEO best practices. Google has stated that using the new attributes won’t give you a rankings boost. That said, we wanted to test for ourselves if changing these links to “ugc” would impact the rankings/traffic of our blog pages.

To be clear: We are not testing if the pages we link to change rankings, but instead the source page that hosts the link — in this case, the blog pages with comments.
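Mechanically, the change under test is a one-token swap in each comment link’s rel attribute. A toy version of that rewrite might look like this (a simplified stand-in for the ODN’s edge-level rewriting; the regex-based helper is our own and ignores edge cases like multi-valued rel attributes):

```python
import re

def retag_comment_links(comment_html: str) -> str:
    """Swap rel="nofollow" for rel="ugc" on anchor tags.

    A simplified stand-in for an edge-level rewrite; a real platform
    operates on full rendered pages, not isolated snippets.
    """
    return re.sub(r'(<a\b[^>]*\brel=")nofollow(")', r"\1ugc\2", comment_html)

before = '<a href="https://example.com" rel="nofollow">my site</a>'
after = retag_comment_links(before)
# after == '<a href="https://example.com" rel="ugc">my site</a>'
```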

Here’s an example of a comment the ODN modified.

UGC Comment

After we set the test running, 50% of blog posts had comments with “ugc” links, while 50% kept their original “nofollow” attributes.

Experiment results

We expected a “null” test — meaning we wouldn’t see a significant impact.

In fact, that’s exactly what happened.

DistilledODN Null Results

If we detected a significant change, the probability cone at the bottom right would have pointed more dramatically up or down.

At a 95% confidence interval, the test predicted traffic would either fall by as much as 26,000 visits/month or gain as much as 9,300 visits/month.

Hence, a null result.

This validates Google’s statements that using the “ugc” attribute won’t give you a ranking boost.

What should Moz test next?

While “null” tests aren’t as fun as a positive result, we have a lot of cool A/B SEO testing ahead of us.

The great thing is we can now test out changes with the ODN, and when we find one that works, pass that to our developers to make the changes permanently. This cuts down on needless development work and stops the guessing game.

We have a Trello board set up for test ideas, and we’d love to add some community ideas to the mix. The ODN is currently running on the Moz Blog and Q&A, so anything in these site sections is fair game.

We’re also looking at experiments where we use Moz data to inform these decisions. For example, a Moz Pro crawl identified that the Moz Blog titles currently use H2 tags instead of H1. Google recently indicated this likely shouldn’t impact rankings, but wouldn’t it be good to test?

Missing H1 Tags

What wild/clever/ridiculous/obvious SEO things should we test? With each good test, we’ll publish the results. Leave your ideas in the comments below.

Big thanks to the Distilled Team, including Will Critchlow and Tom Anthony, for embarking on this journey with us.

And if you’d like to learn more about DistilledODN and SEO split testing in general, this post is highly recommended.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Blog


How Google’s Nofollow, Sponsored, & UGC Links Impact SEO

Posted by Cyrus-Shepard

Google shook up the SEO world by announcing big changes to how publishers should mark nofollow links. The changes — while beneficial to help Google understand the web — nonetheless caused confusion and raised a number of questions. We’ve got the answers to many of your questions here.

14 years after its introduction, Google today announced significant changes to how they treat the “nofollow” link attribute. The big points:

  1. Link attribution can be done in three ways: “nofollow”, “sponsored”, and “ugc” — each signifying a different meaning. (The fourth option is the default: a plain link with no attribute at all.)
  2. For ranking purposes, Google now treats each of the nofollow attributes as “hints” — meaning they likely won’t impact ranking, but Google may choose to ignore the directive and use nofollow links for rankings.
  3. Google continues to ignore nofollow links for crawling and indexing purposes, but this strict behavior changes March 1, 2020, at which point Google begins treating nofollow attributes as “hints”, meaning they may choose to crawl them.
  4. You can use the new attributes in combination with each other. For example, rel=”nofollow sponsored ugc” is valid.
  5. Paid links must either use the nofollow or sponsored attribute (either alone or in combination.) Simply using “ugc” on paid links could presumably lead to a penalty.
  6. Publishers don’t have to do anything. Google offers no incentive for changing, or punishment for not changing.
  7. Publishers using nofollow to control crawling may need to reconsider their strategy.
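Because rel is a space-separated token list, point 4 above falls out naturally: the attributes combine freely. A small hypothetical helper (our own naming, not any Google API) that reads a link’s rel value and reports which hints it carries:

```python
def link_disposition(rel):
    """Interpret a link's rel attribute under the new scheme.

    rel is a space-separated token list, so "nofollow sponsored ugc"
    carries all three hints at once. The token names follow Google's
    announcement; the return labels are our own.
    """
    tokens = set((rel or "").lower().split())
    hints = tokens & {"sponsored", "ugc", "nofollow"}
    if hints:
        return "hint:" + "+".join(sorted(hints))
    return "followed"  # default: a plain link with no attribute

link_disposition("nofollow sponsored")  # 'hint:nofollow+sponsored'
link_disposition(None)                  # 'followed'
```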

Why did Google change nofollow?

Google wants to take back the link graph.

Google introduced the nofollow attribute in 2005 as a way for publishers to address comment spam and shady links from user-generated content (UGC). Linking to spam or low-quality sites could hurt you, and nofollow offered publishers a way to protect themselves.

Google also required nofollow for paid or sponsored links. If you were caught accepting anything of value in exchange for linking out without the nofollow attribute, Google could penalize you.

The system generally worked, but huge portions of the web—sites like Forbes and Wikipedia—applied nofollow across their entire site for fear of being penalized, or not being able to properly police UGC.

This made entire portions of the link graph less useful for Google. Should curated links from trusted Wikipedia contributors really not count? Perhaps Google could better understand the web if they changed how they consider nofollow links.

By treating nofollow attributes as “hints”, they allow themselves to better incorporate these signals into their algorithms.

Hopefully, this is a positive step for deserving content creators, as a broader swath of the link graph opens up to more potential ranking influence. (Though for most sites, it doesn’t seem much will change.)

What is the ranking impact of nofollow links?

Prior to today, SEOs generally believed nofollow links worked like this:

  • Not used for crawling and indexing (Google didn’t follow them.)
  • Not used for ranking, as confirmed by Google. (Many SEOs have believed for years that this was in fact not the case)

To be fair, there’s a lot of debate and speculation around the second statement, and Google has been opaque on the issue. Experimental data and anecdotal evidence suggest Google has long considered nofollow links as a potential ranking signal.

As of today, Google’s guidance states the new link attributes—including sponsored and ugc—are treated like this:

  • Still not used for crawling and indexing (see the changes taking place in the future below)
  • For ranking purposes, all nofollow directives are now officially a “hint” — meaning Google may choose to ignore it and use it for ranking purposes. Many SEOs believe this is how Google has been treating nofollow for quite some time.

Beginning March 1, 2020, these link attributes will be treated as hints across the board, meaning:

  • In some cases, they may be used for crawling and indexing
  • In some cases, they may be used for ranking

Emphasis on the word “some.” Google is very explicit that in most cases they will continue to ignore nofollow links as usual.

Do publishers need to make changes?

For most sites, the answer is no — only if they want to. Google isn’t requiring sites to make changes, and as of yet, there is no business case to be made.

That said, there are a couple of cases where site owners may want to implement the new attributes:

  1. Sites that want to help Google better understand the sites they—or their contributors—are linking to. For example, it could be to everyone’s benefit for sites like Wikipedia to adopt these changes. Or maybe Moz could change how it marks up links in the user-generated Q&A section (which often links to high-quality sources.)
  2. Sites that use nofollow for crawl control. For sites with large faceted navigation, nofollow is sometimes an effective tool at preventing Google from wasting crawl budget. It’s too early to tell if publishers using nofollow this way will need to change anything before Google starts treating nofollow as a crawling “hint” but it may be important to pay attention to.

To be clear, if a site is properly using nofollow today, SEOs do not need to recommend any changes. Sites are free to adopt the new attributes, but they should not expect any rankings boost for doing so, nor new penalties for not changing.

That said, Google’s use of these new link attributes may evolve, and it will be interesting to see in the future—through study and analysis—if a ranking benefit does emerge from using nofollow attributes in a certain way.

Which link attribute should you use?

If you choose to change your nofollow links to be more specific, Google’s guidelines are very clear, so we won’t repeat them in-depth here. In brief, your choices are:

  1. rel=”sponsored” – For paid or sponsored links. This would presumably include affiliate links, although Google hasn’t explicitly said.
  2. rel=”ugc” – Links within all user-generated content. Google has stated if UGC is created by a trusted contributor, this may not be necessary.
  3. rel=”nofollow” – A catchall for all nofollow links. As with the other nofollow directives, these links generally won’t be used for ranking, crawling, or indexing purposes.

Additionally, attributes can be used in combination with one another. This means a declaration such as rel=”nofollow sponsored” is 100% valid.

Can you be penalized for not marking paid links?

Yes, you can still be penalized, and this is where it gets tricky.

Google advises to mark up paid/sponsored links with either “sponsored” or “nofollow” only, but not “ugc”.

This adds an extra layer of confusion. What if your UGC contributors are including paid or affiliate links in their content/comments? Google, so far, hasn’t been clear on this.

For this reason, we will likely see publishers continue to mark up UGC content with “nofollow” as a default, or possibly “nofollow ugc”.

Can you use the nofollow attributes to control crawling and indexing?

Nofollow has always been a very, very poor way to prevent Google from indexing your content, and it continues to be that way.

If you want to prevent Google from indexing your content, it’s recommended to use one of several other methods, most typically some form of “noindex”.
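As a rough illustration of those “other methods,” here’s how you might check whether a page declares noindex via either the meta robots tag or the X-Robots-Tag HTTP header (the class and function names are our own):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            content = a.get("content", "")
            self.directives |= {d.strip().lower() for d in content.split(",")}

def is_noindexed(html, x_robots_tag=""):
    """True if the meta robots tag or the X-Robots-Tag header declares noindex."""
    parser = RobotsMetaParser()
    parser.feed(html)
    header = {d.strip().lower() for d in x_robots_tag.split(",") if d.strip()}
    return "noindex" in parser.directives | header

is_noindexed('<meta name="robots" content="noindex, follow">')  # True
```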

Crawling, on the other hand, is a slightly different story. Many SEOs use nofollow on large sites to preserve crawl budget, or to prevent Google from crawling unnecessary pages within faceted navigation.

Based on Google statements, it seems you can still attempt to use nofollow in this way, but after March 1, 2020, they may choose to ignore this. Any SEO using nofollow in this way may need to get creative in order to prevent Google from crawling unwanted sections of their sites.

Final thoughts: Should you implement the new nofollow attributes?

While there is no obvious compelling reason to do so, this is a decision every SEO will have to make for themselves.

Given the initial confusion and lack of clear benefits, many publishers will undoubtedly wait until we have better information.

That said, it certainly shouldn’t hurt to make the change (as long as you mark paid links appropriately with “nofollow” or “sponsored”.) For example, the Moz Blog may someday change comment links below to rel=”ugc”, or more likely rel=”nofollow ugc”.

Finally, will anyone actually use the “sponsored” attribute, at the risk of giving more exposure to paid links? Time will tell.

What are your thoughts on Google’s new nofollow attributes? Let us know in the comments below.


Moz Blog


Google’s Indexing Issues Continue But This One Is Different

Last night I reported that Google was having issues indexing new content again, yes – again. Danny Sullivan from Google said it seems like that was the case and said “We’ll post on @googlewmc if we confirm and have more to share.” Nothing was posted there – yet. But it does seem like indexing issues are happening for some sites – not all.

Search Engine Roundtable


All Links are Not Created Equal: 20 New Graphics on Google’s Valuation of Links

Posted by Cyrus-Shepard

Twenty-two years ago, the founders of Google invented PageRank, and forever changed the web. A few things that made PageRank dramatically different from existing ranking algorithms:

  • Links on the web count as votes. Initially, all votes are equal.
  • Pages which receive more votes become more important (and rank higher.)
  • More important pages cast more important votes.

But Google didn’t stop there: they innovated with anchor text, topic-modeling, content analysis, trust signals, user engagement, and more to deliver better and better results.

Links are no longer equal. Not by a long shot.

Rand Fishkin published the original version of this post in 2010—and to be honest, it rocked our world. Parts of his original have been heavily borrowed here, and Rand graciously consulted on this update.

In this post, we’ll walk you through 20 principles of link valuation that have been observed and tested by SEOs. In some cases, they have been confirmed by Google, while others have been patented. Please note that these are not hard and fast rules, but principles that interplay with one another: a burst of fresh links can often outweigh a handful of powerful links, spam links can blunt the effect of fresh links, and so on.

We strongly encourage you to test these yourselves. To quote Rand, “Nothing is better for learning SEO than going out and experimenting in the wild.”

1. Links From Popular Pages Cast More Powerful Votes

Let’s begin with a foundational principle. This concept formed the basis of Google’s original PageRank patent, and quickly helped vault Google to become the most popular search engine in the world.

PageRank can become incredibly complex very quickly—but to oversimplify—the more votes (links) a page has pointed to it, the more PageRank (and other possible link-based signals) it accumulates. The more votes it accumulates, the more it can pass on to other pages through outbound links.

In basic terms, popular pages are ones that have accumulated a lot of votes themselves. Scoring a link from a popular page can typically be more powerful than earning a link from a page with fewer link votes.

Links From Popular Pages Cast More Powerful Votes
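The voting mechanics can be sketched with a toy power-iteration PageRank. This is a minimal illustration of the original formulation only; Google’s production systems layer many more signals on top:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a link graph.

    links maps each page to the pages it links to. Each page's score
    (its accumulated "votes") is split among its outbound links on
    every iteration.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:           # dangling page: spread evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

On this small graph, page “c” ends up with the highest score because three of the four pages vote for it, which is the “more votes, more importance” principle in miniature.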

2. Links “Inside” Unique Main Content Pass More Value than Boilerplate Links

Google’s Reasonable Surfer, Semantic Distance, and Boilerplate patents all suggest valuing content and links more highly if they are positioned in the unique, main text area of the page, versus sidebars, headers, and footers, aka the “boilerplate.”

It certainly makes sense, as boilerplate links are not truly editorial, but typically automatically inserted by a CMS (even if a human decided to put them there.) Google’s Quality Rater Guidelines encourage evaluators to focus on the “Main Content” of a page.

Links Inside Unique Main Content Pass More Value than Boilerplate Links

Similarly, SEO experiments have found that links hidden within expandable tabs or accordions (by either CSS or JavaScript) may carry less weight than fully visible links, though Google says they fully index and weight these links.

3. Links Higher Up in the Main Content Cast More Powerful Votes

If you had a choice between 2 links, which would you choose?

  1. One placed prominently in the first paragraph of a page, or
  2. One placed lower beneath several paragraphs

Of course, you’d pick the link visitors would likely click on, and Google would want to do the same. Google’s Reasonable Surfer Patent describes methods for giving more weight to links it believes people will actually click, including links placed in more prominent positions on the page.

Links Higher Up in the Main Content Cast More Powerful Votes

Matt Cutts, former head of Google’s Webspam team, once famously encouraged SEOs to pay attention to the first link on the page, and not bury important links. (source)

4. Links With Relevant Anchor Text May Pass More Value

Also included in Google’s Reasonable Surfer patent is the concept of giving more weight to links with relevant anchor text. This is only one of several Google patents where anchor text plays an important role.

Multiple experiments over the years repeatedly confirm the power of relevant anchor text to boost a page’s ranking better than generic or non-relevant anchor text.

It’s important to note that the same Google patents that propose boosting the value of highly-relevant anchors, also discuss devaluing or even ignoring off-topic or irrelevant anchors altogether.

That’s not to say you should spam your pages with an abundance of exact match anchors. Data shows that high-ranking pages typically have a healthy, natural mix of relevant anchors pointing to them.

Links With Relevant Anchor Text May Pass More Value

Similarly, links may carry the context of the words+phrases around/near the link. Though hard evidence is scant, this is mentioned in Google’s patents, and it makes sense that a link surrounded by topically relevant content would be more contextually relevant than the alternative.

5. Links from Unique Domains Matter More than Links from Previously Linking Sites

Experience shows that it’s far better to have 50 links from 50 different domains than to have 500 more links from a site that already links to you.

This makes sense, as Google’s algorithms are designed to measure popularity across the entire web and not simply popularity from a single site.

In fact, this idea has been supported by nearly every SEO ranking factor correlation study ever performed. The number of unique linking root domains is almost always a better predictor of Google rankings than a site’s raw number of total links.

Links from Unique Domains Matter More than Links from Previously Linking Sites

Rand points out that this principle is not always universally true. “When given the option between a 2nd or 3rd link from the NYTimes vs. randomsitexyz, it’s almost always more rank-boosting and marketing helpful to go with another NYT link.”
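The metric behind those correlation studies is easy to compute. A rough sketch of reducing a raw backlink list to its unique linking hosts (real tools normalize to registrable root domains via the Public Suffix List; here we only strip a www. prefix):

```python
from urllib.parse import urlparse

def unique_linking_domains(backlink_urls):
    """Reduce a raw backlink list to its set of unique linking hosts."""
    hosts = set()
    for url in backlink_urls:
        host = urlparse(url).netloc.lower()
        hosts.add(host.removeprefix("www."))
    return hosts

backlinks = [
    "https://www.nytimes.com/section/tech/article-1",
    "https://nytimes.com/section/tech/article-2",
    "https://example.org/blog/post",
]
unique_linking_domains(backlinks)  # {'nytimes.com', 'example.org'}
```

Here 500 links from nytimes.com still count as a single linking root domain, which is why the two measures diverge so sharply.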

6. External Links are More Influential than Internal Links

If we extend the concept from #5 above, then it follows that links from external sites should count more than internal links from your own site. The same correlation studies almost always show that high-ranking sites are associated with more external links than lower-ranking sites.

Search engines seem to follow the concept that what others say about you is more important than what you say about yourself.

External Links are More Influential than Internal Links

That’s not to say that internal links don’t count. On the contrary, internal linking and good site architecture can be hugely impactful on Google rankings. That said, building external links is often the fastest way to higher rankings and more traffic.

7. Links from Sites Closer to a Trusted Seed Set May Pass More Value

The idea of TrustRank has been around for many years. Bill Slawski covers it here.

More recently, Google updated its original PageRank patent with a section that incorporates the concept of “trust” using seed sites. The closer a site is linked to a trusted seed site, the more of a boost it receives.

In theory, this means that black hat Private Blog Networks (PBNs) would be less effective if they were a large link distance away from more trusted sites.

Links from Sites Closer to a Trusted Seed Set May Pass More Value

Beyond links, other ways that Google may evaluate trust is through online reputation—e.g. through online reviews or sentiment analysis—and use of accurate information (facts). This is of particular concern with YMYL (Your Money or Your Life) pages that “impact the future happiness, health, financial stability, or safety of users.”

This means links from sites that Google considers misleading and/or dangerous may be valued less than links from sites that present more reputable information.

8. Links From Topically Relevant Pages May Cast More Powerful Votes

You run a dairy farm. All things being equal, would you rather have a link from:

  1. The National Dairy Association
  2. The Association of Automobile Mechanics

Hopefully, you chose “a” because you recognize it’s more relevant. Through several mechanisms, Google may act in a similar way toward topically relevant links, including Topic-Sensitive PageRank, phrase-based indexing, and local inter-connectivity.

These concepts also help discount spam links from non-relevant pages.

Links From Topically Relevant Pages Cast More Powerful Votes

While I’ve included the image above, the concepts around Google’s use of topical relevance are incredibly complex. For a primer on SEO relevance signals, I recommend reading:

  1. Topical SEO: 7 Concepts of Link Relevance & Google Rankings
  2. More than Keywords: 7 Concepts of Advanced On-Page SEO

9. Links From Fresh Pages Can Pass More Value Than Links From Stale Pages

Freshness counts.

Google uses several ways of evaluating content based on freshness. One way to determine the relevancy of a page is to look at the freshness of the links pointing at it.

The basic concept is that pages with links from fresher pages—e.g. newer pages and those more regularly updated—are likely more relevant than pages with links from mostly stale pages, or pages that haven’t been updated in a while. 

For a good read on the subject, Justin Briggs has described and named this concept FreshRank.

A page with a burst of links from fresher pages may indicate immediate relevance, compared to a page that has had the same old links for the past 10 years. In these cases, the rate of link growth and the freshness of the linking pages can have a significant influence on rankings.

Links From Fresh Pages Can Pass More Value Than Links From Stale Pages

It’s important to note that “old” is not the same thing as stale. A stale page is one that:

  • Isn’t updated, often with outdated content
  • Earns fewer new links over time
  • Exhibits declining user engagement

If a page doesn’t meet these criteria, it can be considered fresh – no matter its actual age. As Rand notes, “Old crusty links can also be really valuable, especially if the page is kept up to date.”

10. The Rate of Link Growth Can Signal Freshness

If Google sees a burst of new links to a page, this could indicate a signal of relevance.

By the same measure, a decrease in the overall rate of link growth would indicate that the page has become stale, and likely to be devalued in search results.

All of these freshness concepts, and more, are covered by Google’s Information Retrieval Based on Historical Data patent.

The Rate of Link Growth Can Signal Freshness

If a webpage sees an increase in its link growth rate, this could indicate a signal of relevance to search engines. For example, if folks start linking to your personal website because you’re about to get married, your site could be deemed more relevant and fresh (as far as this current event goes.)

11. Google Devalues Spam and Low-Quality Links

While there are trillions of links on the web, the truth is that Google likely ignores a large swath of them.

Google’s goal is to focus on editorial links, e.g. “links that you didn’t even have to ask for because they are editorially given by other website owners.” Since Penguin 4.0, Google has implied that their algorithms simply ignore links that they don’t feel meet these standards. These include links generated by negative SEO and link schemes.

Google Devalues Spam and Low-Quality Links

That said, there’s lots of debate about whether Google truly ignores all low-quality links, as there’s evidence that low-quality links—especially those Google might see as manipulative—may actually hurt you.

12. Link Echoes: The Influence Of A Link May Persist Even After It Disappears

Link Echoes (a.k.a. Link Ghosts) describe the phenomenon where the ranking impact of a link often appears to persist, even long after the link is gone.

Rand has performed several experiments on this, and the reverberation effect of links is incredibly persistent, even months after the links have dropped from the web and Google has recrawled and indexed the pages several times.

Speculation as to why this happens includes: Google looking at other ranking factors once the page has climbed in rankings (e.g. user engagement), Google assigning persistence or degradation to link value that isn’t wholly dependent on its existence on the page, or factors we can’t quite recognize.

Link Echoes: The Influence Of A Link May Persist Even After It Disappears

Whatever the root cause, the value of a link can have a reverberating, ethereal quality that exists separately from its HTML roots.

As a counterpoint, Neil Patel recently ran an experiment where rankings dropped after low-authority sites lost a large number of links all at once, so it appears possible to overcome this phenomenon under the right circumstances.

13. Sites Linking Out to Authoritative Content May Count More Than Those That Do Not

While Google claims that linking out to quality sites isn’t an explicit ranking factor, they’ve also made statements in the past that it can impact your search performance.

“In the same way that Google trusts sites less when they link to spammy sites or bad neighborhoods, parts of our system encourage links to good sites.” – Matt Cutts

Sites Linking Out to Authoritative Content May Count More Than Those That Do Not

Furthermore, multiple SEO experiments and anecdotal evidence over the years suggest that linking out to relevant, authoritative sites can result in a net positive effect on rankings and visibility.

14. Pages That Link To Spam May Devalue The Other Links They Host

If we take the quote above and focus specifically on the first part, we understand that Google trusts sites less when they link to spam.

This concept can be extended further, as there’s ample evidence of Google demoting sites it believes to be hosting paid links, or part of a private blog network.

Pages That Link To Spam May Devalue The Other Links They Host

Basic advice: link to relevant, authoritative sites (and avoid linking to bad ones) when it will benefit your audience.

15. Nofollowed Links Aren’t Followed, But May Have Value In Some Cases

Google invented the nofollow link specifically because many webmasters found it hard to prevent spammy, outbound links on their sites – especially those generated by comment spam and UGC.

A common belief is that nofollow links don’t count at all, but Google’s own language leaves some wiggle room. They don’t follow them absolutely, but “in general” and only “essentially” drop the links from their web graph.

Nofollowed Links Aren't Followed, But May Have Value In Some Cases

That said, numerous SEO experiments and correlation data all suggest that nofollow links can have some value, and webmasters would be wise to maximize their value.

16. Many JavaScript Links Pass Value, But Only If Google Renders Them

In the old days of SEO, it was common practice to “hide” links using JavaScript, knowing Google couldn’t crawl them.

Today, Google has gotten significantly better at crawling and rendering JavaScript, so most JavaScript links today will count.

Many JavaScript Links Pass Value, But Only If Google Renders Them

That said, Google still may not crawl or index every JavaScript link. For one, Google needs extra time and effort to render the JavaScript, and not every site delivers compatible code. Furthermore, Google only considers full links with an anchor tag and href attribute.

17. If A Page Links To The Same URL More Than Once, The First Link Has Priority

… Or more specifically, only the first anchor text counts.

If Google crawls a page with two or more links pointing to the same URL, they have explained that while PageRank flows normally through both, they will only use the first anchor text for ranking purposes.

This scenario often comes into play when your sitewide navigation links to an important page, and you also link to it within an article below.

If A Page Links To The Same URL More Than Once, The First Link Has Priority

Through testing, folks have discovered a number of clever ways to bypass the First Link Priority rule, but newer studies haven’t been published for several years.
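You can see how such a rule might be applied by walking a page’s HTML in document order and keeping only the first anchor text per URL. A sketch using Python’s standard-library parser (our own illustration of the principle, not Google’s parser):

```python
from html.parser import HTMLParser

class FirstAnchorText(HTMLParser):
    """Record the first anchor text seen for each distinct href."""
    def __init__(self):
        super().__init__()
        self.first_anchor = {}
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            # setdefault keeps the first anchor text and ignores later ones
            self.first_anchor.setdefault(self._href, "".join(self._text).strip())
            self._href = None

html = '<a href="/widgets">Products</a> ... <a href="/widgets">blue widgets</a>'
p = FirstAnchorText()
p.feed(html)
p.first_anchor  # {'/widgets': 'Products'}
```

In this example, the navigation-style anchor “Products” wins even though the in-article anchor “blue widgets” is more descriptive, which is exactly why first link priority matters for sitewide navigation.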

18. Robots.txt and Meta Robots May Impact How and Whether Links Are Seen

Seems obvious, but in order for Google to weigh a link in its ranking algorithm, it has to be able to crawl and follow it. Unsurprisingly, there are a number of site- and page-level directives which can get in Google’s way. These include:

  • The URL is blocked from crawling by robots.txt
  • Robots meta tag or X-Robots-Tag HTTP header use the “nofollow” directive
  • The page is set to “noindex, follow” but Google eventually stops crawling

Robots.txt and Meta Robots May Impact How and Whether Links Are Seen

Often Google will include a URL in its search results if other pages link to it, even if that page is blocked by robots.txt. But because Google can’t actually crawl the page, any links on the page are virtually invisible.
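A quick way to check the first directive on that list is Python’s built-in robots.txt parser. A sketch (the helper function and example rules are our own):

```python
from urllib.robotparser import RobotFileParser

def link_target_crawlable(robots_txt, url_path, agent="Googlebot"):
    """Check whether a linked URL is even crawlable under robots.txt.

    If crawling is blocked, any links *on* that page are invisible to
    the crawler, even though the blocked URL itself may still appear
    in results via links pointing at it.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url_path)

robots = "User-agent: *\nDisallow: /private/"
link_target_crawlable(robots, "/private/page")  # False
link_target_crawlable(robots, "/public/page")   # True
```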

    19. Disavowed Links Don’t Pass Value (Typically)

    If you’ve built some shady links, or been hit by a penalty, you can use Google’s disavow tool to help wipe away your sins.

    By disavowing, Google effectively removes these backlinks from consideration when it crawls the web.

    Disavowed Links Don’t Pass Value (Typically)

    On the other hand, if Google thinks you’ve made a mistake with your disavow file, they may choose to ignore it entirely – probably to prevent you from self-inflicted harm.
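    For reference, a disavow file is a plain text file with one directive per line: individual URLs, `domain:` entries for whole domains, and `#` comments. The domains below are invented:

    ```
    # Example disavow file (made-up domains).
    # Lines starting with "#" are comments.

    # Disavow a single page:
    http://spam.example.com/stuff/comments.html

    # Disavow an entire domain:
    domain:shadyseo.example.com
    ```

    The file is uploaded through Google's disavow tool in Search Console, not placed on your site.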

    20. Unlinked Mentions May Associate Data or Authority With A Website

    Google may connect data about entities (concepts like a business, a person, or a work of art) without the presence of HTML links, much as it does with local business citations, or when determining which data refers to a brand, a movie, or a notable person.

    In this fashion, unlinked mentions may still associate data or authority with a website or a set of information—even when no link is present.

    Unlinked Mentions May Associate Data or Authority With A Website

    Bill Slawski has written extensively about entities in search (a few examples here, here, and here). It’s a heady subject, but suffice to say Google doesn’t always need links to associate data and websites together, and strong entity associations may help a site to rank.

    Below, you’ll find all twenty principles combined into a single graphic. If you’d like to print or embed the image, click here for a higher-res version.

    Please credit Moz when using any of these images.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Moz Blog

    Posted in IM News | Comments Off

    How Bad Was Google’s Deindexing Bug?

    Posted by Dr-Pete

    On Friday, April 5, after many website owners and SEOs reported pages falling out of rankings, Google confirmed a bug that was causing pages to be deindexed:

    MozCast showed a multi-day increase in temperatures, including a 105° spike on April 6. While deindexing would naturally cause ranking flux, as pages temporarily fell out of rankings and then reappeared, SERP-monitoring tools aren’t designed to separate the different causes of flux.

    Can we isolate deindexing flux?

    Google’s own tools can help us check whether a page is indexed, but doing this at scale is difficult, and once an event has passed, we no longer have good access to historical data. What if we could isolate a set of URLs, though, that we could reasonably expect to be stable over time? Could we use that set to detect unusual patterns?

    Across the month of February, the MozCast 10K daily tracking set had 149,043 unique URLs ranking on page one. I reduced that to a subset of URLs with the following properties:

    1. They appeared on page one every day in February (28 total times)
    2. The query did not have sitelinks (i.e. no clear dominant intent)
    3. The URL ranked at position #5 or better

    Since MozCast only tracks page one, I wanted to reduce noise from a URL “falling off” from, say, position #9 to #11. Using these qualifiers, I was left with a set of 23,237 “stable” URLs. So, how did those URLs perform over time?
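    The method above boils down to a small set operation. Here's a sketch in Python with toy data standing in for the MozCast set (the URLs and dates are invented):

    ```python
    # Simplified sketch of the "stable URL" method: take URLs that ranked
    # in every day of a baseline month, then report what percentage of them
    # still appear in each later day's SERPs.
    baseline_days = {
        "feb-26": {"a.com/1", "b.com/2", "c.com/3"},
        "feb-27": {"a.com/1", "b.com/2", "c.com/3"},
        "feb-28": {"a.com/1", "b.com/2", "c.com/3"},
    }
    # Stable set = URLs present on every baseline day.
    stable = set.intersection(*baseline_days.values())

    later_days = {
        "apr-04": {"a.com/1", "b.com/2", "c.com/3"},
        "apr-05": {"a.com/1"},                 # deindexing event: URLs vanish
        "apr-06": {"a.com/1", "b.com/2"},
    }
    for day, urls in later_days.items():
        pct = 100 * len(stable & urls) / len(stable)
        print(f"{day}: {pct:.1f}% of stable URLs present")
    # apr-04: 100.0%, apr-05: 33.3%, apr-06: 66.7%
    ```

    A sudden dip in that percentage, against an otherwise gentle downward trend, is the signature the analysis below is looking for.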

    Here’s the historical data from February 28, 2019 through April 10. This graph is the percentage of the 23,237 stable URLs that appeared in MozCast SERPs:

    Since all of the URLs in the set were stable throughout February, we expect 100% of them to appear on February 28 (which the graph bears out). The change over time isn’t dramatic, but what we see is a steady drop-off of URLs (a natural occurrence of changing SERPs over time), with a distinct drop on Friday, April 5th, a recovery, and then a similar drop on Sunday, April 7th.

    Could you zoom in for us old folks?

    Having just switched to multifocal contacts, I feel your pain. Let’s zoom that Y-axis a bit (I wanted to show you the unvarnished truth first) and add a trendline. Here’s that zoomed-in graph:


    The trend-line is in purple. The departure from trend on April 5th and 7th is pretty easy to see in the zoomed-in version. The day-over-day drop on April 5th was 4.0%, followed by a recovery, and then a second, very similar, 4.4% drop.

    Note that this metric moved very little during March’s algorithm flux, including the March “core” update. We can’t prove definitively that the stable URL drop cleanly represents deindexing, but it appears to not be impacted much by typical Google algorithm updates.

    What about dominant intent?

    I purposely removed queries with expanded sitelinks from the analysis, since those are highly correlated with dominant intent. I hypothesized that dominant intent might mask some of the effects, as Google is highly invested in surfacing specific sites for those queries. Here’s the same analysis just for the queries with expanded sitelinks (this yielded a smaller set of 5,064 stable URLs):

    Other than minor variations, the pattern for dominant-intent URLs appears to be very similar to the previous analysis. It appears that the impact of deindexing was widespread.

    Was it random or systematic?

    It’s difficult to determine whether this bug was random, affecting all sites somewhat equally, or was systematic in some way. It’s possible that restricting our analysis to “stable” URLs is skewing the results. On the other hand, trying to measure the instability of inherently-unstable URLs is a bit nonsensical. I should also note that the MozCast data set is skewed toward so-called “head” terms. It doesn’t contain many queries in the very-long tail, including natural-language questions.

    One question we can answer is whether large sites were impacted by the bug. The graph below isolates our "Big 3" in MozCast: Wikipedia, Amazon, and Facebook. This reduced us to 2,454 stable URLs. Unfortunately, the deeper we dive, the smaller the data-set gets:


    At the same 90–100% zoomed-in scale, you can see that the impact was smaller than across all stable URLs, but there’s still a clear pair of April 5th and April 7th dips. It doesn’t appear that these mega-sites were immune.

    Looking at the day-over-day data from April 4th to 5th, it appears that the losses were widely distributed across many domains. Of domains that had 10-or-more stable URLs on April 4th, roughly half saw some loss of ranking URLs. The only domains that experienced 100% day-over-day loss were those that had 3-or-fewer stable URLs in our data set. It does not appear from our data that deindexing systematically targeted specific sites.

    Is this over, and what’s next?

    As one of my favorite movie quotes says: “There are no happy endings because nothing ever ends.” For now, indexing rates appear to have returned to normal, and I suspect that the worst is over, but I can’t predict the future. If you suspect your URLs have been deindexed, it’s worth manually reindexing in Google Search Console. Note that this is a fairly tedious process, and there are daily limits in place, so focus on critical pages.

    The impact of the deindexing bug does appear to be measurable, although we can argue about how “big” 4% is. For something as consequential as sites falling out of Google rankings, 4% is quite a bit, but the long-term impact for most sites should be minimal. For now, there’s not much we can do to adapt — Google is telling us that this was a true bug and not a deliberate change.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Moz Blog

    Posted in IM News | Comments Off

    Exploring Google’s New Carousel Featured Snippet

    Posted by TheMozTeam

    Google let it be known earlier this year that snippets were a-changin’. And true to their word, we’ve seen them make two major updates to the feature — all in an attempt to answer more of your questions.

    We first took you on a deep dive of double featured snippets, and now we’re taking you for a ride on the carousel snippet. We’ll explore how it behaves in the wild and which of its snippets you can win.

    For your safety, please remain seated and keep your hands, arms, feet, and legs inside the vehicle at all times!

    What a carousel snippet is and how it works

    This particular snippet holds the answers to many different questions and, as the name suggests, employs carousel-like behaviour in order to surface them all.

    When you click one of the “IQ-bubbles” that run along the bottom of the snippet, JavaScript takes over and replaces the initial “parent” snippet with one that answers a brand new query. This query is a combination of your original search term and the text of the IQ-bubble.

    So, if you searched [savings account rates] and clicked the “capital one” IQ-bubble, you’d be looking at a snippet for “savings account rates capital one.” That said, 72.06 percent of the time, natural language processing will step in here and produce something more sensible, like “capital one savings account rates.”

    On the new snippet, the IQ-bubbles sit at the top, making room for the “Search for” link at the bottom. The link is the bubble snippet’s query and, when clicked, becomes the search query of a whole new SERP — a bit of fun borrowed from the “People also ask” box.

    You can blame the ludicrous “IQ-bubble” name on Google — it’s the class name they gave the element in the SERP’s HTML. We have heard them referred to as “refinement” bubbles or “related search” bubbles, but we don’t like either label because we’ve seen them both refine and relate. IQ-bubble it is.

    There are now 6 times the number of snippets on a SERP

    Back in April, we sifted through every SERP in STAT to see just how large the initial carousel rollout was. Turns out, it made a decent-sized first impression.

    Appearing only in America, we discovered 40,977 desktop and mobile SERPs with carousel snippets, which makes up a hair over 9 percent of the US-en market. When we peeked again at the beginning of August, carousel snippets had grown by half but still had yet to reach non-US markets.

    Since one IQ-bubble equals one snippet, we deemed it essential to count every single bubble we saw. All told, there were a dizzying 224,508 IQ-bubbles on our SERPs. This means that 41,000 keywords managed to produce over 220,000 extra featured snippets. We’ll give you a minute to pick your jaw up off the floor.

    The lowest and most common number of bubbles we saw on a carousel snippet was three, and the highest was 10. The average number of bubbles per carousel snippet was 5.48 — an IQ of five if you round to the nearest whole bubble (they’re not that smart).

    Depending on whether you’re a glass-half-full or a glass-half-empty kind of person, this either makes for a lot of opportunity or a lot of competition, right at the top of the SERP.

    Most bubble-snippet URLs are nowhere else on the SERP

    When we’ve looked at “normal” snippets in the past, we’ve always been able to find the organic results that they’ve been sourced from. This wasn’t the case with carousel snippets — we could only find 10.76 percent of IQ-bubble URLs on the 100-result SERP. This left 89.24 percent unaccounted for, which is a metric heck-tonne of new results to contend with.

    Concerned about the potential competitor implications of this, we decided to take a gander at ownership at the domain level.

    Turns out things weren’t so bad. 63.05 percent of bubble snippets had come from sites that were already competing on the SERP — Google was just serving more varied content from them. It does mean, though, that there was a brand new competitor jumping onto the SERP 36.95 percent of the time. Which isn’t great.

    Just remember: these new pages or competitors aren’t there to answer the original search query. Sometimes you’ll be able to expand your content in order to tackle those new topics and snag a bubble snippet, and sometimes they’ll be beyond your reach.

    So, when IQ-bubble snippets do bother to source from the same SERP, what ranks do they prefer? Here we saw another big departure from what we’re used to.

    Normally, 97.88 percent of snippets source from the first page, and 29.90 percent typically pull from rank three alone. With bubble snippets, only 36.58 percent of their URLs came from the top 10 ranks. And while the most popular rank position that bubble snippets pulled from was on the first page (also rank three), just under five percent of them did this.

    We could apply the always helpful “just rank higher” rule here, but there appears to be plenty of exceptions to it. A top 10 spot just isn’t as essential to landing a bubble snippet as it is for a regular snippet.

    We think this is due to relevancy: Because bubble snippet queries only relate to the original search term — they’re not attempting to answer it directly — it makes sense that their organic URLs wouldn’t rank particularly high on the SERP.

    Multi-answer ownership is possible

    Next we asked ourselves, can you own more than one answer on a carousel snippet? And the answer was a resounding: you most definitely can.

    First we discovered that you can own both the parent snippet and a bubble snippet. We saw this occur on 16.71 percent of our carousel snippets.

    Then we found that owning multiple bubbles is also a thing that can happen. Just over half (57.37 percent) of our carousel snippets had two or more IQ-bubbles that sourced from the same domain. And as many as 2.62 percent had a domain that owned every bubble present — and most of those were 10-bubble snippets!

    Folks, it’s even possible for a single URL to own more than one IQ-bubble snippet, and it’s less rare than we’d have thought — 4.74 percent of bubble snippets in a carousel share a URL with a neighboring bubble.

    This begs the same obvious question that finding two snippets on the SERP did: Is your content ready to pull multi-snippet duty?
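    The ownership checks above amount to counting bubbles per domain and per URL within a single carousel. A toy sketch (the URLs are invented):

    ```python
    # Hypothetical sketch of carousel ownership: for one carousel snippet,
    # count how many IQ-bubbles each domain (and each URL) owns.
    from collections import Counter
    from urllib.parse import urlparse

    bubbles = [  # URL sourced by each IQ-bubble (made up)
        "https://bank-a.example/savings",
        "https://bank-a.example/savings",   # same URL owns two bubbles
        "https://bank-b.example/rates",
        "https://bank-a.example/cd-rates",
    ]
    by_domain = Counter(urlparse(u).netloc for u in bubbles)
    by_url = Counter(bubbles)

    print(by_domain.most_common(1))   # [('bank-a.example', 3)]
    print(max(by_url.values()) > 1)   # True: one URL holds multiple bubbles
    ```

    Run across every carousel in a data set, counts like these yield the domain-level and URL-level ownership percentages reported above.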

    “Search for” links don’t tend to surface the same snippet on the new SERP

    Since bubble snippets are technically providing answers to questions different from the original search term, we looked into what shows up when the bubble query is the keyword being searched.

    Specifically, we wanted to see if, when we click the “Search for” link in a bubble snippet, the subsequent SERP 1) had a featured snippet and 2) had a featured snippet that matched the bubble snippet from whence it came.

    To do this, we re-tracked our 40,977 SERPs and then tracked their 224,508 bubble “Search for” terms to ensure everything was happening at the same time.

    The answers to our two pressing questions were thus:

    1. Strange but true: even though the bubble query was snippet-worthy on the first, related SERP, it wasn’t always snippet-worthy on its own SERP. 18.72 percent of “Search for” links didn’t produce a featured snippet on the new SERP.
    2. Stranger still, 78.11 percent of the time, the bubble snippet and its snippet on the subsequent SERP weren’t a match — Google surfaced two different answers for the same question. In fact, the bubble URL only showed up in the top 20 results on the new SERP 31.68 percent of the time.

    If we’re being honest, we’re not exactly sure what to make of all this. If you own the bubble snippet but not the snippet on the subsequent SERP, you’re clearly on Google’s radar for that keyword — but does that mean you’re next in line for full snippet status?

    And if the roles are reversed, you own the snippet for the keyword outright but not when it’s in a bubble, is your snippet in jeopardy? Let us know what you think!

    Paragraph and list formatting reign supreme (still!)

    Last, and somewhat least, we took a look at the shape all these snippets were turning up in.

    When it comes to the parent snippet, Heavens to Betsy if we weren’t surprised. For the first time ever, we saw an almost even split between paragraph and list formatting. Bubble snippets, on the other hand, went on to match the trend we’re used to seeing in regular ol’ snippets:

    We also discovered that bubble snippets aren’t beholden to one type of formatting even in their carousel. 32.21 percent of our carousel snippets did return bubbles with one format, but 59.71 percent had two and 8.09 percent had all three. This tells us that it’s best to pick the most natural format for your content.

    Get cracking with carousel snippet tracking

    If you can’t wait to get your mittens on carousel snippets, we track them in STAT, so you’ll know every keyword they appear for and have every URL housed within.

    If you’d like to learn more about SERP feature tracking and strategizing, say hello and request a demo!

    This article was originally published on the STAT blog on September 13, 2018.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Moz Blog

    Posted in IM News | Comments Off

    Google’s John Mueller: “Why Not Put A Date On It”

    As you all know, I am a huge fan of dates on articles and blog content. I even suggested Google should penalize pages that do not put dates on articles. Google has not gone that far, yet – but John Mueller seems to be a fan of dates on articles as well.

    Search Engine Roundtable

    Posted in IM News | Comments Off

    Google’s August 1st Core Update: Week 1

    Posted by Dr-Pete

    On August 1, Google (via Danny Sullivan’s @searchliaison account) announced that they released a “broad core algorithm update.” Algorithm trackers and webmaster chatter confirmed multiple days of heavy ranking flux, including our own MozCast system:

    Temperatures peaked on August 1-2 (both around 114°F), with a 4-day period of sustained rankings flux (purple bars are all over 100°F). While this has settled somewhat, yesterday’s data suggests that we may not be done.

    August 2nd set a 2018 record for MozCast at 114.4°F. Keep in mind that, while MozCast was originally tuned to an average temperature of 70°F, 2017-2018 average temperatures have been much higher (closer to 90° in 2018).

    Temperatures by Vertical

    There’s been speculation that this algo update targeted so called YMYL queries (Your Money or Your Life) and disproportionately impacted health and wellness sites. MozCast is broken up into 20 keyword categories (roughly corresponding to Google Ads categories). Here are the August 2nd temperatures by category:

    At first glance, the “Health” category does appear to be the most impacted. Keywords in that category had a daily average temperature of 124°F. Note, though, that all categories showed temperatures over 100°F on August 1st – this isn’t a situation where one category was blasted and the rest were left untouched. It’s also important to note that this pattern shifted during the other three days of heavy flux, with other categories showing higher average temperatures. The multi-day update impacted a wide range of verticals.

    Top 30 winners

    So, who were the big winners (so far) of this update? I always hesitate to do a winners/losers analysis – while useful, especially for spotting patterns, there are plenty of pitfalls. First and foremost, a site can gain or lose SERP share for many reasons that have nothing to do with algorithm updates. Second, any winners/losers analysis is only a snapshot in time (and often just one day).

    Since we know that this update spanned multiple days, I’ve decided to look at the percentage increase (or decrease) in SERP share between July 31st and August 7th. In this analysis, “Share” is a raw percentage of page-1 rankings in the MozCast 10K data set. I’ve limited this analysis to only sites that had at least 25 rankings across our data set on July 31 (below that the data gets very noisy). Here are the top 30…

    The first column is the percentage increase across the 7 days. The final column is the overall share – this is very low for all but mega-sites (Wikipedia hovers in the colossal 5% range).

    Before you over-analyze, note the second column – this is the percent change from the highest July SERP share for that site. What the 7-day share doesn’t tell us is whether the site is naturally volatile. Look at Time.com (#27) for a stark example. Time Magazine saw a +19.5% lift over the 7 days, which sounds great, except that they landed on a final share that was down 54.4% from their highest point in July. As a news site, Time’s rankings are naturally volatile, and it’s unclear whether this has much to do with the algorithm update.

    Similarly, LinkedIn, AMC Theaters, OpenTable, World Market, MapQuest, and RE/MAX all show highs in July that were near or above their August 7th peaks. Take their gains with a grain of salt.
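    The two comparisons in the table (the 7-day change versus the change from each site's July peak) can be sketched with invented numbers:

    ```python
    # Toy version of the winners-table logic (all share numbers invented):
    # compare the 7-day change in page-1 SERP share with the change from
    # each site's highest July share, to flag naturally volatile sites.
    sites = {
        # site: (july_31_share, aug_7_share, july_max_share), in percent
        "stable-site.example": (0.40, 0.48, 0.41),
        "volatile-news.example": (0.41, 0.49, 1.08),  # big July peak
    }
    for site, (jul31, aug7, jul_max) in sites.items():
        change_7day = 100 * (aug7 - jul31) / jul31
        vs_july_max = 100 * (aug7 - jul_max) / jul_max
        print(f"{site}: {change_7day:+.1f}% over 7 days, "
              f"{vs_july_max:+.1f}% vs July peak")
    ```

    Both toy sites show a similar 7-day lift, but the second one sits far below its own July high, which is exactly the Time.com pattern described above: an apparent "win" that may be ordinary volatility.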

    Top 30 losers

    We can run the same analysis for the sites that lost the most ground. In this case, the “Max %” is calculated against the July low. Again, we want to be mindful of any site where the 7-day drop looks a lot different than the drop from that site’s July low-point…

    Comparing the first two columns, Verywell Health immediately stands out. While the site ended the 7-day period down 52.3%, it was up just over 200% from July lows. It turns out that this site was sitting very low during the first week of July and then saw a jump in SERP share. Interestingly, Verywell Family and Verywell Fit also appear on our top 30 losers list, suggesting that there’s a deeper story here.

    Anecdotally, it’s easy to spot a pattern of health and wellness sites in this list, including big players like Prevention and LIVESTRONG. Whether this list represents the entire world of sites hit by the algorithm update is impossible to say, but our data certainly seems to echo what others are seeing.

    Are you what you E-A-T?

    There’s been some speculation that this update is connected to Google’s recent changes to their Quality Rater Guidelines. While it’s very unlikely that manual ratings based on the new guidelines would drive major ranking shifts (especially so quickly), it’s entirely plausible that the guideline updates and this algorithm update share a common philosophical view of quality and Google’s latest thinking on the subject.

    Marie Haynes’ post theorizing the YMYL connection also raises the idea that Google may be looking more closely at E-A-T signals (Expertise, Authoritativeness and Trust). While certainly an interesting theory, I can’t adequately address that question with this data set. Declines in sites like Fortune, IGN and Android Central pose some interesting questions about authoritativeness and trust outside of the health and wellness vertical, but I hesitate to speculate based only on a handful of outliers.

    If your site has been impacted in a material way (including significant traffic gains or drops), I’d love to hear more details in the comments section. If you’ve taken losses, try to isolate whether those losses are tied to specific keywords, keyword groups, or pages/content. For now, I’d advise that this update could still be rolling out or being tweaked, and we all need to keep our eyes open.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Moz Blog

    Posted in IM News | Comments Off

    Google’s John Mueller Shares His SEO Related Podcast List

    A Reddit thread asks folks to share their favorite SEO related podcasts. I spotted John Mueller of Google share his list of his favorite SEO podcasts as well…

    Search Engine Roundtable

    Posted in IM News | Comments Off