Tag Archive | "Google’s"

How Google’s Nofollow, Sponsored, & UGC Links Impact SEO

Posted by Cyrus-Shepard

Google shook up the SEO world by announcing big changes to how publishers should mark nofollow links. The changes — while beneficial to help Google understand the web — nonetheless caused confusion and raised a number of questions. We’ve got the answers to many of your questions here.


14 years after its introduction, Google today announced significant changes to how they treat the “nofollow” link attribute. The big points:

  1. Link attribution can now be done in three ways: “nofollow”, “sponsored”, and “ugc” — each signifying a different meaning. (A fourth option, using no attribute at all, remains the default.)
  2. For ranking purposes, Google now treats each of the nofollow attributes as “hints” — meaning they likely won’t impact ranking, but Google may choose to ignore the directive and use nofollow links for rankings.
  3. Google continues to ignore nofollow links for crawling and indexing purposes, but this strict behavior changes March 1, 2020, at which point Google begins treating nofollow attributes as “hints”, meaning they may choose to crawl them.
  4. You can use the new attributes in combination with each other. For example, rel=”nofollow sponsored ugc” is valid.
  5. Paid links must use either the nofollow or sponsored attribute (alone or in combination). Simply using “ugc” on paid links could presumably lead to a penalty.
  6. Publishers don’t have to do anything. Google offers no incentive for changing, or punishment for not changing.
  7. Publishers using nofollow to control crawling may need to reconsider their strategy.

Why did Google change nofollow?

Google wants to take back the link graph.

Google introduced the nofollow attribute in 2005 as a way for publishers to address comment spam and shady links from user-generated content (UGC). Linking to spam or low-quality sites could hurt you, and nofollow offered publishers a way to protect themselves.

Google also required nofollow for paid or sponsored links. If you were caught accepting anything of value in exchange for linking out without the nofollow attribute, Google could penalize you.

The system generally worked, but huge portions of the web—sites like Forbes and Wikipedia—applied nofollow across their entire sites for fear of being penalized, or of not being able to properly police UGC.

This made entire portions of the link graph less useful for Google. Should curated links from trusted Wikipedia contributors really not count? Perhaps Google could better understand the web if they changed how they consider nofollow links.

By treating nofollow attributes as “hints”, they allow themselves to better incorporate these signals into their algorithms.

Hopefully, this is a positive step for deserving content creators, as a broader swath of the link graph opens up to more potential ranking influence. (Though for most sites, it doesn’t seem much will change.)

What is the ranking impact of nofollow links?

Prior to today, SEOs generally believed nofollow links worked like this:

  • Not used for crawling and indexing (Google didn’t follow them.)
  • Not used for ranking, as confirmed by Google. (Many SEOs have believed for years that this was in fact not the case)

To be fair, there’s a lot of debate and speculation around the second statement, and Google has been opaque on the issue. Experimental data and anecdotal evidence suggest Google has long considered nofollow links as a potential ranking signal.

As of today, Google’s guidance states the new link attributes—including sponsored and ugc—are treated like this:

  • Still not used for crawling and indexing (see the changes taking place in the future below)
  • For ranking purposes, all nofollow directives are now officially a “hint” — meaning Google may choose to ignore it and use it for ranking purposes. Many SEOs believe this is how Google has been treating nofollow for quite some time.

Beginning March 1, 2020, these link attributes will be treated as hints across the board, meaning:

  • In some cases, they may be used for crawling and indexing
  • In some cases, they may be used for ranking

Emphasis on the word “some.” Google is very explicit that in most cases they will continue to ignore nofollow links as usual.

Do publishers need to make changes?

For most sites, the answer is no — only if they want to. Google isn’t requiring sites to make changes, and as of yet, there is no business case to be made.

That said, there are a couple of cases where site owners may want to implement the new attributes:

  1. Sites that want to help Google better understand the sites they—or their contributors—are linking to. For example, it could be to everyone’s benefit for sites like Wikipedia to adopt these changes. Or maybe Moz could change how it marks up links in the user-generated Q&A section (which often links to high-quality sources.)
  2. Sites that use nofollow for crawl control. For sites with large faceted navigation, nofollow is sometimes an effective tool for preventing Google from wasting crawl budget. It’s too early to tell whether publishers using nofollow this way will need to change anything before Google starts treating nofollow as a crawling “hint,” but it’s worth paying attention to.

To be clear, if a site is properly using nofollow today, SEOs do not need to recommend any changes. Sites are free to adopt the new attributes, but they should not expect any rankings boost for doing so, or new penalties for not changing.

That said, Google’s use of these new link attributes may evolve, and it will be interesting to see in the future—through study and analysis—if a ranking benefit does emerge from using nofollow attributes in a certain way.

Which link attribute should you use?

If you choose to change your nofollow links to be more specific, Google’s guidelines are very clear, so we won’t repeat them in-depth here. In brief, your choices are:

  1. rel=”sponsored” – For paid or sponsored links. This would presumably include affiliate links, although Google hasn’t explicitly said.
  2. rel=”ugc” – Links within all user-generated content. Google has stated if UGC is created by a trusted contributor, this may not be necessary.
  3. rel=”nofollow” – A catchall for all nofollow links. As with the other nofollow directives, these links generally won’t be used for ranking, crawling, or indexing purposes.

Additionally, attributes can be used in combination with one another. This means a declaration such as rel=”nofollow sponsored” is 100% valid.
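To make that concrete, here is a minimal, hypothetical sketch (the URLs are placeholders) of what each attribute looks like in HTML, including a combined declaration:

    <!-- Paid or sponsored placement -->
    <a href="https://example.com/product" rel="sponsored">Partner product</a>

    <!-- Link inside user-generated content, such as a comment -->
    <a href="https://example.com/resource" rel="ugc">Commenter's link</a>

    <!-- Generic nofollow -->
    <a href="https://example.com/page" rel="nofollow">Untrusted link</a>

    <!-- Attributes combined, which Google treats as valid -->
    <a href="https://example.com/deal" rel="nofollow sponsored">Paid link with both attributes</a>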

Can you be penalized for not marking paid links?

Yes, you can still be penalized, and this is where it gets tricky.

Google advises marking up paid/sponsored links with either “sponsored” or “nofollow” only, but not “ugc”.

This adds an extra layer of confusion. What if your UGC contributors are including paid or affiliate links in their content/comments? Google, so far, hasn’t been clear on this.

For this reason, we will likely see publishers continue to mark up UGC content with “nofollow” as a default, or possibly “nofollow ugc”.
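For example, a hypothetical default for comment links (placeholder URL) that stays on the safe side even if a contributor slips in a paid or affiliate link:

    <!-- "nofollow" keeps the link safe for paid-link purposes;
         "ugc" adds the user-generated-content hint -->
    <a href="https://example.com/submitted-link" rel="nofollow ugc">User-submitted link</a>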

Can you use the nofollow attributes to control crawling and indexing?

Nofollow has always been a very, very poor way to prevent Google from indexing your content, and it continues to be that way.

If you want to prevent Google from indexing your content, it’s recommended to use one of several other methods, most typically some form of “noindex”.
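For instance, a page-level noindex directive placed in the page’s head is a far more reliable way to keep a URL out of the index than withholding links to it:

    <!-- Tells Google not to index this page -->
    <meta name="robots" content="noindex">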

Crawling, on the other hand, is a slightly different story. Many SEOs use nofollow on large sites to preserve crawl budget, or to prevent Google from crawling unnecessary pages within faceted navigation.

Based on Google statements, it seems you can still attempt to use nofollow in this way, but after March 1, 2020, they may choose to ignore this. Any SEO using nofollow in this way may need to get creative in order to prevent Google from crawling unwanted sections of their sites.
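As a rough, hypothetical sketch (the facet parameters are made up), a faceted-navigation link using nofollow for crawl control looks like this, with the caveat that after March 1, 2020 Google may crawl it anyway:

    <!-- Filter link marked nofollow to conserve crawl budget;
         treated only as a "hint" after March 1, 2020 -->
    <a href="/shoes?color=red&amp;size=9" rel="nofollow">Red shoes, size 9</a>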

Final thoughts: Should you implement the new nofollow attributes?

While there is no obvious compelling reason to do so, this is a decision every SEO will have to make for themselves.

Given the initial confusion and lack of clear benefits, many publishers will undoubtedly wait until we have better information.

That said, it certainly shouldn’t hurt to make the change (as long as you mark paid links appropriately with “nofollow” or “sponsored”.) For example, the Moz Blog may someday change comment links below to rel=”ugc”, or more likely rel=”nofollow ugc”.

Finally, will anyone actually use the “sponsored” attribute, at the risk of giving more exposure to paid links? Time will tell.

What are your thoughts on Google’s new nofollow attributes? Let us know in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

Google’s Indexing Issues Continue But This One Is Different

Last night I reported that Google was having issues indexing new content again, yes – again. Danny Sullivan from Google said it seems like that was the case, adding, “We’ll post on @googlewmc if we confirm and have more to share.” Nothing was posted there – yet. But it does seem like indexing issues are happening for some sites – not all.


Search Engine Roundtable

Posted in IM News | Comments Off

All Links are Not Created Equal: 20 New Graphics on Google’s Valuation of Links

Posted by Cyrus-Shepard

Twenty-two years ago, the founders of Google invented PageRank, and forever changed the web. A few things that made PageRank dramatically different from existing ranking algorithms:

  • Links on the web count as votes. Initially, all votes are equal.
  • Pages which receive more votes become more important (and rank higher.)
  • More important pages cast more important votes.

But Google didn’t stop there: they innovated with anchor text, topic-modeling, content analysis, trust signals, user engagement, and more to deliver better and better results.

Links are no longer equal. Not by a long shot.

Rand Fishkin published the original version of this post in 2010—and to be honest, it rocked our world. Parts of his original have been heavily borrowed here, and Rand graciously consulted on this update.

In this post, we’ll walk you through 20 principles of link valuation that have been observed and tested by SEOs. In some cases, they have been confirmed by Google, while others have been patented. Please note that these are not hard and fast rules, but principles that interplay with one another. A burst of fresh links can often outweigh powerful links, spam links can blunt the effect of fresh links, etc.

We strongly encourage you to test these yourselves. To quote Rand, “Nothing is better for learning SEO than going out and experimenting in the wild.”

1. Links From Popular Pages Cast More Powerful Votes

Let’s begin with a foundational principle. This concept formed the basis of Google’s original PageRank patent, and quickly helped vault Google to become the most popular search engine in the world.

PageRank can become incredibly complex very quickly—but to oversimplify—the more votes (links) a page has pointed to it, the more PageRank (and other possible link-based signals) it accumulates. The more votes it accumulates, the more it can pass on to other pages through outbound links.

In basic terms, popular pages are ones that have accumulated a lot of votes themselves. Scoring a link from a popular page can typically be more powerful than earning a link from a page with fewer link votes.
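For reference, the simplified formula from the original PageRank paper captures this voting idea, where d is a damping factor (typically set around 0.85), T1 through Tn are the pages linking to page A, and C(T) is the number of outbound links on page T:

    PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)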

Links From Popular Pages Cast More Powerful Votes

2. Links “Inside” Unique Main Content Pass More Value than Boilerplate Links

Google’s Reasonable Surfer, Semantic Distance, and Boilerplate patents all suggest valuing content and links more highly if they are positioned in the unique, main text area of the page, versus sidebars, headers, and footers, aka the “boilerplate.”

It certainly makes sense, as boilerplate links are not truly editorial, but typically automatically inserted by a CMS (even if a human decided to put them there.) Google’s Quality Rater Guidelines encourage evaluators to focus on the “Main Content” of a page.

Links Inside Unique Main Content Pass More Value than Boilerplate Links

Similarly, SEO experiments have found that links hidden within expandable tabs or accordions (by either CSS or JavaScript) may carry less weight than fully visible links, though Google says they fully index and weight these links.
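As a minimal sketch of that pattern (the class name and toggle mechanism are hypothetical), a link tucked inside a collapsed panel might look like this:

    <!-- Link inside a collapsed accordion panel, hidden via CSS until toggled -->
    <div class="accordion-panel" style="display: none;">
      <a href="https://example.com/full-details">Read the full details</a>
    </div>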

3. Links Higher Up in the Main Content Cast More Powerful Votes

If you had a choice between 2 links, which would you choose?

  1. One placed prominently in the first paragraph of a page, or
  2. One placed lower beneath several paragraphs

Of course, you’d pick the link visitors would likely click on, and Google would want to do the same. Google’s Reasonable Surfer Patent describes methods for giving more weight to links it believes people will actually click, including links placed in more prominent positions on the page.

Links Higher Up in the Main Content Cast More Powerful Votes

Matt Cutts, former head of Google’s Webspam team, once famously encouraged SEOs to pay attention to the first link on the page, and not bury important links. (source)

4. Links With Relevant Anchor Text May Pass More Value

Also included in Google’s Reasonable Surfer patent is the concept of giving more weight to links with relevant anchor text. This is only one of several Google patents where anchor text plays an important role.

Multiple experiments over the years repeatedly confirm the power of relevant anchor text to boost a page’s ranking better than generic or non-relevant anchor text.

It’s important to note that the same Google patents that propose boosting the value of highly-relevant anchors, also discuss devaluing or even ignoring off-topic or irrelevant anchors altogether.

That’s not to say you should spam your pages with an abundance of exact-match anchors. Data shows that high-ranking pages typically have a healthy, natural mix of relevant anchors pointing to them.

Links With Relevant Anchor Text May Pass More Value

Similarly, links may carry the context of the words+phrases around/near the link. Though hard evidence is scant, this is mentioned in Google’s patents, and it makes sense that a link surrounded by topically relevant content would be more contextually relevant than the alternative.

5. Links from Unique Domains Matter More than Links from Previously Linking Sites

Experience shows that it’s far better to have 50 links from 50 different domains than to have 500 more links from a site that already links to you.

This makes sense, as Google’s algorithms are designed to measure popularity across the entire web and not simply popularity from a single site.

In fact, this idea has been supported by nearly every SEO ranking factor correlation study ever performed. The number of unique linking root domains is almost always a better predictor of Google rankings than a site’s raw number of total links.

Links from Unique Domains Matter More than Links from Previously Linking Sites

Rand points out that this principle is not always universally true. “When given the option between a 2nd or 3rd link from the NYTimes vs. randomsitexyz, it’s almost always more rank-boosting and marketing helpful to go with another NYT link.”

6. External Links are More Influential than Internal Links

If we extend the concept from #5 above, then it follows that links from external sites should count more than internal links from your own site. The same correlation studies almost always show that high ranking sites are associated with more external links than lower ranking sites.

Search engines seem to follow the concept that what others say about you is more important than what you say about yourself.

External Links are More Influential than Internal Links

That’s not to say that internal links don’t count. On the contrary, internal linking and good site architecture can be hugely impactful on Google rankings. That said, building external links is often the fastest way to higher rankings and more traffic.

7. Links from Sites Closer to a Trusted Seed Set May Pass More Value

The idea of TrustRank has been around for many years. Bill Slawski covers it here.

More recently, Google updated its original PageRank patent with a section that incorporates the concept of “trust” using seed sites. The closer a site is linked to a trusted seed site, the more of a boost it receives.

In theory, this means that black hat Private Blog Networks (PBNs) would be less effective if they were a large link distance away from more trusted sites.

Links from Sites Closer to a Trusted Seed Set May Pass More Value

Beyond links, other ways that Google may evaluate trust are through online reputation—e.g. through online reviews or sentiment analysis—and the use of accurate information (facts). This is of particular concern with YMYL (Your Money or Your Life) pages that “impact the future happiness, health, financial stability, or safety of users.”

This means links from sites that Google considers misleading and/or dangerous may be valued less than links from sites that present more reputable information.

8. Links From Topically Relevant Pages May Cast More Powerful Votes

You run a dairy farm. All things being equal, would you rather have a link from:

  1. The National Dairy Association
  2. The Association of Automobile Mechanics

Hopefully, you chose the first option because you recognize it’s more relevant. Through several mechanisms, Google may act in the same way toward topically relevant links, including Topic-Sensitive PageRank, phrase-based indexing, and local inter-connectivity.

These concepts also help discount spam links from non-relevant pages.

Links From Topically Relevant Pages Cast More Powerful Votes

While I’ve included the image above, the concepts around Google’s use of topical relevance are incredibly complex. For a primer on SEO relevance signals, I recommend reading:

  1. Topical SEO: 7 Concepts of Link Relevance & Google Rankings
  2. More than Keywords: 7 Concepts of Advanced On-Page SEO

9. Links From Fresh Pages Can Pass More Value Than Links From Stale Pages

Freshness counts.

Google uses several ways of evaluating content based on freshness. One way to determine the relevancy of a page is to look at the freshness of the links pointing at it.

The basic concept is that pages with links from fresher pages—e.g. newer pages and those more regularly updated—are likely more relevant than pages with links from mostly stale pages, or pages that haven’t been updated in a while. 

For a good read on the subject, Justin Briggs has described and named this concept FreshRank.

    A page with a burst of links from fresher pages may indicate immediate relevance, compared to a page that has had the same old links for the past 10 years. In these cases, the rate of link growth and the freshness of the linking pages can have a significant influence on rankings.

    Links From Fresh Pages Can Pass More Value Than Links From Stale Pages

    It’s important to note that “old” is not the same thing as stale. A stale page is one that:

    • Isn’t updated, often with outdated content
    • Earns fewer new links over time
    • Exhibits declining user engagement

    If a page doesn’t exhibit these characteristics, it can be considered fresh – no matter its actual age. As Rand notes, “Old crusty links can also be really valuable, especially if the page is kept up to date.”

    10. The Rate of Link Growth Can Signal Freshness

    If Google sees a burst of new links to a page, this could indicate a signal of relevance.

    By the same measure, a decrease in the overall rate of link growth would indicate that the page has become stale and is likely to be devalued in search results.

    All of these freshness concepts, and more, are covered by Google’s Information Retrieval Based on Historical Data patent.

    The Rate of Link Growth Can Signal Freshness

    If a webpage sees an increase in its link growth rate, this could indicate a signal of relevance to search engines. For example, if folks start linking to your personal website because you’re about to get married, your site could be deemed more relevant and fresh (as far as this current event goes.)

    11. Google Devalues Spam and Low-Quality Links

    While there are trillions of links on the web, the truth is that Google likely ignores a large swath of them.

    Google’s goal is to focus on editorial links, e.g. “links that you didn’t even have to ask for because they are editorially given by other website owners.” Since Penguin 4.0, Google has implied that their algorithms simply ignore links that they don’t feel meet these standards. These include links generated by negative SEO and link schemes.

    Google Devalues Spam and Low-Quality Links

    That said, there’s plenty of debate about whether Google truly ignores all low-quality links, as there’s evidence that low-quality links—especially those Google might see as manipulative—may actually hurt you.

    12. Link Echos: The Influence Of A Link May Persist Even After It Disappears

    Link Echos (a.k.a. Link Ghosts) describe the phenomenon where the ranking impact of a link often appears to persist, even long after the link is gone.

    Rand has performed several experiments on this, and the reverberation effect of links is incredibly persistent, even months after the links have dropped from the web and Google has recrawled and indexed those pages several times.

    Speculation as to why this happens includes: Google looking at other ranking factors once the page has climbed in rankings (e.g. user engagement), Google assigning persistence or degradation to link value that isn’t wholly dependent on its existence on the page, or factors we can’t quite recognize.

    Link Echos: The Influence Of A Link May Persist Even After It Disappears

    Whatever the root cause, the value of a link can have a reverberating, ethereal quality that exists separately from its HTML roots.

    As a counterpoint, Neil Patel recently ran an experiment where rankings dropped after low-authority sites lost a large number of links all at once, so it appears possible to overcome this phenomenon under the right circumstances.

    13. Sites Linking Out to Authoritative Content May Count More Than Those That Do Not

    While Google claims that linking out to quality sites isn’t an explicit ranking factor, they’ve also made statements in the past that it can impact your search performance.

    “In the same way that Google trusts sites less when they link to spammy sites or bad neighborhoods, parts of our system encourage links to good sites.” – Matt Cutts

    Sites Linking Out to Authoritative Content May Count More Than Those That Do Not

    Furthermore, multiple SEO experiments and anecdotal evidence over the years suggest that linking out to relevant, authoritative sites can result in a net positive effect on rankings and visibility.

    14. Pages That Link To Spam May Devalue The Other Links They Host

    If we take the quote above and focus specifically on the first part, we understand that Google trusts sites less when they link to spam.

    This concept can be extended further, as there’s ample evidence of Google demoting sites it believes to be hosting paid links, or part of a private blog network.

    Pages That Link To Spam May Devalue The Other Links They Host

    Basic advice: when it’s relevant and helpful for your audience, link to authoritative sites (and avoid linking to bad ones).

    15. Nofollowed Links Aren’t Followed, But May Have Value In Some Cases

    Google invented the nofollow link specifically because many webmasters found it hard to prevent spammy, outbound links on their sites – especially those generated by comment spam and UGC.

    A common belief is that nofollow links don’t count at all, but Google’s own language leaves some wriggle room. They don’t follow them absolutely, but “in general” and only “essentially” drop the links from their web graph.

    Nofollowed Links Aren't Followed, But May Have Value In Some Cases

    That said, numerous SEO experiments and correlation data all suggest that nofollow links can have some value, and webmasters would be wise to maximize their value.

    16. Many JavaScript Links Pass Value, But Only If Google Renders Them

    In the old days of SEO, it was common practice to “hide” links using JavaScript, knowing Google couldn’t crawl them.

    Today, Google has gotten significantly better at crawling and rendering JavaScript, and most JavaScript links will now count.

    Many JavaScript Links Pass Value, But Only If Google Renders Them

    That said, Google still may not crawl or index every JavaScript link. For one, they need extra time and effort to render the JavaScript, and not every site delivers compatible code. Furthermore, Google only considers full links with an anchor tag and href attribute.
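    As a rough illustration (URLs and handler are made up), only the first pattern below is a full link in the sense described, while the second offers Google no anchor tag or href to follow:

        <!-- Crawlable: a real anchor element with an href attribute -->
        <a href="https://example.com/page">Crawlable link</a>

        <!-- Not a link to Google: navigation happens purely in JavaScript -->
        <span onclick="window.location='https://example.com/page'">Pseudo-link</span>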

    17. If A Page Links To The Same URL More Than Once, The First Link Has Priority

    … Or more specifically, only the first anchor text counts.

    If Google crawls a page with two or more links pointing to the same URL, they have explained that while PageRank flows normally through both, they will only use the first anchor text for ranking purposes.

    This scenario often comes into play when your sitewide navigation links to an important page, and you also link to it within an article below.
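    A simplified, hypothetical sketch of that scenario (URLs and anchor text are placeholders); per the principle above, only the first anchor text, “Widgets,” would be counted, even though PageRank flows through both links:

        <!-- Sitewide navigation: this anchor text is the one Google uses -->
        <nav><a href="/widgets">Widgets</a></nav>

        <!-- Same URL linked again in the article body: anchor text ignored -->
        <p>See our guide to <a href="/widgets">industrial-grade widgets</a>.</p>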

    If A Page Links To The Same URL More Than Once, The First Link Has Priority

    Through testing, folks have discovered a number of clever ways to bypass the First Link Priority rule, but newer studies haven’t been published for several years.

    18. Robots.txt and Meta Robots May Impact How and Whether Links Are Seen

    Seems obvious, but in order for Google to weigh a link in its ranking algorithm, it has to be able to crawl and follow it. Unsurprisingly, there are a number of site- and page-level directives which can get in Google’s way. These include:

    • The URL is blocked from crawling by robots.txt
    • Robots meta tag or X-Robots-Tag HTTP header use the “nofollow” directive
    • The page is set to “noindex, follow” but Google eventually stops crawling
    Robots.txt and Meta Robots May Impact How and Whether Links Are Seen

    Often Google will include a URL in its search results if other pages link to it, even if that page is blocked by robots.txt. But because Google can’t actually crawl the page, any links on the page are virtually invisible.
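    As a quick sketch of the page-level versions of the directives above (robots.txt operates at the site level and isn’t shown), the meta robots tags would look like this:

        <!-- Tells Google not to follow any links on this page -->
        <meta name="robots" content="nofollow">

        <!-- Indexing blocked, links followed (until Google eventually stops crawling, as noted above) -->
        <meta name="robots" content="noindex, follow">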

    19. Disavowed Links Don’t Pass Value (Typically)

    If you’ve built some shady links, or been hit by a penalty, you can use Google’s disavow tool to help wipe away your sins.

    When you disavow links, Google effectively removes those backlinks from consideration when it crawls the web.

    Disavowed Links Don’t Pass Value (Typically)

    On the other hand, if Google thinks you’ve made a mistake with your disavow file, they may choose to ignore it entirely – probably to prevent you from self-inflicted harm.

    20. Unlinked Mentions May Associate Data or Authority With A Website

    Google may connect data about entities (concepts like a business, a person, a work of art, etc.) without the presence of HTML links, much as it does with local business citations, or the way it associates data with a brand, a movie, or a notable person.

    In this fashion, unlinked mentions may still associate data or authority with a website or a set of information—even when no link is present.

    Unlinked Mentions May Associate Data or Authority With A Website

    Bill Slawski has written extensively about entities in search (a few examples here, here, and here). It’s a heady subject, but suffice to say Google doesn’t always need links to associate data and websites together, and strong entity associations may help a site to rank.

    Below, you’ll find all twenty principles combined into a single graphic. If you’d like to print or embed the image, click here for a higher-res version.

    Please credit Moz when using any of these images.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Moz Blog

    Posted in IM News | Comments Off

    How Bad Was Google’s Deindexing Bug?

    Posted by Dr-Pete

    On Friday, April 5, after many website owners and SEOs reported pages falling out of rankings, Google confirmed a bug that was causing pages to be deindexed:

    MozCast showed a multi-day increase in temperatures, including a 105° spike on April 6. While deindexing would naturally cause ranking flux, as pages temporarily fell out of rankings and then reappeared, SERP-monitoring tools aren’t designed to separate the different causes of flux.

    Can we isolate deindexing flux?

    Google’s own tools can help us check whether a page is indexed, but doing this at scale is difficult, and once an event has passed, we no longer have good access to historical data. What if we could isolate a set of URLs, though, that we could reasonably expect to be stable over time? Could we use that set to detect unusual patterns?

    Across the month of February, the MozCast 10K daily tracking set had 149,043 unique URLs ranking on page one. I reduced that to a subset of URLs with the following properties:

    1. They appeared on page one every day in February (28 total times)
    2. The query did not have sitelinks (i.e. no clear dominant intent)
    3. The URL ranked at position #5 or better

    Since MozCast only tracks page one, I wanted to reduce noise from a URL “falling off” from, say, position #9 to #11. Using these qualifiers, I was left with a set of 23,237 “stable” URLs. So, how did those URLs perform over time?

    Here’s the historical data from February 28, 2019 through April 10. This graph is the percentage of the 23,237 stable URLs that appeared in MozCast SERPs:

    Since all of the URLs in the set were stable throughout February, we expect 100% of them to appear on February 28 (which the graph bears out). The change over time isn’t dramatic, but what we see is a steady drop-off of URLs (a natural occurrence of changing SERPs over time), with a distinct drop on Friday, April 5th, a recovery, and then a similar drop on Sunday, April 7th.

    Could you zoom in for us old folks?

    Having just switched to multifocal contacts, I feel your pain. Let’s zoom that Y-axis a bit (I wanted to show you the unvarnished truth first) and add a trendline. Here’s that zoomed-in graph:

    

    The trend-line is in purple. The departure from trend on April 5th and 7th is pretty easy to see in the zoomed-in version. The day-over-day drop on April 5th was 4.0%, followed by a recovery, and then a second, very similar, 4.4% drop.

    Note that this metric moved very little during March’s algorithm flux, including the March “core” update. We can’t prove definitively that the stable URL drop cleanly represents deindexing, but it appears to not be impacted much by typical Google algorithm updates.

    What about dominant intent?

    I purposely removed queries with expanded sitelinks from the analysis, since those are highly correlated with dominant intent. I hypothesized that dominant intent might mask some of the effects, as Google is highly invested in surfacing specific sites for those queries. Here’s the same analysis just for the queries with expanded sitelinks (this yielded a smaller set of 5,064 stable URLs):

    Other than minor variations, the pattern for dominant-intent URLs appears to be very similar to the previous analysis. It appears that the impact of deindexing was widespread.

    Was it random or systematic?

    It’s difficult to determine whether this bug was random, affecting all sites somewhat equally, or was systematic in some way. It’s possible that restricting our analysis to “stable” URLs is skewing the results. On the other hand, trying to measure the instability of inherently-unstable URLs is a bit nonsensical. I should also note that the MozCast data set is skewed toward so-called “head” terms. It doesn’t contain many queries in the very-long tail, including natural-language questions.

    One question we can answer is whether large sites were impacted by the bug. The graph below isolates our “Big 3” in MozCast: Wikipedia, Amazon, and Facebook. This reduced us to 2,454 stable URLs. Unfortunately, the deeper we dive, the smaller the data set gets:

     

    At the same 90–100% zoomed-in scale, you can see that the impact was smaller than across all stable URLs, but there’s still a clear pair of April 5th and April 7th dips. It doesn’t appear that these mega-sites were immune.

    Looking at the day-over-day data from April 4th to 5th, it appears that the losses were widely distributed across many domains. Of domains that had 10-or-more stable URLs on April 4th, roughly half saw some loss of ranking URLs. The only domains that experienced 100% day-over-day loss were those that had 3-or-fewer stable URLs in our data set. It does not appear from our data that deindexing systematically targeted specific sites.

    Is this over, and what’s next?

    As one of my favorite movie quotes says: “There are no happy endings because nothing ever ends.” For now, indexing rates appear to have returned to normal, and I suspect that the worst is over, but I can’t predict the future. If you suspect your URLs have been deindexed, it’s worth manually requesting reindexing via Google Search Console. Note that this is a fairly tedious process, and there are daily limits in place, so focus on critical pages.

    The impact of the deindexing bug does appear to be measurable, although we can argue about how “big” 4% is. For something as consequential as sites falling out of Google rankings, 4% is quite a bit, but the long-term impact for most sites should be minimal. For now, there’s not much we can do to adapt — Google is telling us that this was a true bug and not a deliberate change.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Moz Blog

    Posted in IM News | Comments Off

    Exploring Google’s New Carousel Featured Snippet

    Posted by TheMozTeam

    Google let it be known earlier this year that snippets were a-changin’. And true to their word, we’ve seen them make two major updates to the feature — all in an attempt to answer more of your questions.

    We first took you on a deep dive of double featured snippets, and now we’re taking you for a ride on the carousel snippet. We’ll explore how it behaves in the wild and which of its snippets you can win.

    For your safety, please remain seated and keep your hands, arms, feet, and legs inside the vehicle at all times!

    What a carousel snippet is and how it works

    This particular snippet holds the answers to many different questions and, as the name suggests, employs carousel-like behaviour in order to surface them all.

    When you click one of the “IQ-bubbles” that run along the bottom of the snippet, JavaScript takes over and replaces the initial “parent” snippet with one that answers a brand new query. This query is a combination of your original search term and the text of the IQ-bubble.

    So, if you searched [savings account rates] and clicked the “capital one” IQ-bubble, you’d be looking at a snippet for “savings account rates capital one.” That said, 72.06 percent of the time, natural language processing will step in here and produce something more sensible, like “capital one savings account rates.”

    On the new snippet, the IQ-bubbles sit at the top, making room for the “Search for” link at the bottom. The link is the bubble snippet’s query and, when clicked, becomes the search query of a whole new SERP — a bit of fun borrowed from the “People also ask” box.

    You can blame the ludicrous “IQ-bubble” name on Google — it’s the class tag they gave these elements in the SERP HTML. We have heard them referred to as “refinement” bubbles or “related search” bubbles, but we don’t like either because we’ve seen them both refine and relate. IQ-bubble it is.

    There are now 6 times the number of snippets on a SERP

    Back in April, we sifted through every SERP in STAT to see just how large the initial carousel rollout was. Turns out, it made a decent-sized first impression.

    Appearing only in America, we discovered 40,977 desktop and mobile SERPs with carousel snippets, which makes up a hair over 9 percent of the US-en market. When we peeked again at the beginning of August, carousel snippets had grown by half but still had yet to reach non-US markets.

    Since one IQ-bubble equals one snippet, we deemed it essential to count every single bubble we saw. All told, there were a dizzying 224,508 IQ-bubbles on our SERPs. This means that 41,000 keywords managed to produce over 220,000 extra featured snippets. We’ll give you a minute to pick your jaw up off the floor.

    The lowest and most common number of bubbles we saw on a carousel snippet was three, and the highest was 10. The average number of bubbles per carousel snippet was 5.48 — an IQ of five if you round to the nearest whole bubble (they’re not that smart).

    Depending on whether you’re a glass-half-full or a glass-half-empty kind of person, this either makes for a lot of opportunity or a lot of competition, right at the top of the SERP.

    Most bubble-snippet URLs are nowhere else on the SERP

    When we’ve looked at “normal” snippets in the past, we’ve always been able to find the organic results that they’ve been sourced from. This wasn’t the case with carousel snippets — we could only find 10.76 percent of IQ-bubble URLs on the 100-result SERP. This left 89.24 percent unaccounted for, which is a metric heck-tonne of new results to contend with.

    Concerned about the potential competitor implications of this, we decided to take a gander at ownership at the domain level.

    Turns out things weren’t so bad. 63.05 percent of bubble snippets had come from sites that were already competing on the SERP — Google was just serving more varied content from them. It does mean, though, that there was a brand new competitor jumping onto the SERP 36.95 percent of the time. Which isn’t great.

    Just remember: these new pages or competitors aren’t there to answer the original search query. Sometimes you’ll be able to expand your content in order to tackle those new topics and snag a bubble snippet, and sometimes they’ll be beyond your reach.

    So, when IQ-bubble snippets do bother to source from the same SERP, what ranks do they prefer? Here we saw another big departure from what we’re used to.

    Normally, 97.88 percent of snippets source from the first page, and 29.90 percent typically pull from rank three alone. With bubble snippets, only 36.58 percent of their URLs came from the top 10 ranks. And while the most popular rank position that bubble snippets pulled from was on the first page (also rank three), just under five percent of them did this.

    We could apply the always helpful “just rank higher” rule here, but there appears to be plenty of exceptions to it. A top 10 spot just isn’t as essential to landing a bubble snippet as it is for a regular snippet.

    We think this is due to relevancy: Because bubble snippet queries only relate to the original search term — they’re not attempting to answer it directly — it makes sense that their organic URLs wouldn’t rank particularly high on the SERP.

    Multi-answer ownership is possible

    Next we asked ourselves, can you own more than one answer on a carousel snippet? And the answer was a resounding: you most definitely can.

    First we discovered that you can own both the parent snippet and a bubble snippet. We saw this occur on 16.71 percent of our carousel snippets.

    Then we found that owning multiple bubbles is also a thing that can happen. Just over half (57.37 percent) of our carousel snippets had two or more IQ-bubbles that sourced from the same domain. And as many as 2.62 percent had a domain that owned every bubble present — and most of those were 10-bubble snippets!

    Folks, it’s even possible for a single URL to own more than one IQ-bubble snippet, and it’s less rare than we’d have thought — 4.74 percent of bubble snippets in a carousel share a URL with a neighboring bubble.

    This begs the same obvious question that finding two snippets on the SERP did: Is your content ready to pull multi-snippet duty?

    “Search for” links don’t tend to surface the same snippet on the new SERP

    Since bubble snippets are technically providing answers to questions different from the original search term, we looked into what shows up when the bubble query is the keyword being searched.

    Specifically, we wanted to see if, when we click the “Search for” link in a bubble snippet, the subsequent SERP 1) had a featured snippet and 2) had a featured snippet that matched the bubble snippet from whence it came.

    To do this, we re-tracked our 40,977 SERPs and then tracked their 224,508 bubble “Search for” terms to ensure everything was happening at the same time.

    The answers to our two pressing questions were thus:

    1. Strange, but true, even though the bubble query was snippet-worthy on the first, related SERP, it wasn’t always snippet-worthy on its own SERP. 18.72 percent of “Search for” links didn’t produce a featured snippet on the new SERP.
    2. Stranger still, 78.11 percent of the time, the bubble snippet and its snippet on the subsequent SERP weren’t a match — Google surfaced two different answers for the same question. In fact, the bubble URL only showed up in the top 20 results on the new SERP 31.68 percent of the time.

    If we’re being honest, we’re not exactly sure what to make of all this. If you own the bubble snippet but not the snippet on the subsequent SERP, you’re clearly on Google’s radar for that keyword — but does that mean you’re next in line for full snippet status?

    And if the roles are reversed, you own the snippet for the keyword outright but not when it’s in a bubble, is your snippet in jeopardy? Let us know what you think!

    Paragraph and list formatting reign supreme (still!)

    Last, and somewhat least, we took a look at the shape all these snippets were turning up in.

    When it comes to the parent snippet, Heavens to Betsy if we weren’t surprised. For the first time ever, we saw an almost even split between paragraph and list formatting. Bubble snippets, on the other hand, went on to match the trend we’re used to seeing in regular ol’ snippets:

    We also discovered that bubble snippets aren’t beholden to one type of formatting even in their carousel. 32.21 percent of our carousel snippets did return bubbles with one format, but 59.71 percent had two and 8.09 percent had all three. This tells us that it’s best to pick the most natural format for your content.

    Get cracking with carousel snippet tracking

    If you can’t wait to get your mittens on carousel snippets, we track them in STAT, so you’ll know every keyword they appear for and have every URL housed within.

    If you’d like to learn more about SERP feature tracking and strategizing, say hello and request a demo!


    This article was originally published on the STAT blog on September 13, 2018.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Moz Blog

    Posted in IM News | Comments Off

    Google’s John Mueller: “Why Not Put A Date On It”

    As you all know, I am a huge fan of dates on articles and blog content. I even suggested Google should penalize pages that do not put dates on articles. Google has not gone that far, yet – but John Mueller seems to be a fan of dates on articles as well.


    Search Engine Roundtable

    Posted in IM News | Comments Off

    Google’s August 1st Core Update: Week 1

    Posted by Dr-Pete

    On August 1, Google (via Danny Sullivan’s @searchliaison account) announced that they released a “broad core algorithm update.” Algorithm trackers and webmaster chatter confirmed multiple days of heavy ranking flux, including our own MozCast system:

    Temperatures peaked on August 1-2 (both around 114°F), with a 4-day period of sustained rankings flux (purple bars are all over 100°F). While this has settled somewhat, yesterday’s data suggests that we may not be done.

    August 2nd set a 2018 record for MozCast at 114.4°F. Keep in mind that, while MozCast was originally tuned to an average temperature of 70°F, 2017-2018 average temperatures have been much higher (closer to 90° in 2018).

    Temperatures by Vertical

    There’s been speculation that this algo update targeted so-called YMYL queries (Your Money or Your Life) and disproportionately impacted health and wellness sites. MozCast is broken up into 20 keyword categories (roughly corresponding to Google Ads categories). Here are the August 2nd temperatures by category:

    At first glance, the “Health” category does appear to be the most impacted. Keywords in that category had a daily average temperature of 124°F. Note, though, that all categories showed temperatures over 100°F on August 1st – this isn’t a situation where one category was blasted and the rest were left untouched. It’s also important to note that this pattern shifted during the other three days of heavy flux, with other categories showing higher average temperatures. The multi-day update impacted a wide range of verticals.

    Top 30 winners

    So, who were the big winners (so far) of this update? I always hesitate to do a winners/losers analysis – while useful, especially for spotting patterns, there are plenty of pitfalls. First and foremost, a site can gain or lose SERP share for many reasons that have nothing to do with algorithm updates. Second, any winners/losers analysis is only a snapshot in time (and often just one day).

    Since we know that this update spanned multiple days, I’ve decided to look at the percentage increase (or decrease) in SERP share between July 31st and August 7th. In this analysis, “Share” is a raw percentage of page-1 rankings in the MozCast 10K data set. I’ve limited this analysis to only sites that had at least 25 rankings across our data set on July 31 (below that the data gets very noisy). Here are the top 30…

    The first column is the percentage increase across the 7 days. The final column is the overall share – this is very low for all but mega-sites (Wikipedia hovers in the colossal 5% range).

    Before you over-analyze, note the second column – this is the percent change from the highest July SERP share for that site. What the 7-day share doesn’t tell us is whether the site is naturally volatile. Look at Time.com (#27) for a stark example. Time Magazine saw a +19.5% lift over the 7 days, which sounds great, except that they landed on a final share that was down 54.4% from their highest point in July. As a news site, Time’s rankings are naturally volatile, and it’s unclear whether this has much to do with the algorithm update.

    Similarly, LinkedIn, AMC Theaters, OpenTable, World Market, MapQuest, and RE/MAX all show highs in July that were near or above their August 7th peaks. Take their gains with a grain of salt.

    Top 30 losers

    We can run the same analysis for the sites that lost the most ground. In this case, the “Max %” is calculated against the July low. Again, we want to be mindful of any site where the 7-day drop looks a lot different than the drop from that site’s July low-point…

    Comparing the first two columns, Verywell Health immediately stands out. While the site ended the 7-day period down 52.3%, it was up just over 200% from July lows. It turns out that this site was sitting very low during the first week of July and then saw a jump in SERP share. Interestingly, Verywell Family and Verywell Fit also appear on our top 30 losers list, suggesting that there’s a deeper story here.

    Anecdotally, it’s easy to spot a pattern of health and wellness sites in this list, including big players like Prevention and LIVESTRONG. Whether this list represents the entire world of sites hit by the algorithm update is impossible to say, but our data certainly seems to echo what others are seeing.

    Are you what you E-A-T?

    There’s been some speculation that this update is connected to Google’s recent changes to their Quality Rater Guidelines. While it’s very unlikely that manual ratings based on the new guidelines would drive major ranking shifts (especially so quickly), it’s entirely plausible that the guideline updates and this algorithm update share a common philosophical view of quality and Google’s latest thinking on the subject.

    Marie Haynes’ post theorizing the YMYL connection also raises the idea that Google may be looking more closely at E-A-T signals (Expertise, Authoritativeness and Trust). While certainly an interesting theory, I can’t adequately address that question with this data set. Declines in sites like Fortune, IGN and Android Central pose some interesting questions about authoritativeness and trust outside of the health and wellness vertical, but I hesitate to speculate based only on a handful of outliers.

    If your site has been impacted in a material way (including significant traffic gains or drops), I’d love to hear more details in the comments section. If you’ve taken losses, try to isolate whether those losses are tied to specific keywords, keyword groups, or pages/content. For now, I’d advise that this update could still be rolling out or being tweaked, and we all need to keep our eyes open.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Moz Blog

    Posted in IM News | Comments Off

    Google’s John Mueller Shares His SEO Related Podcast List

    A Reddit thread asks folks to share their favorite SEO related podcasts. I spotted John Mueller of Google share his list of his favorite SEO podcasts as well…


    Search Engine Roundtable

    Posted in IM News | Comments Off

    Google’s Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?

    Posted by Dr-Pete

    Early search engines were built on an unspoken transaction — a pact between search engines and website owners — you give us your data, and we’ll send you traffic. While Google changed the game of how search engines rank content, they honored the same pact in the beginning. Publishers, who owned their own content and traditionally were fueled by subscription revenue, operated differently. Over time, they built walls around their gardens to keep visitors in and, hopefully, keep them paying.

    Over the past six years, Google has crossed this divide, building walls around their content and no longer linking out to the sources that content was originally built on. Is this the inevitable evolution of search, or has Google forgotten their pact with the people whose backyards their garden was built on?

    I don’t think there’s an easy answer to this question, but the evolution itself is undeniable. I’m going to take you through an exhaustive (yes, you may need a sandwich) journey of the ways that Google is building in-search experiences, from answer boxes to custom portals, and rerouting paths back to their own garden.


    I. The Knowledge Graph

    In May of 2012, Google launched the Knowledge Graph. This was Google’s first large-scale attempt at providing direct answers in search results, using structured data from trusted sources. One incarnation of the Knowledge Graph is Knowledge Panels, which return rich information about known entities. Here’s part of one for actor Chiwetel Ejiofor (note: this image is truncated)…

    The Knowledge Graph marked two very important shifts. First, Google created deep in-search experiences. As Knowledge Panels have evolved, searchers have access to rich information and answers without ever going to an external site. Second, Google started to aggressively link back to their own resources. It’s easy to overlook those faded blue links, but here’s the full Knowledge Panel with every link back to a Google property marked…

    Including links to Google Images, that’s 33 different links back to Google. These two changes — self-contained in-search experiences and aggressive internal linking — represent a radical shift in the nature of search engines, and that shift has continued and expanded over the past six years.

    More recently, Google added a sharing icon (on the right, directly below the top images). This provides a custom link that allows people to directly share rich Google search results as content on Facebook, Twitter, Google+, and by email. Google no longer views these pages as a path to a destination. Search results are the destination.

    The Knowledge Graph also spawned Knowledge Cards, more broadly known as “answer boxes.” Take any fact in the panel above and pose it as a question, and you’re likely to get a Knowledge Card. For example, “How old is Chiwetel Ejiofor?” returns the following…

    For many searchers, this will be the end of their journey. Google has answered their question and created a self-contained experience. Note that this example also contains links to additional Google searches.

    In 2015, Google launched Medical Knowledge Panels. These gradually evolved into fully customized content experiences created with partners in the medical field. Here’s one for “cardiac arrest” (truncated)…

    Note the fully customized design (these images were created specifically for these panels), as well as the multi-tabbed experience. It is now possible to have a complete, customized content experience without ever leaving Google.


    II. Live Results

    In some specialized cases, Google uses private data partnerships to create customized answer boxes. Google calls these “Live Results.” You’ve probably seen them many times now on weather, sports and stock market searches. Here’s one for “Seattle weather”…

    For the casual information seeker, these are self-contained information experiences with most or all of what we care about. Live Results are somewhat unique in that, unlike the general knowledge in the Knowledge Graph, each partnership represents a disruption to an industry.

    These partnerships have branched out over time into even more specialized results. Consider, for example, “Snoqualmie ski conditions”…

    Sports results are incredibly disruptive, and Google has expanded and enriched these results quite a bit over the past couple of years. Here’s one for “Super Bowl 2018”…

    Note that clicking any portion of this Live Result leads to a customized portal on Google that can no longer be called a “search result” in any traditional sense (more on portals later). Special sporting events, such as the 2018 Winter Olympics, have even more rich features. Here are some custom carousels for “Olympic snowboarding results”…

    Note that these are multi-column carousels that ultimately lead to dozens of smaller cards. All of these cards click to more Google search results. This design choice may look strange on desktop and marks another trend — Google’s shift to mobile-first design. Here’s the same set of results on a Google Pixel phone…

    Here, the horizontal scrolling feels more intuitive, and the carousel is the full-width of the screen, instead of feeling like a free-floating design element. These features are not only rich experiences on mobile screens, but dominate mobile results much more than they do two-column desktop results.


    III. Carousels

    Speaking of carousels, Google has been experimenting with a variety of horizontal result formats, and many of them are built around driving traffic back to Google searches and properties. One of the older styles of carousels is the list format, which runs across the top of desktop searches (above other results). Here’s one for “Seattle Sounders roster”…

    Each player links to a new search result with that player in a Knowledge Panel. This carousel expands to the width of the screen (which is unusual, since Google’s core desktop design is fixed-width). On my 1920×1080 screen, you can see 14 players, each linking to a new Google search, and the option to scroll for more…

    This type of list carousel covers a wide range of topics, from “cat breeds” to “types of cheese.” Here’s an interesting one for “best movies of 1984.” The image is truncated, but the full result includes drop-downs to select movie genres and other years…

    Once again, each result links to a new search with a Knowledge Panel dedicated to that movie. Another style of carousel is the multi-row horizontal scroller, like this one for “songs by Nirvana”…

    In this case, not only does each entry click to a new search result, but many of them have prominent featured videos at the top of the left column (more on that later). My screen shows at least partial information for 24 songs, all representing in-Google links above the traditional search results…

    A search for “laptops” (a very competitive, commercial term, unlike the informational searches above) has a number of interesting features. At the bottom of the search is this “Refine by brand” carousel…

    Clicking on one of these results leads to a new search with the brand name prepended (e.g. “Apple laptops”). The same search shows this “Best of” carousel…

    The smaller “Mentioned in:” links go to articles from the listed publishers. The main product links go to a Google search result with a product panel. Here’s what I see when I click on “Dell XPS 13 9350” (image is truncated)…

    This entity lives in the right-hand column and looks like a Knowledge Panel, but it is commercial in nature (notice the “Sponsored” label in the upper right). Here, Google is driving searchers directly into a paid/advertising channel.


    IV. Answers & Questions

    As Google realized that the Knowledge Graph would never scale at the pace of the wider web, they started to extract answers directly from their index (i.e. all of the content in the world, or at least most of it). This led to what they call “Featured Snippets”, a special kind of answer box. Here’s one for “Can hamsters eat cheese?” (yes, I have a lot of cheese-related questions)…

    Featured Snippets are an interesting hybrid. On the one hand, they’re an in-search experience (in this case, my basic question has been answered before I’ve even left Google). On the other hand, they do link out to the source site and are a form of organic search result.

    Featured Snippets also power answers on Google Assistant and Google Home. If I ask Google Home the same question about hamsters, I hear the following:

    On the website TheHamsterHouse.com, they say “Yes, hamsters can eat cheese! Cheese should not be a significant part of your hamster’s diet and you should not feed cheese to your hamster too often. However, feeding cheese to your hamster as a treat, perhaps once per week in small quantities, should be fine.”

    You’ll see the answer is identical to the Featured Snippet shown above. Note the attribution to TheHamsterHouse.com at the start of the answer — a voice search can’t link back to the source, which poses unique challenges. Google does attempt to provide attribution on Google Home, but as they use answers extracted from the web more broadly, we may see the way original sources are credited change depending on the use case and device.

    This broader answer engine powers another type of result, called “Related Questions” or the “People Also Ask” box. Here’s one on that same search…

    These questions are at least partially machine-generated, which is why the grammar can read a little oddly — that’s a fascinating topic for another time. If you click on “What can hamsters eat list?” you get what looks a lot like a Featured Snippet (and links to an outside source)…

    Notice two other things that are going on here. First, Google has included a link to search results for the question you clicked on (see the purple arrow). Second, the list has expanded. The two questions at the end are new. Let’s click “What do hamsters like to do for fun?” (because how can I resist?)…

    This opens up a second answer, a second link to a new Google search, and two more answers. You can continue this to your heart’s content. What’s especially interesting is that this isn’t just some static list that expands as you click on it. The new questions are generated based on your interactions, as Google tries to understand your intent and shape your journey around it.

    My colleague, Britney Muller, has done some excellent research on the subject and has taken to calling these infinite PAAs. They’re probably not quite infinite — eventually, the sun will explode and consume the Earth. Until then, they do represent a massively recursive in-Google experience.


    V. Videos & Movies

    One particularly interesting type of Featured Snippet is the Featured Video result. Search for “umbrella” and you should see a panel like this in the top-left column (truncated):

    This is a unique hybrid — it has Knowledge Panel features (that link back to Google results), but it also has an organic-style link and a large video thumbnail. While it appears organic, all of the Featured Videos we’ve seen in the wild have come from YouTube (Vevo is a YouTube partner), which essentially means this is an in-Google experience. These Featured Videos consume a lot of screen real estate and appear even on commercial terms, like Rihanna’s “umbrella” (shown here) or Kendrick Lamar’s “swimming pools”.

    Movie searches yield a rich array of features, from Live Results for local showtimes to rich Knowledge Panels. Last year, Google completely redesigned their mobile experience for movie results, creating a deep in-search experience. Here’s a mobile panel for “Black Panther”…

    Notice the tabs below the title. You can navigate within this panel to a wealth of information, including cast members and photos. Clicking on any cast member goes to a new search about that actor/actress.

    Although the search results eventually continue below this panel, the experience is rich, self-contained, and incredibly disruptive to high-ranking powerhouses in this space, including IMDB. You can even view trailers from the panel…

    On my phone, Google displayed 10 videos (at roughly two per screen), and nine of those were links to YouTube. Given YouTube’s dominance, it’s difficult to say if Google is purposely favoring their own properties, but the end result is the same — even seemingly “external” clicks are often still Google-owned clicks.


    VI. Local Results

    A similar evolution has been happening in local results. Take the local 3-pack — here’s one on a search for “Seattle movie theaters”…

    Originally, the individual business links went directly to each of those businesses’ websites. Over the past year or two, these have changed to point to local panels on Google Maps, like this one…

    On mobile, these local panels stand out even more, with prominent photos, tabbed navigation and easy access to click-to-call and directions.
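    For context on where this panel data comes from: the name, address, phone number, and hours shown in local panels are fed primarily by Google My Business listings, and many site owners reinforce the same details on their own pages with schema.org LocalBusiness markup. Here’s a minimal sketch (in Python, with hypothetical values) of the kind of JSON-LD payload that markup serializes to:

        import json

        # Minimal sketch of schema.org LocalBusiness data. The local panels themselves
        # are primarily fed by Google My Business listings; on-page markup like this
        # simply reinforces the same details. All values below are hypothetical.
        local_business = {
            "@context": "https://schema.org",
            "@type": "MovieTheater",  # a schema.org subtype of LocalBusiness
            "name": "Example Cinema",
            "telephone": "+1-206-555-0100",
            "address": {
                "@type": "PostalAddress",
                "streetAddress": "123 Example Ave",
                "addressLocality": "Seattle",
                "addressRegion": "WA",
                "postalCode": "98101",
                "addressCountry": "US",
            },
            "openingHours": ["Mo-Su 11:00-23:00"],
        }

        # This is the JSON-LD a page would embed in a <script type="application/ld+json"> tag.
        print(json.dumps(local_business, indent=2))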

    In certain industries, local packs have additional options to run a search within a search. Here’s a pack for Chicago taco restaurants, where you can filter results (from the broader set of Google Maps results) by rating, price, or hours…

    Once again, we have a fully embedded search experience. I don’t usually vouch for any of the businesses in my screenshots, but I just had the pork belly al pastor at Broken English Taco Pub and it was amazing (this is my personal opinion and in no way reflects the taco preferences of Moz, its employees, or its lawyers).

    The hospitality industry has been similarly affected. Search for an individual hotel, like “Kimpton Alexis Seattle” (one of my usual haunts when visiting the home office), and you’ll get a local panel like the one below. Pardon the long image, but I wanted you to have the full effect…

    This is an incredible blend of local business result, informational panel, and commercial result, allowing you direct access to booking information. It’s not just organic local results that have changed, though. Recently, Google started offering ads in local packs, primarily on mobile results. Here’s one for “tax attorneys”…

    Unlike traditional AdWords ads, these results don’t go directly to the advertiser’s website. Instead, like standard pack results, they go to a Google local panel. Here’s what the mobile version looks like…

    In addition, Google has launched specialized ads for local service providers, such as plumbers and electricians. These appear carousel-style on desktop, such as this one for “plumbers in Seattle”…

    Unlike AdWords advertisers, local service providers buy into a specialized program and these local service ads click to a fully customized Google sub-site, which brings us to the next topic — portals.


    VII. Custom Portals

    Some Google experiences have become so customized that they operate as stand-alone portals. If you click on a local service ad, you get a Google-owned portal that allows you to view the provider, check to see if they can handle your particular problem in your zip code, and (if not) view other, relevant providers…

    You’ve completely left the search result at this point, and can continue your experience fully within this Google property. These local service ads have now expanded to more than 30 US cities.

    In 2016, Google launched their own travel guides. Run a search like “things to do in Seattle” and you’ll see a carousel-style result like this one…

    Click on “Seattle travel guide” and you’ll be taken to a customized travel portal for the city of Seattle. The screen below is a desktop result — note the increasing similarity to rich mobile experiences.

    Once again, you’ve been taken to a complete Google experience outside of search results.

    Last year, Google jumped into the job-hunting game, launching a 3-pack of job listings covering all major players in this space, like this one for “marketing jobs in Seattle”…

    Click on any job listing, and you’ll be taken to a separate Google jobs portal. Let’s try Facebook…

    From here, you can view other listings, refine your search, and even save jobs and set up alerts. Once again, you’ve jumped from a specialized Google result to a completely Google-controlled experience.
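    It’s worth pausing on how listings get into this experience in the first place. Generally, providers mark up their job pages with schema.org JobPosting structured data, which Google crawls and folds into the jobs 3-pack and portal. Here’s a minimal sketch (in Python, with hypothetical values) of that kind of payload:

        import json

        # Minimal sketch of the schema.org JobPosting data that job providers typically
        # publish so their listings can surface in Google's jobs experience.
        # All values below are hypothetical, not a real listing.
        job_posting = {
            "@context": "https://schema.org/",
            "@type": "JobPosting",
            "title": "Marketing Manager",
            "description": "<p>Plan and run marketing campaigns.</p>",
            "datePosted": "2018-03-01",
            "validThrough": "2018-04-30T00:00",
            "employmentType": "FULL_TIME",
            "hiringOrganization": {
                "@type": "Organization",
                "name": "Example Co",
                "sameAs": "https://www.example.com",
            },
            "jobLocation": {
                "@type": "Place",
                "address": {
                    "@type": "PostalAddress",
                    "addressLocality": "Seattle",
                    "addressRegion": "WA",
                    "addressCountry": "US",
                },
            },
        }

        # This is the JSON-LD a job page would embed in a <script type="application/ld+json"> tag.
        print(json.dumps(job_posting, indent=2))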

    As with hotels, Google has dabbled in flight data and search for years. If I search for “flights to Seattle,” Google will automatically note my current location and offer me a search interface and a few choices…

    Click on one of these choices and you’re taken to a completely redesigned Google Flights portal…

    Once again, you can continue your journey completely within this Google-owned portal, never returning back to your original search. This is a trend we can expect to continue for the foreseeable future.


    VIII. Hard Questions

    If I’ve bludgeoned you with examples, then I apologize, but I want to make it perfectly clear that this is not a case of one or two isolated incidents. Google is systematically driving more clicks from search to new searches, in-search experiences, and other Google-owned properties. This leads to a few hard questions…

    Why is Google doing this?

    Right about now, you’re rushing to the comments section to type “For the money!” along with a bunch of other words that may include variations of my name, “sheeple,” and “dumb-ass.” Yes, Google is a for-profit company that is motivated in part by making money. Moz is a for-profit company that is motivated in part by making money. Stating the obvious isn’t insight.

    In some cases, the revenue motivation is clear. Suggesting the best laptops to searchers and linking those to shopping opportunities drives direct dollars. In traditional walled gardens, publishers are trying to produce more page-views, driving more ad impressions. Is Google driving us to more searches, in-search experiences, and portals to drive more ad clicks?

    The answer isn’t entirely clear. Knowledge Graph links, for example, usually go to informational searches with few or no ads. Rich experiences like Medical Knowledge Panels and movie results on mobile have no ads at all. Some portals have direct revenues (local service providers have to pay for inclusion), but others, like travel guides, have no apparent revenue model (at least for now).

    Google is competing directly with Facebook for hours in our day — while Google has massive traffic and ad revenue, people on average spend much more time on Facebook. Could Google be trying to drive up their time-on-site metrics? Possibly, but it’s unclear what this accomplishes beyond being a vanity metric to make investors feel good.

    Looking to the long game, keeping us on Google and within Google properties does open up the opportunity for additional advertising and new revenue streams. Maybe Google simply realizes that letting us go so easily off to other destinations is leaving future money on the table.

    Is this good for users?

    I think the most objective answer I can give is — it depends. As a daily search user, I’ve found many of these developments useful, especially on mobile. If I can get an answer at a glance or in an in-search entity, such as a Live Result for weather or sports, or the phone number and address of a local restaurant, it saves me time and spares me from having to learn the interfaces of thousands of different websites. On the other hand, if I feel that I’m being run in circles through search after search, or am being given fewer and fewer choices, that can feel manipulative and frustrating.

    Is this fair to marketers?

    Let’s be brutally honest — it doesn’t matter. Google has no obligation to us as marketers. Sites don’t deserve to rank and get traffic simply because we’ve spent time and effort or think we know all the tricks. I believe our relationship with Google can be symbiotic, but that’s a delicate balance and always in flux.

    In some cases, I do think we have to take a deep breath and think about what’s good for our customers. As a marketer, local packs linking directly to in-Google properties is alarming — we measure our success based on traffic. However, these local panels are well-designed, consistent, and have easy access to vital information like business addresses, phone numbers, and hours. If these properties drive phone calls and foot traffic, should we discount their value simply because it’s harder to measure?

    Is this fair to businesses?

    This is a more interesting question. I believe that, like other search engines before it, Google made an unwritten pact with website owners — in exchange for our information and the privilege to monetize that information, Google would send us traffic. This is not altruism on Google’s part. The vast majority of Google’s $95B in 2017 advertising revenue came from search advertising, and that advertising would have no audience without organic search results. Those results come from the collective content of the web.

    As Google replaces that content and sends more clicks back to themselves, I do believe that the fundamental pact that Google’s success was built on is gradually being broken. Google’s garden was built on our collective property, and it does feel like we’re slowly being herded out of our own backyards.

    We also have to consider the deeper question of content ownership. If Google chooses to pursue private data partnerships — such as with Live Results or the original Knowledge Graph — then they own that data, or at least are leasing it fairly. It may seem unfair that they’re displacing us, but they have the right to do so.

    Much of the Knowledge Graph is built on human-curated sources such as Wikidata (i.e. Wikipedia). While Google undoubtedly has an ironclad agreement with Wikipedia, what about the people who originally contributed and edited that content? Would they have done so knowing their content could ultimately displace other content creators (including possibly their own websites) in Google results? Are those contributors willing participants in this experiment? The question of ownership isn’t as easy as it seems.

    If Google extracts the data we provide as part of the pact, such as with Featured Snippets and People Also Ask results, and begins to wall off those portions of the garden, then we have every right to protest. Even the concept of a partnership isn’t always black-and-white. Some job listing providers I’ve spoken with privately felt pressured to enter Google’s new jobs portal (out of fear of cutting off the paths to their own gardens), but they weren’t happy to see the new walls built.

    Google is also trying to survive. Search has to evolve, and it has to answer questions and fit a rapidly changing world of device formats, from desktop to mobile to voice. I think the time has come, though, for Google to stop and think about the pact that built their nearly hundred-billion-dollar ad empire.


    Google’s Project Zero Team Exposes Microsoft Edge Bug

    Microsoft has been pretty aggressive in marketing its Edge browser and even launched two commercials earlier this year specifically pointing out its advantages over rival Chrome. After staying silent for a while, Google appears to have counterattacked by disclosing a security flaw in Edge.

    Google’s Project Zero, which found the vulnerability last November, has released the technical details of its discovery. The flaw makes it theoretically possible for hackers to bypass Edge’s security features and insert their own malicious code into a target’s computer. While that is indeed a possibility, there has been no reported instance of the flaw being successfully exploited so far.
