
How Google’s Nofollow, Sponsored, & UGC Links Impact SEO

Posted by Cyrus-Shepard

Google shook up the SEO world by announcing big changes to how publishers should mark nofollow links. The changes, while beneficial in helping Google understand the web, nonetheless caused confusion and raised a number of questions. We’ve got the answers to many of your questions here.


14 years after its introduction, Google today announced significant changes to how they treat the “nofollow” link attribute. The big points:

  1. Links can now be marked up in three ways: “nofollow”, “sponsored”, and “ugc” — each signifying a different meaning. (The fourth option, the default, is no attribute at all.)
  2. For ranking purposes, Google now treats each of the nofollow attributes as a “hint” — meaning Google may choose to ignore the directive and count the link for ranking, though in most cases it likely won’t.
  3. Google continues to ignore nofollow links for crawling and indexing purposes, but this strict behavior changes March 1, 2020, at which point Google will treat nofollow attributes as “hints” here too, meaning it may choose to crawl them.
  4. You can use the new attributes in combination with each other. For example, rel=”nofollow sponsored ugc” is valid.
  5. Paid links must use either the nofollow or sponsored attribute (alone or in combination). Simply using “ugc” on paid links could presumably lead to a penalty.
  6. Publishers don’t have to do anything. Google offers no incentive for changing, or punishment for not changing.
  7. Publishers using nofollow to control crawling may need to reconsider their strategy.

Why did Google change nofollow?

Google wants to take back the link graph.

Google introduced the nofollow attribute in 2005 as a way for publishers to address comment spam and shady links from user-generated content (UGC). Linking to spam or low-quality sites could hurt you, and nofollow offered publishers a way to protect themselves.

Google also required nofollow for paid or sponsored links. If you were caught accepting anything of value in exchange for linking out without the nofollow attribute, Google could penalize you.

The system generally worked, but huge portions of the web—sites like Forbes and Wikipedia—applied nofollow across their entire site for fear of being penalized, or not being able to properly police UGC.

This made entire portions of the link graph less useful for Google. Should curated links from trusted Wikipedia contributors really not count? Perhaps Google could better understand the web if they changed how they consider nofollow links.

By treating nofollow attributes as “hints”, they allow themselves to better incorporate these signals into their algorithms.

Hopefully, this is a positive step for deserving content creators, as a broader swath of the link graph opens up to more potential ranking influence. (Though for most sites, it doesn’t seem much will change.)

What is the ranking impact of nofollow links?

Prior to today, SEOs generally believed nofollow links worked like this:

  • Not used for crawling and indexing (Google didn’t follow them.)
  • Not used for ranking, as confirmed by Google. (Many SEOs have believed for years that this was in fact not the case)

To be fair, there’s a lot of debate and speculation around the second statement, and Google has been opaque on the issue. Experimental data and anecdotal evidence suggest Google has long considered nofollow links as a potential ranking signal.

As of today, Google’s guidance states the new link attributes—including sponsored and ugc—are treated like this:

  • Still not used for crawling and indexing (see the changes taking place in the future below)
  • For ranking purposes, all nofollow directives are now officially a “hint” — meaning Google may choose to ignore the directive and use the link for ranking purposes. Many SEOs believe this is how Google has been treating nofollow for quite some time.

Beginning March 1, 2020, these link attributes will be treated as hints across the board, meaning:

  • In some cases, they may be used for crawling and indexing
  • In some cases, they may be used for ranking

Emphasis on the word “some.” Google is very explicit that in most cases they will continue to ignore nofollow links as usual.

Do publishers need to make changes?

For most sites, the answer is no — only if they want to. Google isn’t requiring sites to make changes, and as of yet, there is no business case to be made.

That said, there are a couple of cases where site owners may want to implement the new attributes:

  1. Sites that want to help Google better understand the sites they—or their contributors—are linking to. For example, it could be to everyone’s benefit for sites like Wikipedia to adopt these changes. Or maybe Moz could change how it marks up links in the user-generated Q&A section (which often links to high-quality sources.)
  2. Sites that use nofollow for crawl control. For sites with large faceted navigation, nofollow is sometimes an effective tool for preventing Google from wasting crawl budget. It’s too early to tell whether publishers using nofollow this way will need to change anything before Google starts treating nofollow as a crawling “hint,” but it’s worth watching closely.

To be clear, if a site is properly using nofollow today, SEOs do not need to recommend any changes. Sites are free to adopt the new attributes, but they should not expect a rankings boost for doing so, or new penalties for holding off.

That said, Google’s use of these new link attributes may evolve, and it will be interesting to see in the future—through study and analysis—if a ranking benefit does emerge from using nofollow attributes in a certain way.

Which link attribute should you use?

If you choose to change your nofollow links to be more specific, Google’s guidelines are very clear, so we won’t repeat them in-depth here. In brief, your choices are:

  1. rel=”sponsored” – For paid or sponsored links. This would presumably include affiliate links, although Google hasn’t explicitly said.
  2. rel=”ugc” – Links within all user-generated content. Google has stated if UGC is created by a trusted contributor, this may not be necessary.
  3. rel=”nofollow” – A catchall for all nofollow links. As with the other nofollow directives, these links generally won’t be used for ranking, crawling, or indexing purposes.

Additionally, attributes can be used in combination with one another. This means a declaration such as rel=”nofollow sponsored” is 100% valid.
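To make the options concrete, here is what each attribute looks like in markup (the URLs and anchor text are placeholders):

```html
<!-- Paid or sponsored placement -->
<a href="https://example.com/partner" rel="sponsored">Partner offer</a>

<!-- Link inside user-generated content, e.g. a comment -->
<a href="https://example.com/source" rel="ugc">A commenter's source</a>

<!-- Generic nofollow, still valid as a catchall -->
<a href="https://example.com/page" rel="nofollow">Some page</a>

<!-- Attributes combined: a paid link inside UGC -->
<a href="https://example.com/affiliate" rel="nofollow sponsored ugc">Affiliate link</a>
```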

Can you be penalized for not marking paid links?

Yes, you can still be penalized, and this is where it gets tricky.

Google advises to mark up paid/sponsored links with either “sponsored” or “nofollow” only, but not “ugc”.

This adds an extra layer of confusion. What if your UGC contributors are including paid or affiliate links in their content/comments? Google, so far, hasn’t been clear on this.

For this reason, we will likely see publishers continue to mark up UGC content with “nofollow” as a default, or possibly “nofollow ugc”.

Can you use the nofollow attributes to control crawling and indexing?

Nofollow has always been a very, very poor way to prevent Google from indexing your content, and it continues to be that way.

If you want to prevent Google from indexing your content, it’s recommended to use one of several other methods, most typically some form of “noindex”.
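For reference, the most common form of “noindex” is a robots meta tag in the page’s head. The directive itself is standard; the snippet is just a minimal sketch:

```html
<!-- In the page's <head>: keep this page out of Google's index -->
<meta name="robots" content="noindex">
```

The same directive can also be sent for non-HTML resources as an `X-Robots-Tag: noindex` HTTP response header.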

Crawling, on the other hand, is a slightly different story. Many SEOs use nofollow on large sites to preserve crawl budget, or to prevent Google from crawling unnecessary pages within faceted navigation.

Based on Google statements, it seems you can still attempt to use nofollow in this way, but after March 1, 2020, they may choose to ignore this. Any SEO using nofollow in this way may need to get creative in order to prevent Google from crawling unwanted sections of their sites.

Final thoughts: Should you implement the new nofollow attributes?

While there is no obvious compelling reason to do so, this is a decision every SEO will have to make for themselves.

Given the initial confusion and lack of clear benefits, many publishers will undoubtedly wait until we have better information.

That said, it certainly shouldn’t hurt to make the change (as long as you mark paid links appropriately with “nofollow” or “sponsored”.) For example, the Moz Blog may someday change comment links below to rel=”ugc”, or more likely rel=”nofollow ugc”.

Finally, will anyone actually use the “sponsored” attribute, at the risk of giving more exposure to paid links? Time will tell.

What are your thoughts on Google’s new nofollow attributes? Let us know in the comments below.



Moz Blog

Posted in IM News

SMX Overtime: Your questions answered about Google penalties and their impact on websites

SMX London attendees asked SEO expert Fili Wiese a wide-ranging number of questions about penalties, indexing, crawling and more.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


How improved Google ratings impact conversions

When you perform a search on Google these days, you will often find it contains a local result. In most cases, that means a map powered by Google My Business.

In fact, during a quarterly earnings call, Google’s CEO said:

“I wouldn’t underestimate the focus we have on local. Just to give you a sense, local mobile searches are growing faster than just mobile searches overall, and have increased by almost 50% in the last year.”

When a statement like this is made it indicates two things to me:

1. Consumers are expecting more and more local results

2. Google My Business is really important for capturing traffic.

With this trend clearly in sight, I wanted to dig into some data on a key factor in consumers’ decision making: ratings and reviews. I was curious how much a strong rating impacts consumers’ selection of a business. We already know ratings and reviews are important from numerous studies in the industry.

For example, BrightLocal found that 57% of consumers will only buy from businesses with a four-plus star rating. So I took a look at some data that included over 10 million Google My Business data points to try to understand the impact that increasing a business’s rating had on its conversion rate. What I found will seem very obvious, but it certainly validates the importance of good reviews.

Ratings really matter for non-branded searches

I thought I might start with something that is the most logical. When a consumer isn’t familiar with your brand and performs a generic (aka non-branded) search, they are influenced greatly by a business’s rating. In the data set I used, businesses were found via non-branded searches 70% of the time vs. 30% via branded searches, meaning more than 2X the traffic comes from consumers who aren’t yet sure which business they are going to choose.

Once they saw the results, consumers took action on businesses with higher ratings regardless of the type of search, but the effect was stronger for non-branded searches: moving from a <=2 rating to a 5 rating lifted the rate at which consumers took action on a Google My Business result (a phone call, click, or request for directions) by 68% for non-branded searches vs. 63% for branded ones. Each star-rating improvement directly leads to an increased conversion rate.

[Chart: star ratings and non-branded search]

Source: Google My Business Insights

In our data set, 70% of businesses had a rating between 2 and 4, and just 17% were above 4. That top 17% of businesses receive almost 30 more actions per 1,000 impressions than businesses with a <=2 rating. Think about how much that adds up over time. It’s massive.
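To put that per-1,000 figure in concrete terms, here is a quick back-of-the-envelope calculation (the impression volume is an invented example; the ~30-actions lift is the figure from the data above):

```javascript
// Extra actions gained from the ~30-actions-per-1,000-impressions lift.
// monthlyImpressions is a hypothetical input, not from the data set.
function extraActions(monthlyImpressions, liftPerThousand = 30) {
  return (monthlyImpressions / 1000) * liftPerThousand;
}

const perMonth = extraActions(10000); // a listing with 10,000 impressions/month
const perYear = perMonth * 12;
console.log(perMonth, perYear); // 300 extra actions per month, 3,600 per year
```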

While the fact that a higher rating directly relates to a higher conversion rate might seem obvious, I thought I’d add a data point that isn’t as obvious, but is potentially just as valuable. Our data shows that as your rating goes up, consumers are more likely to click “get directions” vs. calling. While this doesn’t necessarily equal higher conversions directly, to me it indicates that consumers are comfortable enough to trust a well-rated listing and head straight there, whereas they call a low-rated listing first to get a sense of comfort before making any decisions. That call is a potential barrier to conversion: maybe it isn’t answered, or it ties up customer service.

[Chart: action types by rating]

Source: Google My Business Insights

The simple takeaway from this data is that ratings drive action and business. The action to be taken is twofold:

  • Google My Business is important. Ensure that your name, address, phone number, website, hours, etc. are accurate and consistent across the web. A location data management platform can often help improve quality and results.
  • Soliciting and responding to ratings and reviews will improve your ability to convert consumers. There are also software packages available to help manage ratings and reviews for your business. You don’t necessarily need one of these platforms, but, like location data management, they can help scale your marketing efforts.

We know from Google’s data and CEO that location is important. Hopefully, these data points provide some additional firepower for your business to take these listings seriously. Improving your listings in Google My Business and other location data providers will have a positive impact on your business.

Jason Tabeling is EVP, Product at Brandmuscle. He can be found on Twitter @jtabeling.

The post How improved Google ratings impact conversions appeared first on Search Engine Watch.

Search Engine Watch


The Real Impact of Mobile-First Indexing & The Importance of Fraggles

Posted by Suzzicks

While SEOs have been doubling down on content and quality signals for their websites, Google was building the foundation of a new reality for crawling, indexing, and ranking. Though many believe deep in their hearts that “Content is King,” the reality is that Mobile-First Indexing enables a new kind of search result, one that focuses on surfacing and re-publishing content in ways that feed Google’s cross-device monetization opportunities better than simple websites ever could.

For two years, Google honed and changed their messaging about Mobile-First Indexing, mostly de-emphasizing the risk that good, well-optimized, Responsive-Design sites would face. Instead, the search engine giant focused more on the use of the Smartphone bot for indexing, which led to an emphasis on the importance of matching SEO-relevant site assets between desktop and mobile versions (or renderings) of a page. Things got a bit tricky when Google had to explain that the Mobile-First Indexing process would not necessarily be bad for desktop-oriented content, but all of Google’s shifting and positioning eventually validated my long-stated belief: That Mobile-First Indexing is not really about mobile phones, per se, but mobile content.

I would like to propose an alternative to the predominant view, a speculative theory about what has been going on at Google over the past two years. It is the thesis of my 2019 MozCon talk: something we are calling Fraggles and Fraggle-based Indexing.

I’ll go through Fraggles and Fraggle-based Indexing, and how this new method of indexing has made web content more “liftable” for Google. I’ll also outline how Fraggles impact the search results pages (SERPs), and why this fits with Google’s promotion of Progressive Web Apps. Next, I will explain how astute SEOs can adapt their understanding of SEO and leverage Fraggles and Fraggle-based Indexing to meet the needs of their clients and companies. Finally, I’ll go over the implications this new method of indexing has for Google’s monetization and technology strategy as a whole.

Ready? Let’s dive in.

Fraggles & Fraggle-based indexing

The SERP has changed in many ways. These changes can be thought of and discussed separately, but I believe that they are all part of a larger shift at Google. This shift includes “Entity-First Indexing” of crawled information around the existing structure of Google’s Knowledge Graph, and the concept of “Portable-prioritized Organization of Information,” which favors information that is easy to lift and re-present in Google’s properties — Google describes these two things together as “Mobile-First Indexing.”

As SEOs, we need to remember that the web is getting bigger and bigger, which means that it’s getting harder to crawl. Users now expect Google to index and surface content instantly. But while webmasters and SEOs were building out more and more content in flat, crawlable HTML pages, the best parts of the web were moving towards more dynamic websites and web-apps. These new assets were driven by databases of information on a server, populating their information into websites with JavaScript, XML or C++, rather than flat, easily crawlable HTML. 

For many years, this was a major problem for Google, and thus, it was a problem for SEOs and webmasters. Ultimately though, it was the more complex code that forced Google to shift to this more advanced, entity-based system of indexing — something we at MobileMoxie call Fraggles and Fraggle-Based Indexing, and the credit goes to JavaScript’s “Fragments.”

Fraggles represent individual parts (fragments) of a page onto which Google overlays a “handle” or “jump-link” (aka named anchor, bookmark, etc.), so that a click on the result takes the user directly to the part of the page where the relevant fragment of text is located. These Fraggles are then organized around the relevant nodes of the Knowledge Graph, so that the mapping of relationships between different topics can be vetted, built out, and maintained over time, and so that the structure can be used and reused internationally, even if different content is ranking.

More than one Fraggle can rank for a page, and the format varies: a text link with a “Jump to” label, an unlabeled text link, a site-link carousel (with or without pictures), or occasionally horizontal or vertical expansion boxes for the different items on a page.

The most notable thing about Fraggles is the automatic scrolling behavior from the SERP. While Fraggles are often linked to content that has an HTML or JavaScript jump-link, sometimes the jump-links appear to be added by Google without being present in the code at all. This behavior is also prominently featured in AMP Featured Snippets, for which Google has the same scrolling behavior but also superimposes colored highlighting on the page to show the part that was displayed in the Featured Snippet, allowing the searcher to see it in context. I write about this more in the article: What the Heck are Fraggles.

How Fraggles & Fraggle-based indexing works with JavaScript

Google’s desire to index Native Apps and Web Apps, including single-page apps, has necessitated Google’s switch to indexing based on Fragments and Fraggles, rather than pages. In JavaScript, as well as in Native Apps, a “Fragment” is a piece of content or information that is not necessarily a full page. 

The easiest way for an SEO to think about a Fragment is with the example of an AJAX expansion box: the piece of text or information fetched from the server to populate the AJAX expander when clicked could be described as a Fragment. Once that fragment is indexed for Mobile-First Indexing, it is a Fraggle.

It is no coincidence that Google announced the launch of Deferred JavaScript Rendering at roughly the same time as the public roll-out of Mobile-First Indexing without drawing out the connection, but here it is: when Google can index fragments of information from web pages, web apps, and native apps, all organized around the Knowledge Graph, the data itself becomes “portable,” or “mobile-first.”

We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren’t getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 

Before Fraggles and Fraggle-based Indexing, indexing # URLs would have just created a massive duplicate-content problem and extra indexing work for Google. Now that Fraggle-based Indexing is in place, it makes sense to index and report on # URLs in Search Console, especially for breaking up long, drawn-out JavaScript experiences like PWAs and single-page apps that don’t have separate URLs or databases, and perhaps, in the long run, even for indexing native apps without deep links.
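The split being reported can be sketched with the standard URL API (the URL below is a made-up example):

```javascript
// Sketch: splitting a jump-link URL into its base page and its fragment,
// similar to how Search Console now appears to report # URLs separately.
function splitFraggleUrl(url) {
  const u = new URL(url);
  const fragment = u.hash ? u.hash.slice(1) : null; // drop the leading "#"
  u.hash = ""; // what remains is the primary URL
  return { page: u.href, fragment };
}

const r = splitFraggleUrl("https://example.com/vegetables#types-of-lettuce");
console.log(r); // { page: 'https://example.com/vegetables', fragment: 'types-of-lettuce' }
```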

Why index fragments & Fraggles?

If you’re used to thinking of rankings with the smallest increment being a URL, this idea can be hard to wrap your brain around. To help, consider this thought experiment: How useful would it be for Google to rank a page that gave detailed information about all different kinds of fruits and vegetables? It would be easy for a query like “fruits and vegetables,” that’s for sure. But if the query is changed to “lettuce” or “types of lettuce,” then the page would struggle to rank, even if it had the best, most authoritative information. 

This is because the “lettuce” keywords would be diluted by all the other fruit and vegetable content. It would be more useful for Google to rank the part of the page that is about lettuce for queries related to lettuce, and the part about radishes for queries about radishes. But since users don’t want to scroll through an entire page of fruits and vegetables to find the information about the particular vegetable they searched for, Google prioritizes pages with keyword focus and density as they relate to the query, and rarely ranks long pages that cover multiple topics, even when they are more authoritative.

With featured snippets, AMP featured snippets, and Fraggles, it’s clear that Google can already find the important parts of a page that answers a specific question — they’ve actually been able to do this for a while. So, if Google can organize and index content like that, what would the benefit be in maintaining an index that was based only on per-pages statistics and ranking? Why would Google want to rank entire pages when they could rank just the best parts of pages that are most related to the query?

To address these concerns, SEOs have historically worked to break individual topics out into separate pages, with one page focused on each topic or keyword cluster. With our vegetable example, this would ensure that the lettuce page could rank for lettuce queries and the radish page for radish queries. With each website creating a new page for every possible topic it would like to rank for, there’s a lot of redundant and repetitive work for webmasters, and it likely adds many low-quality, unnecessary pages to the index. Realistically, how many individual pages on lettuce does the internet really need, and how would Google determine which one is the best? The fact is, Google wanted to shift to an algorithm that focused less on links and more on topical authority to surface only the best content, and the scrolling behavior of Fraggles lets it do so without requiring a separate page for every topic.

Even though the effort to switch to Fraggle-based Indexing, and to organize the information around the Knowledge Graph, was massive, the long-term benefits far outpace the costs to Google: they make Google’s systems more flexible, monetizable, and sustainable, especially as the amount of information and the number of connected devices expands exponentially. It also helps Google identify, serve, and monetize new cross-device search opportunities as they continue to expand, including search results on TVs, connected screens, and spoken results from connected speakers. A few relevant costs and benefits are worth contemplating, keeping Google’s long-term perspective in mind.

Why Fraggles and Fraggle-based indexing are important for PWAs

What also makes the shift to Fraggle-based Indexing relevant to SEOs is how it fits in with Google’s championing of Progressive Web Apps or AMP Progressive Web Apps, (aka PWAs and PWA-AMP websites/web apps). These types of sites have become the core focus of Google’s Chrome Developer summits and other smaller Google conferences.

From the perspective of traditional crawling and indexing, Google’s focus on PWAs is confusing. PWAs often feature heavy JavaScript and are still frequently built as single-page apps (SPAs), with only one or a few URLs. Both traits would make PWAs especially difficult and resource-intensive for Google to index in a traditional way. So why would Google be so enthusiastic about them?

The answer is that PWAs require ServiceWorkers, and ServiceWorkers pair with Fraggles and Fraggle-based Indexing to take the burden of crawling and indexing complex web content off Google.

In case you need a quick refresher: a ServiceWorker is a JavaScript file that instructs a device (mobile or computer) to create a local cache of content used just for the operation of the PWA. It is meant to make loading much faster, because content is stored locally rather than fetched each time from a server or CDN somewhere on the internet. It does this by saving copies of the text and images associated with certain screens in the PWA; once a user accesses content in the PWA, that content doesn’t need to be fetched from the server again. It’s a bit like browser caching, but faster: the ServiceWorker itself stores the rules for when content expires, rather than leaving that to the web. This is what makes PWAs appear to work offline, and it is also why content that has not been visited yet is not stored in the ServiceWorker.

ServiceWorkers and SEO

Most SEOs who understand PWAs understand that a ServiceWorker is for caching and load time, but they may not understand that it is likely also for indexing. If you think about it, ServiceWorkers mostly store the text and images of a site, which is exactly what the crawler wants. A crawler that uses Deferred JavaScript Rendering could go through a PWA and simulate clicking on all the links and store static content using the framework set forth in the ServiceWorker. And it could do this without always having to crawl all the JavaScript on the site, as long as it understood how the site was organized, and that organization stayed consistent. 

Google would also know exactly how often to re-crawl, and therefore could only crawl certain items when they were set to expire in the ServiceWorker cache. This saves Google a lot of time and effort, allowing them to get through or possibly skip complex code and JavaScript.
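The expiry logic described above can be modeled in a few lines. This is a toy sketch, not the real browser Cache API; a plain Map stands in for the cache:

```javascript
// Content cached with an expiry timestamp only needs to be re-fetched
// (or, in the crawling analogy, re-crawled) once it has expired.
class ExpiringCache {
  constructor() {
    this.store = new Map();
  }
  put(url, content, ttlMs, now = Date.now()) {
    this.store.set(url, { content, expires: now + ttlMs });
  }
  // A crawler honoring the cache would skip URLs that haven't expired yet.
  needsRefetch(url, now = Date.now()) {
    const entry = this.store.get(url);
    return !entry || now >= entry.expires;
  }
}

const cache = new ExpiringCache();
cache.put("/about", "<h1>About</h1>", 60000, 0); // cache for 60s, starting at t=0
console.log(cache.needsRefetch("/about", 30000)); // false: still fresh
console.log(cache.needsRefetch("/about", 90000)); // true: expired, re-crawl
```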

For a PWA to be indexed, Google requires webmasters to “register their app in Firebase,” where they previously required webmasters to “register their ServiceWorker.” Firebase is the Google platform that allows webmasters to set up and manage indexing and deep linking for their native apps, chat-bots, and now PWAs.

Direct communication with a PWA specialist at Google a few years ago revealed that Google didn’t crawl the ServiceWorker itself, but crawled an API to the ServiceWorker. It’s likely that when webmasters register their ServiceWorker with Google, Google is actually creating an API to it, so that the content can be quickly and easily indexed and cached on Google’s servers. Since Google has already launched an Indexing API and appears to now favor APIs over traditional crawling, we believe Google will begin pushing the use of ServiceWorkers to improve page speed (they can be used on non-PWA sites, too), but this will really be to help ease Google’s burden of crawling and indexing content manually.

Flat HTML may still be the fastest way to get web information crawled and indexed by Google. For now, JavaScript still has to be deferred for rendering, but it is important to recognize that this could change, and that crawling and indexing is not the only way to get your information to Google. Google’s Indexing API, launched for time-sensitive information like job postings and live-stream video, will likely be expanded to include other types of content.

It’s important to remember that this is how AMP, Schema, and many other powerful SEO functionalities started: with a limited launch. Beyond that, some great SEOs have already tested submitting other types of content to the API and seen success. Submitting to APIs skips Google’s process of blindly crawling the web for new content and allows webmasters to feed information to Google directly.

It is possible that the new Indexing API follows a similar structure or process to PWA indexing. Submitted URLs can already get some kinds of content indexed or removed from Google’s index, usually in about an hour, and while the API is currently only officially available for two kinds of content, we expect it to be expanded broadly.

How will this impact SEO strategy?

Of course, every SEO wants to know how to leverage this speculative theory — how can we make the changes in Google to our benefit? 

The first thing to do is take a good, long, honest look at a mobile search result. Position #1 in the organic rankings is just not what it used to be. There’s a ton of engaging content that is often pushing it down, but not counting as an organic ranking position in Search Console. This means that you may be maintaining all your organic rankings while also losing a massive amount of traffic to SERP features like Knowledge Graph results, Featured Snippets, Google My Business, maps, apps, Found on the Web, and other similar items that rank outside of the normal organic results. 

These results, as well as Pay-per-Click results (PPC), are more impactful on mobile because they are stacked above organic rankings. Rather than being off to the side, as they might be in a desktop view of the search, they push organic rankings further down the results page. There has been some great reporting recently about the statistical and large-scale impact of changes to the SERP and how these changes have resulted in changes to user-behavior in search, especially from Dr. Pete Meyers, Rand Fishkin, and JumpTap.

Dr. Pete has focused on the increasing number of changes to the Google Algorithm recorded in his MozCast, which heated up at the end of 2016 when Google started working on Mobile-First Indexing, and again after it launched the Medic update in 2018. 

Rand, on the other hand, focused on how the new types of rankings are pushing traditional organic results down, resulting in less traffic to websites, especially on mobile. All this great data from these two really set the stage for a fundamental shift in SEO strategy as it relates to Mobile-First Indexing.

The research shows that Google re-organized its index to suit a different presentation of information — especially if they are able to index that information around an entity-concept in the Knowledge Graph. Fraggle-based Indexing makes all of the information that Google crawls even more portable because it is intelligently nested among related Knowledge Graph nodes, which can be surfaced in a variety of different ways. Since Fraggle-based Indexing focuses more on the meaningful organization of data than it does on pages and URLs, the results are a more “windowed” presentation of the information in the SERP. SEOs need to understand that search results are now based on entities and use-cases (think micro-moments), instead of pages and domains.

Google’s Knowledge Graph

To really grasp how this new method of indexing will impact your SEO strategy, you first have to understand how Google’s Knowledge Graph works. 

Since it is an actual “graph,” all Knowledge Graph entries (nodes) include both vertical and lateral relationships. For instance, an entry for “bread” can include lateral relationships to related topics like cheese, butter, and cake, but may also include vertical relationships like “standard ingredients in bread” or “types of bread.” 

Lateral relationships can be thought of as related nodes on the Knowledge Graph and hint at “Related Topics,” whereas vertical relationships point to a broadening or narrowing of the topic, which hints at the most likely filters within a topic. In the case of bread, a vertical relationship up would be a topic like “baking,” and down would include topics like “flour” and other ingredients used to make bread, or “sourdough” and other specific types of bread.
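To make the lateral/vertical distinction concrete, here is a toy sketch in Python (the node names are made up for illustration; this is not Google's actual data model) of how a Knowledge Graph node like “bread” might store both kinds of relationships:

```python
# Toy model of Knowledge Graph relationships (illustrative only).
# Each node stores lateral ("related") and vertical
# ("broader"/"narrower") edges to other nodes.
graph = {
    "bread": {
        "related": ["cheese", "butter", "cake"],   # lateral neighbors
        "broader": ["baking"],                     # vertical, up
        "narrower": ["sourdough", "flour"],        # vertical, down
    },
}

def related_topics(topic):
    """Lateral neighbors: candidates for a 'Related Topics' carousel."""
    return graph.get(topic, {}).get("related", [])

def drill_down(topic):
    """Narrower nodes: candidates for filters within a topic."""
    return graph.get(topic, {}).get("narrower", [])

print(related_topics("bread"))  # ['cheese', 'butter', 'cake']
print(drill_down("bread"))      # ['sourdough', 'flour']
```

Because the edges are language-agnostic relationships between entities rather than strings on a page, the same structure can be surfaced as related-topic carousels, filter tabs, or drill-down links in any language.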

SEOs should note that Knowledge Graph entries can now include an increasingly wide variety of filters and tabs that narrow the topic information to suit different types of searcher intent. This includes things like helping searchers find videos, books, images, quotes, and locations; in the case of filters, it can be topic-specific and unpredictable (informed by active machine learning). This is the crux of Google’s goal with Fraggle-based Indexing: to be able to organize the information of the web based on Knowledge Graph entries or nodes, otherwise discussed in SEO circles as “entities.”

Since the relationships of one entity to another remain the same, regardless of the language a person is speaking or searching in, the Knowledge Graph information is language-agnostic, and thus easily used for aggregation and machine learning in all languages at the same time. Using the Knowledge Graph as a cornerstone for indexing is, therefore, a much more useful and efficient means for Google to access and serve information in multiple languages for consumption and ranking around the world. In the long-term, it’s far superior to the previous method of indexing.

Examples of Fraggle-based indexing in the SERPs 

Knowledge Graph

Google has dramatically increased the number of Knowledge Graph entries and the categories and relationships within them. The build-out is especially prominent for topics for which Google has a high amount of structured data and information already. This includes topics like:

  • TV and Movies — from Google Play
  • Food and Recipe — from Recipe Schema, recipe AMP pages, and external food and nutrition databases 
  • Science and medicine — from trusted sources (like WebMD) 
  • Businesses — from Google My Business

Google is adding more and more nodes and relationships to their graph, and existing entries are also being built out with more tabs and carousels that break a single topic into smaller, more granular topics or types of information.

As you can see below, the build-out of the Knowledge Graph has also added to the number of filters and drill-down options within many queries, even outside of the Knowledge Graph. This increase can be seen throughout all of the Google properties, including Google My Business and Shopping, both of which we believe are now sections of the Knowledge Graph:


Google Search for ‘Blazers’ with Visual Filters at the Top for Shopping Oriented Queries

Google My Business (Business Knowledge Graph) with Filters for Information about Googleplex

Other similar examples include the additional filters and “Related Topics” results in Google Images, which we also believe to represent nodes on the Knowledge Graph:

Google Images Increase in Filters & Inclusion of Related Topics Means that These Are Also Nodes on the Knowledge Graph

The Knowledge Graph is also being presented in a variety of different ways. Sometimes there’s a sticky navigation that persists at the top of the SERP, as seen in many media-oriented queries, and sometimes it’s broken up to show different information throughout the SERP, as you may have noticed in many local business-oriented search results, both shown below.


Media Knowledge Graph with Sticky Top Nav (Query for ‘Ferris Bueller’s Day Off’)

Local Business Knowledge Graph (GMB) With Information Split-up Throughout the SERP

Since the launch of Fraggle-based indexing is essentially a major Knowledge Graph build-out, Knowledge Graph results have also begun including more engaging content, which makes it even less likely that users will click through to a website. Assets like playable video and audio, live sports scores, and location-specific information such as transportation information and TV timetables can all be accessed directly in the search results. There’s more to the story, though.

Increasingly, Google is also building out their own proprietary content by re-mixing existing information that they have indexed to create unique, engaging content like animated ‘AMP Stories’ which webmasters are also encouraged to build-out on their own. They have also started building a zoo of AR animals that can show as part of a Knowledge Graph result, all while encouraging developers to use their AR kit to build their own AR assets that will, no doubt, eventually be selectively incorporated into the Knowledge Graph too.


Google AR Animals in Knowledge Graph

Google AMP Stories Now Called ‘Life in Images’

SEO Strategy for Knowledge Graphs

Companies that want to leverage the Knowledge Graph should take every opportunity to create their own assets, like AR models and AMP Stories, so that Google has no reason to create them instead. Beyond that, companies should submit accurate information directly to Google whenever they can. The easiest way to do this is through Google My Business (GMB). Whatever types of information are requested in GMB should be added or uploaded. If Google Posts are available in your business category, you should be posting regularly and making sure that the Posts link back to your site with a call to action. If you have videos or photos that are relevant to your company, upload them to GMB. Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.

You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry (“Best Plumbers in Denver,” “Best Shoe Deals on the Web,” or “Top 15 Best Reality TV Shows”). Links from these posts also help but are not required — especially if you can get your company name on enough lists with the other top players. Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.

While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.

Featured snippets 

Featured Snippets or “Answers” first surfaced in 2014 and have also expanded quite a bit, as shown in the graph below. It is useful to think of Featured Snippets as rogue facts, ideas or concepts that don’t have a full Knowledge Graph result, though they might actually be associated with certain existing nodes on the Knowledge Graph (or they could be in the vetting process for eventual Knowledge Graph build-out). 

Featured Snippets seem to surface when the information comes from a source that Google does not trust as implicitly as it does Wikipedia, and often they come from third-party sites that may or may not have a monetary interest in the topic — something that makes Google want to vet the information more thoroughly, and may prevent Google from using it if a less biased option is available.

Like the Knowledge Graph, Featured Snippets results have grown very rapidly in the past year or so, and have also begun to include carousels — something that Rob Bucci writes about extensively here. We believe that these carousels represent potentially related topics that Google knows about from the Knowledge Graph. Featured Snippets now look even more like mini-Knowledge Graph entries: Carousels appear to include both lateral and vertically related topics, and their appearance and maintenance seem to be driven by click volume and subsequent searches. However, this may also be influenced by aggregated engagement data for People Also Ask and Related Search data.

The build-out of Featured Snippets has been so aggressive that sometimes the answers that Google lifts are obviously wrong, as you can see in the example image below. It is also important to understand that Featured Snippet results can change from location to location and are not language-agnostic, and thus, are not translated to match the Search Language or the Phone Language settings. Google also does not hold themselves to any standard of consistency, so one Featured Snippet for one query might present an answer one way, and a similar query for the same fact could present a Featured Snippet with slightly different information. For instance, a query for “how long to boil an egg” could result in an answer that says “5 minutes” and a different query for “how to make a hard-boiled egg” could result in an answer that says “boil for 1 minute, and leave the egg in the water until it is back to room temperature.”


Featured Snippet with Carousel

Featured Snippet that is Wrong

The data below was collected by Moz and represents an average across roughly 10,000 keywords that skew slightly towards ‘head’ terms.



SEO strategy for featured snippets

All of the standard recommendations for driving Featured Snippets apply here. This includes making sure that you keep the information that you are trying to get ranked in a Featured Snippet clear, direct, and within the recommended character count. It also includes using simple tables, ordered lists, and bullets to make the data easier to consume, as well as modeling your content after existing Featured Snippet results in your industry.

This is still speculative, but it seems likely that the inclusion of Speakable markup, and of Schema types like “How To,” “FAQ,” and “Q&A,” may also drive Featured Snippets. These kinds of results are specially designated as content that works well in a voice search. Since Google has been adamant that there is not more than one index, and Google is heavily focused on improving voice results from Google Assistant devices, anything that could be a good result in the Google Assistant, and ranks well, might also have a stronger chance at ranking in a Featured Snippet.
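As a sketch of what such markup looks like, the snippet below builds a minimal schema.org FAQPage object and wraps it in the JSON-LD script tag a page would embed. The question and answer text are invented for illustration, and whether this markup drives a given Featured Snippet is, as noted above, speculative:

```python
import json

# Minimal schema.org FAQPage structured data (example content only).
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How long should I boil an egg?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Boil for about 5 minutes for a soft yolk, "
                    "or 10 minutes for a hard-boiled egg.",
        },
    }],
}

# Pages embed JSON-LD inside a <script> tag in the HTML <head> or <body>.
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(faq_jsonld)
    + "</script>"
)
print(script_tag)
```

The same pattern applies to HowTo and QAPage types: keep the answer text short, direct, and self-contained, mirroring the Featured Snippet formatting advice above.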

People Also Ask & Related Searches

Finally, the increased occurrence of “Related Searches,” as well as the inclusion of People Also Ask (PAA) questions just below most Knowledge Graph and Featured Snippet results, is undeniable. The Earl Tea screenshot shows that PAAs, along with Interesting Finds, are both part of the Knowledge Graph too.

The graph below shows the steady increase in PAAs. PAA results appear to be an expansion of Featured Snippets: once expanded, the answer to the question is displayed with the citation below it. Similarly, some Related Search results now include a result that looks like a Featured Snippet instead of simply linking over to a different search result. You can now find ‘Related Searches’ throughout the SERP, often as part of Knowledge Graph results, sometimes in a carousel in the middle of the SERP, and always at the bottom of the SERP — sometimes with images and expansion buttons that surface Featured Snippets within the Related Search results directly in the existing SERP.

Boxes with Related Searches are now also included with Image Search results. It’s interesting to note that Related Search results in Google Images started surfacing at the same time that Google began translating image Title Tags and Alt Tags. This coincides well with the concept of Entity-First Indexing, the idea that entities and the Knowledge Graph are language-agnostic, and the notion that Related Searches are somehow related to the Knowledge Graph.


This data was collected by Moz and represents an average across roughly 10,000 keywords that skew slightly towards ‘head’ terms.


People Also Ask

Related Searches

SEO strategy for PAA and related searches

Since PAAs and some Related Searches now appear to simply include Featured Snippets, driving Featured Snippet results for your site is a strong strategy here, too. PAA results often include at least two versions of the same question, re-stated in different language, before including questions that are more related to lateral and vertical nodes on the Knowledge Graph. If you include information on your site that Google thinks is related to the topic, based on Related Searches and PAA questions, it could help make your site appear relevant and authoritative.

Finally, it is crucial to remember that you don’t have to have a website to rank in Google now, and SEOs should consider non-website rankings as part of their job too.

If a business doesn’t have a website, or if you just want to cover all the bases, you can let Google host your content directly — in as many places as possible. We have seen that Google-hosted content generally seems to get preferential treatment in Google search results and Google Discover, especially when compared to the decreasing traffic from traditional organic results. Google is now heavily focused on surfacing multimedia content, so anything that you might have previously created a new page on your website for should now be considered for a video.

Google My Business (GMB) is great for companies that don’t have websites, or that want to host their websites directly with Google. YouTube is great for videos, TV, video-podcasts, clips, animations, and tutorials. If you have an app, a book, an audio-book, a podcast, a movie, TV show, class or music, or PWA, you can submit that directly to GooglePlay (much of the video content in GooglePlay is now cross-populated in YouTube and YouTube TV, but this is not necessarily true of the other assets). This strategy could also include books in Google Books, flights in Google Flights, Hotels in Google Hotel listings, and attractions in Google Explore. It also includes having valid AMP code, since Google hosts AMP content, and includes Google News if your site is an approved provider of news.

Changes to SEO tracking for Fraggle-based indexing

The biggest problem for SEOs is the missing organic traffic, but also the fact that current methods of tracking organic results generally don’t show whether things like Knowledge Graph, Featured Snippets, PAA, Found on the Web, or other types of results are appearing at the top of the query or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to be lifting and re-presenting everyone’s content, the traffic will never arrive at the site, and SEOs won’t know if their efforts are still returning the same monetary value. This problem is especially acute for publishers, who have only been able to sell advertising on their websites based on the expected traffic that the website could drive.

The other thing to remember is that results differ — especially on mobile, where they vary from device to device (generally based on screen size) and can also vary based on the phone’s OS. They can change significantly based on the location or the language settings of the phone, and they definitely do not always match the desktop results for the same query. Most SEOs don’t know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First.

As well, SEO tools generally only report on rankings from one location — the location of their servers — rather than being able to test from different locations. 

The only thing that good SEOs can do to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like on top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector with Data Studio to help with SEO reporting, but APIs are also available for deeper integrations in content-editing tools, dashboards, and other SEO tools.

Conclusion

At MozCon 2017, I expressed my belief that the impact of Mobile-First Indexing requires a re-interpretation of the words “Mobile,” “First,” and “Indexing.” Re-defined in the context of Mobile-First Indexing, the words should be understood to mean “portable,” “preferred,” and “organization of information.” The potential of a shift to Fraggle-based indexing and the recent changes to the SERPs, especially in the past year, certainly seems to prove the accuracy of this theory. And though they have been in the works for more than two years, the changes to the SERP now seem to be rolling-out faster and are making the SERP unrecognizable from what it was only three or four years ago.

In this post, we described Fraggles and Fraggle-based indexing for SEO as a theory that speculates about the true nature of the change to Mobile-First Indexing: how the index itself — and the units of indexing — may have changed to accommodate faster and more nuanced organization of information based on the Knowledge Graph, rather than simply links and URLs. We covered how Fraggles and Fraggle-based Indexing work, how they relate to JavaScript and PWAs, what strategies SEOs can take to leverage them for additional exposure in the search results, and how SEOs can update their success tracking to account for all the variables that impact mobile search results.

SEOs need to consider the opportunities and change the way we view our overall indexing strategy, and our jobs as a whole. If Google is organizing the index around the Knowledge Graph, that makes it much easier for Google to constantly mention near-by nodes of the Knowledge Graph in “Related Searches” carousels, links from the Knowledge Graph, and topics in PAAs. It might also make it easier to believe that featured snippets are simply pieces of information being vetted (via Google’s click-crowdsourcing) for inclusion or reference in the Knowledge Graph.

Fraggles and Fraggle-based indexing re-frame the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first — i.e., about the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages and more on pieces of pages, similar to the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an “answer engine” instead of a “search engine,” that fits well with their stated identity and their desire to build a more efficient, sustainable, international engine.

SEOs also need to find ways to serve their users better, by focusing more on the reality of the mobile SERP, and how much it can vary for real users. While Google may not call the smallest rankable units Fraggles, it is what we call them, and we think they are critical to the future of SEO.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

I Want Every American To See The Transformative Impact Of 5G, Says Verizon CEO

“We pride ourselves with the best network,” says Verizon CEO Hans Vestberg. “We have had that all the time in 4G and we’re going to have it on 5G. We’ve invested very prudently with our network, but network is our strategy and it has been that since the inception of the company. At Verizon, we’re proud of it. I just want every American to have a 5G phone in their hands and see the huge impact it will have in a transformative way that 5G will make in this country.”

Hans Vestberg, CEO of Verizon, discussed his desire for every American to soon experience the huge transformative impact of 5G in an interview on CNBC at The Allen & Company Sun Valley Conference.

I Want Every American To See The Transformative Impact Of 5G

We have been on to 5G for seven years now. We were first in the world to launch 5G Home broadband. We were first in the world with 5G mobility. We now have four cities up and we’re going to have 30 cities this year. We have three 5G phones already out. So we, of course, are ahead of the game but we respect all the competition. We pride ourselves with the best network. We have had that all the time in 4G and we’re going to have it on 5G. 

We’ve invested very prudently with our network, but network is our strategy and it has been that since the inception of the company. At Verizon, we’re proud of it. That’s important to us. I just want every American to have a 5G phone in their hands and see the huge impact it will have in a transformative way that 5G will make in this country.

We’re In the Middle Of a Very Big Transformation

It’s always been a competitive market. I mean, the wireless market in the US is extremely competitive. It’s nothing new to us and we are prepared. We’re in the middle of a very big transformation of the company. We have changed the network, we have a new go-to-market, and we have a voluntary offering where almost 10,400 people are leaving us. So we are prepared. Whatever comes up, Verizon will respond quickly, and we will manage for our shareholders, our customers, our employees, and society in general. That’s our work.

It’s a very exciting market to be in, with mobility and broadband and 5G and all of that. Of course, there is a lot of hype and discussion about it, and the US is in the lead with it. It’s an exciting time to be here and work. We will compete. I think that we already have the best 4G network and we were first in the world with 5G. We will just hammer on and execute. I have a great team that is doing that every day. Our main focus is really to execute right now, and then a lot of things will happen around us.

Regulation Of Tech Is Difficult

First of all, we understand the concerns (around big tech) and all of that. Ultimately, we need to remember that mobility, broadband, and cloud, that combination is a 21st-century infrastructure. If you can scale that you can actually solve problems in the rest of the world that you have never thought about. If we start to chop that up by regulation we cannot give the same opportunities for everyone in this world. So that’s very important. 

Secondly, I think the technology is moving so fast that if you do regulation, it’s hard for the regulation to keep up. I think it’s up to responsible leaders, and ultimately the customer will be after you if you do stupid things. We’re building our brand on trust and innovation. We know that we need to fight every day to get that trust, and one thing you do wrong, you lose the trust. That is what has to regulate us, and that’s more important in the end.

I think that regulation is difficult in the tech sector, and customers will ultimately judge these companies. I’m worried that if you’re going to have different regulations all around the world for platforms, for example, we’re going to lose the benefits we’re getting from them today: that people can get digital health care and digital educational platforms. With the sustainable goals that we have in the world, we want everybody to have the same chance. I think that would be bad.

I Want Every American To See The Transformative Impact Of 5G, Says Verizon CEO Hans Vestberg

The post I Want Every American To See The Transformative Impact Of 5G, Says Verizon CEO appeared first on WebProNews.


WebProNews

Posted in IM News | Comments Off

Searchmetrics: Google’s diversity update did impact search results

Here is another study on the Google diversity update that shows the update did help with diversity.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in IM News | Comments Off

SearchCap: Google hotel ads, Bing testing icon, reviews impact ranking & more

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in IM News | Comments Off

Machine Learning and Its Impact on Search

The terms machine learning (ML) and artificial intelligence (AI) have been cropping up more often when it comes to organic and paid search. Now a recent report by Acquisio has confirmed just how effective machine learning is for search.

According to Acquisio, paid search accounts that have been optimized for machine learning have 71% higher conversion rates and lower cost-per-click (CPC). But these were not the only benefits that accounts using machine learning enjoyed. The web marketing company also revealed that these accounts were able to reach their target spending levels and had lower churn rates.

The data implies that small marketing teams and CMOs now stand on an even playing field with more established companies now that ML is more affordable, effective and accessible to everyone.

This doesn’t mean that marketers should ignore organic search and original, value-laden content. Paid search might be the easiest way to rank high in search engines, particularly since AI will be doing the bulk of the work, developing campaigns that have greater odds of being seen by the right searchers at the proper time. However, organic search is more authentic and its results will last longer than paid search.

The goal now is to understand how ML impacts the search system and how to take advantage of the technology’s evolution that made paid and organic searches more effective.

Paid vs Organic Search: Which Wins in the End?

There’s been an ongoing debate as to which is better – paid or organic search. Interestingly, both have come out on top, but at different times and under different conditions. The results have depended on the type of research done and other outside factors. For instance, a study conducted in 2011 showed that organic search was more effective. However, paid search has outpaced its counterpart from 2013 onwards, though this appears to be due to the changes Google has made to its algorithm.

So which is better? Andy Taylor, the Associate Director of Research at Merkle, believes that flexibility is the best option. Instead of just sticking to one approach, companies should determine what search strategy is ideal for their business at the moment and the technology that’s currently available. After all, the ideal marketing strategy for your company now will probably change in a few months as customers change their expectations and technologies expand.

Machine Learning is Changing More Than Search

The rise of machine learning has also resulted in a shift to data-driven models instead of the conventional attribution models. This multi-touch attribution model (MTA) relies on an analytics scale that’s more descriptive and takes into account various touchpoint outputs, like ad interactions, ad creative, or exposure order. It also allows marketers to have a better understanding of how factors, like a distinct set of keywords and ad words, can affect a conversion.
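As an illustration of the simplest multi-touch rule, the sketch below splits a conversion’s value evenly across every touchpoint, a "linear" attribution model. The function name, touchpoint labels, and values are hypothetical, and real data-driven MTA models weight touchpoints unevenly based on observed outcomes:

```python
# Linear multi-touch attribution: every touchpoint in the conversion
# path gets an equal share of the conversion's value. (Illustrative
# sketch; data-driven models learn unequal weights instead.)
def linear_attribution(touchpoints, conversion_value):
    share = conversion_value / len(touchpoints)
    return {tp: share for tp in touchpoints}

# A made-up path: the user saw a paid ad, then a display ad,
# then arrived via organic search before converting for $90.
credits = linear_attribution(
    ["paid_search", "display_ad", "organic"], 90.0
)
print(credits)  # {'paid_search': 30.0, 'display_ad': 30.0, 'organic': 30.0}
```

Swapping the equal `share` for learned per-touchpoint weights is essentially what turns this toy into the data-driven models described above.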

But it’s not just search capacities that machine learning has an impact on. The technology is also being used to refine and make algorithm changes. It has been theorized that Google’s RankBrain utilizes machine learning to assess if the company has to revise its own rankings based on what the consumer searches for and whether the user was satisfied with the result.

Machine Learning Will Push for More Sophisticated Content

Because machine learning technology is developing more advanced SEM capacities and sophisticated algorithms, search engines are pushing marketers and content producers to deliver more refined content. This would eventually lead to search engines becoming more discerning to the quality of online content a company is putting out. This means producing high-quality content that particularly targets what the consumer is looking for becomes more vital than ever before.

Machine learning and AI are impacting every aspect of marketing. Companies should start understanding them and how to utilize ML-optimized tools effectively in their marketing campaigns.  

The post Machine Learning and Its Impact on Search appeared first on WebProNews.


WebProNews

Posted in IM News | Comments Off

Blockchain: How Will it Impact Digital Marketing?

The marketing industry generates billions of dollars every year. After all, every company needs ads and various marketing strategies in order to reach their target consumers.

Forrester, a leading market research company, even said that by 2021, digital marketing costs will reach $120 billion. Unfortunately, about half of ad traffic is created by bots. It’s a decidedly dishonest practice, especially when you consider how much money companies put out just to reach prospective clients. But this practice might soon come to an end once businesses have a greater capacity to focus on specific customers.

Graphic via Techspot.com

It’s a good thing then that digital marketing is very dynamic and open to change. It easily adapts to new technology and the shifting perceptions of customers. At the moment, there’s one tech advancement that has the potential to change digital marketing (and the world) like never before – the blockchain.

What is Blockchain?

Blockchain might seem too technical for most people to fully grasp, but it’s a fairly simple concept. The technology is essentially a public ledger that stores and distributes data. More importantly, everyone that uses blockchain can see and share all its data and by doing so, each user plays a role in keeping it updated and transparent.

The system works by keeping data stored in a chain-like pattern and the transaction history is stored in “blocks.” Information stored in a blockchain can only be added to. It can’t be changed or copied. If someone were to attempt to change the history or hack the system, the ledger would have to be updated on all the users’ computers. Considering the number of users in a blockchain, this would be almost impossible to do, making the service very secure.
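The append-only, tamper-evident behavior described above can be illustrated with a minimal hash chain. This is a teaching sketch, not a production ledger, and the block contents are invented:

```python
import hashlib
import json

def block_hash(contents):
    """SHA-256 hash over a block's data and previous-block hash."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Blocks can only be appended; each one commits to its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"data": data, "prev": prev}
    chain.append({**contents, "hash": block_hash(contents)})
    return chain

def is_valid(chain):
    """Re-derive every hash; any edit to history breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        contents = {"data": block["data"], "prev": block["prev"]}
        if block["hash"] != block_hash(contents):
            return False
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(chain))                    # False
```

Because each block’s hash covers the previous block’s hash, editing any historical entry invalidates every later link, which is why an attacker would effectively have to rewrite the ledger on every user’s computer at once.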

How Will it Impact Digital Marketing?

Blockchain is often linked to cryptocurrency. Its decentralized nature, the freedom it offers, and its heightened cybersecurity features make it perfect storage for virtual money. However, blockchain also has a major impact on digital marketing.

It Will Take Out the Middleman

Digital marketing is full of middlemen, which means businesses often receive only a fraction of the value they pay for. Blockchain can do away with these intermediaries and help create better value for marketing campaigns.


Graphic via Linkedin.com

With a blockchain, companies can skip the traditional ad-buying process and target their prospective customers directly by paying them to view ads. Businesses can issue "microcurrencies" that customers earn once they've proven they actually watched the ad. The Brave browser has already started down this path with its Basic Attention Token (BAT), which ensures that companies only pay for ads that have been viewed by a real person.

Trust is Built With Transparency

One concern that companies have with online advertising is that it’s virtually impossible to know if the stats provided are accurate. There’s no way to check if the counted site clicks or followers are real customers, or even real people, for that matter. After all, ad companies can hire “clickers” or use bots to boost ad stats so distributors can charge higher fees.

Blockchain will definitely have a significant impact here. Since the system is encrypted and transparent, companies can easily check if those viewing their ads are part of their target audience or not.

Improves Accountability

There's nothing more disheartening than spending your hard-earned cash on a counterfeit product. Blockchain can lessen the odds of this happening by improving merchants' accountability at every step of the supply chain.

Blockchain's vaunted digital ledger enables transparency that cannot be tampered with. Customers can check details such as where a product came from, whether it's genuine or counterfeit, and whether it was bought from a physical store or an online auction. Simply put, blockchain empowers customers and improves their buying experience.

There’s no question that the idea behind blockchain is a powerful one. The technology has the potential to impact cryptocurrency, digital marketing, and customer experience. The system is still in its infancy but is expected to see significant growth in the coming year.  

The post Blockchain: How Will it Impact Digital Marketing? appeared first on WebProNews.



How Links in Headers, Footers, Content, and Navigation Can Impact SEO – Whiteboard Friday

Posted by randfish

Which link is more valuable: the one in your nav, or the one in the content of your page? Now, how about if one of those in-content links is an image, and one is text? Not all links are created equal, and getting familiar with the details will help you build a stronger linking structure.

How Links in Headers, Footers, Content, and Navigation Can Impact SEO

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about links in headers and footers, in navigation versus content, and how that can affect both internal and external links and the link equity and link value that they pass to your website or to another website if you’re linking out to them.

So I’m going to use Candy Japan here. They recently crossed $1 million in sales. Very proud of Candy Japan. They sell these nice boxes of random assortments of Japanese candy that come to your house. Their website is actually remarkably simplistic. They have some footer links. They have some links in the content, but not a whole lot else. But I’m going to imagine them with a few more links in here just for our purposes.

It turns out that there are a number of interesting items when it comes to internal linking. So, for example, some on-page links matter more and carry more weight than other kinds. If you are smart and use these across your entire site, you can get some incremental or potentially some significant benefits depending on how you do it.

Do some on-page links matter more than others?

So, first off, good to know that…

I. Content links tend to matter more

…just broadly speaking, than navigation links. That shouldn’t be too surprising, right? If I have a link down here in the content of the page pointing to my Choco Puffs or my Gummies page, that might actually carry more weight in Google’s eyes than if I point to it in my navigation.

Now, this is not universally true, but observably, it seems to be the case. So when something is in the navigation, it’s almost always universally in that navigation. When something is in here, it’s often only specifically in here. So a little tough to tell cause and effect, but we can definitely see this when we get to external links. I’ll talk about that in a sec.

II. Links in footers often get devalued

So if there’s a link that you’ve got in your footer, but you don’t have it in your primary navigation, whether that’s on the side or the top, or in the content of the page, a link down here may not carry as much weight internally. In fact, sometimes it seems to carry almost no weight whatsoever other than just the indexing.

III. More used links may carry more weight

This is a theory for now. But we’ve seen some papers on this, and there has been some hypothesizing in the SEO community that essentially Google is watching as people browse the web, and they can get that data and sort of see that, hey, this is a well-trafficked page. It gets a lot of visits from this other page. This navigation actually seems to get used versus this other navigation, which doesn’t seem to be used.

There are a lot of ways that Google might interpret that data or might collect it. It could be from the size of it or the CSS qualities. It could be from how it appears on the page visually. But regardless, that also seems to be the case.

IV. Most visible links may get more weight

This does seem to be something that’s testable. So if you have very small fonts, very tiny links, they are not nearly as accessible or obvious to visitors. It seems to be the case that they also don’t carry as much weight in Google’s rankings.

V. On pages with multiple links to the same URL

For example, let’s say I’ve got this products link up here at the top, but I also link to my products down here under Other Candies, etc. It turns out that Google will see both links. They both point to the same page in this case, both pointing to the same page over here, but this page will only inherit the value of the anchor text from the first link on the page, not both of them.

So that "Other Candies, etc." anchor text will essentially be treated as though it doesn't exist; Google ignores the second and subsequent links to the same URL. This is actually true both internally and externally. For this reason, if you're going ahead and trying to stuff links into your internal content to other pages, thinking that you can get better anchor text value, well look, if they're already in your navigation, you're not getting any additional value. Same case if they're linked higher up in the content. The second link to them is not carrying the anchor text value.
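The first-anchor-wins behavior can be simulated with a short Python sketch. This is a hypothetical model of how a crawler might collect anchors (Google's actual parsing is not public), and the page snippet and URLs are invented for the example:

```python
from html.parser import HTMLParser

class FirstAnchorParser(HTMLParser):
    """Keep only the first anchor text seen for each href, in document order."""

    def __init__(self):
        super().__init__()
        self.counted = {}   # href -> first anchor text encountered
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            anchor = "".join(self._text).strip()
            # setdefault keeps the first anchor; later ones are ignored
            self.counted.setdefault(self._href, anchor)
            self._href = None

page = """
<nav><a href="/products">Products</a></nav>
<p>Try our <a href="/products">Other Candies, etc.</a> today.</p>
"""
parser = FirstAnchorParser()
parser.feed(page)
print(parser.counted)   # {'/products': 'Products'}
```

Under this model, the nav link's "Products" anchor is what counts, and the keyword-rich in-content anchor contributes nothing, which is the point of the advice above.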

Can link location/type affect external link impact?

Other items to note on the external side of things and where they’re placed on pages.

I. In-content links are going to be more valuable than footers or nav links

In general, nav links are going to do better than footers. But in content, this primary content area right in here, that is where you’re going to get the most link value if you have the option of where you’re going to get an external link from on a page.

II. What if you have links that open in a new tab or in a new window versus links that open in the same tab, same window?

It doesn’t seem to matter at all. Google does not appear to assign any different weight to these links, based on the experiments that we’ve seen and the ones we’ve conducted.

III. Text links do seem to perform better, get more weight than image links with alt attributes

They also seem to perform better than JavaScript links and other types of links, but critically important to know this, because many times what you will see is that a website will do something like this. They’ll have an image. This image will be a link that will point off to a page, and then below it they’ll have some sort of caption with keyword-rich anchors down here, and that will also point off. But Google will treat this first link as though it is the one, and it will be the alt attribute of this image that passes the anchor text, unless this is all one href tag, in which case you do get the benefit of the caption as the anchor. So best practice there.
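The image-plus-caption pattern can be modeled the same way: a small parser that treats an `<img>` alt as link text when the link wraps an image. This is a sketch of the behavior described above, not Google's actual logic, and the hrefs, filenames, and caption text are invented:

```python
from html.parser import HTMLParser

class AnchorTextParser(HTMLParser):
    """Record, per href, the first counted anchor text: plain text inside
    the <a>, plus an <img> alt if the link wraps an image."""

    def __init__(self):
        super().__init__()
        self.first = {}     # href -> first counted anchor text
        self._href = None
        self._parts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href = attrs.get("href")
            self._parts = []
        elif tag == "img" and self._href is not None:
            self._parts.append(attrs.get("alt", ""))

    def handle_data(self, data):
        if self._href is not None:
            self._parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            anchor = " ".join(" ".join(self._parts).split())
            self.first.setdefault(self._href, anchor)
            self._href = None

# Two separate links: the image link comes first, so only its alt counts.
separate = ('<a href="/choco"><img src="c.jpg" alt="photo"></a>'
            '<a href="/choco">Chocolate Puffs from Japan</a>')
# One href wrapping both: the keyword-rich caption is part of the anchor.
combined = ('<a href="/choco"><img src="c.jpg" alt="photo">'
            'Chocolate Puffs from Japan</a>')

for markup in (separate, combined):
    p = AnchorTextParser()
    p.feed(markup)
    print(p.first["/choco"])
# photo
# photo Chocolate Puffs from Japan
```

The contrast between the two outputs is the best practice in a nutshell: wrap the image and the caption in a single href so the descriptive anchor text is the one that counts.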

IV. Multiple links from same page — only the first anchor counts

Well, just like with internal links, only the first anchor is going to count. So if I have two links from Candy Japan pointing to me, it’s only the top one that Google sees first in the HTML. So it’s not where it’s organized in the site as it renders visually, but where it comes up in the HTML of the page as Google is rendering that.

V. The same link and anchor on many or most or all pages on a website tends to get you into trouble.

Not always, not universally. Sometimes it can be okay. Is Amazon allowed to link to Whole Foods from their footer? Yes, they are. They’re part of the same company and group and that kind of thing. But if, for example, Amazon were to go crazy spamming and decided to make it “cheap avocados delivered to your home” and put that in the footer of all their pages and point that to the WholeFoods.com/avocadodelivery page, that would probably get penalized, or it may just be devalued. It might not rank at all, or it might not pass any link equity. So notable that in the cases where you have the option of, “Should I get a link on every page of a website? Well, gosh, that sounds like a good deal. I’d pass all this page rank and all this link equity.” No, bad deal.

Instead, far better would be to get a link from a page that’s already linked to by all of these pages, like, hey, if we can get a link from the About page or from the Products page or from the homepage, a link on the homepage, those are all great places to get links. I don’t want a link on every page in the footer or on every page in a sidebar. That tends to get me in trouble, especially if it is anchor text-rich and clearly keyword targeted and trying to manipulate SEO.

All right, everyone. I look forward to your questions. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

