The Real Impact of Mobile-First Indexing & The Importance of Fraggles

Posted by Suzzicks

While SEOs have been doubling down on content and quality signals for their websites, Google was building the foundation of a new reality for crawling, indexing, and ranking. Though many believe deep in their hearts that “Content is King,” the reality is that Mobile-First Indexing enables a new kind of search result, one that focuses on surfacing and re-publishing content in ways that feed Google’s cross-device monetization opportunities better than simple websites ever could.

For two years, Google honed and changed their messaging about Mobile-First Indexing, mostly de-emphasizing the risk that good, well-optimized, Responsive-Design sites would face. Instead, the search engine giant focused more on the use of the smartphone bot for indexing, which led to an emphasis on the importance of matching SEO-relevant site assets between desktop and mobile versions (or renderings) of a page. Things got a bit tricky when Google had to explain that the Mobile-First Indexing process would not necessarily be bad for desktop-oriented content, but all of Google’s shifting and positioning eventually validated my long-stated belief: that Mobile-First Indexing is not really about mobile phones, per se, but about mobile content.

I would like to propose an alternative to the predominant view, a speculative theory about what has been going on with Google over the past two years. It is the thesis of my 2019 MozCon talk: something we are calling Fraggles and Fraggle-based Indexing.

I’ll go through Fraggles and Fraggle-based Indexing, and how this new method of indexing has made web content more ‘liftable’ for Google. I’ll also outline how Fraggles impact the Search Engine Results Pages (SERPs), and why this fits with Google’s promotion of Progressive Web Apps. Next, I will explain how astute SEOs can adapt their understanding of SEO and leverage Fraggles and Fraggle-based Indexing to meet the needs of their clients and companies. Finally, I’ll go over the implications that this new method of indexing will have on Google’s monetization and technology strategy as a whole.

Ready? Let’s dive in.

Fraggles & Fraggle-based indexing

The SERP has changed in many ways. These changes can be thought of and discussed separately, but I believe that they are all part of a larger shift at Google. This shift includes “Entity-First Indexing” of crawled information around the existing structure of Google’s Knowledge Graph, and the concept of “Portable-prioritized Organization of Information,” which favors information that is easy to lift and re-present in Google’s properties — Google describes these two things together as “Mobile-First Indexing.”

As SEOs, we need to remember that the web is getting bigger and bigger, which means that it’s getting harder to crawl. Users now expect Google to index and surface content instantly. But while webmasters and SEOs were building out more and more content in flat, crawlable HTML pages, the best parts of the web were moving towards more dynamic websites and web apps. These new assets were driven by databases of information on a server, populating their information into pages with JavaScript and AJAX rather than flat, easily crawlable HTML.

For many years, this was a major problem for Google, and thus a problem for SEOs and webmasters. Ultimately, though, it was this more complex code that forced Google to shift to a more advanced, entity-based system of indexing — something we at MobileMoxie call Fraggles and Fraggle-based Indexing, a name that owes a debt to JavaScript’s “Fragments.”

Fraggles represent individual parts (fragments) of a page for which Google has overlaid a “handle” or “jump-link” (aka named anchor, bookmark, etc.), so that a click on the result takes the user directly to the part of the page where the relevant fragment of text is located. These Fraggles are then organized around the relevant nodes on the Knowledge Graph, so that the mapping of the relationships between different topics can be vetted, built out, and maintained over time, but also so that the structure can be used and reused internationally — even if different content is ranking.

More than one Fraggle can rank for a page, and the format varies: a text link with a “Jump to” label, an unlabeled text link, a site-link carousel, a site-link carousel with pictures, or occasionally horizontal or vertical expansion boxes for the different items on a page.

The most notable thing about Fraggles is the automatic scrolling behavior from the SERP. While Fraggles are often linked to content that has HTML or JavaScript jump-links, sometimes the jump-links appear to be added by Google without being present in the code at all. This behavior is also prominently featured in AMP Featured Snippets, for which Google has the same scrolling behavior but also includes Google’s colored highlighting — superimposed on the page — to show the part of the page that was displayed in the Featured Snippet, which allows the searcher to see it in context. I write about this more in the article: What the Heck are Fraggles.
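The “handle” a Fraggle scrolls to is, at its simplest, an ordinary HTML fragment identifier. Here is a minimal sketch of a long page exposing such handles (the ids, content, and URL are hypothetical examples):

```html
<!-- One long page covering many vegetables. Each section's id doubles
     as a fragment "handle" that a link or a SERP result can target. -->
<article>
  <h2 id="lettuce">Lettuce</h2>
  <p>Romaine, butterhead, iceberg, and other lettuce varieties…</p>

  <h2 id="radishes">Radishes</h2>
  <p>Radishes are fast-growing root vegetables…</p>
</article>

<!-- A link to the fragment scrolls the browser straight to that section: -->
<a href="https://example.com/vegetables#lettuce">Jump to lettuce</a>
```

Explicit ids like these make the fragment unambiguous, though as noted above, Google sometimes appears to synthesize the scroll target even when no anchor exists in the code.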

How Fraggles & Fraggle-based indexing work with JavaScript

Google’s desire to index Native Apps and Web Apps, including single-page apps, has necessitated Google’s switch to indexing based on Fragments and Fraggles, rather than pages. In JavaScript, as well as in Native Apps, a “Fragment” is a piece of content or information that is not necessarily a full page. 

The easiest way for an SEO to think about a Fragment is with the example of an AJAX expansion box: The piece of text or information that is fetched from the server to populate the AJAX expander when clicked could be described as a Fragment. Once that fragment is indexed for Mobile-First Indexing, it is a Fraggle.
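In code terms, the fragment behind a jump-link is just the hash portion of a URL. A small JavaScript sketch (the URL is hypothetical) shows why classic crawlers ignored it, and what fragment-level indexing could key on instead:

```javascript
// The WHATWG URL class (built into browsers and Node) exposes the
// fragment identifier as `hash` -- the part a jump-link scrolls to.
const result = new URL("https://example.com/vegetables#lettuce");

console.log(result.pathname); // "/vegetables"
console.log(result.hash);     // "#lettuce"

// The fragment is client-side only: it is never sent to the server, so
// to a page-based crawler every #-variant looks like duplicate content.
const pageKey = result.origin + result.pathname;  // classic page-level key
const fraggleKey = pageKey + result.hash;         // fragment-level key

console.log(fraggleKey); // "https://example.com/vegetables#lettuce"
```

This is consistent with Google now reporting # URLs separately in Search Console, as discussed below: the hash becomes part of the indexing key rather than noise to be stripped.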

It is no coincidence that Google announced the launch of Deferred JavaScript Rendering at roughly the same time as the public roll-out of Mobile-First Indexing without drawing out the connection, but here it is: When Google can index fragments of information from web pages, web apps, and native apps, all organized around the Knowledge Graph, the data itself becomes “portable” or “mobile-first.”

We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren’t getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 

Before Fraggles and Fraggle-based Indexing, indexing # URLs would have just resulted in a massive duplicate content problem and extra indexing work for Google. Now that Fraggle-based Indexing is in place, it makes sense to index and report on # URLs in Search Console — especially for breaking up long, drawn-out JavaScript experiences like PWAs and Single-Page Apps that don’t have separate URLs or databases, and in the long run, possibly even for indexing native apps without Deep Links.

Why index fragments & Fraggles?

If you’re used to thinking of rankings with the smallest increment being a URL, this idea can be hard to wrap your brain around. To help, consider this thought experiment: How useful would it be for Google to rank a page that gave detailed information about all different kinds of fruits and vegetables? It would be easy for a query like “fruits and vegetables,” that’s for sure. But if the query is changed to “lettuce” or “types of lettuce,” then the page would struggle to rank, even if it had the best, most authoritative information. 

This is because the “lettuce” keywords would be diluted by all the other fruit and vegetable content. It would be more useful for Google to rank the part of the page that is about lettuce for queries related to lettuce, and the part of the page about radishes for queries about radishes. But since users don’t want to scroll through an entire page of fruits and vegetables to find the information about the particular vegetable they searched for, Google prioritizes pages with keyword focus and density as they relate to the query. Google will rarely rank long pages that cover multiple topics, even if they are more authoritative.
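The dilution argument can be made concrete with a toy model. Everything here (the content, the scoring function) is invented for illustration and is not how Google actually scores; it simply contrasts page-level and fragment-level relevance:

```javascript
// Toy contrast between page-level and fragment-level retrieval.
const page = {
  url: "https://example.com/vegetables",
  fragments: {
    "#lettuce":  "lettuce romaine butterhead iceberg lettuce salad",
    "#radishes": "radish radishes root vegetable",
    "#carrots":  "carrot carrots orange root vegetable",
  },
};

// Naive relevance: term frequency divided by length, so a focused
// fragment beats a long mixed page for a narrow query.
function score(text, term) {
  const words = text.toLowerCase().split(/\s+/);
  return words.filter((w) => w.startsWith(term)).length / words.length;
}

function bestFraggle(page, term) {
  let best = null;
  for (const [hash, text] of Object.entries(page.fragments)) {
    const s = score(text, term);
    if (!best || s > best.score) best = { url: page.url + hash, score: s };
  }
  return best;
}

const wholePage = Object.values(page.fragments).join(" ");
console.log(score(wholePage, "lettuce"));        // diluted by other vegetables
console.log(bestFraggle(page, "lettuce").url);   // "https://example.com/vegetables#lettuce"
```

In this sketch the whole page scores roughly 0.13 for “lettuce” while the lettuce fragment alone scores roughly 0.33, so ranking the fragment serves the narrow query better.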

With featured snippets, AMP featured snippets, and Fraggles, it’s clear that Google can already find the important part of a page that answers a specific question — they’ve actually been able to do this for a while. So, if Google can organize and index content like that, what would the benefit be in maintaining an index based only on per-page statistics and rankings? Why would Google want to rank entire pages when they could rank just the best parts of pages that are most related to the query?

To address these concerns, historically, SEOs have worked to break individual topics out into separate pages, with one page focused on each topic or keyword cluster. So, with our vegetable example, this would ensure that the lettuce page could rank for lettuce queries and the radish page could rank for radish queries. With each website creating a new page for every possible topic that it would like to rank for, there’s a lot of redundant and repetitive work for webmasters, and it likely adds a lot of low-quality, unnecessary pages to the index. Realistically, how many individual pages on lettuce does the internet really need, and how would Google determine which one is the best? The fact is, Google wanted to shift to an algorithm that focused less on links and more on topical authority to surface only the best content, and the scrolling behavior of Fraggles lets it circumvent the need for all those separate pages.

Even though the effort to switch to Fraggle-based Indexing and organize the information around the Knowledge Graph was massive, the long-term benefits of the switch far outpace the costs, because they make Google’s system more flexible, monetizable, and sustainable, especially as the amount of information and the number of connected devices expand exponentially. It also helps Google identify, serve, and monetize new cross-device search opportunities as they continue to expand, including search results on TVs, connected screens, and spoken results from connected speakers. A few relevant costs and benefits are outlined below for you to contemplate, keeping Google’s long-term perspective in mind:

Why Fraggles and Fraggle-based indexing are important for PWAs

What also makes the shift to Fraggle-based Indexing relevant to SEOs is how it fits in with Google’s championing of Progressive Web Apps and AMP Progressive Web Apps (aka PWAs and PWA-AMP websites/web apps). These types of sites have become the core focus of Google’s Chrome Developer Summits and other smaller Google conferences.

From the perspective of traditional crawling and indexing, Google’s focus on PWAs is confusing. PWAs often feature heavy JavaScript and are still frequently built as Single-Page Apps (SPAs), with only one or a few URLs. Both of these traits would make PWAs especially difficult and resource-intensive for Google to index in a traditional way — so why would Google be so enthusiastic about PWAs?

The answer is that PWAs require ServiceWorkers, which allow Fraggles and Fraggle-based indexing to take the burden off the crawling and indexing of complex web content.

In case you need a quick refresher: a ServiceWorker is a JavaScript file that instructs a device (mobile or computer) to create a local cache of content to be used just for the operation of the PWA. It makes loading content much faster, because the content is stored locally rather than fetched from a server or CDN somewhere on the internet, and it does so by saving copies of text and images associated with certain screens in the PWA. Once a user accesses content in a PWA, that content doesn’t need to be fetched again from the server. It’s a bit like browser caching, but faster, since the ServiceWorker itself stores the information about when each piece of content expires rather than checking the web. This is what makes PWAs seem to work offline, but it is also why content that has not been visited yet is not stored in the ServiceWorker.
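The expiry bookkeeping described above can be sketched as a tiny cache policy. This models only the idea; a real ServiceWorker uses the browser’s Cache API and a `fetch` event handler, and the URLs and timings below are invented:

```javascript
// Minimal sketch of the caching idea behind a ServiceWorker: content is
// stored locally with an expiry time, so repeat visits skip the network.
class LocalCache {
  constructor() { this.entries = new Map(); }

  put(url, body, maxAgeMs, now = Date.now()) {
    this.entries.set(url, { body, expiresAt: now + maxAgeMs });
  }

  // Returns cached content while fresh; null means "fetch from the
  // server again" -- which is also when a crawler would need to re-crawl.
  get(url, now = Date.now()) {
    const entry = this.entries.get(url);
    if (!entry || now >= entry.expiresAt) return null;
    return entry.body;
  }
}

const cache = new LocalCache();
cache.put("/menu", "<ul>…</ul>", 60000, 0); // fresh for 60s from t=0
console.log(cache.get("/menu", 30000));     // "<ul>…</ul>"  (served locally)
console.log(cache.get("/menu", 61000));     // null          (expired; refetch)
```

The same expiry metadata that spares the user a network round-trip is what would tell a crawler exactly when a piece of content next needs re-crawling, which is the point made in the next section.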

ServiceWorkers and SEO

Most SEOs who understand PWAs know that a ServiceWorker is for caching and load time, but they may not realize that it is likely also for indexing. If you think about it, ServiceWorkers mostly store the text and images of a site, which is exactly what the crawler wants. A crawler that uses Deferred JavaScript Rendering could go through a PWA, simulate clicking on all the links, and store the static content using the framework set forth in the ServiceWorker. And it could do this without always having to crawl all the JavaScript on the site, as long as it understood how the site was organized and that organization stayed consistent.

Google would also know exactly how often to re-crawl, and therefore could re-crawl certain items only when they were set to expire in the ServiceWorker cache. This saves Google a lot of time and effort, allowing them to get through, or possibly skip, complex code and JavaScript.

For a PWA to be indexed, Google requires webmasters to ‘register their app in Firebase,’ but they used to require webmasters to ‘register their ServiceWorker.’ Firebase is the Google platform that allows webmasters to set up and manage indexing and deep linking for their native apps, chat-bots and, now, PWAs.

Direct communication with a PWA specialist at Google a few years ago revealed that Google didn’t crawl the ServiceWorker itself, but crawled an API to the ServiceWorker. It’s likely that when webmasters register their ServiceWorker with Google, Google is actually creating an API to the ServiceWorker so that the content can be quickly and easily indexed and cached on Google’s servers. Since Google has already launched an Indexing API and appears to now favor APIs over traditional crawling, we believe Google will begin pushing the use of ServiceWorkers to improve page speed, since they can be used on non-PWA sites as well; in reality, this will also help ease the burden on Google of crawling and indexing the content manually.

Flat HTML may still be the fastest way to get web information crawled and indexed with Google. For now, JavaScript still has to be deferred for rendering, but it is important to recognize that this could change and crawling and indexing is not the only way to get your information to Google. Google’s Indexing API, which was launched for indexing time-sensitive information like job postings and live-streaming video, will likely be expanded to include different types of content. 

It’s important to remember that this is how AMP, Schema, and many other powerful SEO functionalities started: with a limited launch. Beyond that, some great SEOs have already tested submitting other types of content to the API and seen success. Submitting to APIs skips Google’s process of blindly crawling the web for new content and allows webmasters to feed the information to Google directly.

It is possible that the new Indexing API follows a similar structure or process to PWA indexing. Submitted URLs can already get some kinds of content indexed or removed from Google’s index, usually in about an hour, and while the API is currently only officially available for those two kinds of content, we expect it to be expanded broadly.
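For reference, a notification to the existing Indexing API is a small JSON POST. The endpoint and body shape below follow Google’s published documentation for job-posting and livestream URLs; the page URL is invented and `TOKEN` stands in for real OAuth service-account authentication:

```javascript
// Sketch of a Google Indexing API notification.
const ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish";

function buildNotification(url, removed = false) {
  return {
    url,
    type: removed ? "URL_DELETED" : "URL_UPDATED",
  };
}

const body = buildNotification("https://example.com/jobs/staff-editor");
console.log(JSON.stringify(body));
// {"url":"https://example.com/jobs/staff-editor","type":"URL_UPDATED"}

// Sending it would look like this (requires a valid token; not run here):
// fetch(ENDPOINT, {
//   method: "POST",
//   headers: { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```

The notification carries no content at all, only the URL and an update/delete flag, which is what makes the API a push-based alternative to blind crawling.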

How will this impact SEO strategy?

Of course, every SEO wants to know how to leverage this speculative theory — how can we make the changes in Google to our benefit? 

The first thing to do is take a good, long, honest look at a mobile search result. Position #1 in the organic rankings is just not what it used to be. There’s a ton of engaging content that is often pushing it down, but not counting as an organic ranking position in Search Console. This means that you may be maintaining all your organic rankings while also losing a massive amount of traffic to SERP features like Knowledge Graph results, Featured Snippets, Google My Business, maps, apps, Found on the Web, and other similar items that rank outside of the normal organic results. 

These results, as well as Pay-per-Click results (PPC), are more impactful on mobile because they are stacked above organic rankings. Rather than being off to the side, as they might be in a desktop view of the search, they push organic rankings further down the results page. There has been some great reporting recently about the statistical, large-scale impact of changes to the SERP, and how these changes have altered user behavior in search, especially from Dr. Pete Meyers, Rand Fishkin, and Jumpshot.

Dr. Pete has focused on the increasing number of changes to the Google Algorithm recorded in his MozCast, which heated up at the end of 2016 when Google started working on Mobile-First Indexing, and again after it launched the Medic update in 2018. 

Rand, on the other hand, focused on how the new types of rankings are pushing traditional organic results down, resulting in less traffic to websites, especially on mobile. All this great data from these two really set the stage for a fundamental shift in SEO strategy as it relates to Mobile-First Indexing.

The research shows that Google re-organized its index to suit a different presentation of information — especially if they are able to index that information around an entity-concept in the Knowledge Graph. Fraggle-based Indexing makes all of the information that Google crawls even more portable because it is intelligently nested among related Knowledge Graph nodes, which can be surfaced in a variety of different ways. Since Fraggle-based Indexing focuses more on the meaningful organization of data than it does on pages and URLs, the results are a more “windowed” presentation of the information in the SERP. SEOs need to understand that search results are now based on entities and use-cases (think micro-moments), instead of pages and domains.

Google’s Knowledge Graph

To really grasp how this new method of indexing will impact your SEO strategy, you first have to understand how Google’s Knowledge Graph works. 

Since it is an actual “graph,” all Knowledge Graph entries (nodes) include both vertical and lateral relationships. For instance, an entry for “bread” can include lateral relationships to related topics like cheese, butter, and cake, but may also include vertical relationships like “standard ingredients in bread” or “types of bread.” 

Lateral relationships can be thought of as related nodes on the Knowledge Graph and hint at “Related Topics,” whereas vertical relationships point to a broadening or narrowing of the topic, which hints at the most likely filters within it. In the case of bread, an upward vertical relationship would be to topics like “baking,” and downward relationships would include topics like “flour” and other ingredients used to make bread, or “sourdough” and other specific types of bread.
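The bread example can be modeled as a tiny graph. The nodes and relationships below are illustrative, not Google’s actual data:

```javascript
// A tiny Knowledge-Graph-style structure: each node keeps lateral
// ("related") links plus vertical links up (broader) and down (narrower).
const graph = {
  baking:    { broader: [],         narrower: ["bread"],              related: [] },
  bread:     { broader: ["baking"], narrower: ["flour", "sourdough"], related: ["cheese", "butter", "cake"] },
  flour:     { broader: ["bread"],  narrower: [],                     related: [] },
  sourdough: { broader: ["bread"],  narrower: [],                     related: [] },
};

// Lateral relationships suggest "Related Topics"...
function relatedTopics(node) {
  return graph[node]?.related ?? [];
}

// ...while downward vertical relationships suggest filters within a topic.
function filtersFor(node) {
  return graph[node]?.narrower ?? [];
}

console.log(relatedTopics("bread")); // cheese, butter, cake
console.log(filtersFor("bread"));    // flour, sourdough
console.log(graph.bread.broader);    // baking
```

Because the edges are between entities rather than words, the same structure works regardless of the language a searcher uses, which is the language-agnostic point made below.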

SEOs should note that Knowledge Graph entries can now include an increasingly wide variety of filters and tabs that narrow the topic information to benefit different types of searcher intent. This includes things like helping searchers find videos, books, images, quotes, and locations, but in the case of filters, it can be topic-specific and unpredictable (informed by active machine learning). This is the crux of Google’s goal with Fraggle-based Indexing: to be able to organize the information of the web based on Knowledge Graph entries or nodes, otherwise discussed in SEO circles as “entities.”

Since the relationships of one entity to another remain the same, regardless of the language a person is speaking or searching in, the Knowledge Graph information is language-agnostic, and thus easily used for aggregation and machine learning in all languages at the same time. Using the Knowledge Graph as a cornerstone for indexing is, therefore, a much more useful and efficient means for Google to access and serve information in multiple languages for consumption and ranking around the world. In the long-term, it’s far superior to the previous method of indexing.

Examples of Fraggle-based indexing in the SERPs 

Knowledge Graph

Google has dramatically increased the number of Knowledge Graph entries and the categories and relationships within them. The build-out is especially prominent for topics for which Google has a high amount of structured data and information already. This includes topics like:

  • TV and Movies — from Google Play
  • Food and Recipe — from Recipe Schema, recipe AMP pages, and external food and nutrition databases 
  • Science and medicine — from trusted sources (like WebMD) 
  • Businesses — from Google My Business

Google is adding more and more nodes and relationships to their graph, and existing entries are also being built out with more tabs and carousels that break a single topic into smaller, more granular topics or types of information.

As you can see below, the build-out of the Knowledge Graph has also added to the number of filters and drill-down options within many queries, even outside of the Knowledge Graph. This increase can be seen throughout all of the Google properties, including Google My Business and Shopping, both of which we believe are now sections of the Knowledge Graph:

Google Search for ‘Blazers’ with Visual Filters at the Top for Shopping Oriented Queries

Google My Business (Business Knowledge Graph) with Filters for Information about Googleplex

Other similar examples include the additional filters and “Related Topics” results in Google Images, which we also believe to represent nodes on the Knowledge Graph:



Google Images Increase in Filters & Inclusion of Related Topics Means that These Are Also Nodes on the Knowledge Graph

The Knowledge Graph is also being presented in a variety of different ways. Sometimes there’s a sticky navigation that persists at the top of the SERP, as seen in many media-oriented queries, and sometimes it’s broken up to show different information throughout the SERP, as you may have noticed in many of the local business-oriented search results, both shown below.

Media Knowledge Graph with Sticky Top Nav (Query for ‘Ferris Bueller’s Day Off’)

Local Business Knowledge Graph (GMB) With Information Split-up Throughout the SERP

Since the launch of Fraggle-based Indexing is essentially a major Knowledge Graph build-out, Knowledge Graph results have also begun including more engaging content, which makes it even less likely that users will click through to a website. Assets like playable video and audio, live sports scores, and location-specific information such as transportation information and TV timetables can all be accessed directly in the search results. There’s more to the story, though.

Increasingly, Google is also building out their own proprietary content by re-mixing existing information that they have indexed to create unique, engaging content like animated ‘AMP Stories’ which webmasters are also encouraged to build-out on their own. They have also started building a zoo of AR animals that can show as part of a Knowledge Graph result, all while encouraging developers to use their AR kit to build their own AR assets that will, no doubt, eventually be selectively incorporated into the Knowledge Graph too.

Google AR Animals in Knowledge Graph

Google AMP Stories Now Called ‘Life in Images’

SEO strategy for Knowledge Graphs

Companies that want to leverage the Knowledge Graph should take every opportunity to create their own assets, like AR models and AMP Stories, so that Google has no reason to create them instead. Beyond that, companies should submit accurate information directly to Google whenever they can. The easiest way to do this is through Google My Business (GMB). Whatever types of information are requested in GMB should be added or uploaded. If Google Posts are available in your business category, you should be posting regularly and making sure the Posts link back to your site with a call to action. If you have videos or photos that are relevant for your company, upload them to GMB. Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.

You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry (“Best Plumbers in Denver,” “Best Shoe Deals on the Web,” or “Top 15 Best Reality TV Shows”). Links from these posts also help but are not required — especially if you can get your company name on enough lists with the other top players. Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.

While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.

Featured snippets 

Featured Snippets or “Answers” first surfaced in 2014 and have also expanded quite a bit, as shown in the graph below. It is useful to think of Featured Snippets as rogue facts, ideas or concepts that don’t have a full Knowledge Graph result, though they might actually be associated with certain existing nodes on the Knowledge Graph (or they could be in the vetting process for eventual Knowledge Graph build-out). 

Featured Snippets seem to surface when the information comes from a source that Google does not have an incredibly high level of trust for, as it does for Wikipedia. They often come from third-party sites that may or may not have a monetary interest in the topic — something that makes Google want to vet the information more thoroughly, and that may prevent Google from using it if a less biased option is available.

Like the Knowledge Graph, Featured Snippets results have grown very rapidly in the past year or so, and have also begun to include carousels — something that Rob Bucci writes about extensively here. We believe that these carousels represent potentially related topics that Google knows about from the Knowledge Graph. Featured Snippets now look even more like mini-Knowledge Graph entries: Carousels appear to include both lateral and vertically related topics, and their appearance and maintenance seem to be driven by click volume and subsequent searches. However, this may also be influenced by aggregated engagement data for People Also Ask and Related Search data.

The build-out of Featured Snippets has been so aggressive that sometimes the answers that Google lifts are obviously wrong, as you can see in the example image below. It is also important to understand that Featured Snippet results can change from location to location and are not language-agnostic, and thus, are not translated to match the Search Language or the Phone Language settings. Google also does not hold themselves to any standard of consistency, so one Featured Snippet for one query might present an answer one way, and a similar query for the same fact could present a Featured Snippet with slightly different information. For instance, a query for “how long to boil an egg” could result in an answer that says “5 minutes” and a different query for “how to make a hard-boiled egg” could result in an answer that says “boil for 1 minute, and leave the egg in the water until it is back to room temperature.”

Featured Snippet with Carousel

Featured Snippet that is Wrong

The data below was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.


SEO strategy for featured snippets

All of the standard recommendations for driving Featured Snippets apply here. This includes making sure that you keep the information that you are trying to get ranked in a Featured Snippet clear, direct, and within the recommended character count. It also includes using simple tables, ordered lists, and bullets to make the data easier to consume, as well as modeling your content after existing Featured Snippet results in your industry.

This is still speculative, but it seems likely that the inclusion of Speakable Schema markup for things like “How To,” “FAQ,” and “Q&A” content may also drive Featured Snippets. These kinds of results are specially designated as content that works well in voice search. Since Google has been adamant that there is not more than one index, and since Google is heavily focused on improving voice results from Google Assistant devices, anything that could be a good result in the Google Assistant and ranks well might also have a stronger chance at ranking in a Featured Snippet.
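As a concrete sketch of the kind of markup involved, FAQ content can be annotated with schema.org’s FAQPage structured data. The `FAQPage`, `Question`, and `Answer` types are documented in Google’s structured-data guidelines; the question, answer, and values here are invented examples:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long do you boil an egg?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Boil for about 5 minutes for a soft yolk, or 8 to 10 minutes for hard-boiled."
    }
  }]
}
</script>
```

Keeping the answer text short and self-contained mirrors the character-count advice above, since the same block could plausibly be lifted into a Featured Snippet or read aloud by the Assistant.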

People Also Ask & Related Searches

Finally, the increased occurrence of “Related Searches,” as well as the inclusion of People Also Ask (PAA) questions just below most Knowledge Graph and Featured Snippet results, is undeniable. The Earl Tea screenshot shows that PAAs, along with Interesting Finds, are both part of the Knowledge Graph too.

The graph below shows the steady increase in PAAs. PAA results appear to be an expansion of Featured Snippets: once expanded, the answer to the question is displayed, with the citation below it. Similarly, some Related Search results now include a result that looks like a Featured Snippet, instead of simply linking over to a different search result. You can now find ‘Related Searches’ throughout the SERP: often as part of a Knowledge Graph result, sometimes in a carousel in the middle of the SERP, and always at the bottom of the SERP, sometimes with images and expansion buttons that surface Featured Snippets within the Related Search results directly in the existing SERP.

Boxes with Related Searches are now also included with Image Search results. It’s interesting to note that Related Search results in Google Images started surfacing at the same time that Google began translating image Title Tags and Alt Tags. This fits well with the concepts that Entity-First Indexing, entities, and the Knowledge Graph are language-agnostic, and that Related Searches are somehow related to the Knowledge Graph.

This data was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.

People Also Ask

Related Searches

SEO strategy for PAA and related searches

Since PAAs and some Related Searches now appear to simply include Featured Snippets, driving Featured Snippet results for your site is a strong strategy here too. PAA results often appear to include at least two versions of the same question, re-stated with different wording, before including questions that are more related to lateral and vertical nodes on the Knowledge Graph. If you include information on your site that Google thinks is related to the topic, based on Related Searches and PAA questions, it could help make your site appear relevant and authoritative.

Finally, it is crucial to remember that you don’t need a website to rank in Google now, and SEOs should consider non-website rankings part of their job too.

If a business doesn’t have a website, or if you just want to cover all the bases, you can let Google host your content directly, in as many places as possible. We have seen that Google-hosted content generally seems to get preferential treatment in Google search results and Google Discover, especially when compared to the decreasing traffic from traditional organic results. Google is now heavily focused on surfacing multimedia content, so anything you might previously have created a new page on your website for should now also be considered as a candidate for a video.

Google My Business (GMB) is great for companies that don’t have websites, or that want to host their websites directly with Google. YouTube is great for videos, TV, video-podcasts, clips, animations, and tutorials. If you have an app, a book, an audio-book, a podcast, a movie, a TV show, a class, music, or a PWA, you can submit it directly to Google Play (much of the video content in Google Play is now cross-populated in YouTube and YouTube TV, but this is not necessarily true of the other assets). This strategy can also include books in Google Books, flights in Google Flights, hotels in Google Hotel listings, and attractions in Google Explore. It also includes having valid AMP code, since Google hosts AMP content, and Google News if your site is an approved news provider.

Changes to SEO tracking for Fraggle-based indexing

The biggest problem for SEOs is not only the missing organic traffic; it is also that current methods of tracking organic results generally don’t show whether results like the Knowledge Graph, Featured Snippets, PAAs, 'Found on the Web,' or other result types are appearing at the top of the SERP or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to lift and re-present everyone’s content, the traffic will never arrive at the site, and SEOs won’t know whether their efforts are still returning the same monetary value. This problem is especially poignant for publishers, who have historically sold advertising on their websites based on the traffic the site was expected to drive.
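One way to start quantifying this problem is to log, for each tracked keyword, which SERP features appear above the first classic organic listing and tally how often each one blocks you. A minimal Python sketch with hypothetical tracking records (the keywords and feature names are illustrative, not from any real rank-tracking API):

```python
from collections import Counter

# Hypothetical tracking data: for each keyword, the ordered list of
# SERP features that appeared above the first classic organic result.
serp_data = {
    "best running shoes": ["ads", "featured_snippet", "paa"],
    "running shoe repair near me": ["ads", "local_pack"],
    "what is pronation": ["featured_snippet", "paa", "related_searches"],
}

# Tally how often each feature type sits above organic results.
blockers = Counter(f for feats in serp_data.values() for f in feats)

# Share of tracked SERPs where a Featured Snippet outranks organic.
pct_with_snippet = sum(
    "featured_snippet" in feats for feats in serp_data.values()
) / len(serp_data)

print(blockers.most_common(3))
print(f"{pct_with_snippet:.0%} of tracked SERPs show a Featured Snippet above organic")
```

Fed with real ranking data, a report like this shows how much of "position one" is actually position four or five on the rendered page.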

The other thing to remember is that results differ, especially on mobile, where they vary from device to device (generally based on screen size) and can also vary based on the phone's OS. They can also change significantly based on the location or the language settings of the phone, and they definitely do not always match desktop results for the same query. Most SEOs don’t know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First Indexing.

In addition, SEO tools generally only report rankings from one location (the location of their servers) rather than testing from the different locations where users actually search.

The best thing SEOs can do to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like for top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or on an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector for Data Studio to help with SEO reporting, and APIs are also available for deeper integrations in content-editing tools, dashboards, and other SEO tools.


At MozCon 2017, I expressed my belief that the impact of Mobile-First Indexing requires a re-interpretation of the words “Mobile,” “First,” and “Indexing.” Re-defined in the context of Mobile-First Indexing, the words should be understood to mean “portable,” “preferred,” and “organization of information.” The potential of a shift to Fraggle-based indexing and the recent changes to the SERPs, especially in the past year, certainly seem to support this theory. And though they have been in the works for more than two years, the changes to the SERP now seem to be rolling out faster and are making the SERP unrecognizable from what it was only three or four years ago.

In this post, we described Fraggles and Fraggle-based indexing for SEO: a theory that speculates about the true nature of the change to Mobile-First Indexing, and about how the index itself, and the units of indexing, may have changed to accommodate faster and more nuanced organization of information based on the Knowledge Graph rather than simply links and URLs. We covered how Fraggles and Fraggle-based Indexing work, how they are related to JavaScript and PWAs, what strategies SEOs can take to leverage them for additional exposure in the search results, and how SEOs can update their success tracking to account for all the variables that impact mobile search results.

SEOs need to consider the opportunities and change the way we view our overall indexing strategy, and our jobs as a whole. If Google is organizing the index around the Knowledge Graph, it becomes much easier for Google to constantly surface nearby nodes of the Knowledge Graph in “Related Searches” carousels, links from the Knowledge Graph, and topics in PAAs. It also makes it easier to believe that Featured Snippets are simply pieces of information being vetted (via Google’s click-crowdsourcing) for inclusion or reference in the Knowledge Graph.

Fraggles and Fraggle-based indexing re-frame the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first, i.e. about the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages and more on pieces of pages, like the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an “answer engine” instead of a “search engine,” this fits well with its stated identity and its desire to build a more efficient, sustainable, international engine.

SEOs also need to find ways to serve their users better, by focusing more on the reality of the mobile SERP, and how much it can vary for real users. While Google may not call the smallest rankable units Fraggles, it is what we call them, and we think they are critical to the future of SEO.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Blog

Posted in IM News | Comments Off

The growing importance of remarketing audiences in Google paid search management

With the explosive growth of click share coming from remarketing audiences, contributor Andy Taylor feels it’s important to consider both incrementality and personalization when using audiences for paid search management.

Please visit Search Engine Land for the full article.

Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


War of Words: Elon Musk and Mark Zuckerberg Spar on Importance of AI

Nothing gets a geek’s dander up more than a discussion of whether a Skynet-like AI will become part of our future, as seen in the beef apparently brewing between Elon Musk and Mark Zuckerberg.

The two billionaires have opposing views on artificial intelligence. While Musk is known for issuing warnings about the dangers of artificial intelligence, Facebook’s CEO has expressed optimism about how AI can improve people’s lives, a mindset that Tesla’s chief considers a pretty “limited” understanding of the topic.

The word war apparently started after Zuckerberg conducted a Facebook Live session. As he relaxed at home and manned the grill, the tech icon answered various questions, including one about AI.

According to Zuckerberg, people who keep trying to drum up fear of AI are “really negative” and “pretty irresponsible.” He emphasized that any technology, including AI, can be used for either good or bad and that it’s up to designers and developers to be careful of what they create.

Zuckerberg added that he has a hard time understanding those who are against the development and evolution of AI technology, saying that these people are “arguing against safer cars that aren’t going to have accidents” and “against being able to better diagnose people when they’re sick.”

It’s safe to assume that Tesla’s boss was among those people Zuckerberg is talking about. Musk met a group of US governors earlier this month and proposed that regulations on artificial intelligence should be enacted.

Musk explained that AI technology posed a huge risk to society, hinting at a future similar to what the Terminator movies have implied.

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said then.

Upon hearing Zuckerberg’s comments on AI, Musk hit back on Twitter, saying that he has talked to his contemporary about this. He also said that Zuckerberg’s “understanding of the subject is limited.”

However, Zuckerberg is sticking to his guns as he once more defended his views on AI in a recent Facebook post. He reiterated his optimism about AI and the technology’s potential to improve the world.

[Featured image via YouTube]

The post War of Words: Elon Musk and Mark Zuckerberg Spar on Importance of AI appeared first on WebProNews.



The Importance of Entrepreneurial Mental Health


Today we’re joined by Cory Miller. Cory is a former newspaper journalist turned full-time entrepreneur. In 2008, he started iThemes, which builds web design software and offers cutting-edge web design training for thousands of customers around the globe.

Cory is a passionate entrepreneur who believes in finding and maintaining work happiness (for himself and others) that aligns with your purpose and plays to your strengths, talents, and ambitions, while challenging you to do great things with your life.

In this 43-minute episode Brian Gardner, Lauren Mancke, and Cory Miller discuss:

  • The founding of iThemes in 2008
  • Camaraderie and co-opetition in the WordPress community
  • What lies beneath the surface of entrepreneurship
  • The importance of talking openly about mental health
  • How mental health can affect your business
  • How to find lasting career happiness

Listen to this Episode Now

The post The Importance of Entrepreneurial Mental Health appeared first on Copyblogger.



Importance of Wind Energy – Pros and Cons to Save Energy

Wind power is a clean and renewable way to produce electricity. Today, wind energy is considered the second-leading renewable energy source after solar. At the time of writing, the total installed capacity of wind turbines worldwide is reported at more than 3,500 TW.

While a whole lot of individuals receive the benefits of large-scale wind energy, there are also millions of people around the world who install wind turbines to provide electricity to their own homes. But is wind power really worth it? Can you really save hard-earned money by using this power? The following are some of the advantages and disadvantages of wind energy that can enhance your understanding of this renewable energy.

If you want to weigh the advantages and disadvantages of wind energy, it is essential to think about alternative energy sources, because non-renewable resources cannot last forever. This post starts by listing the benefits, then the drawbacks, and finishes with a balanced conclusion.

Advantages:

1. No pollution or radioactive waste is created, and the construction and installation of wind turbines is comparatively gentle on the environment.
2. Wind power is much more autonomous. If a natural disaster took central power plants offline, individuals with their own turbines would still have electricity.
3. Wind power does not exhaust non-renewable resources such as coal, oil, or gas.

Disadvantages:

1. Some people say that wind turbines spoil the look of the countryside. This can be especially true when they are very large, or when there are many of them in one place.
2. Wind turbines can be adversely affected by extreme weather such as thunderstorms.
3. The blades can kill birds that fly into them. However, it is worth remembering that other methods of energy production also harm wildlife, including through air pollution.

Because fossil fuels are finite, it is an inescapable truth that we must find alternatives if we want to keep living sustainably for many years to come. Similarly, many researchers agree that we must find methods of energy production that do not pollute the atmosphere the way other energy sources do.

However, it is true that wind turbines can sometimes be a substantial eyesore on certain landscapes. To provide every person with enough electricity would certainly require far more wind turbines than we currently have. One way around this is for people to build their own small wind turbines that provide electricity just for their home. This has two major advantages:

• Less need for large numbers of big wind turbines that dominate the landscape.
• People can be more energy-independent.

Latest solar news


The importance of relatability in social media marketing

Do you own a small business? If so, you’ve likely been told by a marketer, a customer or even a relative that you need a presence on whatever social media site is flavor-of-the-month.

Search Engine Watch


The importance of user reviews for local SEO

Reviews are a massive part of the web now, and an absolute essential for online retailers.

Search Engine Watch


The Importance of Ad Extensions – Five Ways to Create That Personal Touch

Since Google AdWords determines the best combination, having extensions, such as site links and callouts, gives your ad a better chance to have more possible ad combinations.

Home – SearchEngineWatch


An SEO’s Advice: the importance of fixing outbound links

This is a guest post by Michael Martinez.

As Bing and Google rule out more beloved link building strategies, marketers increasingly turn to supposedly “safe” strategies like broken link replacement (a form of “link reclamation”).  I’m not convinced this is as safe a link building strategy as its proponents want to believe, but so far the search engines are not hinting at future changes in their guidelines.

You will always have the right to ask for a link.  No search engine can take that away from you.  But when you do ask for a link because you believe it will help you build up your search referral traffic then you should assume there is some potential risk involved with that request.  The fully realized potential risk is that you will be penalized (delisted) by a search engine for acquiring the link.  But you should think of potential risk as a partially-filled balloon that may or may not inflate until it explodes.

Risk potential changes over time, but not all the risks you face concern search engine guidelines, penalties, and algorithms.  Let’s just talk about the simple act of placing a link in an article that you publish today.

WIKI LINKS:  One of my favorite examples of a high-risk outbound link is a link to any Wiki site that can be changed by its visitors or an active user community.  Wiki articles may seem very good to you today but in 2-3 years (or 10 years) they will be very, very different from the content you linked to.

I am a long-time critic of Wikipedia because of the amateurish revert wars that experienced Wiki editors start in order to pervert the content.  The way Wikipedia handles these disagreements is to penalize the 2nd person (the one who responds to the reversion) instead of the trouble-maker.  Many tens of thousands of people have gone into Wikipedia, made good changes, and then watched in horror as some more experienced user comes along, changes everything back, and watches the article to ensure that the original contributor is blocked by Wikipedia’s reversion rule from keeping the good changes in the article.

If you want to link to a Wiki site that is your choice but you are linking to every idiot, troll, and well-meaning but clueless admin who uses the rules to make good content look bad.  There is a lot of risk entailed in linking to any Wiki site, especially if you are expressing an opinion and you feel you are linking to an article that supports your opinion.  Someone who disagrees with you can change the Wiki article to contradict what you are saying.  Good luck fixing that.

LINKS TO BLOGS: As bloggers we should be linking to other people’s blogs.  After all, supporting the community that supports you keeps the community strong.  But most bloggers don’t stay with their blogs.  If you just link to the home page of the blog in 3 years you may be linking to a dead blog that hasn’t been updated in 2 years.

If you deep-link to an article on a blog your link may survive for a few years but eventually something will change.  Blogs are often deleted.  They are often moved.  The URL structures are changed.  And the worst part of this is that you may be the worst offender in your rogues gallery of bloggers who have changed things without notifying you.

I started the SEO Theory blog as a subdomain on Blogspot in December 2006.  In early 2007 we moved it to the SEO Theory domain everyone knows today.  So that was a double-whammy on changes in URL structures: we went from subdomain.domain.tld to domain.tld.

The article URLs were converted to use the correct root, but at the time we decided to go with just seo-theory.com instead of www.seo-theory.com because we thought the shorter domain URL would be the visitors’ preferred choice (that turned out not to be the case).

When we finally added the www-prefix to the domain and redirected the non-www version I decided that would be good enough.  But another decision I made at the time was to host the blog in a subdirectory.  I did that because I thought that my employer (who at the time owned all legal rights to the blog) might want to develop some marketing content on the root page.  But they already had an “official” Website and, frankly, their offline sales channel was bringing in enough business that they didn’t feel like marketing directly to the Web.

Eventually we dropped the “/wordpress/” folder from all the URLs and moved the content up to the root folder.  But I never went back and changed all the links (it would have required far too much time for review because I was writing 5 posts a week at the time AND doing my day job).

And yet, as the years rolled by, I often found myself linking back to older articles, and every link I generated with the domain.tld/wordpress/ format in the early days unintentionally set up TWO automated redirects.  This is one reason why pages on the site sometimes flash when you load them (another being the speed optimizations we have implemented).

Search engines can now handle up to 5 hops in a redirect chain.  That’s great for SEO but frankly it creates a bad user experience for me.  As I reshare old articles that I feel are relevant I occasionally find to my amused horror that the self-referential links do not reflect the correct structure.  I have learned that leaving too many legacy structures in self-referential links eventually leads to trouble so now I review old articles on a random basis to improve the quality of self-referential linking.
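Chains like the one described above can be audited offline if you keep a map of old URL to new URL. A small Python sketch (the URLs below are illustrative stand-ins for the blogspot-era and /wordpress/-era structures described above, not the blog's real addresses):

```python
def count_hops(redirects, url, max_hops=5):
    """Count hops through a redirect map (old URL -> new URL).

    Stops at max_hops, mirroring the ~5-hop limit search engines
    are said to follow, and bails out on redirect loops.
    """
    hops = 0
    seen = {url}
    while url in redirects and hops < max_hops:
        url = redirects[url]
        hops += 1
        if url in seen:  # redirect loop detected
            break
        seen.add(url)
    return hops, url

# Two legacy structures stacked: a subdomain-era URL redirects to the
# /wordpress/ path, which redirects again to the final root-level URL.
redirects = {
    "http://seotheory.blogspot.com/2006/12/post.html":
        "http://www.seo-theory.com/wordpress/2006/12/post/",
    "http://www.seo-theory.com/wordpress/2006/12/post/":
        "http://www.seo-theory.com/2006/12/post/",
}

hops, final = count_hops(redirects, "http://seotheory.blogspot.com/2006/12/post.html")
print(hops, final)  # 2 http://www.seo-theory.com/2006/12/post/
```

Running this over a sitemap's worth of legacy URLs quickly surfaces the multi-hop chains worth collapsing into single redirects.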

REBRANDING KILLS LINKS: I don’t have an estimate of how many sites I have linked to through the years that moved to new domains, but there are a LOT of them.  Given the number of Websites for which I write content it is humanly impossible to monitor all the outbound links and keep them updated.  Even my close personal friends, who have listened to me rant on and on about how Websites break with rebranded moves, occasionally break links by rebranding their sites.

“Oh, but we always advise people to set up 301 Redirects,” you say.  Yes, I tell people to do that, too.  In my daydreams people listen to me.  In real life they “just don’t have time” or “forgot to do that” or “asked IT to take care of it” and have a thousand other explanations for why it never happened.  And there are many of YOU digital marketers whose content I have linked to who have broken my outbound links.  Even the most experienced marketers don’t always fix their problems.
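For reference, the rebranding redirect being advised here is only a few lines of server configuration. A hedged sketch for Apache using mod_rewrite (the domain names are placeholders, and an equivalent nginx `server` block with a `return 301` works the same way):

```apache
# .htaccess sketch: permanently redirect every path on the old
# (rebranded) domain to the same path on the new domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.example$ [NC]
RewriteRule ^(.*)$ https://www.newdomain.example/$1 [R=301,L]
```

The path-preserving `$1` capture is what keeps deep links (and the outbound links other people pointed at them) alive after the move.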

Old content may be taken offline simply because it’s “old, outdated, and irrelevant”.  And for fear of incurring some sort of imaginary search engine penalty people won’t even redirect the dead URLs to a “that content is gone” page.  So there I am, left with dead outbound links on my page and my visitors have no clue as to what I was linking to or why.

Whenever possible I replace rebranded links with the appropriate new URLs; if the content has changed (or if the page now loads 20 advertisements), I just link to the oldest legible copy I can find on Archive.Org.
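Wayback Machine replay links follow a simple pattern: `https://web.archive.org/web/<timestamp>/<original-url>`, where the timestamp can be a bare year to request the snapshot closest to that date. A small helper sketch (the dead-blog URL is a placeholder):

```python
def wayback_url(original_url, timestamp="2012"):
    """Build an Archive.Org replay URL for a dead outbound link.

    A partial timestamp (e.g. a bare year) asks the Wayback Machine
    for the snapshot closest to that date.
    """
    return f"https://web.archive.org/web/{timestamp}/{original_url}"

print(wayback_url("http://deadblog.example/great-post/"))
# https://web.archive.org/web/2012/http://deadblog.example/great-post/
```

A link built this way only works if the page was actually crawled and is not excluded from replay, so it's worth loading the result before swapping it in.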

But even Archive.Org can fail me, because if a site sets up a “robots.txt” file that disallows ia_archiver, Archive.Org won’t show people the page.  I have done this myself simply to fight Website scraping (which, thankfully, is not nearly as bad as it used to be).
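The ia_archiver exclusion mentioned here is a two-line robots.txt rule, sketched below (note that the Internet Archive has since indicated it may not always honor robots.txt exclusions):

```
# robots.txt: ask the Internet Archive's crawler not to archive
# (and thus not to replay) any page on this site
User-agent: ia_archiver
Disallow: /
```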

My final choice for fixing a rebranded link is to convert the anchor text to an italicized expression, to indicate to me (not so much to you) that there was once a link there to something I felt was useful and the other guy killed it.

IDIOCY KILLS LINKS: Sometimes I will link to an article written by someone I don’t know.  They may be saying something I agree with today but eventually it becomes apparent to me that they got lucky with that first article.  It’s a bit like being a Skeptic who links to an article about the silliness of Paranormal Research, only to find a year later that the writer is someone who advocates an alternative form of paranormal research (for the record, I try to stay out of Skeptics-vs-Paranormal debates as much as possible).

So there you are, linking to a Website that you now believe is full of nonsense.  What should you do?  Keep sending your visitors to a lunatic asylum and they will eventually assume you must belong there, too.

Maybe you feel I’m using language that is too strong here: “idiocy” and “lunatic asylum” are insulting, after all.  But think about the way a site you linked to in the past now makes you feel.  Would you link to it today?  If not, why not?  And if you did link to it in the past then you need to realize that you ARE linking to it today as long as your old link is still published and indexable.

Your feelings should play a huge role in how you decide where to direct your links.  Trust your feelings, Luke, the Force of your emotions will guide you.

When I see that I once linked to a site that I now feel is substandard I kill the links.  If possible I’ll find something else to link to but about half the time I just throw the carcass out into the cold and don’t even italicize the old anchor text.  I want to forget that I ever linked to such a site.  I want the search engines to stop passing credit, too.

OPTIMIZATION KILLS LINKS: If you have written 10-15 articles on the same topic over the past 3-5 years you’ll eventually come to the realization that you need to clean up that mess.  It doesn’t always turn out to be a mess.  News sites, for example, need to keep their content differentiated chronologically (and shame on the sites that continually add updates to old content).

But we as digital marketers realize that eventually we start repeating ourselves, and so we either reduce the amount of content we publish on a site or we start consolidating content.  I recently did that on SEO Theory and I have done it for other sites.  Content consolidation is a great way to reset the clock and give you some breathing space so that you can write about the topic again.

But every now and then when I am reviewing old links I find they now lead to redirected destinations which are terrible attempts to consolidate old content.  For example, just before I decided to write this article I reviewed some outbound links on an old SEO Theory article.  One of them led to a specific article that has been included in some sort of a category page.  I could not find any trace of the article itself on the first page of results in the category listings, so I replaced the link with a link on Archive.Org.

When you redirect your old URLs to a consolidation page you need to show visitors who follow old links that the content they want is still there, easily reached, and important to you.  Just following my (and many other SEO bloggers’) advice to implement redirects when you consolidate old content is not good enough (at least not for me).

I want to know what happened to the old content.  I want my visitors to know that I am still providing a meaningful linking experience.

I rarely receive any requests from marketers for link reclamation.  I would almost never agree to such a request anyway unless I knew the person and thought they were legitimately making a good recommendation for my site.  Sorry, digital marketing world, but most of you appear to be hawking really bad content with your guest posting and link reclamation strategies.  I have probably agreed to two link reclamation requests in the last five years.

Optimization outreach may lead me to replace old links, but the new links may not be as good as the old links were.  At best I am improving a degraded user experience; at worst I am compromising with reality and killing bad links.  What I would prefer is for the old article publishers to be consistent in supporting the sites that linked to them in the past.

Sure, it may be hard to show that those links still exist (or still help in any way), but if people are visiting your site through old links you owe it to them and yourself to give them the most relevant experience possible.


As an advocate of writing timeless content (and I concede that not all my content is timeless), I feel that the links are just as important as the words and images on the page.  I want people to know that when they land on an old article (and those old articles get a LOT of traffic) that they can trust what I am telling them.

Sometimes I do update the old articles, when it’s necessary to provide some context (such as “this article refers to a service that went offline in 2012”).

Sometimes I take the old articles offline.  When I do so I have to decide if I want to redirect the URLs to some other content or leave them “dead”.  Yes, I do occasionally orphan inbound links that other people gave me in the past (or that I gave myself).

I know I am creating a bad user experience, but if you have done this then you’ll probably agree that you are compromising with reality and substituting a less bad user experience for a worse one.  We may be right or wrong in our judgements.

Eventually I’ll figure out what to do about the content I have taken offline.  I don’t want to leave a bad experience in place.  But at least now that I can mark posts as PRIVATE on WordPress installations, I can quickly see which articles are no longer useful and I’ll be able to think of ways to manage that user traffic.

To me, it says a lot about a marketer’s dedication to the consumer experience when I see them make an effort to resolve dead link problems in a meaningful, user-friendly way.  When you just do it for search engines you really imply that you don’t think much about what kind of impression your site makes on visitors.  I feel YOUR pain when I take content offline.  I want you to feel MY pain when you take content offline.

About Michael Martinez

Michael Martinez has been developing and promoting Websites since 1996 and began practicing search engine optimization in 1998.  He is the principal author of the SEO Theory blog. 

Marketing Pilgrim – Internet News and Opinion


The Critical Importance Of Targeted Mindset Training

I just released a brand new E-Guide called -

“Master Your Mindset: Productivity And Mindset
Training For Professional Bloggers”

You can purchase the guide from here -

Master Your Mindset Order Page

For a long time I’ve wanted to have a resource like this available. I am very excited to finally publish it.

My writing here on EJ is … Read the rest of this entry »

Entrepreneurs-Journey.com by Yaro Starak
