Archive | IM News

Daily Search Forum Recap: October 14, 2019

Here is a recap of what happened in the search forums today…


Search Engine Roundtable


Shopify SEO: The Guide to Optimizing Shopify

Posted by cml63

A trend we’ve been noticing at Go Fish Digital is that more and more of our clients have been using the Shopify platform. While we initially thought this was just a coincidence, we can see that the data tells a different story:

Graph Of Shopify Usage Statistics

The Shopify platform is now more popular than ever. Looking at BuiltWith usage statistics, we can see that usage of the CMS has more than doubled since July 2017. Currently, 4.47% of the top 10,000 sites are using Shopify.

Since we've worked with a good number of Shopify stores, we wanted to share our process for the common SEO improvements we help our clients with. The guide below outlines some common adjustments we make on Shopify stores.

What is Shopify SEO?

Shopify SEO simply means SEO improvements that are unique to the Shopify platform. While Shopify stores come with some useful SEO features out of the box, such as a blog and the ability to redirect, the platform can also create SEO issues such as duplicate content. Some of the most common Shopify SEO recommendations are:

  • Remove duplicate URLs from internal linking architecture
  • Remove duplicate paginated URLs
  • Create blog content for keywords with informational intent
  • Add “Product,” “Article,” & “BreadcrumbList” structured data
  • Determine how to handle product variant pages
  • Compress images using Crush.pics
  • Remove unnecessary Shopify apps

We’ll go into how we handle each of these recommendations below:

Duplicate content

In terms of SEO, duplicate content is the highest-priority issue we've seen created by Shopify. Duplicate content occurs when duplicate or similar content exists on two separate URLs. This creates issues for search engines, as they might not be able to determine which of the two pages should be the canonical version. On top of this, link signals are oftentimes split between the pages.

We’ve seen Shopify create duplicate content in several different ways:

  1. Duplicate product pages
  2. Duplicate collections pages through pagination

Duplicate product pages

Shopify creates this issue within its product pages. By default, Shopify stores allow their /products/ pages to render at two different URL paths:

  • Canonical URL path: /products/
  • Non-canonical URL path: /collections/.*/products/

Shopify accounts for this by ensuring that all /collections/.*/products/ pages include a canonical tag to the associated /products/ page. Notice how the URL in the address differs from the “canonical” field:

URL In Address Bar Is Different Than Canonical Link
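For illustration, a page rendered at the non-canonical path carries a canonical link element pointing at the /products/ version. The snippet below is a sketch with a placeholder domain and product handle:

<!-- Rendered at /collections/shirts/products/blue-shirt (illustrative paths) -->
<link rel="canonical" href="https://example-store.com/products/blue-shirt">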

While this certainly helps Google consolidate the duplicate content, a more alarming issue occurs when you look at the internal linking structure. By default, Shopify will link to the non-canonical version of all of your product pages.

Shopify collection page links to non-canonical URLs

We've also seen Shopify link to the non-canonical versions of URLs when websites utilize "swatch" internal links that point to other color variants.

Thus, Shopify creates your entire site architecture around non-canonical links by default. This creates a high-priority SEO issue because the website is sending Google conflicting signals:

  1. “Here are the pages we internally link to the most often”
  2. “However, the pages we link to the most often are not the URLs we actually want to be ranking in Google. Please index these other URLs with few internal links”

While canonical tags are usually respected, remember that Google treats them as hints rather than directives. This means you're relying on Google to make a judgment about whether or not the content is duplicate each time it crawls these pages. We prefer not to leave this up to chance, especially when dealing with content at scale.

Adjusting internal linking structure

Fortunately, there is a relatively easy fix for this. We've been able to work with our dev team to adjust the code in the product-grid-item.liquid file so that a Shopify site's collections pages point to the canonical /products/ URLs.
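As a sketch of what that change typically looks like (exact markup varies by theme), the product link in product-grid-item.liquid usually pipes product.url through the within: collection filter, which is what generates the /collections/.*/products/ paths; dropping the filter points the link at the canonical URL:

{% comment %} Before: generates /collections/<collection>/products/<handle> links {% endcomment %}
<a href="{{ product.url | within: collection }}">{{ product.title }}</a>

{% comment %} After: links to the canonical /products/<handle> URL {% endcomment %}
<a href="{{ product.url }}">{{ product.title }}</a>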

Duplicate collections pages

As well, we’ve seen many Shopify sites that create duplicate content through the site’s pagination. More specifically, a duplicate is created of the first collections page in a particular series. This is because once you’re on a paginated URL in a series, the link to the first page will contain “?page=1”:

First page in Shopify pagination links to ?page=1 link

However, a URL with "?page=1" will almost always contain the same content as the original non-parameterized URL, making it a duplicate. Once again, we recommend having a developer adjust the internal linking structure so that the first paginated result points to the canonical page.
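One way to sketch the fix (treat this as illustrative; theme pagination markup varies, and this assumes you are inside the theme's {% paginate %} block) is to strip the "?page=1" parameter with Liquid's remove filter when rendering pagination links:

{% for part in paginate.parts %}
  {% if part.is_link %}
    {% comment %} Drop "?page=1" so the first page links to the canonical URL {% endcomment %}
    <a href="{{ part.url | remove: '?page=1' }}">{{ part.title }}</a>
  {% else %}
    <span>{{ part.title }}</span>
  {% endif %}
{% endfor %}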

Product variant pages

While this is technically an extension of Shopify’s duplicate content from above, we thought this warranted its own section because this isn’t necessarily always an SEO issue.

It's not uncommon to see Shopify stores where multiple product URLs are created for the same product with slight variations. This can create duplicate content issues, as oftentimes the core product is the same and only a slight attribute (color, for instance) changes. This means that multiple pages can exist with duplicate or similar product descriptions and images. Here is an example of duplicate pages created by a variant: https://recordit.co/x6YRPkCDqG

If left alone, this once again creates an instance of duplicate content. However, variant URLs do not have to be an SEO issue. In fact, some sites could benefit from these URLs as they allow you to have indexable pages that could be optimized for very specific terms. Whether or not these are beneficial is going to differ on every site. Some key questions to ask yourself are:

  • Do your customers perform queries based on variant phrases?
  • Do you have the resources to create unique content for all of your product variants?
  • Is this content unique enough to stand on its own?

For a more in-depth guide, Jenny Halasz wrote a great article on determining the best course of action for product variations. If your Shopify store contains product variants, then it's worth determining early on whether or not these pages should exist at a separate URL. If they should, then you should create unique content for each one and optimize it for that variant's target keywords.

Crawling and indexing

After analyzing quite a few Shopify stores, we’ve found some SEO items that are unique to Shopify when it comes to crawling and indexing. Since this is very often an important component of e-commerce SEO, we thought it would be good to share the ones that apply to Shopify.

Robots.txt file

A very important note is that in Shopify stores, you cannot adjust the robots.txt file. This is stated in their official help documentation. While you can add a "noindex" tag to pages through theme.liquid, this does not help if you want to prevent Google from crawling your content altogether.

An example robots.txt file in Shopify

Here are some sections of the site where Shopify disallows crawling by default:

  • Admin area
  • Checkout
  • Orders
  • Shopping cart
  • Internal search
  • Policies page
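For reference, the default file looks roughly like the following; the paths below are illustrative rather than an exact copy of Shopify's output:

User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /search
Disallow: /policies/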

While it’s nice that Shopify creates some default disallow commands for you, the fact that you cannot adjust the robots.txt file can be very limiting. The robots.txt is probably the easiest way to control Google’s crawl of your site as it’s extremely easy to update and allows for a lot of flexibility. You might need to try other methods of adjusting Google’s crawl such as “nofollow” or canonical tags.

Adding the “noindex” tag

While you cannot adjust the robots.txt, Shopify does allow you to add the "noindex" tag. You can exclude an entire template from the index by adding the following code to your theme.liquid file (this example noindexes every page that uses the search template):

{% if template contains 'search' %}
<meta name="robots" content="noindex">
{% endif %}

If you want to exclude a specific page instead, you can target its handle:

{% if handle contains 'page-handle-you-want-to-exclude' %}
<meta name="robots" content="noindex">
{% endif %}

Redirects

Shopify does allow you to implement redirects out-of-the-box, which is great. You can use this for consolidating old/expired pages or any other content that no longer exists. You can do this by going to Online Store > Navigation > URL Redirects.

So far, we haven't found a way to implement global redirects via Shopify. This means that your redirects will likely need to be 1:1.

Log files

Similar to the robots.txt, it’s important to note that Shopify does not provide you with log file information. This has been confirmed by Shopify support.

Structured data

Product structured data

Overall, Shopify does a pretty good job with structured data. Many Shopify themes contain "Product" markup out-of-the-box that provides Google with key information such as your product's name, description, and price. This is probably the highest-priority structured data for any e-commerce site, so it's great that many themes include it for you.
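For reference, a minimal Product JSON-LD block looks like the following (all values are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "description": "A short, unique product description.",
  "image": "https://example-store.com/images/example-product.jpg",
  "offers": {
    "@type": "Offer",
    "price": "29.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>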

Shopify sites might also benefit from expanding the Product structured data to collections pages as well. This involves adding the Product structured data to define each individual product link in a product listing page. The good folks at Distilled recommend including this structured data on category pages.

Every product in Shopify collections page marked up with Product structured data

Article structured data

As well, if you use Shopify’s blog functionality, you should use “Article” structured data. This is a fantastic schema type that lets Google know that your blog content is more editorial in nature. We’ve seen that Google seems to pull content with “Article” structured data into platforms such as Google Discover and the “Interesting Finds” sections in the SERPs. Ensuring your content contains this structured data may increase the chances your site’s content is included in these sections.

BreadcrumbList structured data

Finally, one element we routinely add to Shopify sites is breadcrumb internal links marked up with BreadcrumbList structured data. We believe breadcrumbs are crucial for any e-commerce site: they provide users with easy-to-use internal links that indicate where they are within the hierarchy of a website, and they help Google better understand the website's structure.
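A minimal BreadcrumbList sketch looks like this (names and URLs are placeholders; per Google's guidelines, the final item can omit its URL since it is the current page):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example-store.com/" },
    { "@type": "ListItem", "position": 2, "name": "Shirts", "item": "https://example-store.com/collections/shirts" },
    { "@type": "ListItem", "position": 3, "name": "Blue Shirt" }
  ]
}
</script>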

Keyword research

Performing keyword research for Shopify stores will be very similar to the research you would perform for other e-commerce stores.

Some general ways to generate keywords are:

  • Export your keyword data from Google AdWords. Track and optimize for those that generate the most revenue for the site.
  • Research your AdWords keywords that have high conversion rates. Even if the volume is lower, a high conversion rate indicates that this keyword is more transactional.
  • Review the keywords the site currently gets clicks/impressions for in Google Search Console.
  • Research your high priority keywords and generate new ideas using Moz’s Keyword Explorer.
  • Run your competitors through tools like Ahrefs. Using the “Content Gap” report, you can find keyword opportunities where competitor sites are ranking but yours is not.
  • If you have keywords that use similar modifiers, you can use MergeWords to automatically generate a large variety of keyword variations.

Keyword optimization

Similar to Yoast SEO, Shopify does allow you to optimize key elements such as your title tags, meta descriptions, and URLs. Where possible, you should be using your target keywords in these elements.

To adjust these elements, you simply need to navigate to the page you wish to adjust and scroll down to “Search Engine Listing Preview”:

Optimization Options For Metadata in Shopify

Adding content to product pages

If you decide that each individual product should be indexed, ideally you’ll want to add unique content to each page. Initially, your Shopify products may not have unique on-page content associated with them. This is a common issue for Shopify stores, as oftentimes the same descriptions are used across multiple products or no descriptions are present. Adding product descriptions with on-page best practices will give your products the best chance of ranking in the SERPs.

However, we understand that it's time-consuming to create unique content for every product that you offer. With clients in the past, we've taken a targeted approach to which products to optimize first. We like to use the "Sales by Product" report, which helps prioritize the most important products to start adding content to. You can find this report in Analytics > Dashboard > Top Products By Units Sold.

Shopify revenue by product report

By taking this approach, we can quickly identify some of the highest priority pages in the store to optimize. We can then work with a copywriter to start creating content for each individual product. Also, keep in mind that your product descriptions should always be written from a user-focused view. Writing about the features of the product they care about the most will give your site the best chance at improving both conversions and SEO.

Shopify blog

Shopify does include the ability to create a blog, but we often see this missing from a large number of Shopify stores. It makes sense, as revenue is the primary goal of an e-commerce site, so the initial build of the site is product-focused.

However, we live in an era where it’s getting harder and harder to rank product pages in Google. For instance, the below screenshot illustrates the top 3 organic results for the term “cloth diapers”:

SERP for "cloth diaper" keyword.

While many would assume that this is primarily a transactional query, Google is ranking two articles and a single product listing page in the top three results. This is just one instance of a major trend we've seen, where Google increasingly prefers to rank informational content above transactional content.

Excluding a blog from a Shopify store is, we think, a huge missed opportunity for many businesses. A blog gives you a natural place to create this informational content. If you're seeing that Google is ranking more blog/article types of content for the keywords mapped to your Shopify store, your best bet is to go out and create that content yourself.

If you run a Shopify store (or any e-commerce site), we would urge you to take the following few steps:

  1. Identify your highest priority keywords
  2. Manually perform a Google query for each one
  3. Make note of the types of content Google is ranking on the first page. Is it primarily informational, transactional, or a mix of both?
  4. If you’re seeing primarily mixed or informational content, evaluate your own content to see if you have any that matches the user intent. If so, improve the quality and optimize.
  5. If you do not have this content, consider creating new blog content around informational topics that seem to fulfill the user intent.

As an example, we have a client that was interested in ranking for the term “CRM software,” an extremely competitive keyword. When analyzing the SERPs, we found that Google was ranking primarily informational pages about “What Is CRM Software?” Since they only had a product page that highlighted their specific CRM, we suggested the client create a more informational page that talked generally about what CRM software is and the benefits it provides. After creating and optimizing the page, we soon saw a significant increase in organic traffic (credit to Ally Mickler):

The issue that we see on many Shopify sites is that there is very little focus on informational pages despite the fact that those perform well in the search engines. Most Shopify sites should be using the blogging platform, as this will provide an avenue to create informational content that will result in organic traffic and revenue.

Apps

Similar to WordPress’s plugins, Shopify offers “Apps” that allow you to add advanced functionality to your site without having to manually adjust the code. However, unlike WordPress, most of the Shopify Apps you’ll find are paid. This will require either a one-time or monthly fee.

Shopify apps for SEO

While your best bet is likely teaming up with a developer who’s comfortable with Shopify, here are some Shopify apps that can help improve the SEO of your site.

  • Crush.pics: A great automated way of compressing large image files. Crucial for most Shopify sites as many of these sites are heavily image-based.
  • JSON-LD for SEO: This app may be used if you do not have a Shopify developer who is able to add custom structured data to your site.
  • Smart SEO: An app that can add meta tags, alt tags, & JSON-LD
  • Yotpo Reviews: This app can help you add product reviews to your site, making your content eligible for rich review stars in the SERPs.

Is Yoast SEO available for Shopify?

Yoast SEO is exclusively a WordPress plugin. There is currently no Yoast SEO Shopify App.

Limiting your Shopify apps

Similar to WordPress plugins, Shopify apps inject additional code into your site, which means that adding a large number of apps can slow the site down. Shopify sites are especially susceptible to bloat, as many apps are focused on improving conversions. Oftentimes, these apps add more JavaScript and CSS files, which can hurt page load times. Be sure to regularly audit the apps you're using and remove any that aren't adding value or being utilized by the site.

Client results

We've seen pretty good success with our clients that use Shopify stores. Below you can find some of the results we've been able to achieve for them. However, please note that these case studies do not reflect the recommendations above alone; for these clients, we used a combination of the recommendations outlined above as well as other SEO initiatives.

In one example, we worked with a Shopify store that was interested in ranking for very competitive terms surrounding the main product their store focused on. We evaluated their top-performing products in the "Sales by Product" report, which led to a large effort with the client to add new content to product pages that weren't initially optimized. This, combined with other initiatives, has helped improve their first-page rankings by 113 keywords (credit to Jennifer Wright & LaRhonda Sparrow).

Graph of first-page keyword rankings over time

In another instance, a client came to us because they were not ranking for their branded keywords; instead, third-party retailers that also carried their products were often outranking them. We worked with them to adjust their internal linking structure to point to the canonical pages instead of the duplicate pages created by Shopify, and optimized their content to better utilize the branded terminology on relevant pages. As a result, they've seen a nice increase in overall rankings in just several months' time.

Graph of total ranking improvements over time.

Moving forward

As Shopify usage continues to grow, it will be increasingly important to understand the SEO implications that come with the platform. Hopefully, this guide has provided you with additional knowledge that will help make your Shopify store stronger in the search engines.



Moz Blog


It’s Time to Start the Critical Activity You Can No Longer Afford to Postpone

I’ve always been stubborn. So, it wasn’t out-of-character for me to stand my ground when everyone I knew was quite…

The post It’s Time to Start the Critical Activity You Can No Longer Afford to Postpone appeared first on Copyblogger.


Copyblogger


Blue Shield of California to Integrate Apple Watches in Doctor Visits

According to a report by Health Data Management, Blue Shield of California (BSC) is planning on using the Apple Watch to improve doctor visits.

“BSC, together with its health services partner Altais, has partnered with Notable Health, a company that provides technology to capture office visits through the use of artificial intelligence.

“The data is captured, then the tool adds lab results, prescriptions and referrals, and prepares everything for sign-off for addition to the EHR. In short, the doctor wears the watch, speaks naturally during the office visit, and the technology does the rest.”

BSC and Altais hope to roll out the technology to BSC's network of doctors first, then expand as it proves successful. The Paradise (Calif.) Medical Group will be the first to receive the new tech and is slated to start using it soon.

Notable's platform uses artificial intelligence (AI), machine learning, and natural language processing to pare down the audio it hears to the most relevant data necessary for EHR records. Even after the visit, the AI continues to organize the information to expedite claims.

As Jeff Bailet, MD, president and CEO of Altais said: “Our goal is to help physicians seamlessly leverage technology to improve the health and well-being of their patients—all while reducing administrative hassles and enhancing their professional gratification. Notable Health will help us get there with its digital assistant technology that automates manual tasks across any electronic health record.”

With a 2016 study showing that physicians only spend a total of 27 percent of their day interacting with patients, as opposed to 49.2 percent spent on EHR and desk work, this initiative could prove to be a boon to doctors everywhere.

The post Blue Shield of California to Integrate Apple Watches in Doctor Visits appeared first on WebProNews.


WebProNews


Programming Note: Offline For Sukkot

This is just a quick note that I will be offline for the Jewish holiday of Sukkot. All stories, social media posts…


Search Engine Roundtable


A Breakdown of HTML Usage Across ~8 Million Pages (& What It Means for Modern SEO)

Posted by Catalin.Rosu

Not long ago, my colleagues and I at Advanced Web Ranking came up with an HTML study based on about 8 million index pages gathered from the top twenty Google results for more than 30 million keywords.

We wrote about the markup results and how the top twenty Google results pages implement them, then went even further and obtained HTML usage insights on them.

What does this have to do with SEO?

The way HTML is written dictates what users see and how search engines interpret web pages. A valid, well-formatted HTML page also reduces possible misinterpretation — of structured data, metadata, language, or encoding — by search engines.

This is intended to be a technical SEO audit, something we wanted to do from the beginning: a breakdown of HTML usage and how the results relate to modern SEO techniques and best practices.

In this article, we’re going to address things like meta tags that Google understands, JSON-LD structured data, language detection, headings usage, social links & meta distribution, AMP, and more.

Meta tags that Google understands

When talking about the main search engines as traffic sources, sadly it's just Google and the rest, with DuckDuckGo gaining traction lately and Bing almost nonexistent.

Thus, in this section we’ll be focusing solely on the meta tags that Google listed in the Search Console Help Center.

Pie chart showing the total numbers for the meta tags that Google understands, described in detail in the sections below.

<meta name="description" content="...">

The meta description is a ~150 character snippet that summarizes a page’s content. Search engines show the meta description in the search results when the searched phrase is contained in the description.

SELECTOR                                   COUNT
<meta name="description" content="*">     4,391,448
<meta name="description" content="">      374,649
<meta name="description">                 13,831

On the extremes, we found 685,341 meta elements with content shorter than 30 characters and 1,293,842 elements with the content text longer than 160 characters.

<title>

The title is technically not a meta tag, but it’s used in conjunction with meta name=”description”.

This is one of the two most important HTML tags when it comes to SEO. It’s also a must according to W3C, meaning no page is valid with a missing title tag.

Research suggests that if you keep your titles under a reasonable 60 characters then you can expect your titles to be rendered properly in the SERPs. In the past, there were signs that Google’s search results title length was extended, but it wasn’t a permanent change.

Considering all the above, from the full 6,263,396 titles we found, 1,846,642 title tags appear to be too long (more than 60 characters) and 1,985,020 titles had lengths considered too short (under 30 characters).

Pie chart showing the title tag length distribution, with a length less than 30 chars being 31.7% and a length greater than 60 chars being about 29.5%.

A title being too short shouldn't be a problem; after all, it's a subjective thing depending on the website's business, and meaning can be expressed with fewer words. But it's definitely a sign of wasted optimization opportunity.

SELECTOR               COUNT
<title>*</title>       6,263,396
missing <title> tag    1,285,738

Another interesting thing is that, among the sites ranking on page 1–2 of Google, 351,516 (~5% of the total 7.5M) are using the same text for the title and h1 on their index pages.

Also, did you know that with HTML5 you only need to specify the HTML5 doctype and a title in order to have a perfectly valid page?

<!DOCTYPE html>
<title>red</title>

<meta name="robots|googlebot">

“These meta tags can control the behavior of search engine crawling and indexing. The robots meta tag applies to all search engines, while the “googlebot” meta tag is specific to Google.”
Meta tags that Google understands

SELECTOR                                      COUNT
<meta name="robots" content="..., ...">      1,577,202
<meta name="googlebot" content="..., ...">   139,458


HTML snippet with a meta robots and its content parameters.

So the robots meta directives provide instructions to search engines on how to crawl and index a page's content. Leaving aside the googlebot meta count, which is fairly low, we were curious to see the most frequent robots parameters, considering that a huge misconception is that you have to add a robots meta tag in your HTML's head. Here are the top five:

SELECTOR                                       COUNT
<meta name="robots" content="index,follow">   632,822
<meta name="robots" content="index">          180,226
<meta name="robots" content="noodp">          115,128
<meta name="robots" content="all">            111,777
<meta name="robots" content="nofollow">       83,639

<meta name="google" content="nositelinkssearchbox">

“When users search for your site, Google Search results sometimes display a search box specific to your site, along with other direct links to your site. This meta tag tells Google not to show the sitelinks search box.”
Meta tags that Google understands

SELECTOR                                                COUNT
<meta name="google" content="nositelinkssearchbox">    1,263

Unsurprisingly, not many websites choose to explicitly tell Google not to show a sitelinks search box when their site appears in the search results.

<meta name="google" content="notranslate">

“This meta tag tells Google that you don’t want us to provide a translation for this page.” - Meta tags that Google understands

There may be situations where providing your content to a much larger group of users is not desired. Just as it says in the Google support answer above, this meta tag tells Google that you don’t want them to provide a translation for this page.

SELECTOR                                       COUNT
<meta name="google" content="notranslate">    7,569

<meta name="google-site-verification" content="...">

“You can use this tag on the top-level page of your site to verify ownership for Search Console.”
Meta tags that Google understands

SELECTOR                                                 COUNT
<meta name="google-site-verification" content="...">    1,327,616

While we’re on the subject, did you know that if you’re a verified owner of a Google Analytics property, Google will now automatically verify that same website in Search Console?

<meta charset="...">

“This defines the page’s content type and character set.”
Meta tags that Google understands

This is basically one of the good meta tags. It defines the page’s content type and character set. Considering the table below, we noticed that just about half of the index pages we analyzed define a meta charset.

SELECTOR                COUNT
<meta charset="...">    3,909,788

<meta http-equiv="refresh" content="...;url=...">

“This meta tag sends the user to a new URL after a certain amount of time and is sometimes used as a simple form of redirection.”
Meta tags that Google understands

It’s preferable to redirect your site using a 301 redirect rather than a meta refresh, especially when we assume that 30x redirects don’t lose PageRank and the W3C recommends that this tag not be used. Google is not a fan either, recommending you use a server-side 301 redirect instead.

SELECTOR                                              COUNT
<meta http-equiv="refresh" content="...;url=...">    7,167

From the total 7.5M index pages we parsed, we found 7,167 pages that are using the above redirect method. Authors do not always have control over server-side technologies and apparently they use this technique in order to enable redirects on the client side.

Also, using Workers is a cutting-edge alternative for overcoming issues when working with legacy tech stacks and platform limitations.

<meta name="viewport" content="...">

“This tag tells the browser how to render a page on a mobile device. Presence of this tag indicates to Google that the page is mobile-friendly.”
Meta tags that Google understands

SELECTOR                                COUNT
<meta name="viewport" content="...">    4,992,791

As of July 1, 2019, mobile-first indexing is enabled by default for all new websites. Lighthouse checks whether there's a meta name="viewport" tag in the head of the document, so this meta should be on every webpage, no matter what framework or CMS you're using.

Considering the above, we would have expected more websites than the 4,992,791 out of 7.5 million index pages analyzed to use a valid meta name=”viewport” in their head sections.

Designing mobile-friendly sites ensures that your pages perform well on all devices, so make sure your web pages pass Google's mobile-friendly test.
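For reference, the standard declaration looks like this:

<meta name="viewport" content="width=device-width, initial-scale=1">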

<meta name="rating" content="..." />

“Labels a page as containing adult content, to signal that it be filtered by SafeSearch results.”
Meta tags that Google understands

SELECTOR                                COUNT
<meta name="rating" content="..." />    133,387

This tag is used to denote the maturity rating of content. It was not added to the meta tags that Google understands list until recently. Check out this article by Kate Morris on how to tag adult content.

JSON-LD structured data

Structured data is a standardized format for providing information about a page and classifying the page content. Structured data can be written as Microdata, RDFa, or JSON-LD; all of these help Google understand the content of your site and can trigger special search result features for your pages.

While having a conversation with the awesome Dan Shure, he came up with a good idea to look for structured data, such as the organization’s logo, in search results and in the Knowledge Graph.

In this section, we'll be using JSON-LD (JavaScript Object Notation for Linked Data) only in order to gather structured data info. This is what Google recommends anyway for providing clues about the meaning of a web page.

Some useful bits on this:

  • At Google I/O 2019, it was announced that the structured data testing tool will be superseded by the rich results testing tool.
  • Now Googlebot indexes web pages using the latest Chromium rather than the old Chrome 42, meaning you can mitigate the SEO issues you may have had in the past, with structured data support as well.
  • Jason Barnard had an interesting talk at SMX London 2019 on how Google Search ranking works and according to his theory, there are seven ranking factors we can count on; structured data is definitely one of them.
  • Builtvisible's guide on Microdata, JSON-LD, & Schema.org contains everything you need to know about using structured data on your website.
  • Here’s an awesome guide to JSON-LD for beginners by Alexis Sanders.
  • Last but not least, there are lots of articles, presentations, and posts to dive in on the official JSON for Linking Data website.

Advanced Web Ranking's HTML study relies on analyzing index pages only. What's interesting is that even though it's not stated in the guidelines, Google doesn't seem to care about structured data on index pages, as stated in a Stack Overflow answer by Gary Illyes several years ago. Yet, looking at the JSON-LD structured data types that Google understands, we found a total of 2,727,045 features:

Pie chart showing the structured data types that Google understands, with Sitelinks searchbox being 49.7%, the highest value.

STRUCTURED DATA FEATURE               COUNT
Article                               35,961
Breadcrumb                            30,306
Book                                  143
Carousel                              13,884
Corporate contact                     41,588
Course                                676
Critic review                         2,740
Dataset                               28
Employer aggregate rating             7
Event                                 18,385
Fact check                            7
FAQ page                              16
How-to                                8
Job posting                           355
Livestream                            232
Local business                        200,974
Logo                                  442,324
Media                                 1,274
Occupation                            0
Product                               16,090
Q&A page                              20
Recipe                                434
Review snippet                        72,732
Sitelinks searchbox                   1,354,754
Social profile                        478,099
Software app                          780
Speakable                             516
Subscription and paywalled content    363
Video                                 14,349

rel=canonical

The rel=canonical element, often called the “canonical link,” is an HTML element that helps webmasters prevent duplicate content issues. It does this by specifying the “canonical URL,” the “preferred” version of a web page.

SELECTOR                         COUNT
<link rel=canonical href="*">    3,183,575

meta name="keywords"

It's not new that <meta name="keywords"> is obsolete and Google doesn't use it anymore. It also appears as though <meta name="keywords"> is a spam signal for most of the search engines.

“While the main search engines don’t use meta keywords for ranking, they’re very useful for onsite search engines like Solr.”
JP Sherman on why this obsolete meta might still be useful nowadays.

SELECTOR                              COUNT
<meta name="keywords" content="*">    2,577,850
<meta name="keywords" content="">     256,220
<meta name="keywords">                14,127

Headings

Within 7.5 million pages, h1 (59.6%) and h2 (58.9%) are among the twenty-eight elements used on the most pages. Still, after gathering all the headings, we found that h3 is the heading with the largest number of appearances: 29,565,562 h3s out of 70,428,376 total headings found.

Random facts:

  • The h1–h6 elements represent the six levels of section headings. Here are the full stats on headings usage, but we also found 23,116 h7s and 7,276 h8s, which is funny because plenty of people don't even use h6 very often.
  • There are 3,046,879 pages with missing h1 tags; within the remaining 4,502,255 pages, the h1 appears 2.6 times on average, for a total of 11,675,565 h1 elements.
  • While there are 6,263,396 pages with a valid title, as seen above, only 4,502,255 of them use an h1 within the body of their content.

Missing alt tags

This eternal SEO and accessibility issue still seems to be common after analyzing this set of data. From the total of 669,591,743 images, almost 90% are missing the alt attribute or use it with a blank value.

Pie chart showing the img tag alt attribute distribution, with missing alt being predominant at 81.7% of the roughly 670 million images we found.

SELECTOR              COUNT
img                   669,591,743
img alt="*"           79,953,034
img alt=""            42,815,769
img w/ missing alt    546,822,940

Language detection

According to the specs, the language information specified via the lang attribute may be used by a user agent to control rendering in a variety of ways.

The part we’re interested in here is about “assisting search engines.”

“The HTML lang attribute is used to identify the language of text content on the web. This information helps search engines return language specific results, and it is also used by screen readers that switch language profiles to provide the correct accent and pronunciation.”
Léonie Watson

A while ago, John Mueller said Google ignores the HTML lang attribute and recommended the use of link hreflang instead. The Google Search Console documentation states that Google uses hreflang tags to match the user’s language preference to the right variation of your pages.

Bar chart showing that 65% of the 7.5 million index pages use the lang attribute on the html element, while 21.6% use at least one link hreflang.

Of the 7.5 million index pages that we were able to look into, 4,903,665 use the lang attribute on the html element. That’s about 65%!

When it comes to the hreflang attribute, which suggests a multilingual website, we found 1,631,602 pages; that means around 21.6% of index pages use at least one link rel="alternate" href="*" hreflang="*" element.
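For illustration, here is how the two mechanisms look in markup (language codes and URLs are placeholders):

<!-- lang attribute: identifies the language of the page's text content -->
<html lang="en">

<!-- hreflang links in the head: point to alternate language versions -->
<link rel="alternate" hreflang="en" href="https://example.com/en/">
<link rel="alternate" hreflang="de" href="https://example.com/de/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">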

Google Tag Manager

From the beginning, Google Analytics’ main task was to generate reports and statistics about your website. But if you want to group certain pages together to see how people are navigating through that funnel, you need a unique Google Analytics tag. This is where things get complicated.

Google Tag Manager makes it easier to:

  • Manage this mess of tags by letting you define custom rules for when, and on which user actions, your tags should fire
  • Change your tags whenever you want without actually changing the source code of your website, which sometimes can be a headache due to slow release cycles
  • Use other analytics/marketing tools with GTM, again without touching the website’s source code

We searched for *googletagmanager.com/gtm.js references and saw that 345,979 pages are using Google Tag Manager.

rel=”nofollow”

“Nofollow” provides a way for webmasters to tell search engines “don’t follow links on this page” or “don’t follow this specific link.”

Google does not follow these links and likewise does not transfer equity. Considering this, we were curious about rel=”nofollow” numbers. We found a total of 12,828,286 rel=”nofollow” links within 7.5 million index pages, with a computed average of 1.69 rel=”nofollow” per page.

Last month, Google announced two new link attribute values that should be used to mark the nofollow property of a link: rel="sponsored" and rel="ugc". I'd recommend you read Cyrus Shepard's article on how Google's nofollow, sponsored, & ugc links impact SEO to learn why Google changed nofollow, the ranking impact of nofollow links, and more.


A table showing how Google’s nofollow, sponsored, and UGC link attributes impact SEO, from Cyrus Shepard’s article.
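In markup, the three values look like this (URLs are placeholders; per Google's announcement, the values can also be combined, e.g. rel="ugc nofollow"):

<!-- Generic "don't associate / don't pass equity" hint -->
<a href="https://example.com/" rel="nofollow">Example link</a>

<!-- Paid or sponsored placement -->
<a href="https://advertiser.example/" rel="sponsored">Sponsored link</a>

<!-- Link inside user-generated content, e.g. comments or forum posts -->
<a href="https://example-forum.com/profile" rel="ugc">User profile</a>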

We went a bit further and looked up these new link attribute values, finding 278 rel="sponsored" and 123 rel="ugc". To make sure we had the relevant data for these queries, we updated the index pages data set two weeks after the Google announcement on this matter. Then, using Moz authority metrics, we sorted the top URLs we found that use at least one of the rel="sponsored" or rel="ugc" pair:

  • https://www.seroundtable.com/
  • https://letsencrypt.org/
  • https://www.newsbomb.gr/
  • https://thehackernews.com/
  • https://www.ccn.com/
  • https://www.chip.pl/
  • https://www.gamereactor.se/
  • https://www.tribes.co.uk/

AMP

Accelerated Mobile Pages (AMP) is a Google initiative that aims to speed up the mobile web. Many publishers are making their content available in the AMP format in parallel with their regular pages.

To let Google and other platforms know about it, you need to link AMP and non-AMP pages together.

Within the millions of pages we looked at, we found only 24,807 non-AMP pages referencing their AMP version using rel=amphtml.
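The linking works with a pair of link elements (URLs are placeholders): the canonical page declares its AMP counterpart with rel=amphtml, and the AMP page points back with rel=canonical:

<!-- On the non-AMP page -->
<link rel="amphtml" href="https://example.com/article/amp/">

<!-- On the AMP page -->
<link rel="canonical" href="https://example.com/article/">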

Social

We wanted to know how shareable or social a website is nowadays, so knowing that Josh Buchea made an awesome list with everything that could go in the head of your webpage, we extracted the social sections from there and got the following numbers:

Facebook Open Graph

Bar chart showing the Facebook Open Graph meta tags distribution, described in detail in the table below.

SELECTOR                                      COUNT
meta property="fb:app_id" content="*"         277,406
meta property="og:url" content="*"            2,909,878
meta property="og:type" content="*"           2,660,215
meta property="og:title" content="*"          3,050,462
meta property="og:image" content="*"          2,603,057
meta property="og:image:alt" content="*"      54,513
meta property="og:description" content="*"    1,384,658
meta property="og:site_name" content="*"      2,618,713
meta property="og:locale" content="*"         1,384,658
meta property="article:author" content="*"    14,289

Twitter card

Bar chart showing the Twitter Card meta tags distribution, described in detail in the table below.

SELECTOR                                       COUNT
meta name="twitter:card" content="*"           1,535,733
meta name="twitter:site" content="*"           512,907
meta name="twitter:creator" content="*"        283,533
meta name="twitter:url" content="*"            265,478
meta name="twitter:title" content="*"          716,577
meta name="twitter:description" content="*"    1,145,413
meta name="twitter:image" content="*"          716,577
meta name="twitter:image:alt" content="*"      30,339

And speaking of links, we grabbed all of them that were pointing to the most popular social networks.

Pie chart showing the external social links distribution, described in detail in the table below.

SELECTOR                       COUNT
<a href*="facebook.com">       6,180,313
<a href*="twitter.com">        5,214,768
<a href*="linkedin.com">       1,148,828
<a href*="plus.google.com">    1,019,970

Apparently there are lots of websites that still link to their Google+ profiles, which is probably an oversight considering the not-so-recent Google+ shutdown.

rel=prev/next

According to Google, using rel=prev/next is not an indexing signal anymore, as announced earlier this year:

“As we evaluated our indexing signals, we decided to retire rel=prev/next. Studies show that users love single-page content, aim for that when possible, but multi-part is also fine for Google Search.”
Tweeted by Google Webmasters

However, in case it matters for you, Bing says it uses them as hints for page discovery and site structure understanding.

“We’re using these (like most markup) as hints for page discovery and site structure understanding. At this point, we’re not merging pages together in the index based on these and we’re not using prev/next in the ranking model.”
Frédéric Dubut from Bing

Nevertheless, here are the usage stats we found while looking at millions of index pages:

SELECTOR                      COUNT
<link rel="prev" href="*">    20,160
<link rel="next" href="*">    242,387

That’s pretty much it!

Knowing how the average web page looks, using data from about 8 million index pages, can give us a clearer idea of trends and help us visualize common HTML usage when it comes to modern and emerging SEO techniques. But this may be a never-ending saga. While we have lots of numbers and stats to explore, there are still lots of questions that need answering:

  • We know how structured data is used in the wild now. How will it evolve and how much structured data will be considered enough?
  • Should we expect AMP usage to increase somewhere in the future?
  • How will rel=”sponsored” and rel=“ugc” change the way we write HTML on a daily basis? When coding external links, besides the target=”_blank” and rel=“noopener” combo, we now have to consider the rel=”sponsored” and rel=“ugc” combinations as well.
  • Will we ever learn to always add alt attributes values for images that have a purpose beyond decoration?
  • How many more additional meta tags or attributes will we have to add to a web page to please the search engines? Do we really need the newly announced data-nosnippet HTML attribute? What's next, data-allowsnippet?

There are other things we would have liked to address as well, like "time-to-first-byte" (TTFB) values, which correlate highly with ranking; I'd highly recommend HTTP Archive for that. They periodically crawl the top sites on the web and record detailed information about almost everything. According to the latest info, they've analyzed 4,565,694 unique websites, with complete Lighthouse scores, and they've recorded particular technologies like jQuery or WordPress across the whole data set. Huge props to Rick Viscomi, who does an amazing job as its "steward," as he likes to call himself.

Performing this large-scale study was a fun ride. We learned a lot and we hope you found the above numbers as interesting as we did. If there is a tag or attribute in particular you would like to see the numbers for, please let me know in the comments below.

Once again, check out the full HTML study results and let me know what you think!



Moz Blog


Facebook Libra Backers Back Out

A week ago we covered a Wall Street Journal article highlighting potential trouble for Facebook’s Libra cryptocurrency, as multiple backers were reconsidering their commitment to the project.

Fast-forward a week and things have only gone from bad to worse. As Bloomberg reports, PayPal was the first to announce they were leaving on October 6, with Visa, Mastercard, eBay, Stripe and Mercado Pago following suit. Each of these companies provided a brief statement, expressing their interest in monitoring Libra’s progress. Nonetheless, without these companies’ support, Libra is left without a single payment processor in the U.S.

The high-profile exits follow increased pressure from U.S. senators, who cautioned Mastercard, Visa and Stripe about backing the currency. Since Facebook first announced the Libra project, governments around the world have been critical of the endeavor, expressing concern about how the cryptocurrency will impact financial markets. In the days leading up to the companies pulling their support, senators cautioned them about how Libra could impact their broader payment business as well.

Critics are mixed about whether these high-profile defections spell doom for Libra or create a new opportunity. Michael Pachter, an analyst for Wedbush Securities, told Bloomberg TV that he “didn’t think Facebook can do this by itself. Short of a big bank stepping in like JPMorgan, I don’t think this could ever happen.”

As SiliconANGLE highlights, however, several other companies emphasized their support, “including Kiva, Mercy Corps, Andreessen Horowitz, Anchorage and Xapo. Arguably, the change sees Libra look more like a startup now with the lack of mainstream company support.”

The news comes days before the Libra Association is scheduled to meet to sign a charter agreement. It’s probably a safe bet there will be far more to talk about in the wake of these defections.

The post Facebook Libra Backers Back Out appeared first on WebProNews.


WebProNews


The State of SEO 2019 – Infographic

Zazzle Media’s second annual “State of SEO survey” has assessed the value and ROI of SEO, looking at its impact in securing funds or resources.

The data suggested that 60% of marketers find that a shortage of resources and budget is the main reason they don't spend more on organic search activity. However, almost a third of surveyed marketers still don't know how to measure the impact of SEO on their results.

The survey's respondents were 70% in-house marketers and 30% agency heads from various companies. It called for marketers to develop a better understanding of attribution models, measurement tools, brand value, and purpose when it comes to spending more on SEO.

The main reason cited for marketers struggling to secure investment is competitor awareness: marketers are acutely aware of their competitors' activity, even noting that their branded keywords are being targeted by competitors.

The report noted that data-led objectives can act as investment enablers, as they make consumer traffic easy to quantify and measure. They also help marketers prove ROI by reviewing how marketing practices are improving year on year.

Yet the survey revealed that there is still a lack of understanding around best practices for marketers to use. A quarter of those surveyed called for clearer guidelines on best practice from Google Webmasters, revealing that there is, in fact, a knowledge and skills gap around SEO.

Zazzle Media's head of search and strategy, Stuart Shaw, said:

“As an industry, we’ve needed to educate, educate, educate – at almost every level of client infrastructure. That challenge still remains, in fact, it probably changes monthly but now with more noise than ever.

However knowledge has always been power in this industry, keeping up with updates, marketing news and best practice guidelines across Google and Bing can be the difference in the results marketers need to secure that extra budget.”

You can download the full results of The State of SEO here, and check out the top-line stats on the infographic below.

State of SEO 2019 Infographic

The post The State of SEO 2019 – Infographic appeared first on Search Engine Watch.

Search Engine Watch


We Plan to Have 30 5G Cities By Year-End, Says Verizon CEO

“We have a plan to have 30 5G cities by year-end,” says Verizon CEO Hans Vestberg. “We are at 13 right now so we’re adding every week. We added New York last week. We have also updated 13 NFL stadiums with 5G and the NBA season hasn’t even started. We believe that our 5G for the consumer is just crushing it. That’s where we are focusing right now on our mobility build. We also do 5G Home. In the fourth quarter, we are going to launch our 5G Mobile Edge Compute which is an enterprise service with the all-new capabilities of 5G.”

Hans Vestberg, CEO of Verizon Communications, discusses where the company is at in its 5G build-out and how 5G is going to be a dramatic technology shift for consumers and enterprises in an interview on Bloomberg Technology:

We Plan to Have 30 5G Cities By Year-End

For the mobility case, we have a plan to have 30 5G cities by year-end. We are at 13 right now so we’re adding every week. We added New York last week. We have also updated 13 NFL stadiums with 5G and the NBA season hasn’t even started. We believe that our 5G for the consumer is just crushing it. That’s where we are focusing right now on our mobility build. We also do 5G Home. In the fourth quarter, we are going to launch our 5G Mobile Edge Compute which is an enterprise service with the all-new capabilities of 5G.

We have the best 4G network in the market and we will continue to see that our customers get the best experience on the technology we have. We are giving them the first experience on 5G. We were first with the 5G Home and we were first with a 5G smartphone. At the same time, we keep our 4G network (state of the art). We will continue to do that and when we see that the market is ready then we will have national 5G coverage as well. Usually, we speak less and we execute when we have it and then we talk. That’s our strategy.

5G To Enable Factory Wireless For All Robots

I think that all (consumers and businesses) will benefit from 5G but the bonus design from the beginning was very much to make the world cordless for enterprise in society. So the 5G mobile edge compute where we’re going to launch the first Center at the end of this year, that’s really where you can as an enterprise start innovating. You can implement factory wireless for all your robots for example, or put up a 5G campus network, or a private 5G network. This is all with throughput speed and latency that is unparalleled to what you have today. Suddenly you can innovate around that.

I have met many of the 1,400 enterprises in this country over the last six months to talk to them and show them the platform that we’re going to create and how they can innovate to it. This is a partnership between us with the customer and probably in some cases some software developers as well that have software that is needed for it.

5G To Make Home Internet Wireless

Going from 3G to 4G was, of course, an improvement in latency and speed which was visible. But the movement from 4G to 5G is even greater. The speed is so much faster, the throughput is so much more, and the latency (is 10 times better). Of course, it's all about an ecosystem where you get devices out. Sometimes we talk a lot about consumers and right now we have four phones already out in the market and all of them are 5G enabled. We see the whole ecosystem coming from consumers.

Then you have an enterprise business and we also have a 5G home business. We’re actually doing a lot more with 5G instead of fiber. This is a totally different way of thinking about the business model for fixed wireless access bringing broadband to your home. 5G is very different because you can have several business cases on the same infrastructure. It’s the same network and it’s the same infrastructure below. It’s not a separate network for all these business cases we are talking about.

We Plan to Have 30 5G Cities By Year-End, Says Verizon CEO Hans Vestberg

The post We Plan to Have 30 5G Cities By Year-End, Says Verizon CEO appeared first on WebProNews.


WebProNews


Daily Search Forum Recap: October 11, 2019

Here is a recap of what happened in the search forums today, through the eyes of the Search Engine Roundtable and other search forums on the web…


Search Engine Roundtable

