Tag Archive | "Pages"

A Breakdown of HTML Usage Across ~8 Million Pages (& What It Means for Modern SEO)

Posted by Catalin.Rosu

Not long ago, my colleagues and I at Advanced Web Ranking came up with an HTML study based on about 8 million index pages gathered from the top twenty Google results for more than 30 million keywords.

We wrote about the markup results and how pages in the top twenty Google results implement them, then went further and pulled HTML usage insights out of the same data set.

What does this have to do with SEO?

The way HTML is written dictates what users see and how search engines interpret web pages. A valid, well-formatted HTML page also reduces possible misinterpretation — of structured data, metadata, language, or encoding — by search engines.

This is intended to be a technical SEO audit, something we wanted to do from the beginning: a breakdown of HTML usage and how the results relate to modern SEO techniques and best practices.

In this article, we’re going to address things like meta tags that Google understands, JSON-LD structured data, language detection, headings usage, social links & meta distribution, AMP, and more.

Meta tags that Google understands

When talking about the main search engines as traffic sources, sadly it’s just Google and the rest, with DuckDuckGo gaining traction lately and Bing almost nonexistent.

Thus, in this section we’ll be focusing solely on the meta tags that Google listed in the Search Console Help Center.

Pie chart showing the total numbers for the meta tags that Google understands, described in detail in the sections below.

<meta name="description" content="…">

The meta description is a ~150 character snippet that summarizes a page’s content. Search engines often show the meta description in the search results when the searched phrase is contained in it.
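
For reference, a typical meta description sits in the head of the page and looks like the line below (the content value is just a placeholder):

<meta name="description" content="A concise, roughly 150 character summary of what this page offers and why it is worth the click.">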

SELECTOR | COUNT
<meta name="description" content="*"> | 4,391,448
<meta name="description" content=""> | 374,649
<meta name="description"> | 13,831

On the extremes, we found 685,341 meta elements with content shorter than 30 characters and 1,293,842 elements with content longer than 160 characters.

<title>

The title is technically not a meta tag, but it’s used in conjunction with meta name="description".

This is one of the two most important HTML tags when it comes to SEO. It’s also required by the W3C, meaning no page is valid with a missing title tag.

Research suggests that if you keep your titles under a reasonable 60 characters, you can expect them to be rendered properly in the SERPs. In the past, there were signs that Google had extended the title length shown in search results, but it wasn’t a permanent change.

Considering all the above, of the full 6,263,396 titles we found, 1,846,642 title tags appear to be too long (more than 60 characters) and 1,985,020 appear to be too short (under 30 characters).

Pie chart showing the title tag length distribution, with lengths under 30 characters accounting for 31.7% and lengths over 60 characters for about 29.5%.

A title being too short shouldn’t be a problem in itself; after all, it’s a subjective thing that depends on the website’s business, and meaning can be expressed with fewer words. It is, however, a sign of wasted optimization opportunity.

SELECTOR | COUNT
<title>*</title> | 6,263,396
missing <title> tag | 1,285,738

Another interesting thing is that, among the sites ranking on page 1–2 of Google, 351,516 (~5% of the total 7.5M) are using the same text for the title and h1 on their index pages.

Also, did you know that with HTML5 you only need to specify the HTML5 doctype and a title in order to have a perfectly valid page?

<!DOCTYPE html>
<title>red</title>

<meta name="robots|googlebot">

“These meta tags can control the behavior of search engine crawling and indexing. The robots meta tag applies to all search engines, while the “googlebot” meta tag is specific to Google.”
Meta tags that Google understands

SELECTOR | COUNT
<meta name="robots" content="..., ..."> | 1,577,202
<meta name="googlebot" content="..., ..."> | 139,458



Robots meta directives provide instructions to search engines on how to crawl and index a page’s content. Leaving aside the googlebot meta count, which is rather low, we were curious to see the most frequent robots parameters, especially since a common misconception is that you have to add a robots meta tag to your HTML’s head. Here are the top five:

SELECTOR | COUNT
<meta name="robots" content="index,follow"> | 632,822
<meta name="robots" content="index"> | 180,226
<meta name="robots" content="noodp"> | 115,128
<meta name="robots" content="all"> | 111,777
<meta name="robots" content="nofollow"> | 83,639
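
To illustrate the misconception: a page with no robots meta tag at all is treated as index, follow by default, so the tag only earns its place when you need to restrict something. A minimal sketch (the values are examples, not recommendations):

<!-- Equivalent to having no robots meta tag at all -->
<meta name="robots" content="index, follow">
<!-- The case where the tag actually changes behavior -->
<meta name="robots" content="noindex, nofollow">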

<meta name="google" content="nositelinkssearchbox">

“When users search for your site, Google Search results sometimes display a search box specific to your site, along with other direct links to your site. This meta tag tells Google not to show the sitelinks search box.”
Meta tags that Google understands

SELECTOR | COUNT
<meta name="google" content="nositelinkssearchbox"> | 1,263

Unsurprisingly, not many websites choose to explicitly tell Google not to show a sitelinks search box when their site appears in the search results.

<meta name="google" content="notranslate">

“This meta tag tells Google that you don’t want us to provide a translation for this page.” - Meta tags that Google understands

There may be situations where making your content available to a much larger group of users is not desired. Just as the Google support answer above says, this meta tag tells Google that you don’t want them to provide a translation for this page.

SELECTOR | COUNT
<meta name="google" content="notranslate"> | 7,569

<meta name="google-site-verification" content="…">

“You can use this tag on the top-level page of your site to verify ownership for Search Console.”
Meta tags that Google understands

SELECTOR | COUNT
<meta name="google-site-verification" content="..."> | 1,327,616

While we’re on the subject, did you know that if you’re a verified owner of a Google Analytics property, Google will now automatically verify that same website in Search Console?

<meta charset="…">

“This defines the page’s content type and character set.”
Meta tags that Google understands

This is one of the genuinely useful meta tags: it defines the page’s content type and character set. Looking at the table below, only about half of the index pages we analyzed declare a meta charset.

SELECTOR | COUNT
<meta charset="..."> | 3,909,788

<meta http-equiv="refresh" content="…;url=…">

“This meta tag sends the user to a new URL after a certain amount of time and is sometimes used as a simple form of redirection.”
Meta tags that Google understands

It’s preferable to redirect with a server-side 301 rather than a meta refresh, especially since 30x redirects are assumed not to lose PageRank and the W3C recommends that this tag not be used. Google is not a fan either, recommending a server-side 301 redirect instead.

SELECTOR | COUNT
<meta http-equiv="refresh" content="...;url=..."> | 7,167

From the 7.5M index pages we parsed, we found 7,167 pages using the above redirect method. Authors do not always have control over server-side technologies, and apparently they use this technique to enable redirects on the client side.
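
For reference, the client-side pattern we counted looks roughly like the line below (the delay and URL are placeholders); a server-side 301 remains the better option whenever you can get one:

<meta http-equiv="refresh" content="5;url=https://www.example.com/new-page/">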

Also, using Workers is a cutting-edge alternative for overcoming issues with legacy tech stacks and platform limitations.

<meta name="viewport" content="…">

“This tag tells the browser how to render a page on a mobile device. Presence of this tag indicates to Google that the page is mobile-friendly.”
Meta tags that Google understands

SELECTOR | COUNT
<meta name="viewport" content="..."> | 4,992,791

Starting July 1, 2019, Google enabled mobile-first indexing by default for all new websites. Lighthouse checks whether there’s a meta name="viewport" tag in the head of the document, so this meta should be on every webpage, no matter what framework or CMS you’re using.

Considering the above, we would have expected more than 4,992,791 of the 7.5 million index pages analyzed to use a valid meta name="viewport" in their head sections.
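
A typical responsive setup uses the value below; it is a reasonable default for most pages:

<meta name="viewport" content="width=device-width, initial-scale=1">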

Designing mobile-friendly sites ensures that your pages perform well on all devices, so make sure to test that your web pages are mobile-friendly.

<meta name="rating" content="…" />

“Labels a page as containing adult content, to signal that it be filtered by SafeSearch results.”
Meta tags that Google understands

SELECTOR | COUNT
<meta name="rating" content="..." /> | 133,387

This tag is used to denote the maturity rating of content, and it was only recently added to the list of meta tags that Google understands. Check out this article by Kate Morris on how to tag adult content.

JSON-LD structured data

Structured data is a standardized format for providing information about a page and classifying the page content. Structured data can be written as Microdata, RDFa, or JSON-LD; all of these help Google understand the content of your site and trigger special search result features for your pages.

During a conversation, the awesome Dan Shure came up with a good idea: look for structured data that surfaces in search results and in the Knowledge Graph, such as the organization’s logo.

In this section, we’ll rely on JSON-LD (JavaScript Object Notation for Linked Data) only to gather structured data info. This is the format Google recommends anyway for providing clues about the meaning of a web page.
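
As a quick reference, here’s a minimal JSON-LD block of the kind we counted, in this case an organization logo (the URLs are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png"
}
</script>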

Some useful bits on this:

  • At Google I/O 2019, it was announced that the structured data testing tool will be superseded by the rich results testing tool.
  • Googlebot now indexes web pages using the latest Chromium rather than the old Chrome 42, meaning many of the SEO issues you may have had in the past, including structured data support, can now be mitigated.
  • Jason Barnard had an interesting talk at SMX London 2019 on how Google Search ranking works and according to his theory, there are seven ranking factors we can count on; structured data is definitely one of them.
  • Builtvisible’s guide on Microdata, JSON-LD, & Schema.org contains everything you need to know about using structured data on your website.
  • Here’s an awesome guide to JSON-LD for beginners by Alexis Sanders.
  • Last but not least, there are lots of articles, presentations, and posts to dive into on the official JSON for Linking Data website.

Advanced Web Ranking’s HTML study relies on analyzing index pages only. What’s interesting is that, even though it’s not stated in the guidelines, Google doesn’t seem to care much about structured data on index pages, as Gary Illyes stated in a Stack Overflow answer several years ago. Still, looking at the JSON-LD structured data types that Google understands, we found a total of 2,727,045 features:

Pie chart showing the structured data types that Google understands, with Sitelinks searchbox being the highest value at 49.7%.

STRUCTURED DATA FEATURES | COUNT
Article | 35,961
Breadcrumb | 30,306
Book | 143
Carousel | 13,884
Corporate contact | 41,588
Course | 676
Critic review | 2,740
Dataset | 28
Employer aggregate rating | 7
Event | 18,385
Fact check | 7
FAQ page | 16
How-to | 8
Job posting | 355
Livestream | 232
Local business | 200,974
Logo | 442,324
Media | 1,274
Occupation | 0
Product | 16,090
Q&A page | 20
Recipe | 434
Review snippet | 72,732
Sitelinks searchbox | 1,354,754
Social profile | 478,099
Software app | 780
Speakable | 516
Subscription and paywalled content | 363
Video | 14,349

rel=canonical

The rel=canonical element, often called the “canonical link,” helps webmasters prevent duplicate content issues by specifying the “canonical URL,” the “preferred” version of a web page.

SELECTOR | COUNT
<link rel=canonical href="*"> | 3,183,575
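
A minimal example, assuming https://www.example.com/ is the preferred version of the page:

<link rel="canonical" href="https://www.example.com/">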

meta name="keywords"

It’s nothing new that <meta name="keywords"> is obsolete and Google doesn’t use it anymore. It also appears that <meta name="keywords"> is treated as a spam signal by most of the search engines.

“While the main search engines don’t use meta keywords for ranking, they’re very useful for onsite search engines like Solr.”
JP Sherman on why this obsolete meta might still be useful nowadays.

SELECTOR | COUNT
<meta name="keywords" content="*"> | 2,577,850
<meta name="keywords" content=""> | 256,220
<meta name="keywords"> | 14,127

Headings

Within 7.5 million pages, h1 (59.6%) and h2 (58.9%) are among the twenty-eight elements used on the most pages. Still, after gathering all the headings, we found that h3 is the heading with the largest number of appearances: 29,565,562 h3s out of the 70,428,376 total headings found.

Random facts:

  • The h1–h6 elements represent the six levels of section headings. Here are the full stats on headings usage, but we also found 23,116 h7s and 7,276 h8s. That’s amusing, considering plenty of people don’t even use h6 very often.
  • There are 3,046,879 pages with missing h1 tags, and across the remaining 4,502,255 pages the average h1 frequency is 2.6, for a total of 11,675,565 h1 elements.
  • While there are 6,263,396 pages with a valid title, as seen above, only 4,502,255 of them use an h1 within the body of their content.

Missing alt tags

This eternal SEO and accessibility issue still appears to be widespread in this data set. Of the 669,591,743 images we found, almost 90% are either missing the alt attribute or using it with a blank value.

Pie chart showing the img tag alt attribute distribution, with missing alt being predominant at 81.7% of the roughly 670 million images we found.

SELECTOR | COUNT
img | 669,591,743
img alt="*" | 79,953,034
img alt="" | 42,815,769
img w/ missing alt | 546,822,940
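
As a reminder of what we were counting: an informative image should carry a descriptive alt value, while a purely decorative one should use an empty alt so assistive technology can skip it. A short sketch with placeholder file names:

<!-- Informative image: describe what it shows -->
<img src="organic-traffic-chart.png" alt="Line chart of monthly organic traffic from January to June">
<!-- Purely decorative image: empty alt so screen readers skip it -->
<img src="decorative-divider.png" alt="">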

Language detection

According to the specs, the language information specified via the lang attribute may be used by a user agent to control rendering in a variety of ways.

The part we’re interested in here is about “assisting search engines.”

“The HTML lang attribute is used to identify the language of text content on the web. This information helps search engines return language specific results, and it is also used by screen readers that switch language profiles to provide the correct accent and pronunciation.”
Léonie Watson

A while ago, John Mueller said Google ignores the HTML lang attribute and recommended the use of link hreflang instead. The Google Search Console documentation states that Google uses hreflang tags to match the user’s language preference to the right variation of your pages.

Bar chart showing that 65% of the 7.5 million index pages use the lang attribute on the html element, while 21.6% use at least one link hreflang.

Of the 7.5 million index pages that we were able to look into, 4,903,665 use the lang attribute on the html element. That’s about 65%!

When it comes to the hreflang attribute, which suggests a multilingual website, we found 1,631,602 pages; that means around 21.6% of index pages use at least one link rel="alternate" href="*" hreflang="*" element.
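
Put together, a bilingual setup would look something like this sketch, with the link elements placed in the head (URLs and language codes are placeholders):

<html lang="en">
<link rel="alternate" href="https://www.example.com/" hreflang="en">
<link rel="alternate" href="https://www.example.com/fr/" hreflang="fr">
<link rel="alternate" href="https://www.example.com/" hreflang="x-default">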

Google Tag Manager

From the beginning, Google Analytics’ main task was to generate reports and statistics about your website. But if you want to group certain pages together to see how people are navigating through that funnel, you need a unique Google Analytics tag. This is where things get complicated.

Google Tag Manager makes it easier to:

  • Manage this mess of tags by letting you define custom rules for when, and on which user actions, your tags should fire
  • Change your tags whenever you want without actually changing the source code of your website, which sometimes can be a headache due to slow release cycles
  • Use other analytics/marketing tools with GTM, again without touching the website’s source code

We searched for *googletagmanager.com/gtm.js references and saw that 345,979 pages use Google Tag Manager.
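
For context, a page that loads GTM typically ends up referencing a URL along these lines somewhere in its markup. This is a simplified sketch with a placeholder container ID, not the full snippet Google provides:

<script async src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX"></script>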

rel=”nofollow”

“Nofollow” provides a way for webmasters to tell search engines “don’t follow links on this page” or “don’t follow this specific link.”

Google does not follow these links and, likewise, does not transfer link equity across them. With this in mind, we were curious about rel="nofollow" numbers. We found a total of 12,828,286 rel="nofollow" links within 7.5 million index pages, a computed average of 1.69 rel="nofollow" links per page.

Last month, Google announced two new link attribute values for marking the nature of a link: rel="sponsored" and rel="ugc". I’d recommend reading Cyrus Shepard’s article on how Google’s nofollow, sponsored, & ugc links impact SEO to learn why Google changed nofollow, the ranking impact of nofollow links, and more.


A table showing how Google’s nofollow, sponsored, and UGC link attributes impact SEO, from Cyrus Shepard’s article.

We went a bit further and looked up these new link attribute values, finding 278 rel="sponsored" and 123 rel="ugc" links. To make sure we had relevant data for these queries, we refreshed the index pages data set two weeks after Google’s announcement. Then, using Moz authority metrics, we sorted out the top URLs that use at least one of the rel="sponsored" or rel="ugc" values (there’s a short markup sketch after the list):

  • https://www.seroundtable.com/
  • https://letsencrypt.org/
  • https://www.newsbomb.gr/
  • https://thehackernews.com/
  • https://www.ccn.com/
  • https://www.chip.pl/
  • https://www.gamereactor.se/
  • https://www.tribes.co.uk/
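
In markup, the new values sit on the rel attribute just like nofollow does; a minimal sketch with placeholder URLs:

<a href="https://www.example.com/partner-offer" rel="sponsored">Paid or affiliate link</a>
<a href="https://www.example.com/forum-thread" rel="ugc">User-generated link</a>
<a href="https://www.example.com/other-page" rel="nofollow">Link you don't want to vouch for</a>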

AMP

Accelerated Mobile Pages (AMP) is a Google initiative that aims to speed up the mobile web. Many publishers are making their content available in the AMP format in parallel with their regular pages.

To let Google and other platforms know about it, you need to link AMP and non-AMP pages together.

Within the millions of pages we looked at, we found only 24,807 non-AMP pages referencing their AMP version using rel=amphtml.
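
The pairing works both ways, roughly like this sketch (URLs are placeholders): the non-AMP page points to its AMP counterpart, and the AMP page points back to the canonical version.

<!-- On the non-AMP page -->
<link rel="amphtml" href="https://www.example.com/article/amp/">
<!-- On the AMP page -->
<link rel="canonical" href="https://www.example.com/article/">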

Social

We wanted to know how shareable or social a website is nowadays. Knowing that Josh Buchea made an awesome list of everything that could go in the head of your webpage, we extracted the social sections from there and got the following numbers:

Facebook Open Graph

Bar chart showing the Facebook Open Graph meta tags distribution, described in detail in the table below.

SELECTOR | COUNT
meta property="fb:app_id" content="*" | 277,406
meta property="og:url" content="*" | 2,909,878
meta property="og:type" content="*" | 2,660,215
meta property="og:title" content="*" | 3,050,462
meta property="og:image" content="*" | 2,603,057
meta property="og:image:alt" content="*" | 54,513
meta property="og:description" content="*" | 1,384,658
meta property="og:site_name" content="*" | 2,618,713
meta property="og:locale" content="*" | 1,384,658
meta property="article:author" content="*" | 14,289
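
For reference, a fairly typical Open Graph block looks something like this sketch (all values are placeholders):

<meta property="og:type" content="website">
<meta property="og:url" content="https://www.example.com/">
<meta property="og:title" content="Example page title">
<meta property="og:description" content="A short summary shown when the page is shared.">
<meta property="og:image" content="https://www.example.com/social-card.png">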

Twitter card

Bar chart showing the Twitter Card meta tags distribution, described in detail in the table below.

SELECTOR | COUNT
meta name="twitter:card" content="*" | 1,535,733
meta name="twitter:site" content="*" | 512,907
meta name="twitter:creator" content="*" | 283,533
meta name="twitter:url" content="*" | 265,478
meta name="twitter:title" content="*" | 716,577
meta name="twitter:description" content="*" | 1,145,413
meta name="twitter:image" content="*" | 716,577
meta name="twitter:image:alt" content="*" | 30,339
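
And the Twitter Card equivalent, again with placeholder values (summary_large_image is just one of the available card types):

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@example">
<meta name="twitter:title" content="Example page title">
<meta name="twitter:description" content="A short summary shown when the page is shared on Twitter.">
<meta name="twitter:image" content="https://www.example.com/social-card.png">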

And speaking of links, we also grabbed all the links pointing to the most popular social networks.

Pie chart showing the external social links distribution, described in detail in the table below.

SELECTOR | COUNT
<a href*="facebook.com"> | 6,180,313
<a href*="twitter.com"> | 5,214,768
<a href*="linkedin.com"> | 1,148,828
<a href*="plus.google.com"> | 1,019,970

Apparently there are lots of websites that still link to their Google+ profiles, which is probably an oversight considering the not-so-recent Google+ shutdown.

rel=prev/next

According to Google, rel=prev/next is no longer an indexing signal, as announced earlier this year:

“As we evaluated our indexing signals, we decided to retire rel=prev/next. Studies show that users love single-page content, aim for that when possible, but multi-part is also fine for Google Search.”
Tweeted by Google Webmasters

However, in case it matters for you, Bing says it uses them as hints for page discovery and site structure understanding.

“We’re using these (like most markup) as hints for page discovery and site structure understanding. At this point, we’re not merging pages together in the index based on these and we’re not using prev/next in the ranking model.”
Frédéric Dubut from Bing

Nevertheless, here are the usage stats we found while looking at millions of index pages:

SELECTOR | COUNT
<link rel="prev" href="*"> | 20,160
<link rel="next" href="*"> | 242,387
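
For the record, paginated series used to be linked together like this sketch (URLs are placeholders), with each page pointing to its neighbors:

<link rel="prev" href="https://www.example.com/articles?page=1">
<link rel="next" href="https://www.example.com/articles?page=3">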

That’s pretty much it!

Knowing how the average web page looks, using data from about 8 million index pages, gives us a clearer idea of trends and helps us visualize common HTML usage when it comes to modern and emerging SEO techniques. But this may be a never-ending saga: even with lots of numbers and stats to explore, there are still plenty of questions that need answering:

  • We know how structured data is used in the wild now. How will it evolve and how much structured data will be considered enough?
  • Should we expect AMP usage to increase in the future?
  • How will rel="sponsored" and rel="ugc" change the way we write HTML on a daily basis? When coding external links, besides the target="_blank" and rel="noopener" combo, we now have to consider the rel="sponsored" and rel="ugc" combinations as well.
  • Will we ever learn to always add alt attribute values for images that have a purpose beyond decoration?
  • How many more meta tags or attributes will we have to add to a web page to please the search engines? Did we really need the newly announced data-nosnippet HTML attribute? What’s next, data-allowsnippet?

There are other things we would have liked to address as well, like “time-to-first-byte” (TTFB) values, which correlate highly with ranking; I’d highly recommend HTTP Archive for that. They periodically crawl the top sites on the web and record detailed information about almost everything. According to the latest info, they’ve analyzed 4,565,694 unique websites, with complete Lighthouse scores, and they record which technologies (like jQuery or WordPress) are used across the whole data set. Huge props to Rick Viscomi, who does an amazing job as its “steward,” as he likes to call himself.

Performing this large-scale study was a fun ride. We learned a lot and we hope you found the above numbers as interesting as we did. If there is a tag or attribute in particular you would like to see the numbers for, please let me know in the comments below.

Once again, check out the full HTML study results and let me know what you think!



Moz Blog


Yes, you can add JSON structured data to the body of your pages

Head or body, Mueller explains that Google can read both just fine.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Effective Landing Pages: 30 powerful headlines that improved marketing results

Get oodles of examples of effective headlines in this MarketingSherpa blog post to help spark ideas as you brainstorm your own headlines.
MarketingSherpa Blog


Google: We Don’t Hand Rank Web Pages, It Would Be Impossible

Google’s John Mueller said on Twitter that Google does not manually hand rank any web page. He added the web is too big for that and if they tried, it would be impossible.


Search Engine Roundtable


How to create landing pages that convert

Landing pages can make or break your digital marketing.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Do You Need Local Pages? – Whiteboard Friday

Posted by Tom.Capper

Does it make sense for you to create local-specific pages on your website? Regardless of whether you own or market a local business, it may make sense to compete for space in the organic SERPs using local pages. Please give a warm welcome to our friend Tom Capper as he shares a 4-point process for determining whether local pages are something you should explore in this week’s Whiteboard Friday!



Video Transcription

Hello, Moz fans. Welcome to another Whiteboard Friday. I’m Tom Capper. I’m a consultant at Distilled, and today I’m going to be talking to you about whether you need local pages. Just to be clear right off the bat what I’m talking about, I’m not talking about local rankings as we normally think of them, the local map pack results that you see in search results, the Google Maps rankings, that kind of thing.

A 4-step process to deciding whether you need local pages

I’m talking about conventional, 10 blue links rankings but for local pages, and by local pages I mean pages from a national or international business that are location-specific. What are some examples of that? Maybe on Indeed.com they would have a page for jobs in Seattle. Indeed doesn’t have a bricks-and-mortar premises in Seattle, but they do have a page that is about jobs in Seattle.

You might get a similar thing with flower delivery. You might get a similar thing with used cars, all sorts of different verticals. I think it can actually be quite a broadly applicable tactic. There’s a four-step process I’m going to outline for you. The first step is actually not on the board. It’s just doing some keyword research.

1. Know (or discover) your key transactional terms

I haven’t done much on that here because hopefully you’ve already done that. You already know what your key transactional terms are. Whatever happens, you don’t want to end up developing location pages for too many different keyword types, because that’s going to bloat your site, so you probably just need to pick one or two key transactional terms that you’re going to create the local variants of. For this purpose, I’m going to talk through an SEO job board as an example.

2. Categorize your keywords as implicit, explicit, or near me and log their search volumes

We might have “SEO jobs” as our core head term. We then want to figure out what the implicit, explicit, and near me versions of that keyword are and what the different volumes are. In this case, the implicit version is probably just “SEO jobs.” If you search for “SEO jobs” now, like if you open a new tab in your browser, you’re probably going to find that a lot of local orientated results appear because that is an implicitly local term and actually an awful lot of terms are using local data to affect rankings now, which does affect how you should consider your rank tracking, but we’ll get on to that later.

SEO jobs, maybe SEO vacancies, that kind of thing, those are all going to be going into your implicitly local terms bucket. The next bucket is your explicitly local terms. That’s going to be things like SEO jobs in Seattle, SEO jobs in London, and so on. You’re never going to get a complete coverage of different locations. Try to keep it simple.

You’re just trying to get a rough idea here. Lastly you’ve got your near me or nearby terms, and it turns out that for SEO jobs not many people search SEO jobs near me or SEO jobs nearby. This is also going to vary a lot by vertical. I would imagine that if you’re in food delivery or something like that, then that would be huge.

3. Examine the SERPs to see whether local-specific pages are ranking

Now we’ve categorized our keywords. We want to figure out what kind of results are going to do well for what kind of keywords, because obviously if local pages is the answer, then we might want to build some.

In this case, I’m looking at the SERP for “SEO jobs.” This is imaginary. The rankings don’t really look like this. But we’ve got SEO jobs in Seattle from Indeed. That’s an example of a local page, because this is a national business with a location-specific page. Then we’ve got SEO jobs Glassdoor. That’s a national page, because in this case they’re not putting anything on this page that makes it location specific.

Then we’ve got SEO jobs Seattle Times. That’s a local business. The Seattle Times only operates in Seattle. It probably has a bricks-and-mortar location. If you’re going to be pulling a lot of data of this type, maybe from stats or something like that, obviously tracking from the locations that you’re mentioning, where you are mentioning locations, then you’re probably going to want to categorize these at scale rather than going through one at a time.

I’ve drawn up a little flowchart here that you could encapsulate in an Excel formula or something like that. If the location is mentioned in the URL and in the domain, then we know we’ve got a local business. Most of the time it’s just a rule of thumb. If the location is mentioned in the URL but not mentioned in the domain, then we know we’ve got a local page and so on.

4. Compare & decide where to focus your efforts

You can just sort of categorize at scale all the different result types that we’ve got. Then we can start to fill out a chart like this using the rankings. What I’d recommend doing is finding a click-through rate curve that you are happy to use. You could go to somewhere like AdvancedWebRanking.com, download some example click-through rate curves.

Again, this doesn’t have to be super precise. We’re looking to get a proportionate directional indication of what would be useful here. I’ve got Implicit, Explicit, and Near Me keyword groups. I’ve got Local Business, Local Page, and National Page result types. Then I’m just figuring out what the visibility share of all these types is. In my particular example, it turns out that for explicit terms, it could be worth building some local pages.

That’s all. I’d love to hear your thoughts in the comments. Thanks.

Video transcription by Speechpad.com



Moz Blog


Digital Marketing News: Facebook’s Playable Ads & Business Pages Update, Gen Z Mom Trends, & B2B’s Video Uptick


Facebook redesigns biz Pages for utility as feed reach declines
Facebook has released a slew of changes to its popular Business Pages offering, including updates to mobile, recommendations, events, jobs, and Facebook Local. The updates bring marketers new opportunities along with the need to rethink certain strategies that may no longer be relevant. TechCrunch

Twitter loses ability to let users auto-post tweets & retweets to Facebook
Facebook changed how its API is utilized by some 60K apps, including Twitter’s, doing away with cross-posted auto-tweets unless going through the more limited posting options of Facebook’s Share feature. Marketing Land

Move Over Millennials: It’s Time To Discuss How To Win With Generation Z Moms
An examination of digital native Gen Z moms and their online brand engagement traits and habits. Forbes

Making B2B video content work: marketers from LinkedIn, Dailymotion and The Smalls share best practices
Marketers from LinkedIn (client), The Smalls, and Dailymotion take a serious look at what’s working in B2B video marketing, what isn’t, and why. The Drum

Facebook launches playable ads, tests retention optimization for app advertising
With Facebook’s recent launch, are playable ads likely to make their way into other, non-gaming areas of digital marketing? Marketing Land

‘Better ROI than influencers’: Meme accounts attract growing interest on Instagram
Brand and publisher partnerships look at engagement via meme, where even small follower counts can produce high engagement rates. DigiDay


We Analyzed 43 Million Facebook Posts From the Top 20,000 Brands (New Research)
A new study from Buffer and BuzzSumo examined Facebook posts from some 20,000 top brands, and results show posting volume has been up while page engagement has decreased. Buffer

Snapchat launches ad marketplace for Discover partners & brings Commercials to Ads Manager
Snapchat’s Private Marketplace and non-skippable ad options were among several new beta features recently rolled out to publishers. Marketing Land

ON THE LIGHTER SIDE:


A lighthearted look at the ROI of marketing by Marketoonist Tom Fishburne — Marketoonist

Anti-Poser CAPTCHA Asks User to Click ‘Every Real Punk Band’ — The Hard Times

TOPRANK MARKETING & CLIENTS IN THE NEWS:

  • TopRank Marketing — Top 10 Content Marketing Blogs on the Internet Today — Blogging.org
  • Lee Odden — 50 Tips for Ad Agency New Business — Michael Gass
  • Lee Odden — Natural Language Generation Accelerates Content Marketing, But Keep Your Hands on the Wheel — CMSWire
  • Lee Odden — 9 Expert Guides: How to Win at Influencer Marketing — Marx Communications
  • Lee Odden — Main Stage Spotlight Speakers at Pubcon Pro Las Vegas — Pubcon

What are some of your top influencer marketing news items for this week?

Thanks for reading, and we hope you’ll join us again next week for the latest digital marketing news, and in the meantime you can follow us at @toprank on Twitter for even more timely daily news. Also, don’t miss the full video summary on our TopRank Marketing TV YouTube Channel.


Email Newsletter
Gain a competitive advantage by subscribing to the
TopRank® Online Marketing Newsletter.


Online Marketing Blog – TopRank®


How a Few Pages Can Make or Break Your Website

Posted by Jeff_Baker

A prospect unequivocally disagreed with a recommendation I made recently.

I told him a few pages of content could make a significant impact on his site. Even when presented with hard numbers backing up my assertions, he still balked. My ego started gnawing: would a painter tell a mathematician how to do trigonometry?

Unlike art, content marketing and SEO aren’t subjective. The quality of the words you write can be quantified, and they can generate a return for your business.

Most of your content won’t do anything

In order to have this conversation, we really need to deal with this fact.

Most content created lives deep on page 7 of Google, ranking for an obscure keyword completely unrelated to your brand. A lack of scientific (objective math) process is to blame. But more on that later.

Case in point: Brafton used to employ a volume play with regard to content strategy. Volume = keyword rankings. It was spray-and-pray, and it worked.

Looking back on current performance for old articles, we find that the top 100 pages of our site (1.2% of all indexed pages) drive 68% of all organic traffic.

Further, 94.5% of all indexed pages drive five clicks or less from search every three months.

So what gives?

Here’s what has changed: easy content is a thing of the past. Writing content and “using keywords” is a plan destined for a lonely death on page 7 of the search results. The process for creating content needs to be rigorous and heavily supported by data. It needs to start with keyword research.

1. Keyword research:

Select content topics from keywords that are regularly being searched. Search volume implies interest, which guarantees what you are writing about is of interest to your target audience. The keywords you choose also need to be reasonable. Using organic difficulty metrics from Moz or SEMrush will help you determine if you stand a realistic chance of ranking somewhere meaningful.

2. SEO content writing:

Your goal is to get the page you’re writing to rank for the keyword you’re targeting. The days of using a keyword in blog posts and linking to a product landing page are over. One page, one keyword. Therefore, if you want your page to rank for the chosen keyword, that page must be the very best piece of content on the web for that keyword. It needs to be in-depth, covering a wide swath of related topics.

How to project results

Build out your initial list of keyword targets. Filter the list down to the keywords with the optimal combination of search volume, organic difficulty, SERP crowding, and searcher intent. You can use this template as a guide — just make a copy and you’re set.


Once you’ve narrowed down your list to top contenders, tally up the total search volume potential — this is the total number of searches that are made on a monthly basis for all your keyword targets. You will not capture this total number of searches. A good rule of thumb is that if you rank, on average, at the bottom of page 1 and top of page 2 for all keywords, your estimated CTR will be a maximum of 2%. The mid-bottom of page 1 will be around 4%. The top-to-middle of page 1 will be 6%.

In the instance above, if we were to rank poorly, with a 2% CTR for 20 pages, we would drive an additional 42–89 targeted, commercial-intent visitors per month.

The website in question drives an average of 343 organic visitors per month, via a random assortment of keywords from 7,850 indexed pages in Google. At the very worst, 20 pages, or .3% of all pages, would drive 10.9% of all traffic. At best (if the client followed the steps above to a T), the .3% additional pages would drive 43.7% of all traffic!

Whoa.

That’s .3% of a site’s indexed pages driving an additional 77.6% of traffic every. single. month.

How a few pages can make a difference

Up until now, everything we’ve discussed has been hypothetical keyword potential. Fortunately, we have tested this method with 37 core landing pages on our site (.5% of all indexed pages). The result of deploying the method above was 24 of our targeted keywords ranking on page 1, driving an estimated 716 high-intent visitors per month.

That amounts to .5% of all pages driving 7.7% of all traffic. At an average CPC of $12.05 per keyword, the total cost of paying for these keywords would be $8,628 per month.

Our 37 pages (.5% of all pages), which were a one-time investment, drive 7.7% of all traffic at an estimated value of $103,533 yearly.

Can a few pages make or break your website? You bet your butt.



Moz Blog


Diagnosing Why a Site’s Set of Pages May Be Ranking Poorly – Whiteboard Friday

Posted by randfish

Your rankings have dropped and you don’t know why. Maybe your traffic dropped as well, or maybe just a section of your site has lost rankings. It’s an important and often complex mystery to solve, and there are a number of boxes to check off while you investigate. In this Whiteboard Friday, Rand shares a detailed process to follow to diagnose what went wrong to cause your rankings drop, why it happened, and how to start the recovery process.


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about diagnosing a site and specifically a section of a site’s pages and why they might be performing poorly, why their traffic may have dropped, why rankings may have dropped, why both of them might have dropped. So we’ve got a fairly extensive process here, so let’s get started.

Step 1: Uncover the problem

First off, our first step is uncovering the problem or finding whether there is actually a problem. A good way to think about this is especially if you have a larger website, if we’re talking about a site that’s 20 or 30 or even a couple hundred pages, this is not a big issue. But many websites that SEOs are working on these days are thousands, tens of thousands, hundreds of thousands of pages. So what I like to urge folks to do is to

A. Treat different site sections as unique segments for investigation. You should look at them individually.

A lot of times subfolders or URL structures are really helpful here. So I might say, okay, MySite.com, I’m going to look exclusively at the /news section. Did that fall in rankings? Did it fall in traffic? Or was it /posts, where my blog posts and my content is? Or was it /cities? Let’s say I have a website that’s dealing with data about the population of cities. So I rank for lots of those types of queries, and it seems like I’m ranking for fewer of them, and it’s my cities pages that are poorly performing in comparison to where they were a few months ago or last year at this time.

B. Check traffic from search over time.

So I go to my Google Analytics or whatever analytics you’re using, and you might see something like, okay, I’m going to look exclusively at the /cities section. If you can structure your URLs in this fashion, use subfolders, this is a great way to do it. Then take a look and see, oh, hang on, that’s a big traffic drop. We fell off a cliff there for these particular pages.

This data can be hiding inside your analytics because it could be that the rest of your site is performing well. It’s going sort of up and to the right, and so you see this slow plateauing or a little bit of a decline, but it’s not nearly as sharp as it is if you look at the traffic specifically for a single subsection that might be performing poorly, like this /cities section.

From there, I’m going to next urge you to use Google Trends. Why? Why would I go to Google Trends? Because what I want you to do is I want you to look at some of your big keywords and topics in Google Trends to see if there has been a serious decline in search volume at the same time. If search demand is rising or staying stable over the course of time where you have lost traffic, it’s almost certainly something you’ve done, not something searchers are doing. But if you see that traffic has declined, for example, maybe you were ranking really well for population data from 2015. It turns out people are now looking for population data for 2016 or ’17 or ’18. Maybe that is part of the problem, that search demand has fallen and your curve matches that.

C. Perform some diagnostic queries or use your rank tracking data if you have it on these types of things.

This is one of the reasons I like to rank track for even these types of queries that don’t get a lot of traffic.

1. Target keywords. In this case, it might be “Denver population growth,” maybe that’s one of your keywords. You would see, “Do I still rank for this? How well do I rank for this? Am I ranking more poorly than I used to?”

2. Check brand name plus target keyword. So, in this case, it would be my site plus the above here plus “Denver population growth,” so My Site or MySite.com Denver population growth. If you’re not ranking for that, that’s usually an indication of a more serious problem, potentially a penalty or some type of dampening that’s happening around your brand name or around your website.

3. Look for a 10 to 20-word text string from page content without quotes. It could be shorter. It could be only six or seven words, or it could be longer, 25 words if you really need it. But essentially, I want to take a string of text that exists on the page and put it in order in Google search engine, not in quotes. I do not want to use quotes here, and I want to see how it performs. This might be several lines of text here.

4. Look for a 10 to 20-word text string with quotes. So those lines of text, but in quotes searched in Google. If I’m not ranking for this, but I am for this one … sorry, if I’m not ranking for the one not in quotes, but I am in quotes, I might surmise this is probably not duplicate content. It’s probably something to do with my content quality or maybe my link profile or Google has penalized or dampened me in some way.

5. site: urlstring/ So I would search for “site:MySite.com/cities/Denver.” I would see: Wait, has Google actually indexed my page? When did they index it? Oh, it’s been a month. I wonder why they haven’t come back. Maybe there’s some sort of crawl issue, robots.txt issue, meta robots issue, something. I’m preventing Google from potentially getting there. Or maybe they can’t get there at all, and this results in zero results. That means Google hasn’t even indexed the page. Now we have another type of problem.

D. Check your tools.

1. Google Search Console. I would start there, especially in the site issues section.

2. Check your rank tracker or whatever tool you’re using, whether that’s Moz or something else.

3. On-page and crawl monitoring. Hopefully you have something like that. It could be through Screaming Frog. Maybe you’ve run some crawls over time, or maybe you have a tracking system in place. Moz has a crawl system. OnPage.org has a really good one.

4. Site uptime. So I might check Pingdom or other things that alert me to, “Oh, wait a minute, my site was down for a few days last week. That obviously is why traffic has fallen,” those types of things.

Step 2: Offer hypothesis for falling rankings/traffic

Okay, you’ve done your diagnostics. Now it’s time to offer some hypotheses. So now that we understand which problem I might have, I want to understand what could be resulting in that problem. So there are basically two situations you can have. Rankings have stayed stable or gone up, but traffic has fallen.

A. If rankings are up, but traffic is down…

In those cases, these are the five things that are most typically to blame.

1. New SERP features. There’s a bunch of featured snippets that have entered the population growth for cities search results, and so now number one is not what number one used to be. If you don’t get that featured snippet, you’re losing out to one of your competitors.

2. Lower search demand. Like we talked about in Google Trends. I’m looking at search demand, and there are just not as many people searching as there used to be.

3. Brand or reputation issues. I’m ranking just fine, but people now for some reason hate me. People who are searching this sector think my brand is evil or bad or just not as helpful as it used to be. So I have issues, and people are not clicking on my results. They’re choosing someone else actively because of reputation issues.

4. Snippet problems. I’m ranking in the same place I used to be, but I’m no longer the sexiest, most click-drawing snippet in the search results, and other people are earning those clicks instead.

5. Shift in personalization or location biasing by Google. It used to be the case that everyone who searched for city name plus population growth got the same results, but now suddenly people are seeing different results based on maybe their device or things they’ve clicked in the past or where they’re located. Location is often a big cause for this.

So for many SEOs for many years, “SEO consultant” resulted in the same search results. Then Google introduced the Maps results and pushed down a lot of those folks, and now “SEO consultant” results in different ranked results in each city and each geography that you search in. So that can often be a cause for falling traffic even though rankings remain high.

B. If rankings and traffic are down…

If you’re seeing that rankings have fallen and traffic has fallen in conjunction, there’s a bunch of other things that are probably going on that are not necessarily these things. A few of these could be responsible still, like snippet problems could cause your rankings and your traffic to fall, or brand and reputation issues could cause your click-through rate to fall, which would cause you to get dampened. But oftentimes it’s things like this:

1. & 2. Duplicate content and low-quality or thin content. Google thinks that what you’re providing just isn’t good enough.

3. Change in searcher intent. People who were searching for population growth used to want what you had to offer, but now they want something different and other people in the SERP are providing that, but you are not, so Google is ranking you lower. Even though your content is still good, it’s just not serving the new searcher intent.

4. Loss to competitors. So maybe you have worse links than they do now or less relevance or you’re not solving the searcher’s query as well. Your user interface, your UX is not as good. Your keyword targeting isn’t as good as theirs. Your content quality and the unique value you provide isn’t as good as theirs. If you see that one or two competitors are consistently outranking you, you might diagnose that this is the problem.

5. Technical issues. So if I saw from over here that the crawl was the problem, I wasn’t getting indexed, or Google hasn’t updated my pages in a long time, I might look into accessibility things, maybe speed, maybe I’m having problems like letting Googlebot in, HTTPS problems, or indexable content, maybe Google can’t see the content on my page anymore because I made some change in the technology of how it’s displayed, or crawlability, internal link structure problems, robots.txt problems, meta robots tag issues, that kind of stuff.

Maybe at the server level, someone on the tech ops team of my website decided, “Oh, there’s this really problematic bot coming from Mountain View that’s costing us a bunch of bandwidth. Let’s block bots from Mountain View.” No, don’t do that. Bad. Those kinds of technical issues can happen.

6. Spam and penalties. We’ll talk a little bit more about how to diagnose those in a second.

7. CTR, engagement, or pogo-sticking issues. There could be click-through rate issues or engagement issues, meaning pogo sticking, like people are coming to your site, but they are clicking back because they weren’t satisfied by your results, maybe because their expectations have changed or market issues have changed.

Step 3: Make fixes and observe results

All right. Next and last in this process, what we’re going to do is make some fixes and observe the results. Hopefully, we’ve been able to correctly diagnose and form some wise hypotheses about what’s going wrong, and now we’re going to try and resolve them.

A. On-page and technical issues should solve after a new crawl + index.

So on-page and technical issues, if we’re fixing those, they should usually resolve, especially on small sections of sites, pretty fast. As soon as Google has crawled and indexed the page, you should generally see performance improve. But this can take a few weeks if we’re talking about a large section on a site, many thousands of pages, because Google has to crawl and index all of them to get the new sense that things are fixed and traffic is coming in. Since it’s long tail to many different pages, you’re not going to see that instant traffic gain and rise as fast.

B. Link issues and spam penalty problems can take months to show results.

Look, if you have crappier links or not a good enough link profile as your competitors, growing that can take months or years even to fix. Penalty problems and spam problems, same thing. Google can take sometimes a long time. You’ve seen a lot of spam experts on Twitter saying, “Oh, well, all my clients who had issues over the last nine months suddenly are ranking better today,” because Google made some fix in their latest index rollout or their algorithm changed, and it’s sort of, okay, well we’ll reward the people for all the fixes that they’ve made. Sometimes that’s in batches that take months.

C. Fixing a small number of pages in a section that’s performing poorly might not show results very quickly.

For example, let’s say you go and you fix /cities/Milwaukee. You determine from your diagnostics that the problem is a content quality issue. So you go and you update these pages. They have new content. It serves the searchers much better, doing a much better job. You’ve tested it. People really love it. You fixed two cities, Milwaukee and Denver, to test it out. But you’ve left 5,000 other cities pages untouched.

Sometimes Google will sort of be like, “No, you know what? We still think your cities pages, as a whole, don’t do a good job solving this query. So even though these two that you’ve updated do a better job, we’re not necessarily going to rank them, because we sort of think of your site as this whole section and we grade it as a section or apply some grades as a section.” That is a real thing that we’ve observed happening in Google’s results.

Because of this, one of the things that I would urge you to do is if you’re seeing good results from the people you’re testing it with and you’re pretty confident, I would roll out the changes to a significant subset, 30%, 50%, 70% of the pages rather than doing only a tiny, tiny sample.

D. Sometimes when you encounter these issues, a remove and replace strategy works better than simply upgrading old URLs.

So if Google has decided /cities, your /cities section is just awful, has all sorts of problems, not performing well on a bunch of different vectors, you might take your /cities section and actually 301 redirect them to a new URL, /location, and put the new UI and the new content that better serves the searcher and fixes a lot of these issues into that location section, such that Google now goes, “Ah, we have something new to judge. Let’s see how these location pages on MySite.com perform versus the old cities pages.”

So I know we’ve covered a ton today and there are a lot of diagnostic issues that we haven’t necessarily dug deep into, but I hope this can help you if you’re encountering rankings challenges with sections of your site or with your site as a whole. Certainly, I look forward to your comments and your feedback. If you have other tips for folks facing this, that would be great. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog


PSA: Google Doesn’t Use Content On Non-Canonical Pages

Google’s John Mueller wrote on Twitter “remember that the content on ‘non-canonical’ versions generally doesn’t get used.” Meaning, if you point page A to page B using a 301 or canonical tag…


Search Engine Roundtable

