Archive | IM News

Moz’s Mid-Year Retrospective: Exciting Upgrades from the First Half of 2018

Posted by NeilCrist

Every year, we publish an overview of all the upgrades we’ve made to our tools and how those changes benefit our customers and Moz Community members. So far, 2018 has been a whirlwind of activity here at Moz — not only did we release a massive, long-awaited update to our link building tool, we’ve also been improving and updating systems and tools across the board to make your Moz experience even better. To that end, we’re sharing a mid-year retrospective to keep up with the incredible amount of progress we’ve made.

We receive a lot of amazing feedback from our customers on pain points they experience and improvements they’d like to see. Folks, we hear you.

We not only massively restructured some of our internal systems to provide you with better data, we also innovated new ways to display and report on that data, making the tools more accurate and more useful than ever before.

If you’ve been tasked with achieving organic success, we know your job isn’t easy. You need tools that get the job done, and done well. We think Moz delivered.

Check out our 2018 improvements so far:

Our new link index: Bigger, fresher, better than ever

Our link index underwent a major overhaul: it’s now 20x larger and 30x fresher than it was previously. This new link index data has been made available via our Mozscape API, as well as integrated into many Moz Pro tools, including Campaigns, Keyword Explorer, the MozBar, and Fresh Web Explorer. But undoubtedly the largest and most-anticipated improvement the new link index allowed us to make was the launch of Link Explorer, which we rolled out at the end of April as a replacement for Open Site Explorer.

Link Explorer addresses and improves upon its predecessor by providing more data, fresher data, and better ways to visualize that data. Answering one of the most frequent feature requests for OSE, Link Explorer includes historical metrics, and it also surfaces newly discovered and lost links:

Below are just a few of the many ways Link Explorer is providing some of the best link data available:

  • Link Explorer’s link index contains approximately 4.8 trillion URLs — that’s 20x larger than OSE and surpasses Ahrefs’ index (~3 trillion pages) and Majestic’s fresh index (~1 trillion pages).
  • Link Explorer is 30x fresher than OSE. All data updates every 24 hours.
  • We believe Link Explorer is unique in how accurately our link index represents the web, resulting in data quality you can trust.
  • Link Explorer has the closest robots.txt profile to Google among the three major link indexes, which means we get more of the links Google gets.
  • We also improved Domain Authority, Page Authority, and Spam Score. The size and freshness of our index have allowed us to offer more stable DA and PA scores. They will still fluctuate as the index fluctuates (which has always been by design), but the swings won't be as dramatic as they were in Open Site Explorer.

Explore your link profile

You can learn more about Link Explorer by reading Sarah Bird’s announcement, watching Rand’s Whiteboard Friday, or visiting our Link Explorer Help Guide. Even though it’s still in beta, Link Explorer already blows away OSE’s data quality, freshness, and capabilities. Look for steady improvements to Link Explorer as we continue to iterate on it and add more key features.

New-and-improved On-Page Grader

Moz’s On-Page Grader got a thorough and much-needed overhaul! Not only did we freshen up the interface with a new look and feel, but we also added new features and improved upon our data.

Inside the new On-Page Grader, you’ll find:

  • An updated metrics bar to show you Page Title, Meta Description, and the number of Keywords Found. No need to dig through source code!
  • An updated Optimization Score to align with the Page Optimization feature that’s inside Campaigns and in the MozBar. Instead of a letter grade (A–F), you now have Page Score (0–100) for a more precise measurement of page optimization performance.
  • On-page factors are now categorized so you can see what is hurting or helping your Page Score.
  • On-page factors are organized by importance so you can prioritize your efforts. Red indicates high importance, yellow indicates moderate importance, and blue indicates low importance.

On-Page Grader is a great way to take a quick look at how well a page is optimized for a specified keyword. Here’s how it works.

Input your page and the keyword you want that page to rank for…

… and On-Page Grader will return a list of suggestions for improving your on-site optimization.

Check it out!

Keyword ranking data now available for Canada, UK, and Australia

We’re very excited to announce that, as of just last week, international data has been added to the Keywords by Site feature of Keyword Explorer! Moz Pro customers can now see which keywords they rank for and assess their visibility across millions of SERPs spanning the US, Canada, the United Kingdom, and Australia! Keywords by Site is a newer feature within Keyword Explorer, added just last October to show which (and how many) keywords any domain, subdomain, or page ranks for.

Want to see which keywords your site ranks for in the US, UK, Canada, or Australia?

See what you rank for

It’s easy to use — just select a country from the dropdown menu to the right. This will show you which keywords a domain or page is ranking for from a particular country.

On-Demand Crawl now available

We know it can be important to track your site changes in real time. That’s why, on June 29th, we’re replacing our legacy site audit tool, Crawl Test, with the new and improved On-Demand Crawl:

Whether you need to double-check a change you’ve made or need a one-off report, the new On-Demand Crawl offers an updated experience for Moz Pro customers:

  • Crawl reports are now faster and available sooner, allowing you to quickly assess your site, a new client or prospect’s, or the competition.
  • Your site issues are now categorized by issue type and quantity, making it easier to identify what to work on and how to prioritize:

  • Recommendations are now provided for how to fix each issue, along with resources detailing why it matters:

  • Site audit reports are now easier than ever to package and present with PDF exports.
  • An updated, fresh design and UX!

On-Demand Crawl is already available now in Moz Pro. If you’re curious how it works, check it out:

Try On-Demand Crawl

Improvements to tool notifications & visuals

Moz’s email notification system and tools dashboard didn’t always sync up perfectly with the actual data update times. Sometimes, customers would receive an email or see updated dates on their dashboard before the data had rolled out, resulting in confusion. We’ve streamlined the process, and now customers no longer have to wonder where their data is — you can rest assured that your data is waiting for you in Moz Pro as soon as you’re notified.

Rank Tracker is sticking around

While we had originally planned to retire Rank Tracker at the beginning of June, we’ve decided to hold off in light of the feedback we received from our customers. Our goal in retiring Rank Tracker was to make Moz Pro easier to navigate by eliminating the redundancy of having two options for tracking keyword rankings (Rank Tracker and Campaigns). But after hearing how many people use and value Rank Tracker, and after weighing our options, we decided to postpone its retirement until we have a better solution than simply shutting it down.

Right now, we’re focused on learning more from our community on what makes this tool so valuable, so if you have feedback regarding Rank Tracker, we’d love it if you would take our survey. The information we gather from this survey will help us create a better solution for you!

Updates from Moz Academy

New advanced SEO courses

In response to the growing interest in advanced and niche-specific training, Moz is now offering ongoing classes and seminars on topics such as e-commerce SEO and technical site audits. If there’s an advanced topic you’d like training on, let us know by visiting https://moz.com/training and navigating to the “Custom” tab to tell us exactly what type of training you’re looking for.

On-demand coursework

We love the fact that we have Moz customers from around the globe, so we’re always looking for new ways to accommodate those in different timezones and those with sporadic schedules. One new way we’re doing this is by offering on-demand coursework. Get training from Moz when it works best for you. With this added scheduling flexibility (and with added instructors to boot), we hope to be able to reach more people than ever before.

To view Moz’s on-demand coursework, visit moz.com/training and click on the “On-Demand” tab.

Certificate development

There’s been a growing demand for a meaningful certification program in SEO, and we’re proud to say that Moz is here to deliver. This coursework will include a certificate and a badge for your LinkedIn profile. We’re planning on launching the program later this year, so stay tuned by signing up for Moz Training Alerts!

Tell us what you think!

Have feedback for us on any of our 2018 improvements? Any ideas on new ways we can improve our tools and training resources? Let us know in the comments! We love hearing from marketers like you. Your input helps us develop the best tools possible for ensuring your content gets found online.

If you’re not a Moz Pro subscriber and haven’t gotten a chance to check out these new features yet, sign up for a free trial!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

How a Few Pages Can Make or Break Your Website

Posted by Jeff_Baker

A prospect unequivocally disagreed with a recommendation I made recently.

I told him a few pages of content could make a significant impact on his site. Even when presented with hard numbers backing up my assertions, he still balked. My ego started gnawing at me: would a painter tell a mathematician how to do trigonometry?

Unlike art, content marketing and SEO aren’t subjective. The quality of the words you write can be quantified, and they can generate a return for your business.

Most of your content won’t do anything

In order to have this conversation, we really need to deal with this fact.

Most content created lives deep on page 7 of Google, ranking for an obscure keyword completely unrelated to your brand. A lack of scientific (objective math) process is to blame. But more on that later.

Case in point: Brafton used to employ a volume play with regard to content strategy. Volume = keyword rankings. It was spray-and-pray, and it worked.

Looking back on current performance for old articles, we find that the top 100 pages of our site (1.2% of all indexed pages) drive 68% of all organic traffic.

Further, 94.5% of all indexed pages drive five clicks or less from search every three months.

So what gives?

Here’s what has changed: easy content is a thing of the past. Writing content and “using keywords” is a plan destined for a lonely death on page 7 of the search results. The process for creating content needs to be rigorous and heavily supported by data. It needs to start with keyword research.

1. Keyword research:

Select content topics from keywords that are regularly being searched. Search volume implies interest, which guarantees what you are writing about is of interest to your target audience. The keywords you choose also need to be reasonable. Using organic difficulty metrics from Moz or SEMrush will help you determine if you stand a realistic chance of ranking somewhere meaningful.

2. SEO content writing:

Your goal is to get the page you’re writing to rank for the keyword you’re targeting. The days of using a keyword in blog posts and linking to a product landing page are over. One page, one keyword. Therefore, if you want your page to rank for the chosen keyword, that page must be the very best piece of content on the web for that keyword. It needs to be in-depth, covering a wide swath of related topics.

How to project results

Build out your initial list of keyword targets. Filter the list down to the keywords with the optimal combination of search volume, organic difficulty, SERP crowding, and searcher intent. You can use this template as a guide — just make a copy and you’re set.

Get the keyword target template

Once you’ve narrowed down your list to top contenders, tally up the total search volume potential — this is the total number of searches that are made on a monthly basis for all your keyword targets. You will not capture this total number of searches. A good rule of thumb is that if you rank, on average, at the bottom of page 1 and top of page 2 for all keywords, your estimated CTR will be a maximum of 2%. The mid-bottom of page 1 will be around 4%. The top-to-middle of page 1 will be 6%.

In the instance above, if we were to rank poorly, with a 2% CTR for 20 pages, we would drive an additional 42–89 targeted, commercial-intent visitors per month.

The website in question drives an average of 343 organic visitors per month, via a random assortment of keywords from 7,850 indexed pages in Google. At the very worst, 20 pages, or .3% of all pages, would drive 10.9% of all traffic. At best (if the client followed the steps above to a T), the .3% additional pages would drive 43.7% of all traffic!

Whoa.

That’s .3% of a site’s indexed pages driving an additional 77.6% of traffic every. single. month.
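
If you want to run this projection against your own keyword list, here's a minimal sketch of the arithmetic in plain Python. Every input is a placeholder (the keyword volumes, the difficulty cutoffs, and the 343-visitor baseline are just example figures), and the 2%/4%/6% CTR bands are the rule-of-thumb estimates from the section above, not measured values.

    # Sketch of the projection: filter keyword targets, tally their search volume,
    # and apply the rule-of-thumb CTR bands. All inputs are placeholder numbers.
    keywords = [
        {"keyword": "example keyword a", "volume": 1300, "difficulty": 38},
        {"keyword": "example keyword b", "volume": 880,  "difficulty": 52},
        {"keyword": "example keyword c", "volume": 590,  "difficulty": 61},
    ]

    # Keep only targets with enough volume and a realistic difficulty score.
    targets = [k for k in keywords if k["volume"] >= 500 and k["difficulty"] <= 55]

    # Total monthly search volume for the surviving targets.
    total_volume = sum(k["volume"] for k in targets)

    baseline_visitors = 343  # current monthly organic visitors (example figure)
    for label, ctr in [("worst case", 0.02), ("mid case", 0.04), ("best case", 0.06)]:
        added = total_volume * ctr
        share = added / (baseline_visitors + added)
        print(f"{label}: ~{added:.0f} extra visitors/month, "
              f"{share:.1%} of total organic traffic")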

How a few pages can make a difference

Up until now, everything we’ve discussed has been hypothetical keyword potential. Fortunately, we have tested this method with 37 core landing pages on our site (.5% of all indexed pages). The result of deploying the method above was 24 of our targeted keywords ranking on page 1, driving an estimated 716 high-intent visitors per month.

That amounts to .5% of all pages driving 7.7% of all traffic. At an average CPC of $12.05 per keyword, the total cost of paying for these keywords would be $8,628 per month.

Our 37 pages (.5% of all pages), which were a one-time investment, drive 7.7% of all traffic at an estimated value of $103,533 yearly.

Can a few pages make or break your website? You bet your butt.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

An 8-Point Checklist for Debugging Strange Technical SEO Problems

Posted by Dom-Woodman

Occasionally, a problem will land on your desk that’s a little out of the ordinary. Something where you don’t have an easy answer. You go to your brain and your brain returns nothing.

These problems can’t be solved with a little bit of keyword research and basic technical configuration. These are the types of technical SEO problems where the rabbit hole goes deep.

The very nature of these situations defies a checklist, but it’s useful to have one for the same reason we have them on planes: even the best of us can and will forget things, and a checklist will provide you with places to dig.


Fancy some examples of strange SEO problems? Here are four examples to mull over while you read. We’ll answer them at the end.

1. Why wasn’t Google showing 5-star markup on product pages?

  • The pages had server-rendered product markup and they also had Feefo product markup, including ratings being attached client-side.
  • The Feefo ratings snippet was successfully rendered in Fetch & Render, plus the mobile-friendly tool.
  • When you put the rendered DOM into the structured data testing tool, both pieces of structured data appeared without errors.

2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?

  • The review pages of client & competitors all had rating rich snippets on Google.
  • All the competitors had rating rich snippets on Bing; however, the client did not.
  • The review pages had ratings schema that validated correctly in Google’s structured data testing tool, but did not validate on Bing.

3. Why were pages getting indexed with a no-index tag?

  • Pages with a server-side-rendered no-index tag in the head were being indexed by Google across a large template for a client.

4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?

  • A website was randomly throwing 302 errors.
  • This never happened in the browser and only in crawlers.
  • User agent made no difference; location or cookies also made no difference.

Finally, a quick note. It’s entirely possible that some of this checklist won’t apply to every scenario. That’s totally fine. It’s meant to be a process for everything you could check, not everything you should check.

The pre-checklist check

Does it actually matter?

Does this problem only affect a tiny amount of traffic? Is it only on a handful of pages and you already have a big list of other actions that will help the website? You probably need to just drop it.

I know, I hate it too. I also want to be right and dig these things out. But in six months’ time, when you’ve solved twenty complex SEO rabbit holes and your website has stayed flat because you didn’t re-write the title tags, you’re still going to get fired.

But hopefully that’s not the case, in which case, onwards!

Where are you seeing the problem?

We don’t want to waste a lot of time. Have you heard this wonderful saying? “If you hear hooves, it’s probably not a zebra.”

The process we’re about to go through is fairly involved and it’s entirely up to your discretion if you want to go ahead. Just make sure you’re not overlooking something obvious that would solve your problem. Here are some common problems I’ve come across that were mostly horses.

  1. You’re underperforming from where you should be.
    1. When a site is under-performing, people love looking for excuses. Weird Google nonsense can be quite a handy thing to blame. In reality, it’s typically some combination of a poor site, higher competition, and a failing brand. Horse.
  2. You’ve suffered a sudden traffic drop.
    1. Something has certainly happened, but this is probably not the checklist for you. There are plenty of common-sense checklists for this. I’ve written about diagnosing traffic drops recently — check that out first.
  3. The wrong page is ranking for the wrong query.
    1. In my experience (which should probably preface this entire post), this is usually a basic problem where a site has poor targeting or a lot of cannibalization. Probably a horse.

Factors which make it more likely that you’ve got a more complex problem that requires you to don your debugging shoes:

  • A website that has a lot of client-side JavaScript.
  • Bigger, older websites with more legacy.
  • Your problem is related to a new Google property or feature where there is less community knowledge.

1. Start by picking some example pages.

Pick a couple of example pages to work with — ones that exhibit whatever problem you’re seeing. No, this won’t be representative, but we’ll come back to that in a bit.

Of course, if it only affects a tiny number of pages then it might actually be representative, in which case we’re good. It definitely matters, right? You didn’t just skip the step above? OK, cool, let’s move on.

2. Can Google crawl the page once?

First we’re checking whether Googlebot has access to the page, which we’ll define as a 200 status code.

We’ll check in four different ways to expose any common issues (a scripted version of the first two checks follows this list):

  1. Robots.txt: Open up Search Console and check in the robots.txt validator.
  2. User agent: Open Dev Tools and verify that you can open the URL with both Googlebot and Googlebot Mobile.
    1. To get the user agent switcher, open Dev Tools.
    2. Check the console drawer is open (the toggle is the Escape key)
    3. Hit the … and open “Network conditions”
    4. Here, select your user agent!

  3. IP Address: Verify that you can access the page with the mobile testing tool. (This will come from one of the IPs used by Google; any checks you do from your computer won’t.)
  4. Country: The mobile testing tool will visit from US IPs, from what I’ve seen, so we get two birds with one stone. But Googlebot will occasionally crawl from non-American IPs, so it’s also worth using a VPN to double-check whether you can access the site from any other relevant countries.
    1. I’ve used HideMyAss for this before, but whatever VPN you have will work fine.
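
Here's the scripted version of the first two checks, as a rough Python sketch. The URL is a placeholder, the Googlebot user-agent strings are approximations (check Google's documentation for the current ones), and this request still comes from your own IP, which is why the mobile-friendly test and a VPN cover checks three and four.

    # Sketch of checks 1 and 2: robots.txt and user agent. Placeholder URL;
    # only requests and the standard library are needed.
    import requests
    from urllib.parse import urlparse, urljoin
    from urllib.robotparser import RobotFileParser

    url = "https://www.example.com/some-page/"

    # Check 1: does robots.txt allow Googlebot to fetch this URL?
    origin = f"{urlparse(url).scheme}://{urlparse(url).netloc}"
    rp = RobotFileParser(urljoin(origin, "/robots.txt"))
    rp.read()
    print("Allowed by robots.txt:", rp.can_fetch("Googlebot", url))

    # Check 2: does the status code change when we identify as Googlebot?
    user_agents = {
        "Browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
        "Googlebot Mobile": ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
                             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile "
                             "Safari/537.36 (compatible; Googlebot/2.1; "
                             "+http://www.google.com/bot.html)"),
    }
    for name, ua in user_agents.items():
        r = requests.get(url, headers={"User-Agent": ua}, allow_redirects=False, timeout=10)
        print(f"{name}: HTTP {r.status_code}")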

We should now have an idea whether or not Googlebot is struggling to fetch the page once.

Have we found any problems yet?

If we can re-create a failed crawl with a simple check above, then Googlebot is probably failing consistently to fetch our page, and it’s typically for one of those basic reasons.

But it might not be. Many problems are inconsistent because of the nature of technology. ;)

3. Are we telling Google two different things?

Next up: Google can find the page, but are we confusing it by telling it two different things?

This is most commonly seen, in my experience, because someone has messed up the indexing directives.

By “indexing directives,” I’m referring to any tag that defines a page’s correct index status or which page in the index should rank. Here’s a non-exhaustive list:

  • No-index
  • Canonical
  • Mobile alternate tags
  • AMP alternate tags

An example of providing mixed messages would be:

  • No-indexing page A
  • Page B canonicals to page A

Or:

  • Page A has a canonical in a header to A with a parameter
  • Page A has a canonical in the body to A without a parameter

If we’re providing mixed messages, then it’s not clear how Google will respond. It’s a great way to start seeing strange results.

Good places to check for the indexing directives listed above are:

  • Sitemap
    • Example: Mobile alternate tags can sit in a sitemap
  • HTTP headers
    • Example: Canonical and meta robots can be set in headers.
  • HTML head
    • This is where you’re probably looking; you’ll need this one for the comparison (see the sketch after this list).
  • JavaScript-rendered vs hard-coded directives
    • You might be setting one thing in the page source and then rendering another with JavaScript, i.e. you would see something different in the HTML source from the rendered DOM.
  • Google Search Console settings
    • There are Search Console settings for ignoring parameters and country localization that can clash with indexing tags on the page.
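
To make that comparison concrete, here's a rough Python sketch that pulls the directives from both the HTTP headers and the raw HTML head so contradictions are easy to spot side by side. The URL is a placeholder and it assumes the requests and beautifulsoup4 packages; because it reads the page source only, anything injected by JavaScript will only show up in the rendered DOM (see the aside below).

    # Sketch: collect indexing directives from the HTTP headers and the HTML head.
    # Placeholder URL; this reads the raw source only, not the rendered DOM.
    import requests
    from bs4 import BeautifulSoup

    url = "https://www.example.com/some-page/"
    r = requests.get(url, timeout=10)
    soup = BeautifulSoup(r.text, "html.parser")

    # Directives that can be set in HTTP headers
    print("X-Robots-Tag header:", r.headers.get("X-Robots-Tag"))
    print("Link header:        ", r.headers.get("Link"))  # may carry rel="canonical"

    # Directives in the HTML head
    meta_robots = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", rel="canonical")
    amp = soup.find("link", rel="amphtml")
    print("Meta robots:", meta_robots.get("content") if meta_robots else None)
    print("Canonical:  ", canonical.get("href") if canonical else None)
    print("AMP link:   ", amp.get("href") if amp else None)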

A quick aside on rendered DOM

This page has a lot of mentions of the rendered DOM on it (18, if you’re curious). Since we’ve just had our first, here’s a quick recap about what that is.

When you load a webpage, the first request is the HTML. This is what you see in the HTML source (right-click on a webpage and click View Source).

This is before JavaScript has done anything to the page. This didn’t use to be such a big deal, but now so many websites rely heavily on JavaScript that most people quite reasonably won’t trust the initial HTML.

Rendered DOM is the technical term for the page once all the JavaScript has been run and all the page alterations have been made. You can see this in Dev Tools.

In Chrome you can get that by right clicking and hitting inspect element (or Ctrl + Shift + I). The Elements tab will show the DOM as it’s being rendered. When it stops flickering and changing, then you’ve got the rendered DOM!
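
If you'd rather capture both versions programmatically than eyeball Dev Tools, here's one way to do it as a sketch. It assumes you have Selenium plus a local Chrome and chromedriver installed, and the URL is a placeholder.

    # Sketch: fetch the raw HTML (before JavaScript) and the rendered DOM (after
    # JavaScript) for the same URL. Assumes selenium + Chrome/chromedriver.
    import requests
    from selenium import webdriver

    url = "https://www.example.com/some-page/"

    # Raw HTML: what "View Source" shows you.
    raw_html = requests.get(url, timeout=10).text

    # Rendered DOM: what the Elements tab settles on after JS has run.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    driver.get(url)
    # You may need an explicit wait here if the page renders slowly.
    rendered_dom = driver.page_source
    driver.quit()

    # Save both so you can diff them later (see step 5).
    with open("raw.html", "w", encoding="utf-8") as f:
        f.write(raw_html)
    with open("rendered.html", "w", encoding="utf-8") as f:
        f.write(rendered_dom)
    print(f"Raw HTML: {len(raw_html)} chars; rendered DOM: {len(rendered_dom)} chars")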

4. Can Google crawl the page consistently?

To see what Google is seeing, we’re going to need to get log files. At this point, we can check to see how it is accessing the page.

Aside: Working with logs is an entire post in and of itself. I’ve written a guide to log analysis with BigQuery, I’d also really recommend trying out Screaming Frog Log Analyzer, which has done a great job of handling a lot of the complexity around logs.

When we’re looking at crawling, there are three useful checks we can do (a log-parsing sketch of the first check follows this list):

  1. Status codes: Plot the status codes over time. Is Google seeing different status codes than you when you check URLs?
  2. Resources: Is Google downloading all the resources of the page?
    1. Is it downloading all your site-specific JavaScript and CSS files that it would need to generate the page?
  3. Page size follow-up: Take the max and min of all your pages and resources and diff them. If you see a difference, then Google might be failing to fully download all the resources or pages. (Hat tip to @ohgm, where I first heard this neat tip).
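
Here's the log-parsing sketch for the first check: counting the status codes Googlebot received per day. It assumes an Apache/Nginx-style combined log at a placeholder path; real log formats vary, so adjust the regex, and anything claiming to be Googlebot should ideally be verified by IP as well.

    # Sketch: tally status codes served to Googlebot per day from an access log.
    # "access.log" is a placeholder path; adjust the regex to your log format.
    import re
    from collections import Counter, defaultdict

    LOG_PATH = "access.log"
    line_re = re.compile(r'\[(?P<day>[^:]+):[^\]]*\] "(?P<request>[^"]*)" (?P<status>\d{3})')

    per_day = defaultdict(Counter)
    with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "Googlebot" not in line:   # crude filter; verify the IPs separately
                continue
            m = line_re.search(line)
            if m:
                per_day[m.group("day")][m.group("status")] += 1

    for day in sorted(per_day):
        print(day, dict(per_day[day]))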

Have we found any problems yet?

If Google isn’t getting 200s consistently in our log files, but we can access the page fine when we try, then there are clearly still some differences between Googlebot and ourselves. What might those differences be?

  1. It will crawl more than us
  2. It is obviously a bot, rather than a human pretending to be a bot
  3. It will crawl at different times of day

This means that:

  • If our website is doing clever bot blocking, it might be able to differentiate between us and Googlebot.
  • Because Googlebot will put more stress on our web servers, it might behave differently. When websites have a lot of bots or visitors visiting at once, they might take certain actions to help keep the website online. They might turn on more computers to power the website (this is called scaling), they might also attempt to rate-limit users who are requesting lots of pages, or serve reduced versions of pages.
  • Servers run tasks periodically; for example, a listings website might run a daily task at 01:00 to clean up all its old listings, which might affect server performance.

Working out what’s happening with these periodic effects is going to be fiddly; you’re probably going to need to talk to a back-end developer.

Depending on your skill level, you might not know exactly where to lead the discussion. A useful structure for a discussion is often to talk about how a request passes through your technology stack and then look at the edge cases we discussed above.

  • What happens to the servers under heavy load?
  • When do important scheduled tasks happen?

Two useful pieces of information to enter this conversation with:

  1. Depending on the regularity of the problem in the logs, it is often worth trying to re-create the problem by attempting to crawl the website with a crawler at the same speed/intensity that Google is using to see if you can find/cause the same issues. This won’t always be possible depending on the size of the site, but for some sites it will be. Being able to consistently re-create a problem is the best way to get it solved.
  2. If you can’t, however, then try to provide the exact periods of time where Googlebot was seeing the problems. This will give the developer the best chance of tying the issue to other logs to let them debug what was happening.

If Google can crawl the page consistently, then we move onto our next step.

5. Does Google see what I can see on a one-off basis?

We know Google is crawling the page correctly. The next step is to try and work out what Google is seeing on the page. If you’ve got a JavaScript-heavy website, you’ve probably banged your head against this problem before, but even if you don’t, this can still sometimes be an issue.

We follow the same pattern as before. First, we try to re-create it once. The following tools will let us do that:

  • Fetch & Render
    • Shows: Rendered DOM in an image, but only returns the page source HTML for you to read.
  • Mobile-friendly test
    • Shows: Rendered DOM and returns rendered DOM for you to read.
    • Not only does this show you rendered DOM, but it will also track any console errors.

Is there a difference between Fetch & Render, the mobile-friendly testing tool, and Googlebot? Not really, with the exception of timeouts (which is why we have our later steps!). Here’s the full analysis of the difference between them, if you’re interested.

Once we have the output from these, we compare them to what we ordinarily see in our browser. I’d recommend using a tool like Diff Checker to compare the two.

Have we found any problems yet?

If we encounter meaningful differences at this point, then in my experience it’s typically caused by either JavaScript or cookies.

Why?

We can isolate each of these by:

  • Loading the page with no cookies. This can be done simply by loading the page with a fresh incognito session and comparing the rendered DOM here against the rendered DOM in our ordinary browser.
  • Using the mobile testing tool to see the page with Chrome 41 and comparing it against the rendered DOM we normally see with Inspect Element.

Yet again we can compare them using something like Diff Checker, which will allow us to spot any differences. You might want to use an HTML formatter to help line them up better.
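
If you prefer to do the comparison locally, the same diff can be done in a few lines of Python with difflib. The file names below are placeholders for two saved DOM snapshots (for example, one from a fresh incognito session and one from your normal browser), ideally run through an HTML formatter first so the lines align.

    # Sketch: line-by-line diff of two saved DOM snapshots. File names are
    # placeholders; format both files first so equivalent lines match up.
    import difflib

    with open("dom_incognito.html", encoding="utf-8") as a, \
         open("dom_normal.html", encoding="utf-8") as b:
        left, right = a.readlines(), b.readlines()

    for line in difflib.unified_diff(left, right,
                                     fromfile="incognito (no cookies)",
                                     tofile="normal session",
                                     lineterm=""):
        print(line.rstrip())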

We can also see the JavaScript errors thrown using the Mobile-Friendly Testing Tool, which may prove particularly useful if you’re confident in your JavaScript.

If, using this knowledge and these tools, we can recreate the bug, then we have something that can be replicated and it’s easier for us to hand off to a developer as a bug that will get fixed.

If we’re seeing everything is correct here, we move on to the next step.

6. What is Google actually seeing?

It’s possible that what Google is seeing is different from what we recreate using the tools in the previous step. Why? A couple of main reasons:

  • Overloaded servers can have all sorts of strange behaviors. For example, they might be returning 200 codes, but perhaps with a default page.
  • JavaScript is rendered separately from pages being crawled and Googlebot may spend less time rendering JavaScript than a testing tool.
  • There is often a lot of caching in the creation of web pages and this can cause issues.

We’ve gotten this far without talking about time! Pages don’t get crawled instantly, and crawled pages don’t get indexed instantly.

Quick sidebar: What is caching?

Caching is often a problem if you get to this stage. Unlike JS, it’s not talked about as much in our community, so it’s worth some more explanation in case you’re not familiar. Caching is storing something so it’s available more quickly next time.

When you request a webpage, a lot of calculations happen to generate that page. If you then refreshed the page when it was done, it would be incredibly wasteful to just re-run all those same calculations. Instead, servers will often save the output and serve that to you without re-running the calculations. Saving the output is called caching.

Why do we need to know this? Well, we’re already well out into the weeds at this point and so it’s possible that a cache is misconfigured and the wrong information is being returned to users.

There aren’t many good beginner resources on caching which go into more depth. However, I found this article on caching basics to be one of the more friendly ones. It covers some of the basic types of caching quite well.
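
A quick, practical way to get a feel for the caching in front of a page is to request it twice and look at the cache-related response headers. This is only a sketch: the URL is a placeholder, and headers like X-Cache or CF-Cache-Status are CDN-specific, so your stack may expose different ones (or none at all).

    # Sketch: fetch a URL twice and print cache-related headers. Placeholder URL;
    # X-Cache / CF-Cache-Status are CDN-specific and may not exist on your stack.
    import requests

    url = "https://www.example.com/some-page/"
    interesting = ["Cache-Control", "Age", "Expires", "ETag", "Last-Modified",
                   "Vary", "X-Cache", "CF-Cache-Status"]

    for attempt in (1, 2):
        r = requests.get(url, timeout=10)
        print(f"--- request {attempt}: HTTP {r.status_code}")
        for header in interesting:
            if header in r.headers:
                print(f"  {header}: {r.headers[header]}")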

How can we see what Google is actually working with?

  • Google’s cache
    • Shows: Source code
    • While this won’t show you the rendered DOM, it is showing you the raw HTML Googlebot actually saw when visiting the page. You’ll need to check this with JS disabled; otherwise, on opening it, your browser will run all the JS on the cached version.
  • Site searches for specific content
    • Shows: A tiny snippet of rendered content.
    • By searching for a specific phrase on a page, e.g. inurl:example.com/url “only JS rendered text”, you can see if Google has managed to index a specific snippet of content. Of course, it only works for visible text and misses a lot of the content, but it’s better than nothing!
    • Better yet, do the same thing with a rank tracker, to see if it changes over time.
  • Storing the actual rendered DOM
    • Shows: Rendered DOM
    • Alex from DeepCrawl has written about saving the rendered DOM from Googlebot. The TL;DR version: Google will render JS and post to endpoints, so we can get it to submit the JS-rendered version of a page that it sees. We can then save that, examine it, and see what went wrong.

Have we found any problems yet?

Again, once we’ve found the problem, it’s time to go and talk to a developer. The advice for this conversation is identical to the last one — everything I said there still applies.

The other knowledge you should go into this conversation armed with: how Google works and where it can struggle. While your developer will know the technical ins and outs of your website and how it’s built, they might not know much about how Google works. Together, this can help you reach the answer more quickly.

The obvious sources for this are resources or presentations given by Google themselves. Of the various resources that have come out, I’ve found these two to be some of the more useful ones for giving insight into first principles:

But there is often a difference between statements Google will make and what the SEO community sees in practice. All the SEO experiments people tirelessly perform in our industry can also help shed some insight. There are far too many to list here, but here are two good examples:

7. Could Google be aggregating your website across others?

If we’ve reached this point, we’re pretty happy that our website is running smoothly. But not all problems can be solved just on your website; sometimes you’ve got to look to the wider landscape and the SERPs around it.

Most commonly, what I’m looking for here is:

  • Similar/duplicate content to the pages that have the problem.
    • This could be intentional duplicate content (e.g. syndicating content) or unintentional (competitors’ scraping or accidentally indexed sites).

Either way, they’re nearly always found by doing exact searches in Google. I.e. taking a relatively specific piece of content from your page and searching for it in quotes.

Have you found any problems yet?

If you find a number of other exact copies, then it’s possible they might be causing issues.

The best description I’ve come up with for “have you found a problem here?” is: do you think Google is aggregating together similar pages and only showing one? And if it is, is it picking the wrong page?

This doesn’t just have to be on traditional Google search. You might find a version of it on Google Jobs, Google News, etc.

To give an example, if you are a reseller, you might find content isn’t ranking because there’s another, more authoritative reseller who consistently posts the same listings first.

Sometimes you’ll see this consistently and straightaway, while other times the aggregation might be changing over time. In that case, you’ll need a rank tracker for whatever Google property you’re working on to see it.

Jon Earnshaw from Pi Datametrics gave an excellent talk on the latter (around suspicious SERP flux) which is well worth watching.

Once you’ve found the problem, you’ll probably need to experiment to find out how to get around it, but the easiest factors to play with are usually:

  • De-duplication of content
  • Speed of discovery (you can often improve by putting up a 24-hour RSS feed of all the new content that appears)
  • Lowering syndication

8. A roundup of some other likely suspects

If you’ve gotten this far, then we’re sure that:

  • Google can consistently crawl our pages as intended.
  • We’re sending Google consistent signals about the status of our page.
  • Google is consistently rendering our pages as we expect.
  • Google is picking the correct page out of any duplicates that might exist on the web.

And your problem still isn’t solved?

And it is important?

Well, shoot.

Feel free to hire us…?

As much as I’d love for this article to list every SEO problem ever, that’s not really practical. So, to finish off, let’s go through two more common gotchas and principles that didn’t really fit in elsewhere, before getting to the answers to those four problems we listed at the beginning.

Invalid/poorly constructed HTML

You and Googlebot might be seeing the same HTML, but it might be invalid or wrong. Googlebot (and any crawler, for that matter) has to provide workarounds when the HTML specification isn’t followed, and those can sometimes cause strange behavior.

The easiest way to spot it is either by eyeballing the output of the rendered DOM tools or by running the page through an HTML validator.

The W3C validator is very useful, but it will throw up a lot of errors/warnings you won’t care about. The closest I can give to a one-line summary of which ones are useful is:

  • Look for errors
  • Ignore anything to do with attributes (won’t always apply, but is often true).

The classic example of this is breaking the head.

An iframe isn’t allowed in the head code, so Chrome will end the head and start the body. Unfortunately, it takes the title and canonical with it, because they fall after it — so Google can’t read them. The head code should have ended in a different place.

Oliver Mason wrote a good post that explains an even more subtle version of this in breaking the head quietly.
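
If you want to check programmatically where a spec-compliant parser thinks your head actually ends, one option is to parse the markup with html5lib and see which element each critical tag ends up under. This is a sketch: it needs the beautifulsoup4 and html5lib packages, and the sample markup is deliberately broken with an iframe in the head.

    # Sketch: parse markup with a spec-compliant parser (html5lib) and report
    # whether each critical tag landed in <head> or <body>. The sample markup is
    # deliberately broken; swap in real page source to test your own pages.
    from bs4 import BeautifulSoup

    html = """<!DOCTYPE html>
    <html><head>
      <meta charset="utf-8">
      <iframe src="https://ads.example.com/pixel"></iframe>
      <title>My page</title>
      <link rel="canonical" href="https://www.example.com/page/">
      <meta name="robots" content="noindex">
    </head><body><p>Hello</p></body></html>"""

    soup = BeautifulSoup(html, "html5lib")
    for name, attrs in [("title", {}), ("link", {"rel": "canonical"}),
                        ("meta", {"name": "robots"})]:
        tag = soup.find(name, attrs=attrs)
        if tag is None:
            print(f"<{name}> not found")
            continue
        section = "head" if tag.find_parent("head") else "body"
        print(f"<{name}> ended up in <{section}>")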

When in doubt, diff

Never underestimate the power of trying to compare two things line by line with a diff from something like Diff Checker. It won’t apply to everything, but when it does it’s powerful.

For example, if Google has suddenly stopped showing your featured markup, try to diff your page against a historical version either in your QA environment or from the Wayback Machine.


Answers to our original 4 questions

Time to answer those questions. These are all problems we’ve had clients bring to us at Distilled.

1. Why wasn’t Google showing 5-star markup on product pages?

Google was seeing both the server-rendered markup and the client-side-rendered markup; however, the server-rendered side was taking precedence.

Removing the server-rendered markup meant the 5-star markup began appearing.

2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?

The problem came from the references to schema.org.

        <div itemscope="" itemtype="https://schema.org/Movie">
        </div>
        <p>  <h1 itemprop="name">Avatar</h1>
        </p>
        <p>  <span>Director: <span itemprop="director">James Cameron</span> (born August 16, 1954)</span>
        </p>
        <p>  <span itemprop="genre">Science fiction</span>
        </p>
        <p>  <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
        </p>
        <p></div>
        </p>

We diffed our markup against our competitors and the only difference was we’d referenced the HTTPS version of schema.org in our itemtype, which caused Bing to not support it.

C’mon, Bing.

3. Why were pages getting indexed with a no-index tag?

The answer for this was in this post. This was a case of breaking the head.

The developers had installed some ad-tech in the head and inserted a non-standard tag, i.e. not one of:

  • <title>
  • <style>
  • <base>
  • <link>
  • <meta>
  • <script>
  • <noscript>

This caused the head to end prematurely and the no-index tag was left in the body where it wasn’t read.

4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?

This took some time to figure out. The client had an old legacy website with two servers, one for the blog and one for the rest of the site. This issue started occurring shortly after a migration of the blog from a subdomain (blog.client.com) to a subdirectory (client.com/blog/…).

At surface level everything was fine; if a user requested any individual page, it all looked good. A crawl of all the blog URLs to check they’d redirected was fine.

But we noticed a sharp increase of errors being flagged in Search Console, and during a routine site-wide crawl, many pages that were fine when checked manually were causing redirect loops.

We checked using Fetch and Render, but once again, the pages were fine.

Eventually, it turned out that when a non-blog page was requested very quickly after a blog page (which, realistically, only a crawler is fast enough to achieve), the request for the non-blog page would be sent to the blog server.

These would then be caught by a long-forgotten redirect rule, which 302-redirected deleted blog posts (or other duff URLs) to the root. This, in turn, was caught by a blanket HTTP to HTTPS 301 redirect rule, which would be requested from the blog server again, perpetuating the loop.

For example, requesting https://www.client.com/blog/ followed quickly enough by https://www.client.com/category/ would result in:

  • 302 to http://www.client.com – This was the rule that redirected deleted blog posts to the root
  • 301 to https://www.client.com – This was the blanket HTTPS redirect
  • 302 to http://www.client.com – The blog server doesn’t know about the HTTPS non-blog homepage and it redirects back to the HTTP version. Rinse and repeat.

This caused the periodic 302 errors and it meant we could work with their devs to fix the problem.
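
If you ever need to trace a chain like this yourself, a small sketch that follows redirects hop by hop and prints each step can help. The URL is a placeholder, and note that a timing-dependent issue like the one above may only reproduce when requests arrive in quick succession (i.e. from a crawler).

    # Sketch: follow a redirect chain hop by hop, printing each status code and
    # target. Placeholder URL; caps out at 10 hops to catch loops.
    import requests
    from urllib.parse import urljoin

    url = "https://www.example.com/category/"
    for hop in range(10):
        r = requests.get(url, allow_redirects=False, timeout=10)
        print(f"{r.status_code} {url}")
        if r.status_code not in (301, 302, 303, 307, 308):
            break
        url = urljoin(url, r.headers["Location"])
    else:
        print("Stopped after 10 hops -- this looks like a redirect loop.")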

What are the best brainteasers you’ve had?

Let’s hear them, people. What problems have you run into? Let us know in the comments.

Also credit to @RobinLord8, @TomAnthonySEO, @THCapper, @samnemzer, and @sergeystefoglo_ for help with this piece.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

CMWorld Interview: Peter Krmpotic on Optimizing the Content Supply Chain

Content personalization is no longer a dream that marketers have for leveling up engagement with their audience; it’s become essential to winning the content marketing game. Need proof? According to a study from Marketo, 79% of consumers say they are only likely to engage with an offer if it has been personalized. And Salesforce estimates that by 2020, 51% of consumers will expect companies to anticipate their needs and make suggestions before contact.

But how can enterprise brands scale personalization efforts in a way that is efficient and effective?

Peter Krmpotic, Group Product Manager at Adobe, has focused heavily throughout his career on scaling personalization. He also references the content supply chain (a framework for viewing content production, management, and scalability) as a granular way to break down different structural elements and make them more manageable.

Applying personalization to an entire content marketing operation, especially at the enterprise level, might feel overwhelming. But applying it individually to different aspects of the process, piece by piece? This feels more feasible.

Peter will be joining other leading content marketing experts at 2018’s Content Marketing World in Cleveland, OH this September. In anticipation of this awesome event, we sat down with Peter for the first interview in our series leading up to the conference and asked him about his role at Adobe, the importance of content personalization, and the impact of technology on personalization.

What does your role as Group Product Manager at Adobe entail? What are your main areas of focus and key priorities?

At Adobe, I focus on content marketing, digital asset management, and personalization at scale.

Throughout my career, I’ve developed a passion for customers, their use cases and building scalable software for them.

Specifically, my interests include next-generation technologies, evolving organizational structures, and industry best practices.

You’re a big believer in the importance of personalization. Where do you see the biggest opportunities for content marketers to improve in this regard?

First and foremost, personalization is a group effort which cuts across all functions of the content supply chain: strategy, planning, creation, assembly, and delivery.

Establishing and aligning these functions with each other is the first block in a strong foundation.

What we are doing here is leveraging the centuries-old concept of “divide and conquer,” where we break personalization down into manageable stages.

Once everything is in place, the biggest opportunity lies in providing relevant data that is actionable at each of the content supply chain functions.

While we all talk a lot about data-informed and data-driven content marketing, I still see addressing this data gap as the biggest opportunity by far.

Which prevalent pitfalls are preventing content from connecting with its audience, from your view?

We have the people, the data, and the tools to create engaging content at scale, yet we often jumpstart the process of creating content without the required thoughtfulness on the initial critical steps.

It is essential to be clear which audiences we are targeting and subsequently to define clear goals for the message we are creating.

To this day, most brands need to improve at this stage, otherwise the best content marketer in the world cannot create an effective piece of engaging content.

Developing scalable ways to create and personalize content has been a key area of emphasis in your career. How can marketers think differently about scaling for efficiency and impact?

Similar to what I said earlier of “divide and conquer,” break the problem into manageable pieces and thus build a content supply chain.

Then, optimize each piece of the supply chain as opposed to trying to improve the whole thing all at once.

Where do you see the biggest influences of technologies like machine learning and automation in the world of content?

Currently, many mundane tasks, such as gathering and analyzing data or making sure content is optimized for each channel, take up a lot of time and effort in content marketing, preventing us from doing what matters most.

Things that take weeks and months will gradually be performed in the background.

By eliminating these mundane tasks, the human capacity for creativity and intuition will be magnified and reach new levels that were unimaginable before.

Which aspects of marketing SaaS products and services could and should be instilled for pros in other verticals?

Marketing software has received the kind of attention and focus that very few verticals have ever received, and as a result, we now benefit from a variety of software options that is unparalleled. This has led to a lot of AI being developed for marketing first that will be deployed in other verticals later.

A result of this fierce competition is that marketing software tends to be more flexible and user-friendly than software in other verticals, adapting to a multitude of use cases, which has set new standards across all verticals.

Lastly, even though software products in general do not integrate well with one another, marketing software, given its variety and busy ecosystem, has trail-blazed integration best practices, which other verticals will benefit from.

Looking back, is there a particular moment or juncture in your career that you view as transformative? What takeaways could other marketers learn and apply?

Joining Adobe was truly transformative, because it allowed me to engage with customers across the entire breadth and depth of digital marketing, as well as with colleagues across different products and solutions who are truly world-class at what they do.

My recommended takeaway is to look beyond your current scope of work — which is not necessarily easy — and to figure out ways to connect with people who can help you understand adjacent functions and disciplines.

Seeing the entire picture will help you with solving your current challenges in ways that you could not have imagined before.

Which speaker presentations are you looking forward to most at Content Marketing World 2018?

I’m looking forward to quite a few sessions, but here are 5 sessions I am particularly excited about:

  • Joe Pulizzi’s keynote on Tuesday. I am sure I am not the only one interested to hear his take on the industry and where it is headed.
  • Then Gartner’s Heather Pemberton Levy and her workshop on their branded content platform, Smarter With Gartner, which I am a big fan of.
  • Michael Brenner’s workshop on how to create a documented content marketing strategy, which I know a lot of brands struggle with.
  • And then two sessions that talk about leveraging data during content creation: Morgan Molnar and Brad Sanzenbacher on Wednesday, and Katie Pennell on Thursday.

Ready Player One

Big thanks to Peter for his enlightening insights. His final takeaway — “Seeing the entire picture will help you with solving your current challenges in ways that you could not have imagined before” — is at the heart of Content Marketing World, which will bring together a diverse set of voices and perspectives to broaden your view of this exciting yet challenging frontier.

Tap into some of the unique expertise offered by CMWorld speakers by checking out the Ultimate Guide to Conquering Content Marketing below:

 

The post CMWorld Interview: Peter Krmpotic on Optimizing the Content Supply Chain appeared first on Online Marketing Blog – TopRank®.

Online Marketing Blog – TopRank®

Posted in IM News | Comments Off

Strategies And Tactics: Do You Know Why You Are Doing What You Do?

Inside the Blog Profits Blueprint I talk about a key distinction, the difference between strategies and tactics when it comes to online marketing and building a blog-based business. Here’s a relevant quote from the Blueprint: Strategies are in place to educate your mind about why things happen. Strategy helps you…

The post Strategies And Tactics: Do You Know Why You Are Doing What You Do? appeared first on Yaro.blog.

Entrepreneurs-Journey.com by Yaro Starak

Posted in IM News | Comments Off

Beyond the Hype Cycle: It’s Time to Redefine Influencer Marketing


Every marketer should consider getting a tattoo of Gartner’s Hype Cycle, as a reminder to keep us from chasing shiny objects.

The Hype Cycle goes like this:

  1. A new hotness emerges. It could be new technology, a new strategy or tactic, some new thing.
  2. There are wild predictions about how the thing will revolutionize the world.
  3. People scramble to get on board with the thing before they even understand it.
  4. The new thing doesn’t measure up to elevated expectations.
  5. People get disillusioned with the thing and decide it’s worthless.
  6. People actually learn how the thing works, get sophisticated in using it.
  7. The thing turns out to be pretty awesome and is used productively.

Marketers are just as susceptible to the hype machine as anyone else is. More so, even. Think of content marketing: We went from “content is king” to “content shock” in just a few years, and we’re just now hitting the plateau of productivity.

Now it’s influencer marketing’s turn to ride the downhill slope to the trough of disillusionment. It’s inevitable. We started with high expectations, a ton of hype, and a lot of investment before people really knew what worked.

Now the backlash is hitting. The latest Sprout Social Index is particularly sobering. Only 46% of marketers are using influencer marketing. Only 19% said they had the budget for an influencer program. And on the consumer side, people say they’re more likely to take a friend’s recommendation on social media than take an influencer’s word for it.

In other words: The party’s over. Now the real work begins. It’s time to redefine influencer marketing, get more sophisticated, and get productive. Here’s how to get out of the trough:

#1 – Redefining Influence

In the B2C world (and even in the B2B realm), influence and celebrity are often treated as synonyms. Whether it’s Rihanna or Matthew McConaughey or PewDiePie, it’s people who have audiences in the millions. There’s some differentiation for relevancy (this YouTuber does makeup tutorials, that one is a gamer), but it’s mostly a numbers game. It’s paying people with huge followings to throw some attention at your brand.

As Ursula Ringham, Head of Global Influencer Marketing for SAP*, told us in a recent interview on social and influencer marketing:

“People often think that influencer marketing is all about celebrities hawking a product. It’s truly not about that—especially in the B2B realm. It’s about highlighting experts who have real experience on the business challenges a brand’s audience faces.”

To become more sophisticated, you need to rethink what it means to be influential. Sure, a mega-star with a huge following is great — if they are relevant to your specific target audience and if their participation doesn’t break the bank.

However, you can get amazing results working with influencers like:

  • Thought leaders in the industry with a small but prestigious network
  • Experts with radical new ideas who are poised to become thought leaders
  • Subject matter experts within your own company
  • Prospective customers from influential brands you want to work with
  • Employees who will advocate for your brand given direction and material

That last one is crucial. Inspiring your internal influencers can give your content a massive boost in reach — LinkedIn* estimates that the average employee has a network 10x bigger than the brand’s social reach. Sprout says, in the key findings of their report:

“Social marketers in 2018 see the value in employee advocacy as a cost-effective, scalable alternative to influencer marketing.”

I would say “addition” rather than “alternative,” but it’s definitely an undervalued tactic.

Our experience is that a combination of industry and internal influencers can yield the most effective results. SAP Success Factors incorporated industry influencers, internal subject matter experts, partners and clients on a program that exceeded the lead generation goal by 272% with a 66% conversion rate.

The bottom line is, when evaluating influencers, look beyond their follower count. Their industry reputation, group affiliations, and level of engagement are all indicators of influence, too. And don’t forget to include your customers, prospects, and employees in your potential influencer pool.


#2 – Redefining Compensation

The rising cost of influencer marketing is another factor that has led to the trough of disillusionment. The majority of influencer marketing, especially in B2C, has been exclusively transactional. Big brands swept up top-tier influencers, the payments kept getting bigger for smaller results, and eventually the bubble had to burst.

To reach the plateau of productivity, that compensation model must change. At TopRank Marketing, we focus on building relationships with influencers and invite them to co-create with us. While there are instances in which financial compensation is part of the partnership, most often the compensation is the same both for our client and the influencer:

  • A cool, valuable asset to share
  • Cross-promotion to each other’s audiences
  • Boost to thought leadership
  • Access to a community of thought leaders

The relationship model is far more sustainable than a transactional-only approach. Again, if there is an influencer who prefers a transaction, and is of high value to the client, we’re not opposed to financial compensation. But these cases should be the exception, not the norm.

#3 – Redefining Measurement

Proving ROI is a crucial part of making your influencer marketing more sophisticated. Without the ability to show what your influencers have accomplished for the brand, it’s hard to sell management on continued investment.

It all starts with measurable goals and KPIs that hold your influencer marketing to the same standards as every other tactic you use. Tracking performance against those goals is the next step. We all have access to the tools and tech for this kind of measurement. We just need to use them more effectively to show how influencers are effective throughout the entire buyer’s journey.

Right now, marketers tend to focus on top-of-funnel metrics, because they’re easy to measure: social reach, influencer participation, engagements, likes, comments.

You need to get more granular than just those raw engagement numbers. You need to get from engagement to action. When you’re ready to amplify, give each influencer a custom URL to share. Then you can measure which influencers are actually inspiring people to leave social media and check out the asset you’ve created. From there, you can measure how those clicks convert to a lead capture, and track the lead through your pipeline.
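
One lightweight way to do that is to append standard UTM parameters per influencer, so each share is attributable in your analytics. Here's a small sketch; the asset URL, campaign name, and influencer handles are all placeholders, and your analytics platform's own URL builder works just as well.

    # Sketch: build a distinct, trackable URL for each influencer with standard
    # UTM parameters. All names and URLs below are placeholders.
    from urllib.parse import urlencode

    asset_url = "https://www.example.com/big-guide/"
    campaign = "influencer-launch"
    influencers = ["influencer-a", "influencer-b", "influencer-c"]

    for name in influencers:
        params = urlencode({
            "utm_source": name,             # which influencer drove the click
            "utm_medium": "influencer",
            "utm_campaign": campaign,
        })
        print(f"{name}: {asset_url}?{params}")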


Redefining Influencer Marketing

It’s time for influencer marketing to graduate from the Hype Cycle and become a trusted part of your integrated marketing strategy. To get to the plateau of productivity, we must discard what doesn’t work, keep what does, and refine our approach for continued improvement.

It starts with reconsidering just what influence means and who has it. Once you find your true influencers, it’s about developing relationships and building communities, rather than ever-more-expensive transactions. Finally, it requires making your measurement as sophisticated as it is for the rest of your marketing tactics.

We have found that influencer marketing beyond the Hype Cycle is an indispensable part of our marketing mix. The proof is in the pie: Read how our Easy-As-Pie Guide to Content Planning drove a 500% increase in leads for client DivvyHQ.

*Disclosure: SAP and LinkedIn are TopRank Marketing clients.

The post Beyond the Hype Cycle: It’s Time to Redefine Influencer Marketing appeared first on Online Marketing Blog – TopRank®.

Online Marketing Blog – TopRank®

Posted in IM News | Comments Off

Turn Off Bad Pop Music and Turn On Good Marketing Strategy

This week, we got into the sadness of crummy email marketing, the delight of writing productivity, and the puzzle of why anyone ever treated marketing and selling like they were two completely different things. On Monday, Stefanie Flaxman talked about how weak email marketing is even weaker than Nickelback. (Wow.) Apologies in advance for any
Read More…

The post Turn Off Bad Pop Music and Turn On Good Marketing Strategy appeared first on Copyblogger.


Copyblogger

Posted in IM News | Comments Off

Google Clarifies Seven Points On Mobile-First Indexing After Much Confusion

Image credit to Shutterstock

Let me start by saying that all of these points, in my opinion, are things we’ve covered here before, but the interesting thing is that Google felt these seven items are things SEOs who give presentations have gotten confused about over the past several months…


Search Engine Roundtable

Posted in IM News | Comments Off

How to Craft Question Headlines that Don’t Flop

During last week’s Editorial call here at Copyblogger, we had a lively discussion about ham. But that’s not the H-word I’m going to talk about today. More commonly, we analyze headlines. There’s nothing more disappointing than a unique, thoughtful, and helpful piece of content that has a headline that doesn’t do it justice. Great content
Read More…

The post How to Craft Question Headlines that Don’t Flop appeared first on Copyblogger.


Copyblogger

Posted in IM News | Comments Off

Search Buzz Video Recap: Google Algorithm Changes, Bing AMP & JSON-LD, Google & YouTube Spam & Matt Cutts

This week we have a lot to cover: first, an algorithmic change in the Google search results that ran from last weekend throughout this whole week. Bing announced a new AMP viewer coming this summer, and they also announced JSON-LD support in Bing Webmaster Tools…


Search Engine Roundtable

Posted in IM News | Comments Off
