Tag Archive | "Using"

Custom Extraction Using an SEO Crawler for CRO and UX Insights – Whiteboard Friday

Posted by MrLukeCarthy

From e-commerce to listings sites to real estate and myriad verticals beyond, the data you can harness using custom extraction via crawler tools is worth its weight in revenue. With a greater granularity of data at your fingertips, you can uncover CRO and user experience insights that can inform your optimizations and transform your customer experience.

In this episode of Whiteboard Friday, we’re delighted to welcome Luke Carthy to share actionable wisdom from his recent MozCon 2019 presentation, Killer CRO and UX Wins Using an SEO Crawler.

Video Transcription

Hey, Moz. What’s up? Wow, can I just say it’s incredible I’m here in Seattle right now doing a Whiteboard Friday? I can’t wait to share this cool stuff with you. So thanks for joining me.

My name is Luke Carthy. As you can probably tell, I’m from the UK, and I want to talk to you about custom extraction, specifically in the world of e-commerce. However, what I will say is this works beautifully well in many other verticals too, such as real estate and job listings. In fact, on any website that can spit out HTML to a web crawler, you can use custom extraction.

What is custom extraction?

Let’s get started. What is custom extraction? Well, as I just alluded to, when you’re crawling using Screaming Frog, for example, or DeepCrawl or whatever it is you want to use, it allows you to grab specific parts of the HTML and export them to a file: a CSV, an Excel spreadsheet, or whatever you prefer.



As a principle, okay, great, but I’m going to give you some really good examples of how you can leverage that. So e-commerce: right here we’ve got a product page that I’ve beautifully drawn, and everything in red is something that you can potentially extract. Although, as I said, you can extract anything on the page; these are just some good examples.

Product information + page performance

Think about this for a moment. You’re an e-commerce website, you’re a listing site, and of course you have listing pages, you have product pages. Wouldn’t it be great if you could very quickly, at scale, understand all of your products’ pricing, whether you’ve got stock, whether it’s got an image, whether it’s got a description, how many reviews it has, and of the reviews, what’s the aggregate score, whether it’s four stars, five stars, whatever it is?

That’s really powerful because you can then start to understand how well pages perform based upon the information that they have, alongside traffic, conversion, customer feedback, and all sorts of great stuff, all using custom extraction and spitting it out into, say, a CSV or an Excel spreadsheet.
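If you prefer to script this step rather than configure it in a crawler’s UI, here’s a minimal Python sketch of the same idea using requests and BeautifulSoup. The URLs and CSS selectors are placeholders; swap in the selectors that match the product page templates you’re actually crawling.

```python
import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical CSS selectors -- swap in the ones that match the product page
# templates of the site you're crawling (yours or a competitor's).
SELECTORS = {
    "price": ".product-price",
    "stock": ".stock-status",
    "rating": ".review-score",
    "review_count": ".review-count",
}

# Placeholder URLs -- in practice, feed in your crawl list or sitemap.
urls = [
    "https://www.example.com/product/blue-widget",
    "https://www.example.com/product/red-widget",
]

with open("extraction.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url"] + list(SELECTORS))
    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        row = [url]
        for field, selector in SELECTORS.items():
            node = soup.select_one(selector)
            row.append(node.get_text(strip=True) if node else "")
        writer.writerow(row)
```

Point the same selectors at a competitor’s product URLs and you have the raw material for the competitive comparisons discussed next.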

Competitive insights

But where it gets super powerful, and where you get a lot of insight, is when you start to turn the lens to your competitors. You may have three competitors. You may have some aspirational competitors. You may have a site that you don’t necessarily compete with, but you use it on a day-to-day basis or you admire how easy it is to use, and you can go away and crawl that too.

You can fire up a crawl, and there’s no reason why you couldn’t extract that same information from your competitors and see what’s going on: what price your competitors are selling an item at, whether they have it in stock or not, what the reviews are like, what FAQs people have, and whether you can then leverage that in your own content.

Examples of how to glean insights from custom extraction in e-commerce

Example 1: Price increases for products competitors don’t stock

Let me give you a perfect example of how I’ve managed to use this.

I’ve managed to identify that a competitor doesn’t have a specific product in stock, and, as a result, I’ve been able to increase our prices because they couldn’t sell it and we could at that specific time. We could identify the price point and the fact that they didn’t have any stock, and it was awesome. Think about that: really powerful insights at massive scale.

Example 2: Improving facets and filters on category pages

Another example I wanted to talk to you about: category pages, again with incredibly gorgeous illustrations. So for category pages, we have filters, we have a category page, and just to switch things up a little bit I’ve also got a listings page as well, whether that’s, as I said, real estate, jobs, or anything in that environment.

If you think about the competition again for a second, there is no reason why you wouldn’t be able to extract, via custom extraction, the top filters and facets that people like to select. You can then see whether you’re using the same kinds of combinations of features and facets on your site and maybe improve that.

Equally, you can then start to understand which specific features correlate with sales and performance, and really start to improve how your website performs and behaves for your customers. The same thing applies to both environments here.

If you are a listing site and you list jobs or products or classified ads, is it location filters that they have at the top? Is it availability? Is it reviews? Is it scores? You can crawl a number of your competitors across a number of areas, identify whether there’s a pattern, see a theme, and then see whether you can leverage that and better it. That’s a great way in which you can use it.

Example 3: Recommendations, suggestions, and optimization

But on top of that and the one that I am most fascinated with is by far recommendations.

In the MozCon talk I did earlier I had a statistic, and I think I can recall it. It was 35% of what people buy on Amazon comes from recommendations, and 75% of what people watch on Netflix comes from suggestions, from recommendations.

Think about how powerful that is. You can crawl your own site, understand your own recommendations at scale, identify the stock of those recommendations, the price, whether they have images, what order they appear in, and you can start to build a really vivid picture as to what products people associate with your items. You can do that on a global scale. You can crawl the entirety of your product portfolio or your listing portfolio and get that.



But again, back to powerful intelligence about your competitors, especially when you have competitors that might have multivariable facets or multivariable recommendations. What I mean by that is we’ve all seen sites where you’ve got multiple carousels. So you’ve got Recommended for You.

You might have People Also Bought, alternative suggestions. The more different types of recommendations you have, the more data you have, the more intelligence you have, the more insight you have. Going back to say a real estate example, you might be looking at a property here. It’s at this price. What is your main aspirational real estate competitor recommending to you that you may not be aware of?

Then you can think about whether the focus is on location, whether it’s on price, whether it’s on number of bedrooms, etc., and you can start to understand how that works and get some really powerful insights from it.

Custom extraction is all about granular data at scale

To summarize and bring it all to a close, custom extraction is all about great granular data at scale. The really powerful thing about it is you can do all of this yourself, so there’s no need to have meetings, send elaborate emails, or get permission from somebody.

Fire up Screaming Frog, fire up DeepCrawl, fire up whatever kind of crawler you want to use, have a look at custom extraction, and see how you can make your business more efficient, find out how you can get some really cool competitive insights, and yeah, hopefully, fingers crossed that works for you guys. Thank you very much.


Video transcription by Speechpad.com


This is a meaty topic, we know — if you enjoyed this Whiteboard Friday and find yourself eager to know more, you’re in luck! Luke’s full presentation at MozCon 2019 goes even more in-depth into what custom extraction can do for you. Catch his talk along with 26 other forward-thinking topics from our amazing speakers in the MozCon video bundle:

Access the sessions now!

We recommend sharing them with your team and spreading the learning love. Happy watching!



Moz Blog


How To Know Exactly What People Want (And Will Pay For) Using Your Most Powerful Research Tool

When I started teaching people how to build blogs that could potentially make money, I faced a difficult challenge, a big question that begins the entire process of blogging that had to be answered – How can I teach people to pick the right blog topic or niche? It’s the hardest decision to make and […]

The post How To Know Exactly What People Want (And Will Pay For) Using Your Most Powerful Research Tool appeared first on Yaro.Blog.

Entrepreneurs-Journey.com by Yaro Starak


How to Write Content for Answers Using the Inverted Pyramid – Best of Whiteboard Friday

Posted by Dr-Pete

If you’ve been searching for a quick hack to write content for featured snippets, this isn’t the article for you. But if you’re looking for lasting results and a smart tactic to increase your chances of winning a snippet, you’re definitely in the right place.

Borrowed from journalism, the inverted pyramid method of writing can help you craft intentional, compelling, rich content that will help you rank for multiple queries and win more than one snippet at a time. Learn how in this fan-favorite Whiteboard Friday starring the one and only Dr. Pete!

Content for Answers

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey, Moz fans, Dr. Pete here. I’m the Marketing Scientist at Moz and visiting you from not-so-sunny Chicago in the Seattle office. We’ve talked a lot in the last couple years in my blog posts and such about featured snippets.

So these are answers that kind of cross with organic. So it’s an answer box, but you get the attribution and the link. Britney has done some great Whiteboard Fridays, the last couple, about how you do research for featured snippets and how you look for good questions to answer. But I want to talk about something that we don’t cover very much, which is how to write content for answers.

The inverted pyramid style of content writing

It’s tough, because I’m a content marketer and I don’t like to think that there’s a trick to content. I’m afraid to give people the kind of tricks that would have them run off and write lousy, thin content. But there is a technique that works that I think has been very effective for featured snippets for writing for questions and answers. It comes from the world of journalism, which gives me a little more faith in its credibility. So I want to talk to you about that today. That’s called the inverted pyramid.


1. Start with the lead

It looks something like this. When you write a story as a journalist, you start with the lead. You lead with the lead. So if we have a story like “Penguins Rob a Bank,” which would be a strange story, we want to put that right out front. That’s interesting. Penguins rob a bank, that’s all you need to know. The thing about it is, and this was true back in print when we had to buy each newspaper rather than subscribe, but definitely on the web, you have to get people’s attention quickly. You have to draw them in. You have to have that headline.

2. Go into the details

So leading with the lead is all about pulling people in to see if they’re interested and grabbing their attention. With the inverted pyramid, you then get into the smaller pieces. Then you get to the details. You might talk about how many penguins there were, what bank they robbed, and how much money they took.

3. Move to the context

Then you’re going to move to the context. That might be the history of penguin crime in America and penguin ties to the mafia and what does this say about penguin culture and what are we going to do about this. So then it gets into kind of the speculation and the value add that you as an expert might have.

How does this apply to answering questions for SEO?

So how does this apply to answering questions in an SEO context?


Lead with the answer, get into the details and data, then address the sub-questions.

Well, what you can do is lead with the answer. If somebody’s asked you a question, you have that snippet, go straight to the summary of the answer. Tell them what they want to know and then get into the details and get into the data. Add those things that give you credibility and that show your expertise. Then you can talk about context.

But I think what’s interesting with answers, and I’ll talk about this in a minute, is getting into these sub-questions. If you have a very big, broad question, it’s going to divide up into a lot of follow-ups. People who are interested are going to want to know about those follow-ups. So go ahead and answer those.

If I win a featured snippet, will people click on my answer? Should I give everything away?


So I think there’s a fear we have. What if we answer the question and Google puts it in that box? Here’s the question and that’s the query. It shows the answer. Are people going to click? What’s going to happen? Should we be giving everything away? Yes, I think, and there are a couple reasons.

Questions that can be very easily answered should be avoided

First, I want you to be careful. Britney has gotten into some of this. This is a separate topic on its own. You don’t always want to answer questions that can be very easily answered. We’ve already seen that with the Knowledge Graph. Google says something like time and date or a fact about a person, anything that can come from that Knowledge Graph. “How tall was Abraham Lincoln?” That’s answered and done, and they’re already replacing those answers.

Answer how-to questions and questions with rich context instead

So you want to answer the kinds of things, the how-to questions and the why questions that have a rich enough context to get people interested. In those cases, I don’t think you have to be afraid to give that away, and I’m going to tell you why. This is more of a UX perspective. If somebody asks this question and they see that little teaser of your answer and it’s credible, they’re going to click through.

“Giving away” the answer builds your credibility and earns more qualified visitors


So here you’ve got the penguin. He’s flushed with cash. He’s looking for money to spend. We’re not going to worry about the ethics of how he got his money. You don’t know. It’s okay. Then he’s going to click through to your link. You know you have your branding and hopefully it looks professional, Pyramid Inc., and he sees that question again and he sees that answer again.

Giving the searcher a “scent trail” builds trust

If you’re afraid that that’s repetitive, I think the good thing about that is this gives him what we call a scent trail. He can see that, “You know what? Yes, this is the page I meant to click on. This is relevant. I’m in the right place.” Then you get to the details, and then you get to the data and you give this trail of credibility that gives them more to go after and shows your expertise.

People who want an easy answer aren’t the kind of visitors that convert

I think the good thing about that is we’re so afraid to give something away because then somebody might not click. But the kind of people who just wanted that answer and clicked, they’re not the kind of people that are going to convert. They’re not qualified leads. So these people that see this and see it as credible and want to go read more, they’re the qualified leads. They’re the kind of people that are going to give you that money.

So I don’t think we should be afraid of this. Don’t give away the easy answers. I think if you’re in the easy answer business, you’re in trouble right now anyway, to be honest. That’s a tough topic. But give them something that guides them to the path of your answer and gives them more information.

How does this tactic work in the real world?

Thin content isn’t credible.


So I’m going to talk about how that looks in a more real context. My fear is this. Don’t take this and run off and write a bunch of pages that are just a question, a paragraph, and a ton of thin content, answering hundreds and hundreds of questions. I think that can really look thin to Google. So you don’t want pages that are like question, answer, buy my stuff. It doesn’t look credible. You’re not going to convert. Those pages are going to look thin to Google, and you’re going to end up spinning out many, many hundreds of them. I’ve seen people do that.

Use the inverted pyramid to build richer content and lead to your CTA


What I’d like to see you do is craft this kind of question page. This is something that takes a fair amount of time and effort. You have that question. You lead with that answer. You’re at the top of the pyramid. Get into the details. Get into the things that people who are really interested in this would want to know and let them build up to that. Then get into data. If you have original data, if you have something you can contribute that no one else can, that’s great.

Then go ahead and answer those sub-questions, because the people who are really interested in that question will have follow-ups. If you’re the person who can answer that follow-up, that makes for a very, very credible piece of content, and not just something that can rank for this snippet, but something that really is useful for anybody who finds it in any way.

So I think this is great content to have. Then if you want some kind of call to action, like a “Learn More,” that’s contextual, I think this is a page that will attract qualified leads and convert.

Moz’s example: What is a Title Tag?

So I want to give you an example. This is something we’ve used a lot on Moz in the Learning Center. So, obviously, we have the Moz blog, but we also have these permanent pages that answer kind of the big questions that people always have. So we have one on the title tag, obviously a big topic in SEO.


Here’s what this page looks like. So we go right to the question: What is a title tag? We give the answer: A title tag is an HTML element that does this and this and is useful for SEO, etc. Right there in the paragraph. That’s in the featured snippet. That’s okay. If that’s all someone wants to know and they see that Moz answered that, great, no problem.

But naturally, the people who ask that question, they really want to know: What does this do? What’s it good for? How does it help my SEO? How do I write one? So we dug in and we ended up combining three or four pieces of content into one large piece of content, and we get into some pretty rich things. So we have a preview tool that’s been popular. We give a code sample. We show how it might look in HTML. It gives it kind of a visual richness. Then we start to get into these sub-questions. Why are title tags important? How do I write a good title tag?

One page can gain the ability to rank for hundreds of questions and phrases

What’s interesting, because I think sometimes people want to split up all the questions, afraid that they have to have one question per page, is that when I looked the other day, this was ranking in our 40 million keyword set for over 200 phrases, over 200 questions. So it’s ranking for things like “what is a title tag,” but it’s also ranking for things like “how do I write a good title tag.” So you don’t have to be afraid of that. If this is a rich, solid piece of content that people are going to, you’re going to rank for these sub-questions, in many cases, and you’re going to get featured snippets for those as well.

Then, when people have gotten through all of this, we can give them something like, “Hey, Moz has some tools that can help you write richer title tags. We can check your title tags. Why don’t you try a free 30-day trial?” Obviously, we’re experimenting with that, and you don’t want to push too hard, but this becomes a very rich piece of content. We can answer multiple questions, and you actually have multiple opportunities to get featured snippets.

So I think this inverted pyramid technique is legitimate. I think it can help you write good content that’s a win-win. It’s good for SEO. It’s good for your visitors, and it will hopefully help you land some featured snippets.

So I’d love to hear about what kind of questions you’re writing content for, how you can break that up, how you can answer that, and I’d love to discuss that with you. So we’ll see you in the comments. Thank you.

Video transcription by Speechpad.com



Moz Blog


The Data You’re Using to Calculate CTR is Wrong and Here’s Why

Posted by Luca-Bares

Click-through rate (CTR) is an important metric that’s useful for making a lot of calculations about your site’s SEO performance, from estimating revenue opportunity and prioritizing keyword optimization to assessing the impact of SERP changes within the market. Most SEOs know the value of creating custom CTR curves for their sites to make those projections more accurate. The only problem with custom CTR curves built from Google Search Console (GSC) data is that GSC is known to be a flawed tool that can give out inaccurate data. This muddies the data we get from GSC and can make it difficult to accurately interpret the CTR curves we create from it. Fortunately, there are ways to help control for these inaccuracies so you get a much clearer picture of what your data says.

By carefully cleaning your data and thoughtfully implementing an analysis methodology, you can calculate CTR for your site much more accurately using 4 basic steps:

  1. Extract your site’s keyword data from GSC — the more data you can get, the better.
  2. Remove biased keywords — Branded search terms can throw off your CTR curves so they should be removed.
  3. Find the optimal impression level for your data set — Google samples data at low impression levels so it’s important to remove keywords that Google may be inaccurately reporting at these lower levels.
  4. Choose your rank position methodology — No data set is perfect, so you may want to change your rank classification methodology depending on the size of your keyword set.

Let’s take a quick step back

Before getting into the nitty gritty of calculating CTR curves, it’s useful to briefly cover the simplest way to calculate CTR since we’ll still be using this principle. 

To calculate CTR, download the keywords your site ranks for along with click, impression, and position data. Then take the sum of clicks divided by the sum of impressions at each rank level from your GSC data, and you’ll come out with a custom CTR curve. For more detail on actually crunching the numbers for CTR curves, you can check out this article by SEER if you’re not familiar with the process.
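As a rough illustration, here’s a short pandas sketch of that calculation. It assumes you’ve exported your GSC keyword data to a CSV with columns named clicks, impressions, and position; rename them to match your own export.

```python
import pandas as pd

# Assumes a GSC export with one row per keyword and columns named
# "clicks", "impressions", and "position" (rename to match your export).
df = pd.read_csv("gsc_keywords.csv")

df["rounded_position"] = df["position"].round().astype(int)
ctr_curve = (
    df[df["rounded_position"].between(1, 10)]
    .groupby("rounded_position")
    .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
)
ctr_curve["ctr"] = ctr_curve["clicks"] / ctr_curve["impressions"]
print(ctr_curve)
```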

Where this calculation gets tricky is when you start to try to control for the bias that inherently comes with CTR data. However, even though we know GSC gives flawed data, we don’t really have many other options, so the best we can do is eliminate as much bias as possible from our data set and stay aware of the problems that come from using that data.

Without controlling and manipulating the data that comes from GSC, you can get results that seem illogical. For instance, you may find your curves show position 2 and 3 CTRs with wildly larger averages than position 1. If you don’t know that the data you’re using from Search Console is flawed, you might accept it as truth and a) try to come up with hypotheses as to why the CTR curves look that way based on incorrect data, and b) create inaccurate estimates and projections based on those CTR curves.

Step 1: Pull your data

The first part of any analysis is actually pulling the data. This data ultimately comes from GSC, but there are many platforms that you can pull this data from that are better than GSC’s web extraction.

Google Search Console — The easiest place to get the data is GSC itself. You can go into GSC and pull all your keyword data for the last three months, and Google will automatically download a .csv file for you. The downside to this method is that GSC only exports 1,000 keywords at a time, making your data size much too small for analysis. You can try to get around this by using the keyword filter for the head terms you rank for and downloading multiple 1k files, but this process is an arduous one. Besides, the other methods listed below are better and easier.

Google Data Studio — For any non-programmer looking for an easy way to get much more data from Search Console for free, this is definitely your best option. Google Data Studio connects directly to your GSC account data, but there are no limitations on the data size you can pull. For the same three-month period where GSC would give me 1k keywords (its max), Data Studio would give me back 200k keywords!

Google Search Console API — This takes some programming know-how, but one of the best ways to get the data you’re looking for is to connect directly to the source using their API. You’ll have much more control over the data you’re pulling and get a fairly large data set. The main setback here is you need to have the programming knowledge or resources to do so.
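For reference, a minimal sketch of that pull using the official Python client might look like the following, assuming you’ve created a service account and added it as a user on your Search Console property (the key file, site URL, and date range are placeholders).

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumes a service-account key that has been added as a user on the
# Search Console property; the file name and site URL are placeholders.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("webmasters", "v3", credentials=creds)

rows, start_row = [], 0
while True:
    response = service.searchanalytics().query(
        siteUrl="https://www.example.com/",
        body={
            "startDate": "2019-05-01",
            "endDate": "2019-07-31",
            "dimensions": ["query"],
            "rowLimit": 25000,   # API maximum per request
            "startRow": start_row,
        },
    ).execute()
    batch = response.get("rows", [])
    rows.extend(batch)
    if len(batch) < 25000:
        break
    start_row += 25000

print(f"Pulled {len(rows)} keywords")
```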

Keylime SEO Toolbox — If you don’t know how to program but still want access to Google’s impression and click data, then this is a great option to consider. Keylime stores historical Search Console data pulled directly from the Search Console API, so it’s as good an option as (if not better than) connecting to the API directly. It does cost $49/mo, but that’s pretty affordable considering the value of the data you’re getting.

The reason the platform you get your data from matters is that each one gives out a different amount of data. I’ve listed them here in order of how much data each tool provides, from least to most. Using GSC’s UI directly gives by far the least data, while Keylime can connect to GSC and Google Analytics to combine data and actually give you more information than the Search Console API would. This matters because the more data you can get, the more likely the CTR averages you build for your site are to be accurate.

Step 2: Remove keyword bias

Once you’ve pulled the data, you have to clean it. Because this data ultimately comes from Search Console we have to make sure we clean the data as best we can.

Remove branded search & knowledge graph keywords

When you create general CTR curves for non-branded search, it’s important to remove all branded keywords from your data. These keywords tend to have high CTRs, which will throw off the averages of your non-branded searches. In addition, if you’re aware of any SERP features like the knowledge graph that you rank for consistently, you should try to remove those keywords as well, since we’re only calculating CTR for positions 1–10 and SERP feature keywords could throw off your averages.
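Continuing the pandas sketch from earlier, filtering out branded queries can be a simple case-insensitive match against a list of brand terms (the terms below are obviously hypothetical).

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")   # assumes a "query" column

# Hypothetical brand terms -- list every spelling and common misspelling
# of your brand that shows up in the query report.
brand_terms = ["acme", "acme store", "acmestore"]
pattern = "|".join(brand_terms)

# Keep only the queries that do NOT contain any brand term.
non_branded = df[~df["query"].str.contains(pattern, case=False, na=False)]
```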

Step 3: Find the optimal impression level in GSC for your data

The largest bias in Search Console data appears to come from keywords with low search impressions, which is the data we need to try to remove. It’s not surprising that Google doesn’t accurately report low-impression data, since we know that Google doesn’t even include data with very low search volumes in GSC. For some reason, Google drastically over-reports CTR for these low-impression terms. As an example, here’s an impression distribution graph I made with data from GSC for keywords that have only 1 impression, showing the CTR for every position.

If that doesn’t make a lot of sense to you, I’m right there with you. This graph says that a majority of the keywords with only one impression have a 100 percent CTR. It’s extremely unlikely, no matter how good your site’s CTR is, that a majority of one-impression keywords get a 100 percent CTR. This is especially true for keywords that rank below #1. This gives us pretty solid evidence that low-impression data is not to be trusted, and we should limit the number of keywords in our data with low impressions.

Step 3 a): Use normal curves to help calculate CTR

For more evidence of Google giving us biased data, we can look at the distribution of CTR for all the keywords in our data set. Since we’re calculating CTR averages, the data should roughly follow a normal bell curve. In most cases, CTR distributions from GSC are highly skewed to the left with long tails, which again indicates that Google reports very high CTR at low impression volumes.

If we raise the minimum number of impressions for the keyword sets we’re analyzing, we end up getting closer and closer to the center of the graph. Here’s an example: below is the distribution of a site’s CTR in increments of .001.

The graph above shows keywords at a very low impression level, around 25 impressions. The data is mostly concentrated on the right side of the graph, with a smaller concentration on the left, which implies that this site has a very high click-through rate. However, by increasing the impression filter to 5,000 impressions per keyword, the distribution of keywords gets much, much closer to the center.

This graph most likely would never be centered around 50% CTR, because that’d be a very high average CTR to have, so the graph should be skewed to the left. The main issue is that we don’t know by how much, because Google gives us sampled data. The best we can do is guess. But this raises the question: what’s the right impression level to filter my keywords by to get rid of faulty data?

One way to find the right impression level to create CTR curves is to use the above method to get a feel for when your CTR distribution is getting close to a normal distribution. A Normally Distributed set of CTR data has fewer outliers and is less likely to have a high number of misreported pieces of data from Google.
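One quick way to eyeball this is to plot the CTR distribution at a few different minimum-impression cutoffs and watch the histogram move away from the extremes. A rough matplotlib sketch, assuming the same CSV columns as before:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("gsc_keywords.csv")
df["ctr"] = df["clicks"] / df["impressions"]

# Plot the CTR distribution at several minimum-impression cutoffs to see
# where it stops piling up at the 0% and 100% extremes.
for min_impressions in [1, 250, 1000, 5000]:
    subset = df[df["impressions"] >= min_impressions]
    subset["ctr"].plot.hist(
        bins=100, alpha=0.4, density=True,
        label=f">= {min_impressions} impressions",
    )

plt.xlabel("CTR")
plt.legend()
plt.show()
```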

Step 3 b): Finding the best impression level to calculate CTR for your site

Instead of normal curves, you can also create impression tiers to see where there’s less variability in the data you’re analyzing. The less variability in your estimates, the closer you’re getting to an accurate CTR curve.

Tiered CTR tables

Creating tiered CTR tables needs to be done for every site, because GSC’s sampling differs for every site depending on the keywords you rank for. I’ve seen CTR curves vary by as much as 30 percent without the proper controls added to CTR estimates. This step is important because using all of the data points in your CTR calculation can wildly offset your results, while using too few data points gives you too small a sample size to get an accurate idea of what your CTR actually is. The key is to find the happy medium between the two.

In the tiered table above, there’s huge variability from All Impressions to >250 impressions. After that point, though, the change per tier is fairly small. Greater than 750 impressions is the right level for this site because the variability among curves is fairly small as we increase impression levels in the other tiers, and >750 impressions still gives us plenty of keywords at each ranking level of our data set.

When creating tiered CTR curves, it’s important to also count how much data is used to build each data point throughout the tiers. For smaller sites, you may find that you don’t have enough data to reliably calculate CTR curves, but that won’t be apparent from just looking at your tiered curves. So knowing the size of your data at each stage is important when deciding what impression level is the most accurate for your site.
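A tiered table like the one described above can be generated in a few lines once the data is in a data frame, including the keyword count behind each data point so you can see where the sample gets too thin. The impression cutoffs here are just examples.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")
df["rounded_position"] = df["position"].round().astype(int)
df = df[df["rounded_position"].between(1, 10)]

tiers = [0, 250, 500, 750, 1000, 2500, 5000]   # example cutoffs only
tables = {}
for min_imp in tiers:
    grouped = df[df["impressions"] >= min_imp].groupby("rounded_position")
    tables[f">{min_imp}"] = pd.DataFrame({
        "ctr": grouped["clicks"].sum() / grouped["impressions"].sum(),
        "keywords": grouped.size(),   # sample size behind each data point
    })

# One (ctr, keywords) column pair per impression tier.
tiered = pd.concat(tables, axis=1)
print(tiered.round(3))
```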

Step 4: Decide which position methodology to use to analyze your data

Once you’ve figured out the correct impression level to filter your data by, you can start actually calculating CTR curves using impression, click, and position data. The problem with position data is that it’s often inaccurate, so if you have great keyword tracking, it’s far better to use your own tracking numbers than Google’s. Most people can’t track that many keyword positions, though, so it’s necessary to use Google’s position data. That’s certainly possible, but it’s important to be careful with how we use it.

How to use GSC position

One question that may come up when calculating CTR curves using GSC average positions is whether to use rounded positions or exact positions (i.e., only positions from GSC that are exact whole numbers: ranks 1.0 or 2.0 are exact positions, whereas 1.3 or 2.1 are not).

Exact position vs. rounded position

The reasoning behind using exact position is that we want data that’s most likely to have been ranking in position 1 for the time period we’re measuring. Using exact position will give us the best idea of what CTR is at position 1. Exact-rank keywords are more likely to have been ranking in that position for the duration of the time period you pulled keywords from. The problem is that average rank is an average, so there’s no way to know whether a keyword ranked solidly in one place for the full time period or the average just happens to come out to an exact rank.

Fortunately, if we compare exact position CTR vs rounded position CTR, they’re directionally similar in terms of actual CTR estimations with enough data. The problem is that exact position can be volatile when you don’t have enough data. By using rounded positions we get much more data, so it makes sense to use rounded position when not enough data is available for exact position.

The one caveat is for position 1 CTR estimates. For every other position, individual rankings can pull a keyword’s average ranking up or down: if a keyword has an average ranking of 3, it could have ranked #1 and #5 at some point and still average out to 3. However, for #1 ranks, the average can only be pulled down, which means that the CTR for a keyword is always going to be reported lower than reality if you use rounded position.

A rank position hybrid: Adjusted exact position

So if you have enough data, only use exact position for position 1. For smaller sites, you can use adjusted exact position. Since Google gives averages up to two decimal points, one way to get more “exact position” #1s is to include all keywords with an average position below 1.1. I find this gets a couple hundred extra keywords, which makes my data more reliable.
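In pandas terms, the adjusted exact position filter for the #1 spot is a couple of lines; the 1.1 cutoff comes from the discussion above, so treat it as a starting point rather than a rule.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")

# Adjusted exact position for the #1 spot: treat anything with an average
# ranking up to 1.1 as "position 1" to widen the sample slightly.
pos1 = df[df["position"] <= 1.1]
pos1_ctr = pos1["clicks"].sum() / pos1["impressions"].sum()
print(f"Position 1 CTR ({len(pos1)} keywords): {pos1_ctr:.1%}")
```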

And this also shouldn’t pull down our average much at all, since GSC is somewhat inaccurate with how it reports average ranking. At Wayfair, we use STAT as our keyword rank tracking tool, and after comparing GSC average rankings with average rankings from STAT, the rankings near the #1 position are close, but not 100 percent accurate. Once you start going farther down in the rankings, the difference between STAT and GSC becomes larger, so watch how far down in the rankings you go to include more keywords in your data set.

I’ve done this analysis for all the rankings tracked on Wayfair, and I found that the lower the position, the less closely rankings matched between the two tools. So Google isn’t giving great rankings data, but it’s close enough near the #1 position that I’m comfortable using adjusted exact position to increase my data set without worrying about sacrificing data quality, within reason.

Conclusion

GSC is an imperfect tool, but it gives SEOs the best information we have to understand an individual site’s click performance in the SERPs. Since we know that GSC is going to throw us a few curveballs with the data it provides, it’s important to control as many pieces of that data as possible. The main ways to do so are to choose your ideal data extraction source, get rid of low-impression keywords, and use the right rank-rounding methods. If you do all of these things, you’re much more likely to get accurate, consistent CTR curves for your own site.



Moz Blog


Using topic clusters to increase SEO rankings in practice

Topic linking falls under the wider term internal linking. Internal links in SEO point to web pages on the same domain, and they are generally considered to be of less value than external links.

However, topic clusters can be used strategically to significantly improve your site’s performance and increase rankings.

What internal linking is

Internal links help Google identify content on your site. Google’s bots find new content by crawling websites and following links. This means that if you post fresh content and it is not linked from any other page on the web, it won’t be found or ranked.

Google itself confirms that, saying –

“Google must constantly search for new pages and add them to its list of known pages. Some pages are known because Google has already crawled them before. Other pages are discovered when Google follows a link from a known page to a new page.”

How topic clusters work

While internal linking is quite broad, topic linking is narrower. Topic linking is simply linking posts with related themes on your website to one another. A simple way to explain it is to consider Wikipedia. For every article on the online encyclopedia, there are links to many other relevant topics. 

That is one of the reasons Wikipedia consistently ranks not just on the first page, but as the very first search result for many queries.

According to Google,

“The number of internal links pointing to a page is a signal to search engines about the relative importance of that page.” 

This is why the homepage of any website ranks higher than other pages on the site: it receives more links. Therefore, an important strategy is linking to similar topics on your website to increase their value and push up their rankings.

Siloing and topic clusters

According to Alex Bill of ClothingRIC, topic clusters are a group of articles that support a pillar page, with a purposeful linking structure and content format. There are two types of pillar pages: a resource page and a 10x content pillar page, which contain a mix of external and internal links respectively.

Let’s assume that you manage a travel website. You might have pages giving a general overview of different countries. Also, you may have pages talking about different cities. Siloing comes in where each page about a country contains links to different pages about cities in that country. 

Even further, you may link the city pages to “places to visit” within each city page. On and on like that, that’s how it works. You are basically organizing your ecosystem. Think of your website as a web. 

Using links for siloing improves your site in the following ways:

  • Easier search navigation for site users
  • Easier crawling by Google bot
  • Strategic value distribution

Siloing makes navigation around your site easier for visitors. Instead of having to search for items on their own, they have internal links to guide them. That makes users spend more time on your site than they normally would.

In addition, value is rightly distributed across the pages on the website. I mentioned above that the homepage has a higher rank than other pages, and one of the reasons is that it receives more links. What happens is that value is distributed equally from the homepage to each linked page.

Organizing topics with siloing

By running an internal linking campaign using siloing of topics, NinjaOutreach was able to boost their site traffic by 50% within three months. Using the necessary tools, they sorted out all their posts (about 300) into tiers one, two, and three. Afterward, the pages were linked to one another by their values. 

To implement the siloing approach, consider the whole website as a pyramid with multiple steps. The homepage is the first tier, sitting at the very top, then each link from there falls to the second tier and each link from the pages on the second tier falls to the third and so on. 

The link value is passed from the top down, which means pages on the lowest rung will have the smallest value. The main point is that siloing, when done right, can be used to push your most important pages further up the pyramid so that they gain more value, rank higher, and eventually attract more traffic.
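If you want to see where your pages currently sit in that pyramid, one option is to compute each page’s click depth from the homepage with a breadth-first search over your internal-link graph. Here’s a small Python sketch, assuming you already have an internal-link edge list exported from a crawler; the URLs are made up.

```python
from collections import deque

# Internal-link edge list (source page -> linked page), e.g. exported from a
# crawler. These URLs are purely illustrative.
edges = [
    ("/", "/france/"),
    ("/", "/italy/"),
    ("/france/", "/france/paris/"),
    ("/france/paris/", "/france/paris/places-to-visit/"),
]

graph = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)

# Breadth-first search from the homepage: a page's depth is its tier.
tiers = {"/": 0}
queue = deque(["/"])
while queue:
    page = queue.popleft()
    for linked in graph.get(page, []):
        if linked not in tiers:
            tiers[linked] = tiers[page] + 1
            queue.append(linked)

for page, tier in sorted(tiers.items(), key=lambda item: item[1]):
    print(f"tier {tier}: {page}")
```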

Here is what you need to do

  • Determine which articles/posts should be regarded as “tier one”. Typically, these are the posts that bring in the most conversions and traffic. Using Google Analytics or any other analytics tool will help you identify such pages. New articles that need to gain recognition may fall into this category too. 
  • Those pages classified as tier one should have links to them directly from the homepage. That guarantees maximum value. You may also include some in the page footer. Make sure you maximize every space available. 
  • Tier two pages are the ones next in value to the tier one pages. Add links to tier two pages from the tier one pages. You may follow the one-link-per-100-words rule. Then link to tier three pages from tier two pages. 
  • While linking, be careful to make the anchor texts and links as natural as possible. That is, they should fit their immediate context. Google’s bots are really smart and throwing keywords and backlinks indiscriminately might earn you a penalty. 
  • In case you are unable to find a suitable way to add links within the post itself, a smart trick is to create a “related articles” (or whatever you call it) section. Then add a couple of relevant links to that section. 

Conclusion

Topic linking is a smart way to organize your site and strategically position web pages to attract more traffic. Certainly, implementing this using siloing will not result in instant improvements, but like NinjaOutreach, you may begin to notice slight changes after a month of doing so. If it is not already, topic linking is an important method to include in your SEO strategy.

Pius Boachie is the founder of DigitiMatic, an inbound marketing agency.

The post Using topic clusters to increase SEO rankings in practice appeared first on Search Engine Watch.

Search Engine Watch


How to Automate Pagespeed Insights For Multiple URLs using Google Sheets

Posted by James_McNulty

Calculating individual page speed performance metrics can help you to understand how efficiently your site is running as a whole. Since Google uses the speed of a site (frequently measured by and referred to as PageSpeed) as one of the signals used by its algorithm to rank pages, it’s important to have that insight down to the page level.

One of the pain points in website performance optimization, however, is the lack of ability to easily run page speed performance evaluations en masse. There are plenty of great tools like PageSpeed Insights or the Lighthouse Chrome plugin that can help you understand more about the performance of an individual page, but these tools are not readily configured to help you gather insights for multiple URLs — and running individual reports for hundreds or even thousands of pages isn’t exactly feasible or efficient.

In September 2018, I set out to find a way to gather sitewide performance metrics and ended up with a working solution. While this method resolved my initial problem, the setup process is rather complex and requires that you have access to a server.

Ultimately, it just wasn’t an efficient method. Furthermore, it was nearly impossible to easily share with others (especially those outside of UpBuild).

In November 2018, two months after I published this method, Google released version 5 of the PageSpeed Insights API. V5 now uses Lighthouse as its analysis engine and also incorporates field data provided by the Chrome User Experience Report (CrUX). In short, this version of the API now easily provides all of the data that is provided in the Chrome Lighthouse audits.

So I went back to the drawing board, and I’m happy to announce that there is now an easier, automated method to produce Lighthouse reports en masse using Google Sheets and Pagespeed Insights API v5.
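The sheet itself handles all of this with Google Apps Script, but the underlying PSI v5 request is easy to reproduce in any language if you ever want to script it outside of Sheets. Here’s a rough Python sketch against the public v5 endpoint; note that the exact Lighthouse audit IDs can vary between versions, so check one raw response and adjust the list as needed.

```python
import requests

API_KEY = "YOUR_API_KEY"   # your free PageSpeed Insights API key
ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Audit IDs can vary slightly between Lighthouse versions -- inspect one raw
# response and adjust this list if anything comes back missing.
AUDITS = ["interactive", "first-contentful-paint", "speed-index"]

urls = ["https://www.example.com/", "https://www.example.com/blog/"]

for url in urls:
    resp = requests.get(
        ENDPOINT,
        params={"url": url, "key": API_KEY, "strategy": "mobile"},
        timeout=60,
    )
    audits = resp.json()["lighthouseResult"]["audits"]
    metrics = {a: audits[a]["displayValue"] for a in AUDITS if a in audits}
    print(url, metrics)
```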

Introducing the Automated PageSpeed Insights Report:

With this tool, we are able to quickly uncover key performance metrics for multiple URLs with just a couple of clicks.

All you’ll need is a copy of this Google Sheet, a free Google API key, and a list of URLs you want data for — but first, let’s take a quick tour.

How to use this tool

The Google Sheet consists of the three following tabs:

  • Settings
  • Results
  • Log

Settings

On this tab, you will be required to provide a unique Google API key in order to make the sheet work.

Getting a Google API Key

  1. Visit the Google API Credentials page.
  2. Choose the API key option from the ‘Create credentials’ dropdown (as shown):
  3. You should now see a prompt providing you with a unique API key:
  4. Next, simply copy and paste that API key into the section shown below, found on the “Settings” tab of the Automated Pagespeed Insights spreadsheet.

Now that you have an API key, you are ready to use the tool.

Setting the report schedule

On the Settings tab, you can schedule which day and time that the report should start running each week. As you can see from this screenshot below, we have set this report to begin every Wednesday at 8:00 am. This will be set to the local time as defined by your Google account.

As you can see, this setting also assigns the report to run for the following three hours on the same day. This is a workaround to the limitations set by both Google Apps Script and the Google PageSpeed API.

Limitations

Our Google Sheet uses a Google Apps Script to run all the magic behind the scenes. Each time the report runs, Google Apps Script sets a six-minute execution time limit (thirty minutes for G Suite Business / Enterprise / Education and Early Access users).

In six minutes you should be able to extract PageSpeed Insights for around 30 URLs.

Then you’ll be met with the following message:

In order to continue running the function for the rest of the URLs, we simply need to schedule the report to run again. That is why this setting will run the report again three more times in the consecutive hours, picking up exactly where it left off.

The next hurdle is the limitation set by Google Sheets itself.

If you’re doing the math, you’ll see that since we can only automate the report a total of four times, we’ll theoretically only be able to pull PageSpeed Insights data for around 120 URLs. That’s not ideal if you’re working with a site that has more than a few hundred pages!

The schedule function in the Settings tab uses the Google Sheet’s built-in Triggers feature. This tells our Google Apps script to run the report automatically at a particular day and time. Unfortunately, using this feature more than four times causes the “Service using too much computer time for one day” message.

This means that our Google Apps Script has exceeded the total allowable execution time for one day. It most commonly occurs for scripts that run on a trigger, which have a lower daily limit than scripts executed manually.

Manually?

You betcha! If you have more than 120 URLs that you want to pull data for, then you can simply use the Manual Push Report button. It does exactly what you think.

Manual Push Report

Once clicked, the ‘Manual Push Report’ button (linked from the PageSpeed Menu on the Google Sheet) will run the report. It will pick up right where it left off with data populating in the fields adjacent to your URLs in the Results tab.

For clarity, you don’t even need to schedule the report to run to use this document. Once you have your API key, all you need to do is add your URLs to the Results tab (starting in cell B6) and click ‘Manual Push Report’.

You will, of course, be met with the inevitable “Exceed maximum execution time” message after six minutes, but you can simply dismiss it, and click “Manual Push Report” again and again until you’re finished. It’s not fully automated, but it should allow you to gather the data you need relatively quickly.

Setting the log schedule

Another feature in the Settings tab is the Log Results function.

This will automatically take the data that has populated in the Results tab and move it to the Log sheet. Once it has copied over the results, it will automatically clear the populated data from the Results tab so that when the next scheduled report run time arrives, it can gather new data accordingly. Ideally, you would want to set the Log day and time after the scheduled report has run to ensure that it has time to capture and log all of the data.

You can also manually push data to the Log sheet using the ‘Manual Push Log’ button in the menu.

How to confirm and adjust the report and log schedules

Once you’re happy with the scheduling for the report and the log, be sure to set it using the ‘Set Report and Log Schedule’ from the PageSpeed Menu (as shown):

Should you want to change the frequency, I’d recommend first setting the report and log schedule using the sheet.

Then adjust the runLog and runTool functions using Google Script Triggers.

  • runLog controls when the data will be sent to the LOG sheet.
  • runTool controls when the API runs for each URL.

Simply click the pencil icon next to each respective function and adjust the timings as you see fit.

You can also use the ‘Reset Schedule’ button in the PageSpeed Menu (next to Help) to clear all scheduled triggers. This can be a helpful shortcut if you’re simply using the interface on the ‘Settings’ tab.

PageSpeed results tab

This tab is where the PageSpeed Insights data will be generated for each URL you provide. All you need to do is add a list of URLs starting from cell B6. You can either wait for your scheduled report time to arrive or use the ‘Manual Push Report’ button.

You should now see the following data generating for each respective URL:

  • Time to Interactive
  • First Contentful Paint
  • First Meaningful Paint
  • Time to First Byte
  • Speed Index

You will also see a column for Last Time Report Ran and Status on this tab. This will tell you when the data was gathered, and if the pull request was successful. A successful API request will show a status of “complete” in the Status column.

Log tab

Logging the data is a useful way to keep a historical account of these important speed metrics. There is nothing to modify in this tab; however, you will want to ensure that there are plenty of empty rows. When the runLog function runs (which is controlled by the Log schedule you assign in the “Settings” tab, or via the Manual Push Log button in the menu), it will move all of the rows from the Results tab that contain a status of “complete”. If there are no empty rows available on the Log tab, it will simply not copy over any of the data. All you need to do is add several thousand rows, depending on how often you plan to check in and maintain the Log.

How to use the log data

The scheduling feature in this tool has been designed to run on a weekly basis to allow you enough time to review the results, optimize, then monitor your efforts. If you love spreadsheets then you can stop right here, but if you’re more of a visual person, then read on.

Visualizing the results in Google Data Studio

You can also use this Log sheet as a Data Source in Google Data Studio to visualize your results. As long as the Log sheet stays connected as a source, the results should automatically publish each week. This will allow you to work on performance optimization and evaluate results using Data Studio easily, as well as communicate performance issues and progress to clients who might not love spreadsheets as much as you do.

Blend your log data with other data sources

One great Google Data Studio feature is the ability to blend data. This allows you to compare and analyze data from multiple sources, as long as they have a common key. For example, if you wanted to blend the Time to Interactive results against Google Search Console data for those same URLs, you can easily do so. You will notice that the column in the Log tab containing the URLs is titled “Landing Page”. This is the same naming convention that Search Console uses and will allow Data Studio to connect the two sources.
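If you ever want to do the same blend outside of Data Studio, a quick pandas merge on that shared key works too. This is only a sketch: the file names and the “Landing Page” column header are assumptions based on the sheet layout described above, so adjust them to whatever your exports actually contain.

```python
import pandas as pd

# File names are placeholders; "Landing Page" is the shared key described above.
speed = pd.read_csv("pagespeed_log.csv")        # exported Log tab
gsc = pd.read_csv("search_console_pages.csv")   # page-level Search Console export

# Keep only URLs that appear in both data sets.
blended = speed.merge(gsc, on="Landing Page", how="inner")
print(blended.head())
```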

There are several ways that you can use this data in Google Data Studio.

Compare your competitors’ performance

You don’t need to limit yourself to just your own URLs in this tool; you can use any set of URLs. This would be a great way to compare your competitors’ pages and even see if there are any clear indicators of speed affecting positions in Search results.

Improve usability

Don’t immediately assume that your content is the problem. Your visitors may not be leaving the page because they don’t find the content useful; it could be slow load times or other incompatibility issues that are driving visitors away. Compare bounce rates, time on site, and device type data alongside performance metrics to see if it could be a factor.

Increase organic visibility

Compare your performance data against Search ranking positions for your target keywords. Use a tool to gather your page positions, and fix performance issues for landing pages on page two or three of Google Search results to see if you can increase their prominence.

Final thoughts

This tool is all yours.

Make a copy and use it as is, or tear apart the Google Apps Script that makes this thing work and adapt it into something bigger and better (if you do, please let me know; I want to hear all about it).

Remember PageSpeed Insights API V5 now includes all of the same data that is provided in the Chrome Lighthouse audits, which means there are way more available details you can extract beyond the five metrics that this tool generates.

Hopefully, for now, this tool helps you gather Performance data a little more efficiently between now and when Google releases their recently announced Speed report for Search Console.



Moz Blog


How to improve SEO using data science

Gone are the days when a single tweak in the content or the title tag was able to get your site to the top of the search results. 

Google’s algorithm is now much harder to crack than before. Besides, 75 percent of online users do not scroll past the first page of the search engine results.

As you can imagine, this makes the SEO space highly competitive right now and companies can no longer rely on basic techniques.

However, you can always make sure that the odds are in your favor by using data science.

What is data science?

A combination of various tools, algorithms, and machine learning principles designed to unveil hidden patterns using the raw data is referred to as data science.

Data science is making its impact across every domain. As cited by Maryville University, around 1.7 megabytes of data will be generated every second for everyone on the planet by the end of 2020.

Why do you need it?

Data science provides valuable insights into a website’s performance, and these insights can help you improve your SEO campaigns.

Data science is used to make predictions about upcoming trends and customer behavior using analytics and machine learning. For example, Netflix uses insights from data science to produce original series that drive user interest.

Apart from identifying opportunities, data science also handles large volumes of data and helps you make better decisions. Businesses can easily gauge the effectiveness of a marketing campaign with the help of data science.

How does data science help SEO?

Data science helps you make concrete decisions by letting you:

  • Visualize which combinations have the potential to make the biggest impact
  • Create marketing campaigns aligned with the needs of your audience
  • Understand buyers’ preferences and identify pain points
  • Identify referral sources of converting traffic
  • Verify loading time, indexing, bounce rate, response errors, and redirects
  • Verify the most and least crawled URLs
  • Identify pages that crawlers aren’t supposed to index
  • Identify sources of unusual traffic

How do you apply data science to your SEO data?

Here’s how to apply data science to your SEO campaigns:

1. Select your data sources

Understand that the quality of your data sources directly impacts your data insights. You need the right tools to track important metrics precisely. Among the top tools that can help you gather the right data and make better decisions are Google Analytics, SEMrush, and Ahrefs.

2. Think “ecosystem” instead of “data” and “tools” 

Do not rely on a single solution if your SEO is complex and integrates with other digital marketing areas like content marketing, CX management, CRO, and sales. The “data science” approach to SEO is about integrating methods, tools, and practices in a way that draws deep and accurate insights from the cumulative data mix. Consider the SEMrush console we discussed above. The traffic stats it presents work on the assumption that all traffic is genuine. What if there are bad bots at play? It makes a lot of sense to bring a traffic-quality-checking tool into the mix, something like what Finteza does.

Example of using Finteza to improve SEO using data science

Source: Finteza

It offers advanced bot-detection technology, along with a whole suite of conversion funnel optimization modules, to help you not only make more sense of your data but also put those insights into action and drive your business KPIs.

3. Align SEO with marketing initiatives 

Backing your SEO with other marketing initiatives makes it stronger. Collaborate with sales, developers, UX designers, and customer support teams to optimize for all search ranking factors.

Use data science to determine a universal set of SEO best practices each team can follow to achieve your goal. Try tracking the evolving relationships between independent and dependent variables to get a better idea of what actions are important to your business. To fully understand how your SEO affects other channels, capture and analyze data from:

  • Top conversion paths
  • Conversions and assisted conversions

Gain a clear understanding of your customers’ journeys to establish a stronger alignment between various marketing activities and attribute the outcomes to separate campaigns easily.

4. Visualize with data science

Find it hard to digest numbers piled onto a spreadsheet? Taking a hierarchical approach to your data can cause you to miss important insights hidden between the lines. Data visualizations, on the other hand, let you:

  • Compare and contrast
  • Process large data volumes at scale
  • Accelerate knowledge discovery
  • Reveal hidden questions
  • Spot common patterns and trends

Test it out yourself. Leverage data science during a technical SEO audit to get insights about your site’s health and performance. Use that data to learn more about your page authority, rankings, number of outbound/inbound links per page, and other factors. On its own, however, that data won’t tell you why some pages perform better in the search results while others lag behind. Visualizing the site’s internal link structure and scoring the authority of individual pages on a scale of one to ten (much as Google does) lets you see where to improve and take proactive measures.
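
As a rough sketch of that last idea, the internal link graph from any crawler’s inlinks export can be scored with a PageRank-style calculation and rescaled to a one-to-ten range; the CSV name and its Source/Destination columns below are assumptions:

```python
import pandas as pd
import networkx as nx

# Hypothetical crawl export with one row per internal link: Source, Destination.
links = pd.read_csv("internal_links.csv")

graph = nx.DiGraph()
graph.add_edges_from(links[["Source", "Destination"]].itertuples(index=False, name=None))

# PageRank-style authority for every internal URL, rescaled to a 1-10 range.
scores = nx.pagerank(graph, alpha=0.85)
low, high = min(scores.values()), max(scores.values())
scaled = {url: 1 + 9 * (score - low) / ((high - low) or 1) for url, score in scores.items()}

# Surface the weakest pages: strong candidates for more internal links.
for url, score in sorted(scaled.items(), key=lambda item: item[1])[:20]:
    print(f"{score:4.1f}  {url}")
```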

On-page SEO optimization is just a single example of how SEO experts combine visualizations with data science to provide better results to clients. Make your SEO data more actionable with visualizations.

5. Get help from A/B testing

LinkedIn carried out an experiment using its XLNT platform, focused on a redesign of the “Premium Subscription” payment flow. The LinkedIn UX team reduced the number of payment checkout pages and added an FAQ. The results were impressive: millions of dollars in additional annual bookings, a 30% reduction in refund orders, and a 10% increase in free trial subscriptions.

Concluding remarks

Data science focuses on eliminating guesswork from SEO. Rather than presuming what works and how a specific action affects your goals, use data science to learn what’s actually bringing you the desired results and to quantify your success. Brands like Airbnb are already doing it, and so can you.

The post How to improve SEO using data science appeared first on Search Engine Watch.

Search Engine Watch


FAQ, HowTo, and Q&A: Using New Schema Types to Create Interactive Rich Results

Posted by LilyRayNYC

Structured data (Schema markup) is a powerful tool SEOs can use to efficiently deliver the most important information on our webpages to search engines. When applied effectively across all relevant entities, Schema markup provides significant opportunities to improve a website’s SEO performance by helping search engines to better understand its content.

While Schema.org is continuously expanding and refining its documentation, Google updates its list of supported features that are eligible to be displayed as rich organic results far less frequently. When they happen, these updates are exciting because they give marketers new ways to affect how their organic listings appear in Google’s search results. To make things even more interesting, some of this year’s new Schema types offer the unique opportunity for marketers to use Schema to drive clicks to more than one page on their site through just one organic listing.

Three new Schema types worth focusing on are FAQ, HowTo, and Q&A Schema, all of which present great opportunities to improve organic search traffic with eye-catching, real estate-grabbing listing features. By strategically implementing these Schema types across eligible page content, marketers can dramatically increase their pages’ visibility in the search results for targeted keywords — especially on mobile devices.

Pro tip: When rolling out new Schema, use the Rich Results Testing Tool to see how your Schema can appear in Google’s search results. Google Search Console also offers reporting on FAQ, HowTo, and Q&A Schema along with other Schema types in its Rich Results Status Report.

FAQ Schema

According to Google, FAQ Schema can be used on any page that contains a list of questions and answers on any particular topic. That means FAQ Schema doesn’t have to be reserved only for company FAQ pages; you can create a “frequently asked questions” resource on any topic and use the Schema to indicate that the content is structured as an FAQ.

FAQ Schema is a particularly exciting new Schema type due to how much real estate it can capture in the organic listings. Marking up your FAQ content can create rich results that absolutely dominate the SERP, with the potential to take up a huge amount of vertical space compared to other listings. See the below example on mobile:

Like all Schema, FAQ markup must be a 100 percent match to the content displayed on the page; marking up content that differs from what users actually see can result in a manual action. Google also requires that content marked up with FAQ Schema is not used for advertising purposes.

Impacts on click-through rate

There is some risk involved with implementing this Schema: if the content is too informational in nature, users may get the answers they need entirely within the search results. This is exactly what happened when we first rolled out FAQ Schema for one of our clients at Path Interactive — impressions to the page surged, but clicks fell just as quickly.

This conundrum led us to discover the single most exciting feature of FAQ Schema: Google supports links and other HTML within the answers. Look for opportunities within your FAQ answers to link to other relevant pages on your site, and you can use FAQ Schema to drive organic users to more than one page on your website. This is a great way to use informational content to drive users to your product or service pages.

Note that this tactic should be done within reason: The links to other pages should actually provide value to the user, and they must also be added to the page content so the Schema code is a 100 percent match with the content on the page. Check out my other detailed article on implementing FAQ Schema, which includes recommendations around tagging links in FAQ answers so you can monitor how the links are performing, and for distinguishing clicks to the FAQ links from your other organic listings.
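
For reference, here is a minimal sketch of FAQ markup with a link inside an answer, generated with Python purely to keep the examples in one language; the question, answer, and URL are invented, and the same text must appear verbatim on the page:

```python
import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does delivery take?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Google allows limited HTML in answers, including links: a
                # chance to send searchers to a second page on your site.
                "text": (
                    "Standard delivery takes 3-5 working days. "
                    'See our <a href="https://www.example.com/delivery">delivery options</a> '
                    "for express and international shipping."
                ),
            },
        }
    ],
}

# Drop the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_markup, indent=2))
```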

HowTo Schema

HowTo Schema is another new Schema type that can be used to enhance articles containing instructions on “how to” do something. Like FAQ Schema, Google lays out certain content requirements about what can and can’t be marked up with HowTo Schema, including:

  • Not marking up offensive, violent or explicit content
  • The entire content of each “step” must be marked up
  • Not using HowTo markup to advertise a product
  • Including relevant images, as well as materials and tools used to complete the task
  • HowTo should not be used for Recipes, which have their own Schema

Unfortunately, unlike FAQ Schema, the text included within each HowTo step is not linkable. However, the individual steps themselves can become links to an anchor on your page that corresponds to each step in the process, if you include anchored links and images in your HowTo markup.
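
Here is a minimal sketch of that pattern, with an anchored URL and image on each step; the page, step names, and image URLs are invented for illustration:

```python
import json

howto_markup = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to change a bicycle tire",
    "step": [
        {
            "@type": "HowToStep",
            "name": "Remove the wheel",
            "text": "Release the brake and open the quick-release lever to take the wheel off.",
            # The anchored URL lets the rich result link this step directly to
            # the matching section of your page (and report on it separately).
            "url": "https://www.example.com/change-a-tire#step-1",
            "image": "https://www.example.com/images/step-1.jpg",
        },
        {
            "@type": "HowToStep",
            "name": "Fit the new tube",
            "text": "Partially inflate the new tube and seat it inside the tire.",
            "url": "https://www.example.com/change-a-tire#step-2",
            "image": "https://www.example.com/images/step-2.jpg",
        },
    ],
}

print(json.dumps(howto_markup, indent=2))
```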

HowTo has two visual layouts:

Image source: https://developers.google.com/search/docs/data-types/how-to

One layout includes image thumbnails for each step in the process. With this layout, users can click on each step and be taken directly to that step on your page. Anchored (#) links also appear separately in Google Search Console, so you can track impressions and clicks to each step in your HowTo process.

Image source: https://developers.google.com/search/docs/data-types/how-to

The second HowTo layout uses accordions to display the steps.

One added benefit of HowTo Schema is its voice search potential: properly marked up HowTo content is eligible to be read aloud by Google Assistant devices. When voice searchers ask their Google Assistants for help with a task that is best answered with a “how to” guide, content marked up with HowTo Schema will be more likely to be read aloud as the answer.

Like FAQ Schema, HowTo markup presents pros and cons for marketers. Given that the rich result takes up so much space in the SERP, it’s a great way to make your listing stand out compared to competing results. However, if users can get all the information they need from your marked-up content within the search results, it may result in fewer clicks going to your website, which coincides with Google’s rise in no-click searches.

In rolling out HowTo markup, it’s important to monitor the impact the Schema has on your impressions, clicks, and rankings for the page, to make sure the Schema is producing positive results for your business. For publishers whose sites rely on ad revenue, the potential loss in click-through-rate might not be worth the enhanced appearance of HowTo markup in the search results.

Does HowTo markup earn featured snippets for “how to” queries?

Given that virtually every “how to” query generates a Featured Snippet result, I wanted to see whether there was any correlation between implementing HowTo Schema and earning Featured Snippets. I analyzed 420 URLs currently ranking in Featured Snippets for common “how to” queries, and only 3 of these pages are currently using HowTo markup. While this Schema type is still relatively new, it doesn’t appear that using HowTo markup is a prerequisite for earning the Featured Snippet for “how to” queries.

Q&A Schema

Q&A Schema is another new Schema type used for pages that contain a question and a way for users to submit answers to that question. The Q&A Schema should be applied only on pages that have one question as the main focus on the page — not a variety of different questions. In its documentation, Google also distinguishes between Q&A and FAQ markup: If users are not able to add their own answers to the question, FAQ markup should be used instead.

Q&A Schema is great for forums or other online message boards where users can ask a question and the community can submit answers, such as the Moz Q&A Forum.

Google strongly recommends that Q&A Schema include a URL that links directly to each individual answer to improve user experience. As with HowTo Schema, this can be done using anchor (#) links, which can then be monitored individually in Google Search Console.

Image source: https://developers.google.com/search/docs/data-types/qapage
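
A minimal sketch of Q&A markup with anchored answer URLs, again generated with Python for consistency (all content here is invented for illustration):

```python
import json

qa_markup = {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
        "@type": "Question",
        "name": "How do I reset my router?",
        "text": "My router keeps dropping the connection. What's the proper way to reset it?",
        "answerCount": 2,
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Hold the reset button for 10 seconds, then wait for the lights to stabilise.",
            "upvoteCount": 12,
            # Anchor link straight to this answer on the page.
            "url": "https://www.example.com/forum/router-reset#answer-1",
        },
        "suggestedAnswer": [
            {
                "@type": "Answer",
                "text": "Unplugging it for 30 seconds is usually enough for connection drops.",
                "upvoteCount": 4,
                "url": "https://www.example.com/forum/router-reset#answer-2",
            }
        ],
    },
}

print(json.dumps(qa_markup, indent=2))
```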

Blending Schema types

Another exciting new development with these new Schema types is the opportunity to blend multiple types of Schema that generate rich results on the same page. FAQ Schema in particular works as a great supplement to other Schema types, such as Product or Professional Service, which can generate stars, review counts, or other attributes in the SERP. Below is an example of how these combined Schema types can look on mobile:

If it makes sense for your content, it may be worth testing adding FAQ or HowTo markup to pages that already have other Schema types that generate rich results. It’s possible that Google will display multiple rich result types at once for certain queries, or it could change the rich appearance of your listing depending on the query. This could potentially lead to a big increase in the click-through-rate given how much space these mixed results take up in the SERP.
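
As a sketch of what that blending can look like in practice, a single product page might carry both Product and FAQPage markup side by side; the product, rating, and question below are invented:

```python
import json

product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example trail running shoe",
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "212"},
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do these shoes run true to size?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Most customers find they run true to size; check the size guide if you're between sizes.",
            },
        }
    ],
}

# Each object sits in its own <script type="application/ld+json"> tag on the
# same page; Google decides which rich result (if any) to display per query.
for block in (product_markup, faq_markup):
    print(json.dumps(block, indent=2))
```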

Note: There is no guarantee Google will always display blended Schema types the way it currently does for websites that have already implemented them. Google is always changing how it displays rich results, so it’s important to test this on your own pages and see what Google chooses to display.

Risks involved with implementing Schema

It would be irresponsible to write about using Schema without including a warning about the potential risks involved. For one, Google maintains specific criteria about how Schema should be used, and misusing the markup (whether intentionally or not) can result in a structured data manual action. A common way this occurs is when the JSON-LD code includes information that is not visible for users on the page.

Secondly, it can be tempting to implement Schema markup without thoroughly thinking through its impact on the page’s click-through rate. It is possible for Schema markup to create such a positive user experience within the SERP that it actually causes a decline in click-through rate and less traffic to your site (as users get all the information they need within the search results). These considerations require marketers to think strategically about whether and how to implement Schema, ensuring they not only comply with Google’s guidelines but also use Schema in a way that provides meaningful results for their websites.

Lastly, it is possible that Google will update its quality guidelines around how rich results are displayed if they find that these new Schema types are leading to spam or low-quality results.

Avoid misusing Schema, or it’s possible Google might take away these fantastic opportunities to enhance our organic listings in the future.



Moz Blog


Using STAT: How to Uncover Additional Value in Your Keyword Data

Posted by TheMozTeam

Changing SERP features and near-daily Google updates mean that single keyword strategies are no longer viable. Brands have a lot to keep tabs on if they want to stay visible and keep that coveted top spot on the SERP.

That’s why we asked Laura Hampton, Head of Marketing at Impression, to share some of the ways her award-winning team leverages STAT to surface all kinds of insights and make informed decisions.

Snag her expert tips on how to uncover additional value in your keyword data — including how Impression’s web team uses STAT’s API to improve client reporting, how to spot quick wins with dynamic tags, and what new projects they have up their sleeves. Take it away, Laura!

Spotting quick wins 

We all remember the traditional CTR chart. It suggests that websites ranking in position one on the SERPs can expect roughly 30 percent of the clicks available, with position two getting around 12 percent, position three seeing six percent, and so on (disclaimer: these may not be the actual numbers but, let’s face it, this formula is way outdated at this point anyway).

Today, the SERP landscape has changed, so we know that the chances of any of the above-suggested numbers being correct are minimal — especially when you consider the influence of elements like featured snippets on click-through rates.

But the practical reality remains that if you can improve your ranking position, it’s highly likely you’ll get at least some uplift in traffic for that term. This is where STAT’s dynamic tags can really help. Dynamic tags are a special kind of tag that automatically populates keywords based on changeable filter criteria.

We like to set up dynamic tags based on ranking position. We use this to flag keywords which are sitting just outside of the top three, top five, or top 10 positions. Layer into this some form of traffic benchmark, and you can easily uncover keywords with decent traffic potential that just need an extra bit of work to tip them into a better position.
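
The same quick-win filter is easy to prototype against any keyword export outside of STAT. A rough sketch, assuming a CSV with Keyword, Rank, and Search Volume columns (the column names and thresholds are assumptions, not STAT’s own):

```python
import pandas as pd

keywords = pd.read_csv("keyword_rankings.csv")  # e.g. Keyword, Rank, Search Volume

# "Quick wins": keywords sitting just outside the top positions, with enough
# demand to make a small ranking improvement worthwhile.
quick_wins = keywords[
    keywords["Rank"].between(4, 10) & (keywords["Search Volume"] >= 500)
].sort_values("Search Volume", ascending=False)

print(quick_wins[["Keyword", "Rank", "Search Volume"]].head(20))
```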

Chasing position zero with featured snippets and PAAs 

There’s been a lot of chat in our industry about the growing prevalence of SERP features like featured snippets and “People also ask” (PAA) boxes. In fact, STAT has been instrumental in leading much of the research into the influence of these two SERP features on brand visibility and CTRs.

If your strategy includes a hunt for the coveted position zero, you’re in luck. We like to use STAT’s dynamic tagging feature to monitor the keywords that result in featured snippets. This way, we can track keywords where our client owns the snippet and where they don’t. We can also highlight new opportunities to create optimized content and attempt to capture the spot from their competitors.

This also really helps guide our overall content strategy, since STAT is able to provide quick feedback on the type of content (and, therefore, the assumed intent) that will perform best amongst a keyword set.

Making use of data views 

Data views are one of the most fundamental elements of STAT. They are tools that allow you to organize your data in ways that are meaningful to you. Holding multiple keyword segments (tags) and producing aggregate metrics, they make it possible for us to dissect keyword information and then implement strategically driven decisions.

For us at Impression, data views are essential. They reflect the tactical aspirations of the client. While you could create a single templated dashboard for all your clients with the same data views, our strategists will often set up data views that mirror the way each client and account work.

Even if we’re not yet actively working on a keyword set, we usually create data views to enable us to quickly spot opportunities and report back on the strategic progression.

Here are just some of the data views we’ve grouped our keyword segments into:

The conversion funnel

Segmenting keywords into the stages of the conversion funnel is a fairly common strategy for search marketers — it makes it possible to focus in on and prioritize higher intent queries and then extrapolate out.

Many of our data views are set up to monitor keywords tagged as “conversion,” “education,” and “awareness.”

Client goals

Because we believe successful search marketing is only possible when it integrates with wider business goals, we like to spend time getting to know our clients’ audiences, as well as their specific niches and characteristics.

This way, we can split our keywords into those which reflect the segments that our clients wish to target. For example, in some cases, this is based on sectors, such as our telecommunications client who targets audiences in finance, marketing, IT, and general business. In others, it’s based on locations, in which case we’ll leverage STAT’s location capabilities to track the visibility of our clients to different locales.

Services and/or categories

For those clients who sell online — whether it’s products or services — data views are a great way to track their visibility within each service area or product category.

Our own dashboard (for Impression) uses this approach to split out our service-based keywords, so our data view is marked “Services” and the tags we track within are “SEO,” “PPC,” “web,” and so on. For one of our fashion clients, the data view relates to product categories, where the tracked tags include “footwear,” “accessories,” and “dresses.”

At-a-glance health monitoring

A relatively new feature in STAT allows us to see the performance of tags compared to one another: the Tags tab.

Because we use data views and tags a lot, this has been a neat addition for us. The ability to quickly view those tags and how the keywords within are progressing is immensely valuable.

Let’s use an example from above. For Impression’s own keyword set, one data view contains tags that represent different service offerings. When we click on that data view and choose “Tags” in the tabbed options, we can see how well each service area is performing in terms of its visibility online.

This means we can get very quick strategic insights that say our ranking positions for SEO are consistently pretty awesome, while those around CRO (which we are arguably less well known for) tend to fluctuate more. We can also make a quick comparison between them thanks to the layout of the tab.

Identifying keyword cannibalization risk through duplicate landing pages 

While we certainly don’t subscribe to any notion of a content cannibalization penalty per se, we do believe that having multiple landing pages for one keyword or keyword set is problematic.

That’s where STAT can help. We simply filter the keywords table to show a given landing page and we’re able to track instances where it’s ranking for multiple keywords.

By exporting that information, we can then compare the best and worst ranking URLs. We can also highlight where the ranking URL for a single keyword has changed, signaling internal conflict and, therefore, an opportunity to streamline and improve.
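
Outside of STAT, the same check can be prototyped against a ranking export. A rough sketch, assuming a CSV with Keyword, Ranking URL, and Rank columns (the column names are assumptions):

```python
import pandas as pd

rankings = pd.read_csv("ranking_export.csv")  # e.g. Keyword, Ranking URL, Rank

# Keywords for which more than one URL ranks: potential cannibalization.
url_counts = rankings.groupby("Keyword")["Ranking URL"].nunique()
at_risk = rankings[rankings["Keyword"].isin(url_counts[url_counts > 1].index)]

# For each conflicted keyword, show the best and worst ranking URL.
summary = (
    at_risk.sort_values("Rank")
    .groupby("Keyword")
    .agg(
        best_url=("Ranking URL", "first"), best_rank=("Rank", "first"),
        worst_url=("Ranking URL", "last"), worst_rank=("Rank", "last"),
    )
)
print(summary)
```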

Monitoring the competitive landscape 

No search strategy is complete without an understanding of the wider search landscape. Specifically, this means keeping track of your and/or your client’s rankings when compared to others ranking around them.

We like to use STAT’s Competitive Landscape tab to view this information for a specific data view, or across the whole account. In particular, the Share of Voice: Current Leaders board tells us very quickly who we’re up against for a keyword set.

This leads to insights such as the competitiveness of the keyword set, which makes it easier to set client expectations. It also surfaces the relevance of the keywords tracked: if the share of voice is going to brands other than your own, it may indicate that the keywords you’re targeting aren’t that relevant to your audience.

You can also take a look at the Share of Voice: Top 10 Trending to see where competitors are increasing or decreasing their visibility. This can be indicative of changes on the SERPs for that industry, or in the industry as a whole.

Creating a custom connector for GDS 

Reporting is a fundamental part of agency life. Our clients appreciate formalized insights into campaign progression (on top of regular communications throughout the month, of course) and one of our main challenges in growing our agency lies in identifying the best way to display reports.

We’ll be honest here: There was a point where we had started to invest in building our own platform, with all sorts of aspirations of bespoke builds and highly branded experiences that could tie into a plethora of other UX considerations for our clients.

But at the same time, we’re also big believers that there’s no point in trying to reinvent the wheel if an appropriate solution already exists. So, we decided to use Google Data Studio (GDS) as it was released in Beta and moved onto the platform in 2017.

Of course, ranking data — while we’d all like to reserve it for internal insight to drive bigger goals — is always of interest to clients. At the time, the STAT API was publicly available, but there was no way to pull data into GDS.

That’s why we decided to put some of our own time into creating a GDS connector for STAT. Through this connector, we’re able to pull in live data to our GDS reports, which can be easily shared with our clients. It was a relatively straightforward process and, because GDS caches the data for a short amount of time, it doesn’t hammer the STAT API for every request.

Though our clients do have access to STAT (made possible through their granular user permissions), the GDS integration is a simpler way for them to see top-level stats at a glance.

We’re in the process of building pipelines through BigQuery to feed into this and facilitate date-specific tracking in GDS too — keep an eye out for more info and get access to the STAT GDS connector here.

Want more? 

Ready to learn how to get cracking and tracking some more? Reach out to our rad team and request a demo to get your very own tailored walkthrough of STAT. 

If you’re attending MozCon this year, you can see the ins and outs of STAT in person — grab your ticket before they’re all gone! 



Moz Blog


How McDonald’s Is Using Data, Machine Learning, and AI to Accelerate Growth

“Our acquisition of Dynamic Yield has brought us a lot of excitement,” says McDonald’s CEO Steve Easterbrook. “Very simply put, in the online world when we’re shopping and we pick an item and put it into our shopping basket, any website will automatically suggest two or three things to go along with it. We’re the first business that we’re aware of that can bring that into the physical world. It’s really just taking data and machine learning and AI, all these sorts of technical capabilities.”

Steve Easterbrook, CEO of McDonald’s, discusses how the company is using technology to elevate the customer experience and accelerate growth in an interview on CNBC:

Continue To See How We Can Elevate the Customer Experience

As we’ve executed the growth plan we’ve spent the first two years, three or four years ago, turning the business around. Now we’ve had a couple of years of growth. We’re confident now that we’re beginning to identify further opportunities to further accelerate growth. That takes a little bit of research and development cost. It means you’ve got to bring some expertise into the business to help us do that. We’re still managing to effectively run the business. G&A is staying the same and we’re putting a little bit more into innovation.

We continue to see how we can help elevate the experience for customers. With this pace of change in the world and with different technology and different innovations, whether it's around food, technology, or design, we're seeing opportunities that we think can either make the experience more fun and enjoyable or smoother for customers. If we find that, we're going to go hard at it.

We need to continue growing. If where we are investing that money is helping drive growth across 38,000 restaurants then I think the shareholders and investors would be satisfied. We want to bring our owner-operators along with us as well. They’re investing their hard-earned dollars so that always means we got a business case. The owner-operators will want to see a return on their investment just the same as a shareholder would. We’ve got a wonderful check and balance in the system to help us make sure we spend that innovative money in the right way.

Using Data, Machine Learning, and AI to Accelerate Growth

Our acquisition of Dynamic Yield has brought us a lot of excitement. It was our first acquisition for 20 years. It was an acquisition in a way that was different from the past. It wasn’t looking at different restaurant businesses to try and expand our footprint. It’s bringing a capability, an IP and some talent, into our business that can help us accelerate the growth model. We completed the deal mid-April and within two weeks we had that technical capability in 800 drive-throughs here in the U.S. It’s a very rapid execution and implementation.

Very simply put, in the online world when we’re shopping and we pick an item and put it into our shopping basket, any website we’re on these days will automatically suggest two or three things to go along with it. People who buy that tend to like these things as well. We’re the first business that we’re aware of that can bring that into the physical world. As customers are at the menu board, maybe they’re ordering a coffee and we can suggest a dessert or they’re ordering a quarter pounder with cheese and we can suggest making that into a meal. It’s really just taking data and machine learning and AI, all these sorts of technical capabilities.

Mining All of the Data Will Improve the Business

The best benefit for customers is we’re more likely to suggest things they do want and less likely to suggest things they don’t. It’ll just be a nicer experience for the customer. But yes, for the restaurant itself, because we can put our drive-thru service lines in there, for example, the technical capability by mining all of the data will be to suggest items are easier to make at our busier times. That’ll help smooth the operation as well. The immediate result will be some ticket (increases). But frankly, if the overall experience is better customers come back more often. That’s ultimately where the success will be, driving repeat visits and getting people back more often.

Across the entire sector, traffic is tight right now and people are eating out less. They have been progressively eating out less for a number of years. Whether it’s the advent of home delivery, for example, which is something we participate in, but at the moment it’s just a little bit tight out there. It’s a fight for market share. Anyone who is getting growth, typically it’s because they’re adding new units. People are finding it hard to (increase) guest count growth. It’s something that we have stated as an ambition of ours. We think that’s a measure of the true health of the business. Last quarter, we did grow traffic and we’ve grown traffic for the last couple of years, but only modestly. We want to be stronger than that.


The post How McDonald’s Is Using Data, Machine Learning, and AI to Accelerate Growth appeared first on WebProNews.


WebProNews

