
How to Make Effective, High-Quality Marketing Reports & Dashboards

Posted by Dom-Woodman

My current obsession has been reporting. Everyone could benefit from paying more attention to it. Five years, countless ciders, and too many conferences into my career, I finally spent some time on it.

Bad reporting soaks up just as much time as pointless meetings. Analysts spend hours creating reports that no one will read, or building dashboards that never get looked at. Bad reporting means people either focus on the wrong goals, or they pick the right goals but choose the wrong way to measure them. Either way, you end up in the same place.

So I thought I’d share what I’ve learned.

We’re going to work through: the goal of a report versus a dashboard, who the data is for, how to create a good dashboard, how to create a good report, and how to create effective graphs.

(We’ll lean on SEO examples — we’re on Moz! — however, for those non-SEO folks, the principles are the same.)

What is the goal of a report versus a dashboard?

Dashboards

Dashboards should:

  • Measure a goal(s) over time
  • Be easily digestible at a glance

The action you take off a dashboard should be:

  • Let’s go look into this.

Example questions a dashboard would answer:

  • How are we performing organically?
  • How fast does our site load?

Reports

Reports should:

  • Help you make a decision

The action you take off a report should be:

  • Making a decision

Example questions a report would answer:

  • Are our product changes hurting organic search?
  • What are the biggest elements slowing our website?

Who is this data for?

This context will inform many of our decisions. We care about our audience, because they all know and care about very different things.

A C-level executive doesn’t care about keyword cannibalization, but probably does care about the overall performance of marketing. An SEO manager, on the other hand, probably does care about the number of pages indexed and keyword cannibalization, but is less bothered by the overall performance of marketing.

Don’t mix audience levels

If the report is for audiences at obviously different decision-making levels, you’re almost always going to end up creating something that won’t fulfill the goals we talked about above. Split up your reporting into individual reports/dashboards for each audience, or it will be left ignored and unloved.

Find out what your audience cares about

How do you know what your audience will care about? Ask them. As a rough guide, you can assume people typically care about:

  • The goals that their jobs depend on. If your SEO manager is being paid because the business wants to rank for ten specific keywords, then they’re unlikely to care about much else.
  • Budget or people they have control over.

But seriously. Ask them what they care about.

Educating your audience

Asking them is particularly important, because you don’t just need to understand your audience — you may also need to educate them. To contradict my earlier example, there are in fact CEOs who will care about specific keywords.

The problem is, they shouldn’t. And if you can’t convince them to stop caring about that metric, their incentives will be wrong and succeeding in search will be harder. So ask. Persuading them to stop using the wrong metrics is, of course, another article in and of itself.

Get agreement now

To continue that point, now is also the time to get initial agreement that these dashboards/reports will be what’s used to measure performance.

That way, when they email you three months in asking how you’re doing for keyword x, you’re covered.

How to create a good dashboard

Picking a sensible goal for your dashboard

The question you’re answering with a dashboard is usually quite simple. It’s often some version of:

  • Are we being successful at x?

…where x is a general goal, not a metric. The difference here is that a goal is the end result (e.g. a fast website), and the metric (e.g. time to start render) is the way of measuring progress against that.

How to choose good metrics for dashboards

This is the hard part. We’re defining our goal by the metrics we choose to measure it by.

A good metric is typically a direct measure of success. It should ideally have no caveats that are outside your control.

No caveats? Ask yourself how you would explain it if the number went down. If the excuses that immediately come to mind point to things outside your control, then you should try to refine the metric. (Don’t worry, there’s an example in the next section.)

We also need to be aware of the incentives a metric creates for how people behave.

Unlike a report, which will be used to help us make a decision, a dashboard is showing the goals we care about. It’s a subtle distinction, but an important one. A report will help you make a single decision. A dashboard and the KPIs it shows will define the decisions and reports you create and the ideas people have. It will set incentives and change how the people working off it behave. Choose carefully. Avinash has my back here; go read his excellent article on choosing KPIs.

You need to bear both of these in mind when choosing metrics. You typically want only one or two metrics per goal to avoid being overwhelming.

Example: Building the spec for our dashboard

Goal: Measure the success of organic performance

Who is it for: SEO manager

The goal we’re measuring and the target audience are sane, so now we need to pick a metric.

We’ll start with a common metric that I often hear suggested and we’ll iterate on it until we’re happy. Our starting place is:

  1. Metric: Search/SEO visibility
    1. “Our search visibility has dropped”: This could be because we were ranking for vanity terms like Facebook and we lost that ranking. Our traffic would be fine, but our visibility would be down. *Not a good metric.
  2. Metric: Organic sessions over time
    1. “Our organic sessions have dropped”: This could easily be because of seasonality. We always see a drop in the summer holidays. *Okay, also not a good metric.
  3. Metric: Organic sessions with smoothed seasonality
    1. Aside: See a good example of this here.
    2. “Our organic sessions with smoothed seasonality have dropped”: What if the industry is in a downturn? *We’re getting somewhere here. But let’s just see…
  4. Metric: Organic sessions with smoothed seasonality and adjusted for industry
    1. “Our organic sessions with smoothed seasonality and adjusted for industry have dropped”: *Now we’ve got a metric that’s getting quite robust. If this number drops, we’re going to care about it.

You might have to compromise your metric depending on resources. What we’ve just talked through is an ideal. Adjusting for industry, for example, is typically quite hard; you might have to settle for showing Google trends for some popular terms on a second graph, or showing Hitwise industry data on another graph.
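To make "smoothed seasonality" a little more concrete, here is a minimal Python/pandas sketch of two common approaches: comparing year on year (the "really basic version" used in the mocked-up dashboard later in this post) and applying a trailing moving average. This is my own illustration rather than anything from the original dashboards, and the CSV file and column names are hypothetical.

    import pandas as pd

    # Hypothetical export: one row per day with an "organic_sessions" column.
    df = pd.read_csv("organic_sessions.csv", parse_dates=["date"], index_col="date")

    # Approach 1: year-on-year comparison of monthly totals.
    monthly = df["organic_sessions"].resample("M").sum()
    yoy_change = monthly.pct_change(periods=12)  # vs. the same month last year

    # Approach 2: smooth out day-to-day and weekly noise with a trailing
    # 28-day moving average.
    smoothed = df["organic_sessions"].rolling(window=28, min_periods=28).mean()

    print(yoy_change.tail(3))
    print(smoothed.tail(3))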

Watch out if you find yourself adding more than one or two additional metrics. When you get to three or four, information becomes difficult to parse at a glance.

What about incentives? The metric we settled on will incentivize our team to get more traffic, but it doesn’t have any quality control.

We could succeed at our goal by aiming for low-quality traffic, which doesn’t convert or care about our brand. We should consider adding a second metric, perhaps revenue attributed to search with linear attribution, smoothed seasonality, and a 90-day lookback. Or alternatively, organic non-bounce sessions with smoothed seasonality (using adjusted bounce rate).

Both those metrics sound like a bit of a mouthful. That’s because they’ve gone through a process similar to what we talked about above. We might’ve started with revenue attributed to search before, then got more specific and ended up with revenue attributed to search with linear attribution, smoothed seasonality and a 90-day lookback.

Remember, a dashboard shouldn’t try to explain why performance was bad (based on things in your control). A dashboard’s job is to track a goal over time and indicate whether or not further investigation is needed.

Laying out and styling dashboards

The goal here is to convey our information as quickly and easily as possible. It should be eyeball-able.

Creating a good dashboard layout:

  • It should all fit on a single screen (i.e. don’t scroll on the standard screen that will show the results)
  • People typically read from the top and left. Work out the importance of each graph to the question you’re answering and order them accordingly.
  • The question a graph is answering should sit near it (usually above it)
  • Your design should keep the focus on the content. Simplify: keep styles and colors unified, where possible.

Here’s a really basic example I mocked up for this post, based on the section above:

  • We picked two crucial summary metrics for organic traffic:
    1. Organic sessions with smoothed seasonality
      • In this case we’ve done a really basic version of “adjusting” for seasonality by just showing year on year!
    2. Revenue attributed to organic sessions
  • We’ve kept the colors clean and unified.
  • We’ve got clean labels and, based on imaginary discussions, we’ve decided to put organic sessions above attributed revenue.

(The sharp-eyed amongst you may notice a small bug. The dates in the x-axis are misaligned by 1 day; this was due to some temporary constraints on my end. Don’t repeat this in your actual report!)

How to create a good report

Picking a sensible decision for your report

A report needs to be able to help us make a decision. Picking the goal for a dashboard is typically quite simple. Choosing the decision our report is helping us make is usually a little more fraught. Most importantly, we need to decide:

  • Is there a decision to be made or are we knowledge-gathering for its own sake?

If you don’t have a decision in mind, if you’re just creating a report to dig into things, then you’re wasting time. Don’t make a report.

If the decision is to prioritize next month, then you could have an investigative report designed to help you prioritize. But the goal of the report isn’t to dig in — it’s to help you make a decision. This is primarily a frame of mind, but I think it’s a crucial one.

Once we’ve settled on the decision, we then:

  • Make a list of all the data that might be relevant to this decision
  • Work down the list and ask the following question for each factor:
    1. What are the odds this piece of information causes me to change my mind?
    2. Could this information be better segmented or grouped to improve?
    3. How long will it take me to add this information to the report?
    4. Is this information for ruling something out or helping me weigh a decision?

Example: Creating a spec for a report

Here’s an example decision a client suggested to me recently:

  • Decision: Do we need to change our focus based on our weekly organic traffic fluctuations?
  • Who’s it for: SEO manager
  • Website: A large e-commerce site

Are we happy with this decision? In this case, I wasn’t. Experience has taught me that SEO very rarely runs week to week; one thing our SEO split-testing platform has taught us time and time again is that even obvious improvements can take three to four weeks to result in a significant traffic change.

  • New decision: Do we need to change our focus based on our monthly organic traffic fluctuations?

Great — we’re now happy with our decision, so let’s start listing possible factors. For the sake of brevity, I’m only going to include three here:

  • Individual keyword rankings
  • Individual keyword clicks
  • Number of indexed pages

1. Individual keyword rankings

  • What are the odds this piece of information causes me to change my mind?
    • As individual keyword rankings? Pretty low. This is a large website and individual keyword fluctuations aren’t much use; it will take too long to look through and I’ll probably end up ignoring it.
  • Could this information be better segmented or grouped to improve?
    • Yes, absolutely. If we were to group this by page type or topic level, it becomes far more interesting. Knowing my traffic has dropped for only one topic would make me want to push more resources at it to try and bring us back to parity. We would ideally also want to see the difference in rank with and without Google features.
  • How long will it take me to add this information to the report?
    • There are plenty of rank trackers with this data. It might take some integration time, but the data exists.
  • Is this information for ruling something out or helping me weigh a decision?
    • We’re just generically looking at performance here, so this is helping me weigh up my decision.

Conclusion: Yes, we should include keyword rankings, but they need to be grouped and ideally also show rank both with and without Google features. We’ll also want to avoid averaging rank, so we don’t lose the subtlety of how our keywords are moving relative to each other. An example graph from STAT illustrates this well.
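As a rough illustration of the "group, don't average" point (my own sketch, not from the report itself; the export format and column names are hypothetical), you could bucket a rank-tracker export by topic and look at the distribution of positions within each group rather than a single average rank:

    import pandas as pd

    # Hypothetical rank-tracker export: one row per keyword with its topic and
    # current ranking position.
    ranks = pd.read_csv("keyword_ranks.csv")  # columns: keyword, topic, rank

    # Count how many keywords sit in each ranking band per topic, instead of
    # collapsing everything into one average.
    bands = pd.cut(ranks["rank"], bins=[0, 3, 10, 20, 100],
                   labels=["top 3", "4-10", "11-20", "21+"])
    distribution = (ranks.assign(band=bands)
                         .groupby(["topic", "band"])["keyword"]
                         .count()
                         .unstack(fill_value=0))
    print(distribution)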

2. Individual keyword clicks

  • What are the odds this piece of information causes me to change my mind?
    • Low. Particularly because it won’t compensate for seasonality, I would definitely find myself relying more on rank here.
  • Could this information be better segmented or grouped to improve?
    • Again yes, same as above. It would almost certainly need to be grouped.
  • How long will it take me to add this information to the report?
    • This will have to come from Search Console. There will be some integration time again, but the data exists.
  • Is this information for ruling something out or helping me weigh a decision?
    • Again, we’re just generically looking at performance here, so this is helping me weigh up my decision.

Conclusion: I would probably say no. We’re only looking at organic performance here and clicks will be subject to seasonality and industry trends that aren’t related to our organic performance. There are certainly click metrics that will be useful that we haven’t gone over in these examples — this just isn’t one of them.

3. Number of indexed pages

  • What are the odds this piece of information causes me to change my mind?
    • Low, although sharp jumps would definitely be cause for further investigation.
  • Could this information be better segmented or grouped to improve?
    • It could sometimes be broken down into individual sections, using Search Console folders.
  • How long will it take me to add this information to the report?
    • This will have to come from Search Console. It doesn’t exist in the API, however, and will be a hassle to add or will have to be done manually.
  • Is this information for ruling something out or helping me weigh a decision?
    • This is for ruling something out, as it’s possible any traffic fluctuations have come from massive index bloat.

Conclusion: Probably yes. The automation will be a pain, but it will be relatively easy to pull it in manually once a month. It won’t change anyone’s mind very often, so it won’t be put at the forefront of a report, but it’s a useful additional piece of information that’s very quick to scan and will help us rule something out.

Laying out and styling reports

Again, our layout should be fit for the goal we’re trying to achieve, which gives us a couple of principles to follow:

  • It’s completely fine for reports to be large, as long as they’re ordered by the odds that the decision will change someone’s mind. Complexity is fine as long as it’s accompanied by depth and you don’t get it all at once.
  • On a similar point, you’ll often have to break down metrics into multiple graphs. Make sure that you order them by importance so someone can stop digging whenever they’re happy.

Here’s an example from an internal report I made. It shows the page breakdown first and then the page keyword breakdown after it to let you dig deeper.

  • There’s nothing wrong with repeating graphs. If you have a summary page with five following pages, each of which picks one crucial metric from the summary and digs deeper, it’s absolutely useful to repeat the summary graph for that metric at the top.
  • Pick a reporting program which allows paged information, like Google Data Studio, for example. It will force you to break a report into chunks.
  • As with dashboards, your design should keep the focus on the content. Simplify — keep styles and colors unified where possible.

Creating an effective graph

The graphs themselves are crucial elements of a report and dashboard. People have built entire careers out of helping people visualize data on graphs. Rather than reinvent the wheel, the following resources have all helped me avoid the worst when it comes to graphs.

Resources #1 and #2 below don’t focus on making things pretty, but rather on the goal of a graph: letting you process data as quickly as possible.

  1. Do’s and Don’ts for Effective Graphs
  2. Karl Broman on How to Display Data Badly
  3. Dark Horse Analytics – Data Looks Better Naked
  4. Additional geek resource: Creating 538-Style Charts with matplotlib

Sometimes (read: nearly always) you’ll be limited by the programs you work in, but it’s good to know the ideal, even if you can’t quite reach it.

What did we learn?

Well, we got to the end of the article and I’ve barely even touched on how to practically make dashboards/reports. Where are the screenshots of the Google Data Studio menus and the step-by-step walkthroughs? Where’s the list of tools? Where’s the explanation on how to use a Google Sheet as a temporary database?

Those are all great questions, but they’re not where the problem lies.

We need to spend more time thinking about the content of reports and what they’re being used for. It’s possible that, having read this article, you’ll come away determined to make fewer reports and to trash a whole bunch of your dashboards.

That’s fantastic. Mission accomplished.

There are good tools out there (I quite like Plot.ly and Google Data Studio) which make generating graphs easier, but the problem with many of the dashboards and reports I see isn’t that they’ve used the default Excel colors — it’s that they haven’t spent enough time thinking about the decision the report is meant to inform, or picking the ideal metric for the dashboard.

Let’s go out and think more about our reports and dashboards before we even begin making them.

What do you guys think? Has this been other people’s experience? What are the best/worst reports and dashboards you’ve seen and why?


How to Boost Bookings & Conversions with Google Posts: An Interview with Joel Headley

Posted by MiriamEllis

Have you been exploring all the ways you might use Google Posts to set and meet brand goals?

Chances are good you’ve heard of Google Posts by now: the micro-blogging Google My Business dashboard feature which instantly publishes content to your Knowledge Panel and individual listing. We’re still only months into the release of this fascinating capability, use of which is theorized as having a potential impact on local pack rankings. When I recently heard Joel Headley describe his incredibly creative use of Google Posts to increase healthcare provider bookings, I knew it was something worth sharing with the Moz community here.


Joel Headley

Joel Headley worked for over a decade on local and web search at Google. He’s now the Director of Local SEO and Marketing at healthcare practice growth platform PatientPop. He’s graciously agreed to chat with me about how his company increased appointment bookings by about 11% for thousands of customer listings via Google Posts.

How PatientPop used Google Posts to increase bookings by 11%

Miriam: So, Joel, Google offers a formal booking feature within their own product, but it isn’t always easy to participate in that program, and it keeps users within “Google’s walled garden” instead of guiding them to brand-controlled assets. As I recently learned, PatientPop innovated almost instantly when Google Posts was rolled out in 2017. Can you summarize for me what your company put together for your customers as a booking vehicle that didn’t depend on Google’s booking program?

Joel: PatientPop wants to provide patients an opportunity to make appointments directly with their healthcare provider. In that way, we’re a white label service. Google has had a handful of booking products. In a prior iteration, there was a simpler product that was powered by schema and microforms, which could have scaled to anyone willing to add the schema.

Today, they are putting their effort behind Reserve with Google, which requires a much deeper API integration. While PatientPop would be happy to provide more services on Google, Reserve with Google doesn’t yet accept most of our customers’ business categories, according to its own policies. (However, the reservation service is marketed through Google My Business to those categories, which is a bit confusing.)

Additionally, when you open the booking widget, you see two logos: G Pay and the booking software provider. I’d love to see a product that allows the healthcare provider to be front and center in the entire process. A patient-doctor relationship is personal, and we’d like to emphasize you’re booking your doctor, not PatientPop.

Because we can’t get the CTAs unique to Reserve with Google, we realized that Google Posts can be a great vehicle for us to essentially get the same result.

When Google Posts first launched, I tested a handful of practices. The interaction rate was low compared to other elements in the Google listing. But, given there was incremental gain in traffic, it seemed worthwhile, if we could scale the product. It seemed like a handy way to provide scheduling with Google without having to go through the hoops of the Maps Booking (reserve with) API.

Miriam: Makes sense! Now, I’ve created a fictitious example of what it looks like to use Google Posts to prompt bookings, following your recommendations to use a simple color as the image background and to make the image text quite visible. Does this look similar to what PatientPop is doing for its customers and can you provide recommendations for the image size and font size you’ve seen work best?

Joel: Yes, that’s pretty similar to the types of Posts we’re submitting to our customer listings. I tested a handful of image types, ones with providers, some with no text, and the less busy image with actionable text is what performed the best. I noticed that making the image look more like a button, with button-like text, improved click-through rates too — CTR doubled compared to images with no text.

Joel: The image size we use is 750×750 with a 48-point font size. If one uses the API, the image must be square-cropped when creating the post. Otherwise, Posts created through the Google My Business interface will give you an option to crop. The only issue I have with the published version of the image is that the cropping is uneven — sometimes it is center-cropped, but other times the bottom is cut off. That makes it hard to predict how the on-image text will appear. But we keep the text centered, which generally works pretty well.
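For anyone who wants to generate this kind of button-style image programmatically, here is a rough sketch using Pillow. It is my own illustration rather than PatientPop's code, and the font file, color, and wording are placeholders:

    from PIL import Image, ImageDraw, ImageFont

    # 750x750 square with a plain background and large, centered CTA text.
    img = Image.new("RGB", (750, 750), color="#1a73e8")
    draw = ImageDraw.Draw(img)

    # Placeholder font path -- point this at any bold .ttf you have locally.
    font = ImageFont.truetype("DejaVuSans-Bold.ttf", 48)

    # anchor="mm" centers the text on the given point (Pillow 8.0+).
    draw.text((375, 375), "Book an appointment online", font=font,
              fill="white", anchor="mm")

    img.save("gmb_post_button.png")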

Miriam: And, when clicked on, the Google Post takes the user to the client’s own website, where PatientPop software is being used to manage appointments — is that right?

Joel: Yes, the site is built by PatientPop. When selecting Book, the patient is taken directly to the provider’s site where the booking widget is opened and an appointment can be selected from a calendar. These appointments can be synced back to the practice’s electronic records system.

Miriam: Very tidy! As I understand it, PatientPop manages thousands of client listings, necessitating the need to automate this use of Google Posts. Without giving any secrets away, can you share a link to the API you used and explain how you templatized the process of creating Posts at scale?

Joel: Sure! We were waiting for Google to provide Posts via the Google My Business API, because we wanted to scale. While I had a bit of a heads-up that the API was coming — Google shared this feature with their GMB Top Contributor group — we still had to wait for it to launch to see the documentation and try it out. So, when the launch announcement went out on October 11, with just a few developers, we were able to implement the solution for all of our practices the next evening. It was a fun, quick win for us, though it was a bit of a long day. :)

In order to get something out that quickly, we created templates that could use information from the listing itself like the business name, category, and location. That way, we were able to create a stand-alone Python script that grabbed listings from Google. When getting the listings, all the listing content comes along with it, including name, address, and category. These values are taken directly from the listing to create Posts and then are submitted to Google. We host the images on AWS and reuse them by submitting the image URL with the post. It’s a Python script which runs as a cron job on a regular schedule. If you’re new to the API, the real tricky part is authentication, but the GMB community can help answer questions there.
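As a rough idea of what the templated post creation can look like, here is a minimal Python sketch. It is my own illustration, not PatientPop's script; it assumes the Google My Business v4 localPosts endpoint and an OAuth access token obtained separately, and the account and location IDs are placeholders:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE"   # placeholder -- obtain via your OAuth flow
    ACCOUNT_ID = "1234567890"       # placeholder account
    LOCATION_ID = "9876543210"      # placeholder location

    def create_booking_post(business_name, image_url, booking_url):
        """Create a simple Book CTA post for one listing, templated from listing data."""
        url = (f"https://mybusiness.googleapis.com/v4/"
               f"accounts/{ACCOUNT_ID}/locations/{LOCATION_ID}/localPosts")
        body = {
            "languageCode": "en-US",
            "summary": f"Book an appointment with {business_name} online today.",
            "callToAction": {"actionType": "BOOK", "url": booking_url},
            "media": [{"mediaFormat": "PHOTO", "sourceUrl": image_url}],
        }
        resp = requests.post(url,
                             headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                             json=body)
        resp.raise_for_status()
        return resp.json()

A script like this can then loop over listings pulled from the API and run on a schedule (e.g. a weekly cron job), which is essentially the approach Joel describes above.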

Miriam: Really admirable implementation! One question: Google Posts expire after 7 days unless they are events, so are you basically automating re-posting of the booking feature for each listing every seven days?

Joel: We create Posts every seven days for all our practices. That way, we can mix up the content and images used on any given practice. We’re also adding a second weekly post for practices that offer aesthetic services. We’ll be launching more Posts for specific practice types going forward, too.

Miriam: Now for the most exciting part, Joel! What can you tell me about the increase in appointments this use of Google Posts has delivered for your customers? And, can you also please explain what parameters and products you are using to track this growth?

Joel: To track clicks from listings on Google, we use UTM parameters. We can then track the authority page, the services (menu) URL, the appointment URL, and the Posts URL.

When I first did this analysis, I looked at the average of the last three weeks of appointments compared to the 4 days after launch. Over that period, I saw nearly an 8% increase in online bookings. I’ve since included the entire first week of launch. It shows an 11% average increase in online bookings.

Additionally, because we’re tracking each URL in the knowledge panel separately, I can confidently say there’s no cannibalization of clicks from other URLs as a result of adding Posts. While authority page CTR remained steady, services lost over 10% of the clicks and appointment URLs gained 10%. That indicates to me that not only are the Posts effective in driving appointments through the Posts CTA, it emphasizes the existing appointment CTA too. This was in the context of no additional product changes on our side.
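For reference, tagging each knowledge panel URL might look something like the minimal sketch below. This is my own example, and the parameter values are placeholders rather than the ones PatientPop actually uses:

    from urllib.parse import urlencode

    def tag_url(base_url, content_label):
        """Append UTM parameters so clicks from the listing are attributable."""
        params = {
            "utm_source": "google",
            "utm_medium": "organic",
            "utm_campaign": "gmb-listing",
            "utm_content": content_label,  # e.g. "post", "appointment", "services"
        }
        return f"{base_url}?{urlencode(params)}"

    print(tag_url("https://example-practice.com/book", "post"))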

Miriam: Right, so, some of our readers will be using Google’s Local Business URLs (frequently used for linking to menus) to add an “Appointments” link. One of the most exciting takeaways from your implementation is that using Google Posts to support bookings didn’t steal attention away from the appointment link, which appears higher up in the Knowledge Panel. Can you explain why you feel the Google Posts clicks have been additive instead of subtractive?

Joel: The “make appointment” link gets a higher CTR than Posts, so it shouldn’t be ignored. However, since Posts include an image, I suspect it might be attracting a different kind of user, one that is more primed to interact with images. And because we’re so specific about the type of interaction we want (appointment booking), both with the CTA and the image, it seems to convert well. And, as I stated above, it seems to help the appointment URLs too.

Miriam: I was honestly so impressed with your creativity in this, Joel. It’s just brilliant to look at something as simple as this little bit of Google screen real estate and ask, “Now, how could I use this to maximum effect?” Google Posts enables business owners to include links labeled Book, Order Online, Buy, Learn More, Sign Up, and Get Offer. The “Book” feature is obviously an ideal match for your company’s health care provider clients, but given your obvious talent for thinking outside the box, would you have any creative suggestions for other types of business models using the other pre-set link options?

Joel: I’m really excited about the events feature, actually, because you can create a long-lived post while adding a sense of urgency through a time-bound context. Events can include limited-time offers, like a sale on a particular product, or signups for a newsletter that will include a coupon code. You can use all the link labels you’ve listed above for any given event. And I think using the image-as-button philosophy can really drive results. I’d like to see an image with the text “Use coupon code XYZ546 now!” alongside the Get Offer button. I imagine many business types, especially retail, can highlight their limited-time deals via Posts without paying other companies to advertise their coupons and deals.

Miriam: Agreed, Joel, there are some really exciting opportunities for creative use here. Thank you so much for the inspiring knowledge you’ve shared with our community today!


Ready to get the most from Google Posts?

Reviews can be a challenge to manage. Google Q&A may be a mixed blessing. But as far as I can see, Posts are an unalloyed gift from Google. Here’s all you have to do to get started using them right now for a single location of your business:

  • Log into your Google My Business dashboard and click the “Posts” tab in the left menu.
  • Determine which of the options, labeled “Buttons,” is the right fit for your business. It could be “Book,” or it could be something else, like “Sign up” or “Buy.” Click the “Add a Button” option in the Google Posts wizard. Be sure the URL you enter includes a UTM parameter for tracking purposes.
  • Upload a 750×750 image. Joel recommends using a simple-colored background and a highly visible 48-point font for turning this image into a CTA button-style graphic. You may need to experiment with cropping the image.
  • Alternatively, you can create an event, which will cause your post to stay live through the date of the event.
  • Text has a minimum 100-character and maximum 300-character limit. I recommend writing something that would entice users to click to get beyond the cut-off point, especially because it appears to me that there are different display lengths on different devices. It’s also a good idea to bear in mind that Google Posts are indexed content. Initial testing is revealing that simply utilizing Posts may improve local pack rankings, but there is also an interesting hypothesis that they are a candidate for long-tail keyword optimization experiments. According to Mike Blumenthal:

“…If there are very long-tail phrases, where the ability to increase relevance isn’t up against so many headwinds, then this is a signal that Google might recognize and help lift the boat for that long-tail phrase. My experience with it was it didn’t work well on head phrases, and it may require some amount of interaction for it to really work well. In other words, I’m not sure just the phrase itself but the phrase with click-throughs on the Posts might be the actual trigger to this. It’s not totally clear yet.”

  • You can preview your post before you hit the publish button.
  • Your post will stay live for 7 days. After that, it will be time to post a new one.
  • If you need to implement at scale across multiple listings, re-read Joel’s description of the API and programming PatientPop is utilizing. It will take some doing, but an 11% increase in appointments may well make it worth the investment! And obviously, if you happen to be marketing health care providers, checking out PatientPop’s ready-made solution would be smart.

Nobody likes a ball-hog

I’m watching the development of Google Posts with rapt interest. Right now, they reside on Knowledge Panels and listings, but given that they are indexed, it’s not impossible that they could eventually end up in the organic SERPs. Whether or not that ever happens, what we have right now in this feature is something that offers instant publication to the consumer public in return for very modest effort.

Perhaps even more importantly, Posts offer a way to bring users from Google to your own website, where you have full control of messaging. That single accomplishment is becoming increasingly difficult as rich-feature SERPs (and even single results) keep searchers Google-bound. I wonder if school kids still shout “ball-hog” when a classmate refuses to relinquish ball control and be a team player. For now, for local businesses, Google Posts could be a precious chance for your brand to handle the ball.


The Website Migration Guide: SEO Strategy, Process, & Checklist

Posted by Modestos

What is a site migration?

A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.

Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent to which search engine ranking signals have been affected, as well as how long it may take the affected business to roll out a successful recovery plan.

Quick access links

Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Site migration checklist
Appendix: Useful tools


Site migration examples

The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.

Debunking the “expected traffic drop” myth

Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for a few very specific cases (e.g. moving from an established domain to a brand-new one), it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well planned and executed.

Examples of unsuccessful site migrations

The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.

Example of a poor site migration — recovery took 6 months!

But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.

Another example of a poor site migration — no signs of recovery 6 months on!

In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for that long a period, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.

Examples of successful site migrations

What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:

  1. Minimal visibility loss during the first few weeks (short-term goal)
  2. Visibility growth thereafter — depending on the type of migration (long-term goal)

The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.

The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.

As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.

Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.

Example of a very successful site migration — instant growth following new site launch!

This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.

In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.

Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.


Site migration types

There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.

Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:

  • Site moves with URL changes
  • Site moves without URL changes

Site move migrations


These typically occur when a site moves to a different URL due to any of the below:

Protocol change

A classic example is when migrating from HTTP to HTTPS.

Subdomain or subfolder change

Very common in international SEO, where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is when a mobile site that sits on a separate subdomain or subfolder becomes responsive and the desktop and mobile URLs are unified.

Domain name change

Commonly occurs when a business is rebranding and must move from one domain to another.

Top-level domain change

This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.

Site structure changes

These are changes to the site architecture that usually affect the site’s internal linking and URL structure.

Other types of migrations

There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.

Replatforming

This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.

Content migrations

Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.

Mobile setup changes

With so many options available for a site’s mobile setup, moving to a different configuration, enabling app indexing, building an AMP site, or building a PWA can also be considered a partial site migration, especially when an existing mobile site is being replaced by an app, AMP, or PWA.

Structural changes

These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.

Site redesigns

These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.

Hybrid migrations

In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.


Common site migration pitfalls

Even though every site migration is different, there are a few common themes behind the most typical site migration disasters, the biggest being the following:

Poor strategy

Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.

Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.

Poor planning

Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.

Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.

Lack of resources

Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.

As a rule of thumb, try to include a buffer of at least 20% more resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.

Lack of SEO/UX consultation

When changes are taking place on a website, every single decision needs to be weighed from both a UX and an SEO standpoint. For instance, removing large amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and too few images may have a negative impact on user engagement and damage the site’s conversions.

To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.

Late involvement

Site migrations can span several months and require careful planning and enough time for testing. Seeking professional support late in the process is very risky because crucial steps may have already been missed.

Lack of testing

In addition to a great strategy and thoughtful plan, dedicate some time and effort to thorough testing before launching the site. It’s far preferable to delay the launch if testing has identified critical issues than to rush a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.

Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.

Slow response to bug fixing

There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.

Underestimating scale

Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let’s launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.

It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.


Site migration process

The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.


Phase 1: Scope & Planning

Work out the project scope

Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.

A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.

However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.

Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.

You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.

Prepare the project plan

Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.

The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.

A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.


Phase 2: Pre-launch preparation

This phase includes any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should already have been captured. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes, well before the new site becomes available on a staging environment.

Wireframes review

Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.

Preparing the technical SEO specifications

Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.

The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.

Make sure to include specific requirements that cover at least the following areas:

  • URL structure
  • Meta data (including dynamically generated default values)
  • Structured data
  • Canonicals and meta robots directives
  • Copy & headings
  • Main & secondary navigation
  • Internal linking (in any form)
  • Pagination
  • XML sitemap(s)
  • HTML sitemap
  • Hreflang (if there are international sites)
  • Mobile setup (including the app, AMP, or PWA site)
  • Redirects
  • Custom 404 page
  • JavaScript, CSS, and image files
  • Page loading times (for desktop & mobile)

The specification should also cover the areas of CMS functionality that allow users to:

  • Specify custom URLs and override default ones
  • Update page titles
  • Update meta descriptions
  • Update any h1–h6 headings
  • Add or amend the default canonical tag
  • Set the meta robots attributes to index/noindex/follow/nofollow
  • Add or edit the alt text of each image
  • Include Open Graph fields for description, URL, image, type, sitename
  • Include Twitter Open Graph fields for card, URL, title, description, image
  • Bulk upload or amend redirects
  • Update the robots.txt file

It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).

Identifying priority pages

One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.

In order to do this, you need to:

  1. Crawl the legacy site
  2. Identify all indexable pages
  3. Identify top performing pages

How to crawl the legacy site

Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links, etc. Regardless of your crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:

  • Ignore robots.txt (in case any vital parts are accidentally blocked)
  • Follow internal “nofollow” links (so the crawler reaches more pages)
  • Crawl all subdomains (depending on scope)
  • Crawl outside start folder (depending on scope)
  • Change the user agent to Googlebot (desktop)
  • Change the user agent to Googlebot (smartphone)

Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
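Most people will use a dedicated crawler for this (see the Appendix), but as a rough illustration of the per-URL data points worth capturing, here is a minimal Python sketch. It is not a full crawler; it simply fetches a known list of legacy URLs (the file names are hypothetical) and records the status, title, canonical, and meta robots for each:

    import csv
    import requests
    from bs4 import BeautifulSoup

    # Hypothetical input: one legacy URL per line (e.g. from the XML sitemap).
    with open("legacy_urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    rows = []
    for url in urls:
        resp = requests.get(url, headers={"User-Agent": "migration-audit"},
                            allow_redirects=False, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        canonical = soup.find("link", rel="canonical")
        robots = soup.find("meta", attrs={"name": "robots"})
        rows.append({
            "url": url,
            "status": resp.status_code,
            "title": soup.title.string.strip() if soup.title and soup.title.string else "",
            "canonical": canonical.get("href", "") if canonical else "",
            "meta_robots": robots.get("content", "") if robots else "",
            "redirect_target": resp.headers.get("Location", ""),
        })

    with open("legacy_crawl.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)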

How to identify the indexable pages

Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:

  • Return a 200 server response
  • Either do not have a canonical tag or have a self-referring canonical URL
  • Do not have a meta robots noindex
  • Aren’t excluded from the robots.txt file
  • Are internally linked from other pages (non-orphan pages)

The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
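Continuing the hypothetical crawl export from the sketch above, filtering down to indexable pages could look something like this. It is only a rough illustration: it doesn't cover robots.txt exclusions or orphan pages, which you would still need to check separately.

    import pandas as pd

    crawl = pd.read_csv("legacy_crawl.csv")

    def is_indexable(row):
        # 200 response, no noindex, and either no canonical or a self-referencing one.
        if row["status"] != 200:
            return False
        if "noindex" in str(row["meta_robots"]).lower():
            return False
        canonical = str(row["canonical"]).strip()
        return canonical in ("", "nan") or canonical == row["url"]

    indexable = crawl[crawl.apply(is_indexable, axis=1)]
    indexable.to_csv("indexable_pages.csv", index=False)
    print(f"{len(indexable)} indexable pages out of {len(crawl)} crawled")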

How to identify the top performing pages

Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.

If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.

It’s recommended to prepare a spreadsheet that includes the below fields:

  • Legacy URL (include only the indexable ones from the crawl data)
  • Organic visits during the last 12 months (Analytics)
  • Revenue, conversions, and conversion rate during the last 12 months (Analytics)
  • Pageviews during the last 12 months (Analytics)
  • Number of clicks from the last 90 days (Search Console)
  • Top linked pages (Majestic SEO/Ahrefs)

With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
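
If it helps, pulling these sources together can also be scripted. A minimal sketch, assuming hypothetical export filenames and column names (you could equally join the data in Excel using VLOOKUP):

import pandas as pd

pages = pd.read_csv("indexable_pages.csv")               # legacy crawl: url
analytics = pd.read_csv("analytics_organic_12m.csv")     # url, visits, revenue, conversion_rate
search_console = pd.read_csv("search_console_90d.csv")   # url, clicks
backlinks = pd.read_csv("top_linked_pages.csv")          # url, referring_domains

priority = (
    pages.merge(analytics, on="url", how="left")
         .merge(search_console, on="url", how="left")
         .merge(backlinks, on="url", how="left")
         .fillna(0)
         # Order by whichever signals matter most to the business
         .sort_values(["visits", "revenue", "referring_domains"], ascending=False)
)

priority.to_csv("priority_pages.csv", index=False)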

The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will be negatively affected.

Benchmarking

Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.

Keywords rank tracking

If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle to figure out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be ideal.

Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum is to monitor the keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).

If you get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking point of view. In general, non-brand keywords tend to be more competitive and volatile, so for most sites it makes sense to focus mostly on those.

Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.

Site performance

The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.

It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.

MOBILE

Page type          Speed     FCP    DCL    Optimization   Optimization score
Homepage           Fast      0.7s   1.4s   Good           81/100
Category page      Slow      1.8s   5.1s   Medium         78/100
Subcategory page   Average   0.9s   2.4s   Medium         69/100
Product page       Slow      1.9s   5.5s   Good           83/100

DESKTOP

Page type          Speed     FCP    DCL    Optimization   Optimization score
Homepage           Good      0.7s   1.4s   Average        81/100
Category page      Fast      0.6s   1.2s   Medium         78/100
Subcategory page   Fast      0.6s   1.3s   Medium         78/100
Product page       Good      0.8s   1.3s   Good           83/100

Old site crawl data

A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.

Search Console data

Also consider exporting as much of the old site’s Search Console data as possible. It’s only available for 90 days, and chances are that the old site’s Search Console data will disappear sooner or later once the new site goes live. Data worth exporting includes:

  • Search analytics queries & pages
  • Crawl errors
  • Blocked resources
  • Mobile usability issues
  • URL parameters
  • Structured data errors
  • Links to your site
  • Internal links
  • Index status

Redirects preparation

The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.

Why are redirects important in site migrations?

Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.

What happens when redirects aren’t correctly implemented?

When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.

301, 302, JavaScript redirects, or meta refresh?

When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.

302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.

Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.

If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.

Redirect mapping process

If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.

The redirect mapping file is a spreadsheet that includes the following two columns:

  • Legacy site URL –> a page’s URL on the old site.
  • New site URL –> a page’s URL on the new site.

When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.

Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.

Increasing efficiencies during the redirect mapping process

Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
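
By way of illustration only, such a join could look like the sketch below. The filenames, the “sku” column, and the assumption that every SKU is unique are all hypothetical; in practice you would pick whichever attribute is reliably shared between the two crawls:

import pandas as pd

old = pd.read_csv("legacy_crawl.csv")[["url", "sku"]].rename(columns={"url": "legacy_url"})
new = pd.read_csv("new_site_crawl.csv")[["url", "sku"]].rename(columns={"url": "new_url"})

# Discard attributes that appear more than once; duplicates would produce wrong mappings
old = old.drop_duplicates(subset="sku", keep=False)
new = new.drop_duplicates(subset="sku", keep=False)

mapping = old.merge(new, on="sku", how="left")

# URLs without an automatic match still need to be mapped manually
mapping[mapping["new_url"].isna()].to_csv("unmapped_urls.csv", index=False)
mapping.dropna(subset=["new_url"])[["legacy_url", "new_url"]].to_csv("redirect_mapping.csv", index=False)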

Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.

Don’t forget the legacy redirects!

You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.

Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.

Example:

URL A redirects to URL B (legacy redirect)

URL B redirects to URL C (new redirect)

Which results in the following redirect chain:

URL A –> URL B –> URL C

To eliminate this, amend the existing legacy redirect and create a new one so that:

URL A redirects to URL C (amended legacy redirect)

URL B redirects to URL C (new redirect)

Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines, making them instant traffic, conversion, and ranking killers.
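
A quick way to surface both loops and chains in the spreadsheet itself is sketched below (the filename and column names are assumptions):

import pandas as pd

mapping = pd.read_csv("redirect_mapping.csv")  # columns: legacy_url, new_url

# Loops: a legacy URL that redirects to itself
loops = mapping[mapping["legacy_url"] == mapping["new_url"]]

# Chains: a destination URL that is itself redirected elsewhere in the mapping
redirected_sources = set(mapping["legacy_url"])
chains = mapping[
    mapping["new_url"].isin(redirected_sources)
    & (mapping["legacy_url"] != mapping["new_url"])
]

print(f"{len(loops)} loops and {len(chains)} chains to fix")
loops.to_csv("redirect_loops.csv", index=False)
chains.to_csv("redirect_chains.csv", index=False)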

Implement blanket redirect rules to avoid duplicate content

It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.

There are also some standard redirect rules that should be in place to avoid generating duplicate content issues, such as enforcing a single protocol (HTTPS vs. HTTP), a single hostname (www vs. non-www), a consistent trailing slash policy, and lowercase URLs.

Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
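
Purely as an illustration, blanket rules of this kind might look like the following on an Apache server (the hostname, preferred protocol, and trailing slash policy are assumptions; the exact rules must match whatever URL format has been agreed for the new site):

# Hypothetical .htaccess sketch: force HTTPS and the www hostname in a single hop
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
RewriteRule ^ https://www.%1%{REQUEST_URI} [R=301,L]

# Strip trailing slashes from URLs that aren't directories
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]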

Avoid internal redirects

Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.

Don’t forget your image files

If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.


Phase 3: Pre-launch testing

The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified as early as the prototype or wireframe designs. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), could also be identified at an early stage. But the more technical components should only be tested once fully implemented — things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost effective, requires more resources, and causes significant delays. Poor testing and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance can have disastrous consequences soon after the new site has gone live.

Making sure search engines cannot access the staging/test site

Before making the new site available on a staging/testing environment, take precautions to ensure search engines do not index it. There are a few different ways to do this, each with different pros and cons.

Site available to specific IPs (most recommended)

Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.

Password protection

Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.

Robots.txt blocking

Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.

User-agent: *
Disallow: /

One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may still appear in Google’s search results. Another downside is that if the above robots.txt file is carried over to the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times, and for this reason I wouldn’t recommend using this method to block search engines.

User journey review

If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.

On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.

Site architecture review

A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.

Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.

Meta data & copy review

Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that are already targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or with missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (e.g. user reviews, comments) has also been uploaded.

Internal linking review

Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:

  • Main & secondary navigation
  • Header & footer links
  • Body content links
  • Pagination links
  • Horizontal links (related articles, similar products, etc)
  • Vertical links (e.g. breadcrumb navigation)
  • Cross-site links (e.g. links across international sites)

Technical checks

A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.

Robots.txt file review

Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:

Disallow: /

If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.

But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.

When preparing the new site’s robots.txt file, make sure that:

  • It doesn’t block search engine access to pages that are intended to get indexed.
  • It doesn’t block any JavaScript or CSS resources search engines require to render page content.
  • The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
  • It references the new XML sitemap(s) rather than any legacy ones that no longer exist (see the example below).
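
For illustration only, a live-site robots.txt might end up looking something like this (the disallowed paths and sitemap URL are placeholders):

User-agent: *
Disallow: /checkout/
Disallow: /internal-search/

Sitemap: https://www.example.com/sitemap_index.xml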

Canonical tags review

Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you’ll need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.

Meta robots review

Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.

XML sitemaps review

Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.

You should check each XML sitemap to make sure that:

  • It validates without issues
  • It is encoded as UTF-8
  • It does not contain more than 50,000 rows
  • Its size does not exceed 50MB when uncompressed

If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
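
The smaller sitemaps are then typically referenced from a sitemap index file. A minimal sketch, with placeholder filenames:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-categories.xml</loc>
  </sitemap>
</sitemapindex>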

In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:

  • 3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
  • Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
  • Canonicalized pages (apart from self-referring canonical URLs)
  • Pages with a meta robots noindex directive
<!DOCTYPE html>
<html><head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
  • Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
  • Pages blocked by the robots.txt file

Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t, it will be very difficult to spot any indexing issues.

Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.

HTML sitemap review

Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.

The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.

For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.

The NYTimes HTML sitemap (level 1)

The NYTimes HTML sitemap (level 2)

Structured data review

Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.

Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.

Google’s Structured Data Testing Tool in action

The tool will only report existing errors, not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors, you should also make sure that each page template includes the appropriate structured data markup for its content type.

Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.

JavaScript crawling review

You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.

As Bartosz Góralewicz’s tests proved, the fact that Google is able to crawl and index JavaScript-generated content does not mean it can do so across all major JavaScript frameworks. The following table summarizes Bartosz’s findings, showing that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.

Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.

Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.

Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!

Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.

Mobile site SEO review

Assets blocking review

First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.

Mobile-first index review

In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:

  • Page titles
  • Meta descriptions
  • Headings
  • Copy
  • Canonical tags
  • Meta robots attributes (i.e. noindex, nofollow)
  • Internal links
  • Structured data

A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.

In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.

Responsive site review

A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.

Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.

To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.

<meta name="viewport" content="width=device-width, initial-scale=1.0">

If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.

Separate mobile URLs review

If the mobile website uses separate URLs from desktop, make sure that:

  1. Each desktop page has a rel=”alternate” tag pointing to the corresponding mobile URL (see the example annotations after this list).
  2. Each mobile page has a rel=”canonical” tag pointing to the corresponding desktop URL.
  3. When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
  4. Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
  5. There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
  6. The mobile URLs return a 200 server response.
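
For reference, the annotations in points 1 and 2 typically look like the following (the www and m. hostnames are placeholders):

On the desktop page (https://www.example.com/page):

<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/page">

On the mobile page (https://m.example.com/page):

<link rel="canonical" href="https://www.example.com/page">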

Dynamic serving review

Dynamic serving websites serve different code to each device, but on the same URL.

On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary HTTP header helps Googlebot discover the mobile content.
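
As a sketch, the response of a dynamically served page would include a header along these lines:

HTTP/1.1 200 OK
Content-Type: text/html
Vary: User-Agent
(…)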

Mobile-friendliness review

Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:

  1. The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
  2. The font size isn’t too small.
  3. Touch elements (i.e. buttons, links) aren’t too close.
  4. There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, use either a small HTML or image banner.
  5. Mobile pages aren’t too slow to load (see next section).

Google’s mobile-friendly test tool can help diagnose most of the above issues:

Google’s mobile-friendly test tool in action

AMP site review

If there is an AMP website and a desktop version of the site is available, make sure that:

  • Each non-AMP page (i.e. desktop, mobile) has a rel=”amphtml” tag pointing to the corresponding AMP URL (see the example below).
  • Each AMP page has a rel=”canonical” tag pointing to the corresponding desktop page.
  • Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
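
For reference, the annotations typically look like the following (all URLs are placeholders):

On the non-AMP page (https://www.example.com/page/):

<link rel="amphtml" href="https://www.example.com/page/amp/">

On the AMP page (https://www.example.com/page/amp/):

<link rel="canonical" href="https://www.example.com/page/">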

You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.

Mixed content errors

With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.

Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.

Mixed content errors in Chrome’s JavaScript Console

There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.

Image assets review

Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to help Google discover the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page an image appears on and the image file itself have to get indexed.

Site performance review

Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.

Analytics tracking review

Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.

Redirects testing

Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.

Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:

  • Redirect loops (a URL that infinitely redirects to itself)
  • Redirects with a 4xx or 5xx server response.
  • Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
  • Canonical URLs that return a 4xx or 5xx server response.
  • Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
  • Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
  • Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
  • Leading/trailing whitespace characters. Use TRIM() in Excel to eliminate them.
  • Invalid characters in URLs.

Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site isn’t live yet, you can only check whether the redirect destination URL is the intended one, but it’s definitely worth doing. The fact that a URL redirects does not mean it redirects to the right page.
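
If it helps, a throwaway script along the lines of the sketch below can crawl a redirect mapping once the redirects are live on the staging server and flag the most common issues. The CSV filename and the legacy_url/new_url column names are assumptions, and a crawler application will do the same job at scale:

import csv
import requests

with open("redirect_mapping.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            # HEAD keeps things fast; switch to GET if the server doesn't support it
            resp = requests.head(row["legacy_url"], allow_redirects=True, timeout=10)
        except requests.TooManyRedirects:
            print(row["legacy_url"], "-> redirect loop")
            continue
        issues = []
        if resp.status_code >= 400:
            issues.append(f"final response is {resp.status_code}")
        if len(resp.history) > 1:
            issues.append(f"redirect chain of {len(resp.history)} hops")
        if resp.url.rstrip("/") != row["new_url"].rstrip("/"):
            issues.append(f"lands on {resp.url} instead of {row['new_url']}")
        if issues:
            print(row["legacy_url"], "->", "; ".join(issues))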


Phase 4: Launch day activities

When the site is down…

While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.

If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
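
For reference, while the site is down the holding page’s response might look something like this (the Retry-After value of one hour is only an example):

HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html
(…)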

Technical spot checks

As soon as the new site has gone live, take a quick look at:

  1. The robots.txt file to make sure search engines are not blocked from crawling
  2. Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
  3. Top pages canonical tags
  4. Top pages server responses
  5. Noindex/nofollow directives, in case they are unintentional

The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.

Search Console actions

The following activities should take place as soon as the new website has gone live:

  1. Test & upload the XML sitemap(s)
  2. Set the Preferred location of the domain (www or non-www)
  3. Set the International targeting (if applicable)
  4. Configure the URL parameters to tackle any potential duplicate content issues early.
  5. Upload the Disavow file (if applicable)
  6. Use the Change of Address tool (if switching domains)

Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.

Blocked resources prevent Googlebot from rendering the content of the page


Phase 5: Post-launch review

Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.

However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.

In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.

Check crawl stats and server logs

Keep an eye on the crawl stats available in the Search Console, to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to accelerate the average number of pages it crawls per day. But if you can’t spot a spike around the time of the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.

Crawl stats on Google’s Search Console

Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and On Crawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.

Review crawl errors regularly

Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.

Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!

Other useful Search Console features

Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).

Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.

Measuring site speed

Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, take some immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.

Evaluating speed using Google’s tools

Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.

The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:

  • Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
  • Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
  • Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
  • Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
  • Optimization suggestions: A list of best practices that could be applied to a page.

Google’s PageSpeed Insights in action

Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:

  • First Meaningful Paint, which measures when the primary content of a page is visible.
  • Time to Interactive, the point at which the page is ready for a user to interact with.
  • Speed Index, which shows how quickly a page is visibly populated.

Both tools provide recommendations to help improve any reported site performance issues.

Google’s Lighthouse in action

You can also use this Google tool to get a rough estimate of the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.

The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.

Measuring speed from real users

Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.

In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.


Phase 6: Measuring site migration performance

When to measure

Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.

In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to acclimatize to the new taxonomy, user journeys, etc. Such changes can initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, making data-driven conclusions about the new site’s UX can be risky.

But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.

How to measure

Performance measurement is very important, and even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there are a whole lot of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:

  • Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
  • Desktop and mobile rankings (from any reliable rank tracking tool)
  • User engagement (bounce rate, average time on page)
  • Sessions per page type (i.e. are the category pages driving as many sessions as before?)
  • Conversion rate per page type (i.e. are the product pages converting the same way as before?)
  • Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)

Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:

  • Number of indexed pages (Search Console)
  • Submitted vs indexed pages in XML sitemaps (Search Console)
  • Pages receiving at least one visit (analytics)
  • Site speed (PageSpeed Insights, Lighthouse, Google Analytics)

It’s only after you’ve looked into all of the above areas that you can safely conclude whether your migration has been successful or not.

Good luck and if you need any consultation or assistance with your site migration, please get in touch!


Site migration checklist

An up-to-date site migration checklist is available to download from our site. Please note that the checklist is regularly updated to include all critical areas for a successful site migration.


Appendix: Useful tools

Crawlers

  • Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
  • Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
  • Deep Crawl: Cloud-based crawler that can crawl staging sites and compare different crawls, and copes well with large websites.
  • Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
  • On-Crawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.

Handy Chrome add-ons

  • Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
  • User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
  • Ayima Redirect Path: A great header and redirect checker.
  • SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
  • Scraper: An easy way to scrape website data into a spreadsheet.

Site monitoring tools

  • Uptime Robot: Free website uptime monitoring.
  • Robotto: Free robots.txt monitoring tool.
  • Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
  • SEO Radar: Monitors all critical SEO elements and fires alerts when these change.

Site performance tools

  • PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
  • Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
  • Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.

Structured data testing tools

Mobile testing tools

Backlink data sources


MozCon 2018: Making the Case for the Conference (& All the Snacks!)

Posted by Danielle_Launders

You’ve got that conference looming on the horizon. You want to go — you’ve spent the past few years desperately following hashtags on Twitter, memorizing catchy quotes, zooming in on grainy snapshots of a deck, and furiously downloading anything and everything you can scour from Slideshare.

But there’s a problem: conferences cost money, and your boss won’t even approve a Keurig in the communal kitchen, much less a ticket to a three-day-long learning sesh complete with its own travel and lodging expenses.

What’s an education-hungry digital marketer to do?

How do you convince your boss to send you to the conference of your dreams?

First of all, you gather evidence to make your case.

There are a plethora of excellent reasons why attending conferences is good for your career (and your bottom line). In digital marketing, we exist in the ever-changing tech space, hurtling toward the future at breakneck speed and often missing the details of the scenery along the way.

A good SEO conference will keep you both on the edge of your seat and on the cutting-edge of what’s new and noteworthy in our industry, highlighting some of the most important and impactful things your work depends on.

A good SEO conference will flip a switch for you, will trigger that lightbulb moment that empowers you and levels you up as both a marketer and a critical thinker.

If that doesn’t paint a beautiful enough picture to convince the folks that hold the credit card, though, there are also some great statistics and resources available:

Specifically, we’re talking about MozCon

Yes, that MozCon!

Let’s just take a moment to address the elephant in the room here: you all know why we wrote this post. We want to see your smiling face in the audience at MozCon this July (the 9th–11th, if you were wondering). There are a few specific benefits worth mentioning:

  • Speakers and content: Our speakers bring their A-game each year. We work with them to bring the best content and latest trends to the stage to help set you up for a year of success.
  • Videos to share with your team: About a month or so after the conference, we’ll send you a link to professionally edited videos of every presentation at the conference. Your colleagues won’t get to partake in the morning Top Pot doughnuts or Starbucks coffee, but they will get a chance to learn everything you did, for free.
  • Great food onsite: We understand that conference food isn’t typically worth mentioning, but at MozCon you can expect snacks from local Seattle vendors – in the past these have included Trophy cupcakes, KuKuRuZa popcorn, and Starbucks’ Seattle Reserve cold brew, and did we mention bacon at breakfast? Let’s not forget the bacon.
  • Swag: Expect to go home with a one-of-a-kind Roger Mozbot, a super-soft t-shirt from American Apparel, and swag worth keeping. We’ve given away Roger Legos, Moleskine notebooks, phone chargers, and have even had vending machines with additional swag in case you didn’t get enough.
  • Networking: You work hard taking notes, learning new insights, and digesting all of that knowledge — that’s why we think you deserve a little fun in the evenings to chat with fellow attendees. Each night after the conference, we’ll offer a different networking event that adds to the value you’ll get from your day of education.
  • A supportive network after the fact: Our MozCon Facebook group is incredibly active, and it’s grown to have a life of its own — marketers ask one another SEO questions, post jobs, look for and offer advice and empathy, and more. It’s a great place to find TAGFEE support and camaraderie long after the conference itself has ended.
  • Discounts for subscribers and groups: Moz Pro subscribers get a whopping $500 off their ticket cost (even if you’re on a free 30-day trial!) and there are discounts for groups as well, so make sure to take advantage of savings where you can!
  • Ticket cost: At MozCon our goal is to break even, which means we invest all of your ticket price back into you. Check out the full breakdown below:

Can you tell we’re serious about the snacks?

You can check out videos from years past to get a taste for the caliber of our speakers. We’ll also be putting out a call for community speaker pitches in April, so if you’ve been thinking about breaking into the speaking circuit, it could be an amazing opportunity — keep an eye on the blog for your chance to submit a pitch.

If you’ve ever seriously considered attending an SEO conference like MozCon, now’s the time to do it. You’ll save actual hundreds of dollars by grabbing subscriber or group pricing while you can (think of all the Keurigs you could get for that communal kitchen!), and you’ll be bound for an unforgettable experience that lives and grows with you beyond just the three days you spend in Seattle.

Grab your ticket to MozCon!


Should SEOs & Content Marketers Play to the Social Networks’ "Stay-On-Our-Site" Algorithms? – Whiteboard Friday

Posted by randfish

Increasingly, social networks are tweaking their algorithms to favor content that remains on their site, rather than send users to an outside source. This spells trouble for those trying to drive traffic and visitors to external pages, but what’s an SEO or content marketer to do? Do you swim with the current, putting all your efforts toward placating the social network algos, or do you go against it and continue to promote your own content? This edition of Whiteboard Friday goes into detail on the pros and cons of each approach, then gives Rand’s recommendations on how to balance your efforts going forward.

Should SEOs and content marketers play to the social networks' "stay-on-our-site" algorithms?

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about whether SEOs and content marketers, for that matter, should play to what the social networks are developing in their visibility and engagement algorithms, or whether we should say, “No. You know what? Forget about what you guys are doing. We’re going to try and do things on social networks that benefit us.” I’ll show you what I’m talking about.

Facebook

If you’re using Facebook and you’re posting content to it, Facebook generally tends to frown upon content that includes an external link, lowering its average visibility and its ability to reach its audience. So, on average, posts that include an external link will fare more poorly in Facebook’s news feed algorithm than content that lives exclusively on Facebook.

For example, if you see this video promoted on Facebook.com/Moz or Facebook.com/RandFishkin, it will do more poorly than if Moz and I had promoted a Facebook native video of Whiteboard Friday. But we don’t want that. We want people to come visit our site and subscribe to Whiteboard Friday here and not stay on Facebook where we only reach 1 out of every 50 or 100 people who might subscribe to our page.

So it’s clearly in our interest to do this, but Facebook wants to keep you on Facebook’s website, because then they can do the most advertising and targeting to you and get the most time on site from you. That’s their business, right?

Twitter

The same thing is true of Twitter. So it tends to be the case that links off Twitter fare more poorly. Now, I am not 100% sure in Twitter’s case whether this is algorithmic or user-driven. I suspect it’s a little of both: when you log in to Twitter, it will promote or make most visible to you the tweets that are self-contained. They live entirely on Twitter. They might contain a bunch of different stuff, a poll or images or be a thread. But links off Twitter will be dampened.

Instagram

The same thing is true on Instagram. Well, on Instagram, they’re kind of the worst. They don’t allow links at all. The only thing you can do is a link in your profile. As of just a couple of weeks ago, more engaging content on Instagram equals higher placement in the feed. In fact, Instagram has now just come out and said that they will show you posts from people you’re not following but that they think will be engaging to you, which gives influential Instagram accounts that get lots of engagement an additional benefit, but kind of hurts everyone else that you’re normally following on the network.

LinkedIn

LinkedIn’s algorithm includes extra visibility in the feed for self-contained post content, which is why you see a lot of these posts of, “Oh, here’s all the crazy amounts of work I did and what my experience was like building this or doing that.” If it’s self-contained, blog post-style content on LinkedIn that does not link out, it will do much better than posts that contain an external link, which LinkedIn dampens in their visibility algorithm for their feed.

Play to the algos?

So all of these sites have these components of their algorithm that basically reward you if you are willing to play to their algos, meaning you keep all of the content on their sites and platform, their stuff, not yours. You essentially play to what they’re trying to achieve, which is more time on site for them, more engagement for them, fewer people going away to other places. You refuse or you don’t link out, so no external linking to other places. You maintain sort of what I call a high signal-to-noise ratio, so that rather than sharing all the things you might want to share, you only share posts that you can count on having relatively high engagement.

That track record is something that sticks with you on most of these networks. Facebook, for example, if I have posts that do well, many in a row, I will get more visibility for my next one. If my last couple of posts have performed poorly on Facebook, my next one will be dampened. You sort of get a streak going, or get on a roll, with these networks. Same thing is true on Twitter, by the way.

$#@! the algos, serve your own site?

Or you say, “Forget you” to the algorithms and serve your own site instead, which means you use the networks to tease content, like, “Here’s this exciting, interesting thing. If you want the whole story or you want to watch the full video or see all the graphs and charts or whatever it is, you need to come to our website where we host the full content.” You link externally so that you’re driving traffic back to the properties that you own and control, and you have to be willing to promote some potentially promotional content, in order to earn value from these social networks, even if that means slightly lower engagement or less of that get-on-a-roll reputation.

My recommendation

The recommendation that I have for SEOs and content marketers is I think we need to balance this. But if I had to, I would tilt it in favor of your site. Social networks, I know it doesn’t seem this way, but social networks come and go in popularity, and they change the way that they work. So investing very heavily in Facebook six or seven years ago might have made a ton of sense for a business. Today, a lot of those investments have been shown to have very little impact, because instead of reaching 20 or 30 out of 100 of your followers, you’re reaching 1 or 2. So you’ve lost an order of magnitude of reach on there. The same thing has been true generally on Twitter, on LinkedIn, and on Instagram. So I really urge you to tilt slightly to your own site.

Owned channels are your website and your email list, where you have people’s email addresses. I would rather have an email subscriber or a loyal visitor or an RSS subscriber than I would 100 times as many Twitter followers, because the engagement you can get and the value that you can get as a business or as an organization is just much higher.

Just don’t ignore how these algorithms work. If you can, I would urge you to sometimes get on those rolls so that you can grow your awareness and reach by playing to these algorithms.

So, essentially, while I’m urging you to tilt slightly this way, I’m also suggesting that occasionally you should use what you know about how these algorithms work in order to grow and accelerate your growth of followers and reach on these networks so that you can then get more benefit of driving those people back to your site. You’ve got to play both sides, I think, today in order to have success with the social networks’ current reach and visibility algorithms.

All right, everyone, look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Mozzy Good Wishes to You & Yours!

Posted by FeliciaCrawford

As the long holiday weekend draws to a close and we prepare to welcome a brand-new year, we at Moz just want to thank you all for a wonderful, fulfilling year on the blog. Your colorful commentary, delightful debates, thrilling thumbs-up, and vivacious visits have made the past twelve months sparkle and shine (and with that, I’ll bid the alliteration adieu).

Our “card” features a cameo from a little Moz Dog you may recognize: the inimitable Lettie Pickles!

At the Moz HQ, we practice a multitude of holiday traditions. Whether it’s Mozzers gathering in the common room (affectionately named “Roger”) to light candles on the menorah during Hanukkah, trading and stealing gifts for the company-wide White Elephant exchange (someone won a bonafide Commodore 64 this year!), or getting our boogie and our board gaming on at the Moz holiday party, we try to honor this special season with a healthy mix of reverence and good old-fashioned fun.

The folks who come to our blog for digital marketing advice hail from almost every remote corner of the world (we know; we looked at our analytics ;) . This week, when things tend to slow down and it’s just a little more difficult than usual to get anyone to reply to your emails, we’d love to invite you to share your own unique tales and traditions in the comments. What’s your favorite way to celebrate, in the office and at home? What mishaps and magical moments alike filled your days, and what’s your resolution for 2018? Let’s take a little breather as we gear up for all the new projects and responsibilities awaiting us just around the corner and share with each other; after all, that’s what being a community is all about! :)


Don’t Be Fooled by Data: 4 Data Analysis Pitfalls & How to Avoid Them

Posted by Tom.Capper

Digital marketing is a proudly data-driven field. Yet, as SEOs especially, we often have such incomplete or questionable data to work with, that we end up jumping to the wrong conclusions in our attempts to substantiate our arguments or quantify our issues and opportunities.

In this post, I’m going to outline 4 data analysis pitfalls that are endemic in our industry, and how to avoid them.

1. Jumping to conclusions

Earlier this year, I conducted a ranking factor study around brand awareness, and I posted this caveat:

“…the fact that Domain Authority (or branded search volume, or anything else) is positively correlated with rankings could indicate that any or all of the following is likely:

  • Links cause sites to rank well
  • Ranking well causes sites to get links
  • Some third factor (e.g. reputation or age of site) causes sites to get both links and rankings”
    ~ Me

However, I want to go into this in a bit more depth and give you a framework for analyzing these yourself, because it still comes up a lot. Take, for example, this recent study by Stone Temple, which you may have seen in the Moz Top 10 or Rand’s tweets, or this excellent article discussing SEMRush’s recent direct traffic findings. To be absolutely clear, I’m not criticizing either of the studies, but I do want to draw attention to how we might interpret them.

Firstly, we do tend to suffer a little confirmation bias — we’re all too eager to call out the cliché “correlation vs. causation” distinction when we see successful sites that are keyword-stuffed, but all too approving when we see studies doing the same with something we think is or was effective, like links.

Secondly, we fail to critically analyze the potential mechanisms. The options aren’t just causation or coincidence.

Before you jump to a conclusion based on a correlation, you’re obliged to consider various possibilities:

  • Complete coincidence
  • Reverse causation
  • Joint causation
  • Linearity
  • Broad applicability

If those don’t make any sense, then that’s fair enough — they’re jargon. Let’s go through an example:

Before I warn you not to eat cheese because you may die in your bedsheets, I’m obliged to check that it isn’t any of the following:

  • Complete coincidence - Is it possible that so many datasets were compared, that some were bound to be similar? Why, that’s exactly what Tyler Vigen did! Yes, this is possible.
  • Reverse causation - Is it possible that we have this the wrong way around? For example, perhaps your relatives, in mourning for your bedsheet-related death, eat cheese in large quantities to comfort themselves? This seems pretty unlikely, so let’s give it a pass. No, this is very unlikely.
  • Joint causation - Is it possible that some third factor is behind both of these? Maybe increasing affluence makes you healthier (so you don’t die of things like malnutrition), and also causes you to eat more cheese? This seems very plausible. Yes, this is possible.
  • Linearity - Are we comparing two linear trends? A linear trend is a steady rate of growth or decline. Any two statistics which are both roughly linear over time will be very well correlated. In the graph above, both our statistics are trending linearly upwards. If the graph was drawn with different scales, they might look completely unrelated, like this, but because they both have a steady rate, they’d still be very well correlated. Yes, this looks likely.
  • Broad applicability - Is it possible that this relationship only exists in certain niche scenarios, or, at least, not in my niche scenario? Perhaps, for example, cheese does this to some people, and that’s been enough to create this correlation, because there are so few bedsheet-tangling fatalities otherwise? Yes, this seems possible.

So we have 4 “Yes” answers and one “No” answer from those 5 checks.

If your example doesn’t get 5 “No” answers from those 5 checks, it’s a fail, and you don’t get to say that the study has established either a ranking factor or a fatal side effect of cheese consumption.
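
To make the “Linearity” check concrete, here’s a minimal sketch in TypeScript, with made-up numbers, showing that any two series that both trend steadily in one direction will correlate strongly, whether or not they have anything to do with each other:

```typescript
// Pearson correlation of two series.
function pearson(xs: number[], ys: number[]): number {
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0;
  let vx = 0;
  let vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Two invented, unrelated series that both happen to trend upward over ten years.
const years = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const cheeseConsumptionLbs = years.map((y) => 29 + 0.4 * y + Math.random() * 0.3);
const bedsheetDeaths = years.map((y) => 300 + 25 * y + Math.random() * 15);

console.log(pearson(cheeseConsumptionLbs, bedsheetDeaths).toFixed(3)); // typically 0.99+
```

Swap in your own favorite pair of up-and-to-the-right metrics (links and rankings, say) and you’ll get a similarly flattering number, which is exactly why the checklist above matters.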

A similar process should apply to case studies, which are another form of correlation — the correlation between you making a change, and something good (or bad!) happening. For example, ask:

  • Have I ruled out other factors (e.g. external demand, seasonality, competitors making mistakes)?
  • Did I increase traffic by doing the thing I tried to do, or did I accidentally improve some other factor at the same time?
  • Did this work because of the unique circumstance of the particular client/project?

This is particularly challenging for SEOs, because we rarely have data of this quality, but I’d suggest an additional pair of questions to help you navigate this minefield:

  • If I were Google, would I do this?
  • If I were Google, could I do this?

Direct traffic as a ranking factor passes the “could” test, but only barely — Google could use data from Chrome, Android, or ISPs, but it’d be sketchy. It doesn’t really pass the “would” test, though — it’d be far easier for Google to use branded search traffic, which would answer the same questions you might try to answer by comparing direct traffic levels (e.g. how popular is this website?).

2. Missing the context

If I told you that my traffic was up 20% week on week today, what would you say? Congratulations?

What if it was up 20% this time last year?

What if I told you it had been up 20% year on year, up until recently?

It’s funny how a little context can completely change this. This is another problem with case studies and their evil inverted twin, traffic drop analyses.

If we really want to understand whether to be surprised at something, positively or negatively, we need to compare it to our expectations, and then figure out what deviation from our expectations is “normal.” If this is starting to sound like statistics, that’s because it is statistics — indeed, I wrote about a statistical approach to measuring change way back in 2015.

If you want to be lazy, though, a good rule of thumb is to zoom out, and add in those previous years. And if someone shows you data that is suspiciously zoomed in, you might want to take it with a pinch of salt.
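
If you want something slightly more rigorous than eyeballing a zoomed-out chart, here’s a rough TypeScript sketch of that idea: compare the latest week-on-week change against the spread of historical week-on-week changes. All the traffic figures below are invented.

```typescript
// Percentage change from one week to the next.
function weekOnWeekChanges(weeklyTraffic: number[]): number[] {
  const changes: number[] = [];
  for (let i = 1; i < weeklyTraffic.length; i++) {
    changes.push((weeklyTraffic[i] - weeklyTraffic[i - 1]) / weeklyTraffic[i - 1]);
  }
  return changes;
}

// Is the latest change more than `thresholdStdDevs` standard deviations away
// from the historical mean of week-on-week changes?
function isSurprising(weeklyTraffic: number[], thresholdStdDevs = 2): boolean {
  const changes = weekOnWeekChanges(weeklyTraffic);
  const latest = changes[changes.length - 1];
  const history = changes.slice(0, -1);
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return Math.abs(latest - mean) > thresholdStdDevs * Math.sqrt(variance);
}

const traffic = [10100, 9900, 10300, 10050, 10200, 9950, 12200]; // last week jumps ~20%
console.log(isSurprising(traffic)); // true: worth a look, whereas a 2% wiggle would not be
```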

3. Trusting our tools

Would you make a multi-million dollar business decision based on a number that your competitor could manipulate at will? Well, chances are you do, and the number can be found in Google Analytics. I’ve covered this extensively in other places, but there are some major problems with most analytics platforms around:

  • How easy they are to manipulate externally
  • How arbitrarily they group hits into sessions
  • How vulnerable they are to ad blockers
  • How they perform under sampling, and how obvious they make this

For example, did you know that, above a certain amount of traffic (~500,000 within the date range), the Google Analytics API v3 can heavily sample data whilst telling you that the data is unsampled? Neither did I, until we ran into it whilst building Distilled ODN.

Similar problems exist with many “Search Analytics” tools. My colleague Sam Nemzer has written a bunch about this — did you know that most rank tracking platforms report completely different rankings? Or how about the fact that the keywords grouped by Google (and thus tools like SEMRush and STAT, too) are not equivalent, and don’t necessarily have the volumes quoted?

It’s important to understand the strengths and weaknesses of tools that we use, so that we can at least know when they’re directionally accurate (as in, their insights guide you in the right direction), even if not perfectly accurate. All I can really recommend here is that skilling up in SEO (or any other digital channel) necessarily means understanding the mechanics behind your measurement platforms — which is why all new starts at Distilled end up learning how to do analytics audits.

One of the most common solutions to the root problem is combining multiple data sources, but…

4. Combining data sources

There are numerous platforms out there that will “defeat (not provided)” by bringing together data from two or more of:

  • Analytics
  • Search Console
  • AdWords
  • Rank tracking

The problems here are that, firstly, these platforms do not have equivalent definitions, and secondly, ironically, (not provided) tends to break them.

Let’s deal with definitions first, with an example — let’s look at a given landing page and channel combination:

  • In Search Console, these are reported as clicks, and can be vulnerable to heavy, invisible sampling when multiple dimensions (e.g. keyword and page) or filters are combined.
  • In Google Analytics, these are reported using last non-direct click, meaning that your organic traffic includes a bunch of direct sessions, time-outs that resumed mid-session, etc. That’s without getting into dark traffic, ad blockers, etc.
  • In AdWords, most reporting uses last AdWords click, and conversions may be defined differently. In addition, keyword volumes are bundled, as referenced above.
  • Rank tracking is location specific, and inconsistent, as referenced above.

Fine, though — it may not be precise, but you can at least get to some directionally useful data given these limitations. However, about that “(not provided)”…

Most of your landing pages get traffic from more than one keyword. It’s very likely that some of these keywords convert better than others, particularly if they are branded, meaning that even the most thorough click-through rate model isn’t going to help you. So how do you know which keywords are valuable?

The best answer is to generalize from AdWords data for those keywords, but it’s very unlikely that you have analytics data for all those combinations of keyword and landing page. Essentially, the tools that report on this make the very bold assumption that a given page converts identically for all keywords. Some are more transparent about this than others.

Again, this isn’t to say that those tools aren’t valuable — they just need to be understood carefully. The only way you could reliably fill in these blanks created by “not provided” would be to spend a ton on paid search to get decent volume, conversion rate, and bounce rate estimates for all your keywords, and even then, you’ve not fixed the inconsistent definitions issues.

Bonus peeve: Average rank

I still see this way too often. Three questions:

  1. Do you care more about losing rankings for ten very low volume queries (10 searches a month or less) than for one high volume query (millions plus)? If the answer isn’t “yes, I absolutely care more about the ten low-volume queries”, then this metric isn’t for you, and you should consider a visibility metric based on click through rate estimates.
  2. When you start ranking at 100 for a keyword you didn’t rank for before, does this make you unhappy? If the answer isn’t “yes, I hate ranking for new keywords,” then this metric isn’t for you — because that will lower your average rank. You could of course treat all non-ranking keywords as position 100, as some tools allow, but is a drop of 2 average rank positions really the best way to express that 1/50 of your landing pages have been de-indexed? Again, use a visibility metric, please.
  3. Do you like comparing your performance with your competitors? If the answer isn’t “no, of course not,” then this metric isn’t for you — your competitors may have more or fewer branded keywords or long-tail rankings, and these will skew the comparison. Again, use a visibility metric; a rough sketch of one follows this list.
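
For illustration, here’s what such a CTR-weighted visibility metric could look like, as a minimal TypeScript sketch. The click-through-rate estimates by position are placeholders, so substitute whatever CTR curve you trust:

```typescript
// Hypothetical CTR estimates by ranking position (position 1 is index 0).
const CTR_BY_POSITION = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01];

interface KeywordRanking {
  keyword: string;
  monthlySearchVolume: number;
  position: number | null; // null = not ranking at all
}

// Estimated monthly organic clicks, given volume and position.
function visibility(rankings: KeywordRanking[]): number {
  return rankings.reduce((total, r) => {
    if (r.position === null || r.position > CTR_BY_POSITION.length) return total;
    return total + r.monthlySearchVolume * CTR_BY_POSITION[r.position - 1];
  }, 0);
}

console.log(
  visibility([
    { keyword: 'sofa', monthlySearchVolume: 1_000_000, position: 9 },
    { keyword: 's-shaped couch', monthlySearchVolume: 10, position: 1 },
  ])
); // ≈ 20,003: the high-volume term dominates, as it should
```

Unlike average rank, a new ranking can never drag this number down, and a high-volume term counts for more than ten obscure ones.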

Conclusion

Hopefully, you’ve found this useful. To summarize the main takeaways:

  • Critically analyse correlations & case studies by seeing if you can explain them as coincidences, as reverse causation, as joint causation, through reference to a third mutually relevant factor, or through niche applicability.
  • Don’t look at changes in traffic without looking at the context — what would you have forecasted for this period, and with what margin of error?
  • Remember that the tools we use have limitations, and do your research on how that impacts the numbers they show. “How has this number been produced?” is an important component in “What does this number mean?”
  • If you end up combining data from multiple tools, remember to work out the relationship between them — treat this information as directional rather than precise.

Let me know what data analysis fallacies bug you, in the comments below.


Which of My Competitor’s Keywords Should (& Shouldn’t) I Target? – Whiteboard Friday

Posted by randfish

You don’t want to try to rank for every one of your competitors’ keywords. Like most things with SEO, it’s important to be strategic and intentional with your decisions. In today’s Whiteboard Friday, Rand shares his recommended process for understanding your funnel, identifying the right competitors to track, and prioritizing which of their keywords you ought to target.

Which of my competitors' keywords should I target?

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. So this week we’re chatting about your competitors’ keywords and which of those competitive keywords you might want to actually target versus not.

Many folks use tools, like SEMrush and Ahrefs and KeywordSpy and Spyfu and Moz’s Keyword Explorer, which now has this feature too, where they look at: What are the keywords that my competitors rank for, that I may be interested in? This is actually a pretty smart way to do keyword research. Not the only way, but a smart way to do it. But the challenge comes in when you start looking at your competitors’ keywords and then realizing actually which of these should I go after and in what priority order. In the world of competitive keywords, there’s actually a little bit of a difference from classic keyword research.

So here I’ve plugged in Hammer and Heels, which is a small, online furniture store that has some cool designer furniture, and Dania Furniture, which is a competitor of theirs — they’re local in the Seattle area, but carry sort of modern, Scandinavian furniture — and IndustrialHome.com, similar space. So all three of these are in a similar space, and you can see the sort of keywords that come back that one or more of these sites rank for. I put together difficulty, volume, and organic click-through rate, which are some of the metrics that you’ll find. You’ll find these metrics actually in most of the tools that I just mentioned.

Process:

So when I’m looking at this list, which ones do I want to actually go after and not, and how do I choose? Well, this is the process I would recommend.

I. Try and make sure you first understand your keyword to conversion funnel.

So if you’ve got a classic sort of funnel, you have people buying down here — this is a purchase — and you have people who search for particular keywords up here. If you understand which people you lose and which people actually make it through the buying process, that’s going to be very helpful in knowing which of these terms and phrases, and which types of terms and phrases, to actually go after, because in general, when you’re prioritizing competitive keywords, you probably don’t want to be going after keywords that send traffic but don’t turn into conversions, unless that’s actually your goal. If your goal is raw traffic only, maybe because you serve advertising or other things, or because you know that you can capture a lot of folks very well through retargeting, that’s fine. Maybe Hammer and Heels says, “Hey, we want the biggest traffic funnel we can get, because we know that with our retargeting campaigns, even if a keyword brings us someone who doesn’t convert, we can convert them later very successfully.” Fine. Go ahead.

II. Choose competitors that tend to target the same audience(s).

So the people you plug in here should be competitors that target the same audiences. Otherwise, your relevance and your conversion get really hard. For example, I could have used West Elm, which does generally modern furniture as well, but they’re very, very broad. They target just about everyone. I could have done Ethan Allen, which is sort of a very classic, old-school furniture maker. Probably a really different audience than these three websites. I could have done IKEA, which is sort of a low market brand for everybody. Again, not really a match. So when you’re going after conversion-focused keywords (assuming these folks are focused mostly on conversions or retargeting rather than raw traffic), my suggestion would be strongly to go after sites with the same audience as you.

If you’re having trouble figuring out who those people are, one suggestion is to check out a tool called SimilarWeb. It’s expensive, but very powerful. You can plug in a domain and see what other domains people are likely to visit in that same space and what has audience overlap.

III. The keyword selection process should follow some of these rules:

A. Are easiest first.

So I would go after the ones that I think I’m most likely to be able to rank for most easily. Why do I recommend that? Because it’s tough in SEO with a lot of campaigns to get budget and buy-in unless you can show progress early. So any time you can choose the easiest ones first, you’re going to be more successful. That’s low difficulty, high odds of success, high odds that you actually have the team needed to make the content necessary to rank. I wouldn’t go after competitive brands here.

B. Are similar to keywords you target that convert well now.

So if you understand this funnel well, you can use your AdWords campaign particularly well for this. So you look at your paid keywords and which ones send you highly converting traffic, boom. If you see that lighting is really successful for our furniture brand, “Oh, well look, glass globe chandelier, that’s got some nice volume. Let’s go after that because lighting already works for us.”

Of course, you want ones that fit your existing site structure. So if you say, “Oh, we’re going to have to make a blog for this, oh we need a news section, oh we need a different type of UI or UX experience before we can successfully target the content for this keyword,” I’d push that down a little further.

C. High volume, low difficulty, high organic click-through rate, or SERP features you can reach.

So basically, when you look at difficulty, that’s telling you how hard it will be to rank for this potential keyword. If I look in here and I see some 50s and 60s, but I actually see a good number in the 30s and 40s, I would think that glass globe chandelier, S-shaped couch, industrial home furniture, these are pretty approachable. That’s impressive stuff.

Volume, I want as high as I can get, but oftentimes high volume leads to very high difficulty.
Organic click-through rate percentage, this is essentially saying what percent of people click on the 10 blue link style, organic search results. Classic SEO will help get me there. However, if you see low numbers, like a 55% for this type of chair, you might take a look at those search results and see that a lot of images are taking up the other organic click-through, and you might say, “Hey, let’s go after image SEO as well.” So it’s not just organic click-through rate. You can also target SERP features.

D. Are brands you carry/serve, generally not competitors’ brand names.

Then last, but not least, I would urge you to go after brands when you carry and serve them, but not when you don’t. So if this Ekornes chair is something that your furniture store, Hammer and Heels, actually carries, great. But if it’s something that’s exclusive to Dania, I wouldn’t go after it. I would generally not go after competitors’ brand names or branded product names, with one exception, and I actually used this site to highlight this. Industrial Home Furniture is both a branded term, because it’s the name of this website — Industrial Home Furniture is their brand — and it’s also a generic. So in those cases, I would tell you, yes, it probably makes sense to go after a category like that.

If you follow these rules, you can generally use competitive intel on keywords to build up a really nice portfolio of targetable, high potential keywords that can bring you some serious SEO returns.

Look forward to your comments and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


How to Track Your Local SEO & SEM

Posted by nickpierno

If you asked me, I’d tell you that proper tracking is the single most important element in your local business digital marketing stack. I’d also tell you that even if you didn’t ask, apparently.

A decent tracking setup allows you to answer the most important questions about your marketing efforts. What’s working and what isn’t?

Many digital marketing strategies today still focus on traffic. Lots of agencies/developers/marketers will slap an Analytics tracking code on your site and call it a day. For most local businesses, though, traffic isn’t all that meaningful of a metric. And in many cases (e.g. Adwords & Facebook), more traffic just means more spending, without any real relationship to results.

What you really need your tracking setup to tell you is how many leads (AKA conversions) you’re getting, and from where. It also needs to do so quickly and easily, without you having to log into multiple accounts to piece everything together.

If you’re spending money or energy on SEO, Adwords, Facebook, or any other kind of digital traffic stream and you’re not measuring how many leads you get from each source, stop what you’re doing right now and make setting up a solid tracking plan your next priority.

This guide is intended to fill you in on all the basic elements you’ll need to assemble a simple, yet flexible and robust tracking setup.

Google Analytics

Google Analytics is at the center of virtually every good web tracking setup. There are other supplemental ways to collect web analytics (like Heap, Hotjar, Facebook Pixels, etc), but Google Analytics is the free, powerful, and omnipresent tool that virtually every website should use. It will be the foundation of our approach in this guide.

Analytics setup tips

Analytics is super easy to set up. Create (or sign into) a Google account, add your Account and Property (website), and install the tracking code in your website’s template.

Whatever happens, don’t let your agency or developer set up your Analytics property on their own Account. Agencies and developers: STOP DOING THIS! Create a separate Google/Gmail account and let this be the “owner” of a new Analytics Account, then share permission with the agency/developer’s account, the client’s personal Google account, and so on.

The “All Website Data” view will be created by default for a new property. If you’re going to add filters or make any other advanced changes, be sure to create and use a separate View, keeping the default view clean and pure.

Also be sure to set the appropriate currency and time zone in the “View Settings.” If you ever use Adwords, using the wrong currency setting will result in a major disagreement between Adwords and Analytics.

Goals

Once your basic Analytics setup is in place, you should add some goals. This is where the magic happens. Ideally, every business objective your website can achieve should be represented as a goal conversion. Conversions can come in many forms, but here are some of the most common ones:

  • Contact form submission
  • Quote request form submission
  • Phone call
  • Text message
  • Chat
  • Appointment booking
  • Newsletter signup
  • E-commerce purchase

How you slice up your goals will vary with your needs, but I generally try to group similar “types” of conversions into a single goal. If I have several different contact forms on a site (like a quick contact form in the sidebar, and a heftier one on the contact page), I might group those as a single goal. You can always dig deeper to see the specific breakdown, but it’s nice to keep goals as neat and tidy as possible.

To create a goal in Analytics:

  1. Navigate to the Admin screen.
  2. Under the appropriate View, select Goals and then + New Goal.
  3. You can either choose between a goal Template, or Custom. Most goals are easiest to set up choosing Custom.
  4. Give your goal a name (ex. Contact Form Submission) and choose a type. Most goals for local businesses will either be a Destination or an Event.

Pro tip: Analytics allows you to associate a dollar value to your goal conversions. If you can tie your goals to their actual value, it can be a powerful metric to measure performance with. A common way to determine the value of a goal is to take the average value of a sale and multiply it by the average closing rate of Internet leads. For example, if your average sale is worth $1,000, and you typically close 1/10 of leads, your goal value would be $100.
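
As a trivial TypeScript sketch, that arithmetic in code:

```typescript
// Goal value = average sale value × average close rate of Internet leads.
function goalValue(averageSaleValue: number, leadCloseRate: number): number {
  return averageSaleValue * leadCloseRate;
}

console.log(goalValue(1000, 0.1)); // 100, i.e. each goal conversion is worth roughly $100
```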

Form tracking

The simplest way to track form fills is to have the form redirect to a “Thank You” page upon submission. This is usually my preferred setup; it’s easy to configure, and I can use the Thank You page to recommend other services, articles, etc. on the site and potentially keep the user around. I also find a dedicated Thank You page to provide the best affirmation that the form submission actually went through.

Different forms can all use the same Thank You page, and pass along variables in the URL to distinguish themselves from each other so you don’t have to create a hundred different Thank You pages to track different forms or goals. Most decent form plugins for WordPress are capable of this. My favorite is Gravityforms. Contact Form 7 and Ninja Forms are also very popular (and free).

Another option is using event tracking. Event tracking allows you to track the click of a button or link (the submit button, in the case of a web form). This would circumvent the need for a thank you page if you don’t want to (or can’t) send the user elsewhere when they submit a form. It’s also handy for other, more advanced forms of tracking.
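
If you’d rather wire the event up yourself, here’s a minimal sketch, assuming the classic analytics.js snippet (which exposes a global ga() function) is installed. The form selector and the category/action/label values are made up, so match them to your own setup:

```typescript
// Fire a Google Analytics event when a (hypothetical) contact form is submitted.
declare const ga: (...args: unknown[]) => void; // provided by the analytics.js snippet

const contactForm = document.querySelector<HTMLFormElement>('#contact-form');

contactForm?.addEventListener('submit', () => {
  // Category / action / label: your Analytics Event goal should match these values.
  ga('send', 'event', 'Forms', 'submit', 'Contact form');
});
```

The corresponding goal in Analytics would then be an Event goal with Category equal to “Forms” and Action equal to “submit.”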

Here’s a handy plugin for Gravityforms that makes setting up event tracking a snap.

Once you’ve got your form redirecting to a Thank You page or generating an event, you just need to create a goal in Analytics with the corresponding value.

You can use Thank You pages or events in a similar manner to track appointment booking, web chats, newsletter signups, etc.

Call tracking

Many businesses and marketers have adopted form tracking, since it’s easy and free. That’s great. But for most businesses, it leaves a huge volume of web conversions untracked.

If you’re spending cash to generate traffic to your site, you could be hemorrhaging budget if you’re not collecting and attributing the phone call conversions from your website.

There are several solutions and approaches to call tracking. I use and recommend CallRail, which also seems to have emerged as the darling of the digital marketing community over the past few years thanks to its ease of use, great support, fair pricing, and focus on integration. Another option (so I don’t come across as completely biased) is CallTrackingMetrics.

You’ll want to make sure your call tracking platform allows for integration with Google Analytics and offers something called “dynamic number insertion.”

Dynamic number insertion uses JavaScript to detect your actual local phone number on your website and replace it with a tracking number when a user loads your page.

Dynamic insertion is especially important in the context of local SEO, since it allows you to keep your real, local number on your site, and maintain NAP consistency with the rest of your business’s citations. Assuming it’s implemented properly, Google will still see your real number when it crawls your site, but users will get a tracked number.

Basically, magic.
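
To make the “magic” a little more concrete, here’s a deliberately simplified, hypothetical sketch of the swap itself. Real platforms like CallRail handle this (plus the session and source attribution) with their own snippet, so treat this purely as illustration:

```typescript
// Swap the real, local number for a tracking number once the page loads.
const REAL_NUMBER = '(555) 123-4567';      // the number in your markup and citations (made up)
const TRACKING_NUMBER = '(555) 987-6543';  // the tracking number for this visitor/source (made up)

function insertTrackingNumber(): void {
  // Replace the visible number wherever it appears in the page text...
  document.querySelectorAll<HTMLElement>('.phone-number').forEach((el) => {
    el.textContent = (el.textContent ?? '').replace(REAL_NUMBER, TRACKING_NUMBER);
  });

  // ...and in any click-to-call links.
  document.querySelectorAll<HTMLAnchorElement>('a[href^="tel:"]').forEach((link) => {
    link.href = 'tel:+15559876543';
  });
}

document.addEventListener('DOMContentLoaded', insertTrackingNumber);
```

Because the swap happens client-side after the page loads, the raw HTML keeps your real, NAP-consistent number, which is exactly the point made above.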

There are a few ways to implement dynamic number insertion. For most businesses, one of these two approaches should fit the bill.

Number per source

With this approach, you’ll create a tracking number for each source you wish to track calls for. These sources might be:

  • Organic search traffic
  • Paid search traffic
  • Facebook referral traffic
  • Yelp referral traffic
  • Direct traffic
  • Vanity URL traffic (for visitors coming from an offline TV or radio ad, for example)

When someone arrives at your website from one of these predefined sources, the corresponding number will show in place of your real number, wherever it’s visible. If someone calls that number, an event will be passed to Analytics along with the source.

This approach isn’t perfect, but it’s a solid solution if your site gets large amounts of traffic (5k+ visits/day) and you want to keep call tracking costs low. It will do a solid job of answering the basic questions of how many calls your site generates and where they came from, but it comes with a few minor caveats:

  • Calls originating from sources you didn’t predefine will be missed.
  • Events sent to Analytics will create artificial sessions not tied to actual user sessions.
  • Call conversions coming from Adwords clicks won’t be attached to campaigns, ad groups, or keywords.

Some of these issues have more advanced workarounds. None of them are deal breakers… but you can avoid them completely with number pools — the awesomest call tracking method.

Number pools

“Keyword Pools,” as CallRail refers to them, are the killer app for call tracking. As long as your traffic doesn’t make this option prohibitively expensive (which won’t be a problem for most local business websites), this is the way to go.

In this approach, you create a pool with several numbers (8+ with CallRail). Each concurrent visitor on your site is assigned a different number, and if they call it, the conversion is attached to their session in Analytics, as well as their click in Adwords (if applicable). No more artificial sessions or disconnected conversions, and as long as you have enough numbers in your pool to cover your site’s traffic, you’ll capture all calls from your site, regardless of source. It’s also much quicker to set up than a number per source, and will even make you more attractive and better at sports!

You generally have to pay your call tracking provider for additional numbers, and you’ll need a number for each concurrent visitor to keep things running smoothly, so this is where massive amounts of traffic can start to get expensive. CallRail recommends you look at your average hourly traffic during peak times and include ¼ the tally as numbers in your pool. So if you have 30 visitors per hour on average, you might want ~8 numbers.
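
That rule of thumb as a one-liner, in a tiny TypeScript sketch:

```typescript
// CallRail's suggested pool size: roughly ¼ of average hourly traffic at peak, rounded up.
function suggestedPoolSize(avgVisitorsPerPeakHour: number): number {
  return Math.ceil(avgVisitorsPerPeakHour / 4);
}

console.log(suggestedPoolSize(30)); // 8
```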

Implementation

Once you’ve got your call tracking platform configured, you’ll need to implement some code on your site to allow the dynamic number insertion to work its magic. Most platforms will provide you with a code snippet and instructions for installation. If you use CallRail and WordPress, there’s a handy plugin to make things even simpler. Just install, connect, and go.

To get your calls recorded in Analytics, you’ll just need to enable that option from your call tracking service. With CallRail you simply enable the integration, add your domain, and calls will be sent to your Analytics account as Events. Just like with your form submissions, you can add these events as a goal. Usually it makes sense to add a single goal called “Phone Calls” and set your event conditions according to the output from your call tracking service. If you’re using CallRail, it will look like this:

Google Search Console

It’s easy to forget to set up Search Console (formerly Webmaster Tools), because most of the time it plays a backseat role in your digital marketing measurement. But miss it, and you’ll forego some fundamental technical SEO basics (country setting, XML sitemaps, robots.txt verification, crawl reports, etc.), and you’ll miss out on some handy keyword click data in the Search Analytics section. Search Console data can also be indispensable for diagnosing penalties and other problems down the road, should they ever pop up.

Make sure to connect your Search Console with your Analytics property, as well as your Adwords account.

With all the basics of your tracking setup in place, the next step is to bring your paid advertising data into the mix.

Google Adwords

Adwords is probably the single most convincing reason to get proper tracking in place. Without it, you can spend a lot of money on clicks without really knowing what you get out of it. Conversion data in Adwords is also absolutely critical in making informed optimizations to your campaign settings, ad text, keywords, and so on.

If you’d like some more of my rantings on conversions in Adwords and some other ways to get more out of your campaigns, check out this recent article :)

Getting your data flowing in all the right directions is simple, but often overlooked.

Linking with Analytics

First, make sure your Adwords and Analytics accounts are linked. Always make sure you have auto-tagging enabled on your Adwords account. Now all your Adwords data will show up in the Acquisition > Adwords area of Analytics. This is a good time to double-check that you have the currency correctly set in Analytics (Admin > View Settings); otherwise, your Adwords spend will be converted to the currency set in Analytics and record the wrong dollar values (and you can’t change data that’s already been imported).

Next, you’ll want to get those call and form conversions from Analytics into Adwords.

Importing conversions in Adwords

Some Adwords management companies/consultants might disagree, but I strongly advocate an Analytics-first approach to conversion tracking. You can get call and form conversions pulled directly into Adwords by installing a tracking code on your site. But don’t.

Instead, make sure all your conversions are set up as goals in Analytics, and then import them into Adwords. This allows Analytics to act as your one-stop-shop for reviewing your conversion data, while providing all the same access to that data inside Adwords.

Call extensions & call-only ads

This can throw some folks off. You will want to track call extensions natively within Adwords. These conversions are set up automatically when you create a call extension in Adwords and elect to use a Google call forwarding number with the default settings.

Don’t worry though, you can still get these conversions tracked in Analytics if you want to (I could make an argument either for or against). Simply create a single “offline” tracking number in your call tracking platform, and use that number as the destination for the Google forwarding number.

This also helps counteract one of the oddities of Google’s call forwarding system. Google will actually only start showing the forwarding number on desktop ads after they have received a certain (seemingly arbitrary) minimum number of clicks per week. As a result, some calls are tracked and some aren’t — especially on smaller campaigns. With this little trick, Analytics will show all the calls originating from your ads — not just ones that take place once you’ve paid Google enough each week.

Adwords might give you a hard time for using a number in your call extensions that isn’t on your website. If you encounter issues with getting your number verified for use as a call extension, just make sure you have linked your Search Console to your Adwords account (as indicated above).

Now you’ve got Analytics and Adwords all synced up, and your tracking regimen is looking pretty gnarly! There are a few other cool tools you can use to take full advantage of your sweet setup.

Google Tag Manager

If you’re finding yourself putting a lot of code snippets on your site (web chat, Analytics, call tracking, Adwords, Facebook Pixels, etc), Google Tag Manager is a fantastic tool for managing them all from one spot. You can also do all sorts of advanced slicing and dicing.

GTM is basically a container that you put all your snippets in, and then you put a single GTM snippet on your site. Once installed, you never need to go back to your site’s code to make changes to your snippets. You can manage them all from the GTM interface in a user-friendly, version-controlled environment.

Don’t bother if you just need Analytics on your site (and are using the CallRail plugin). But for more robust needs, it’s well worth considering for its sheer power and simplicity.

Here’s a great primer on making use of Google Tag Manager.

UTM tracking URLs & Google Campaign URL Builder

Once you’ve got conversion data occupying all your waking thoughts, you might want to take things a step further. Perhaps you want to track traffic and leads that come from an offline advertisement, a business card, an email signature, etc. You can build tracking URLs that include UTM parameters (campaign, source, and medium), so that when visitors come to your site from a certain place, you can tell where that place was!

Once you know how to build these URLs, you don’t really need a tool, but Google’s Campaign URL Builder makes quick enough work of it that it’s bound to earn a spot in your browser’s bookmarks bar.

Pro tip: Use a tracking URL on your Google My Business listing to help distinguish traffic/conversions coming in from your listing vs traffic coming in from the organic search results. I’d recommend using:

Source: google
Medium: organic
Campaign name: gmb-listing (or something)

This way your GMB traffic still shows up in Analytics as normal organic traffic, but you can drill down to the gmb-listing campaign to see its specific performance.
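
If you’d rather script it than click through the Campaign URL Builder, the same tracking URL can be assembled with the standard URL API. A small TypeScript sketch (example.com is a placeholder, of course):

```typescript
// Append UTM parameters to a landing page URL.
function buildUtmUrl(base: string, source: string, medium: string, campaign: string): string {
  const url = new URL(base);
  url.searchParams.set('utm_source', source);
  url.searchParams.set('utm_medium', medium);
  url.searchParams.set('utm_campaign', campaign);
  return url.toString();
}

console.log(buildUtmUrl('https://www.example.com/', 'google', 'organic', 'gmb-listing'));
// https://www.example.com/?utm_source=google&utm_medium=organic&utm_campaign=gmb-listing
```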

Bonus pro tip: Use a vanity domain or a short URL on print materials or offline ads, and point it to a tracking URL to measure their performance in Analytics.

Rank tracking

Whaaat? Rank tracking is a dirty word to conversion tracking purists, isn’t it?

Nah. It’s true that rank tracking is a poor primary metric for your digital marketing efforts, but it can be very helpful as a supplemental metric and for helping to diagnose changes in traffic, as Darren Shaw explored here.

For local businesses, we think our Local Rank Tracker is a pretty darn good tool for the job.

Google My Business Insights

Your GMB listing is a foundational piece of your local SEO infrastructure, and GMB Insights offer some meaningful data (impressions and clicks for your listing, mostly). It also tries to tell you how many calls your listing generates for you, but it comes up a bit short since it relies on “tel:” links instead of tracking numbers. It will tell you how many people clicked on your phone number, but not how many actually made the call. It also won’t give you any insights into calls coming from desktop users.

There’s a great workaround though! It just might freak you out a bit…

Fire up your call tracking platform once more, create an “offline” number, and use it as your “primary number” on your GMB listing. Don’t panic. You can preserve your NAP consistency by demoting your real local number to an “additional number” slot on your GMB listing.

I don’t consider this a necessary step, because you’re probably not pointing your paid clicks to your GMB listing. However, combined with a tracking URL pointing to your website, you can now fully measure the performance of Google My Business for your business!

Disclaimer: I believe that this method is totally safe, and I’m using it myself in several instances, but I can’t say with absolute certainty that it won’t impact your rankings. Whitespark is currently testing this out on a larger scale, and we’ll share our findings once they’re assembled!

Taking it all in

So now you’ve assembled a lean, mean tracking machine. You’re already feeling 10 years younger, and everyone pays attention when you enter the room. But what can you do with all this power?

Here are a few ways I like to soak up this beautiful data.

Pop into Analytics

Since we’ve centralized all our tracking in Analytics, we can answer pretty much any performance questions we have within a few simple clicks.

  • How many calls and form fills did we get last month from our organic rankings?
  • How does that compare to the month before? Last year?
  • How many paid conversions are we getting? How much are we paying on average for them?
  • Are we doing anything expensive that isn’t generating many leads?
  • Does our Facebook page generate any leads on our website?

There are a billion and seven ways to look at your Analytics data, but I do most of my ogling from Acquisition > All Traffic > Channels. Here you get a great overview of your traffic and conversions sliced up by channels (Organic Search, Paid Search, Direct, Referral, etc). You can obviously adjust date ranges, compare to past date ranges, and view conversion metrics individually or as a whole. For me, this is Analytics home base.

Acquisition > All Traffic > Source/Medium can be equally interesting, especially if you’ve made good use of tracking URLs.

Make some sweet SEO reports

I can populate almost my entire standard SEO client report from the Acquisition section of Analytics. Making conversions the star of the show really helps to keep clients engaged in their monthly reporting.

Google Analytics dashboards

Google’s Dashboards inside Analytics provide a great way to put the most important metrics together on a single screen. They’re easy to use, but I’ve always found them a bit limiting. Fortunately for data junkies, Google has recently released its next generation data visualization product…

Google Data Studio

This is pretty awesome. It’s very flexible, powerful, and user-friendly. I’d recommend skipping the Analytics Dashboards and going straight to Data Studio.

It will allow you to beautifully dashboard-ify your data from Analytics, Adwords, YouTube, DoubleClick, and even custom databases or spreadsheets. All the data is “live” and dynamic. Users can even change data sources and date ranges on the fly! Bosses love it, clients love it, and marketers love it… provided everything is performing really well ;)

Supermetrics

If you want to get really fancy, and build your own fully custom dashboard, develop some truly bespoke analysis tools, or automate your reporting regimen, check out Supermetrics. It allows you to pull data from just about any source into Google Sheets or Excel. From there, your only limitation is your mastery of spreadsheet-fu and your imagination.

TL;DR

So that’s a lot of stuff. If you’d like to skip the more nuanced explanations, pro tips, and bad jokes, here’s the gist in point form:

  • Tracking your digital marketing is super important.
  • Don’t just track traffic. Tracking conversions is critical.
  • Use Google Analytics. Don’t let your agency use their own account.
  • Set up goals for every type of lead (forms, calls, chats, bookings, etc).
  • Track forms with destinations (thank you pages) or events.
  • Track your calls, probably using CallRail.
  • Use “number per source” if you have a huge volume of traffic; otherwise, use number pools (AKA keyword pools). Pools are better.
  • Set up Search Console and link it to your Analytics and Adwords accounts.
  • Link Adwords with Analytics.
  • Import Analytics conversions into Adwords instead of using Adwords’ native conversion tracking snippet…
  • …except for call extensions. Track those within Adwords AND in Analytics (if you want to) by using an “offline” tracking number as the destination for your Google forwarding numbers.
  • Use Google Tag Manager if you have more than a couple third-party scripts to run on your site (web chat, Analytics, call tracking, Facebook Pixels etc).
  • Use Google Campaign URL Builder to create tracked URLs for tracking visitors from various sources like offline advertising, email signatures, etc.
  • Use a tracked URL on your GMB listing.
  • Use a tracked number as your “primary” GMB listing number (if you do this, make sure you put your real local number as a “secondary” number). Note: We think this is safe, but we don’t have quite enough data to say so unequivocally. YMMV.
  • Use vanity domains or short URLs that point to your tracking URLs to put on print materials, TV spots, etc.
  • Track your rankings like a boss.
  • Acquisition > All Traffic > Channels is your new Analytics home base.
  • Consider making some Google Analytics Dashboards… and then don’t, because Google Data Studio is way better. So use that.
  • Check out Supermetrics if you want to get really hardcore.
  • Don’t let your dreams be dreams.

If you’re new to tracking your digital marketing, I hope this provides a helpful starting point, and helps cut through some of the confusion and uncertainty about how to best get set up.

If you’re a conversion veteran, I hope there are a few new or alternative ideas here that you can use to improve your setup.

If you’ve got anything to add, correct, or ask, leave a comment!


Going Beyond Google: Are Search Engines Ready for JavaScript Crawling & Indexing?

Posted by goralewicz

I recently published the results of my JavaScript SEO experiment where I checked which JavaScript frameworks are properly crawled and indexed by Google. The results were shocking; it turns out Google has a number of problems when crawling and indexing JavaScript-rich websites.

Google managed to index only a few out of multiple JavaScript frameworks tested. And as I proved, indexing content doesn’t always mean crawling JavaScript-generated links.

This got me thinking. If Google is having problems with JavaScript crawling and indexing, how are Google’s smaller competitors dealing with this problem? Is JavaScript going to lead you to full de-indexing in most search engines?

If you decide to deploy a client-rendered website (meaning a browser or Googlebot needs to process the JavaScript before seeing the HTML), you’re not only risking problems with your Google rankings — you may completely kill your chances at ranking in all the other search engines out there.

Google + JavaScript SEO experiment

To see how search engines other than Google deal with JavaScript crawling and indexing, we used our experiment website, http://jsseo.expert, originally built to check how Googlebot crawls and indexes content generated by JavaScript (and JavaScript frameworks).

The experiment was quite simple: http://jsseo.expert has subpages whose content is generated by different JavaScript frameworks. If you disable JavaScript, that content isn’t visible. For example, if you go to http://jsseo.expert/angular2/, all the content within the red box is generated by Angular 2. If that content isn’t indexed in, say, Yahoo, we know that Yahoo’s indexer didn’t process the JavaScript.
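
If you’d like to run the same kind of check yourself, one quick way is to fetch the raw, un-rendered HTML and look for a phrase that only the framework injects. Here’s a minimal TypeScript sketch, assuming Node 18+ for the built-in fetch; the phrase below is a placeholder, not the actual text on the experiment page.

    // Minimal sketch: is a phrase present in the raw HTML, before any JavaScript runs?
    // If it isn't, only crawlers that execute JavaScript will ever see that content.
    async function isPhraseInRawHtml(url: string, phrase: string): Promise<boolean> {
      const response = await fetch(url); // returns raw HTML; no JavaScript is executed
      const html = await response.text();
      return html.includes(phrase);
    }

    // Placeholder phrase standing in for text the framework renders client-side.
    isPhraseInRawHtml("http://jsseo.expert/angular2/", "text inside the red box")
      .then((found) =>
        console.log(found ? "Present in raw HTML" : "Only visible after JavaScript runs")
      );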

Here are the results:

As you can see, Google and Ask are the only search engines to properly index JavaScript-generated content. Bing, Yahoo, AOL, DuckDuckGo, and Yandex are completely JavaScript-blind and won’t see your content if it isn’t HTML.

The next step: Can other search engines index JavaScript?

Most SEOs only cover JavaScript crawling and indexing issues when talking about Google. As you can see, the problem is much more complex. When you launch a client-rendered JavaScript-rich website (JavaScript is processed by the browser/crawler to “build” HTML), you can be 100% sure that it’s only going to be indexed and ranked in Google and Ask. Unfortunately, Google and Ask cover only ~64% of the whole search engine market, according to statista.com.

This means that your new, shiny, JavaScript-rich website can cost you ~36% of your website’s visibility on all search engines.

Let’s start with Yahoo, Bing, and AOL, which are responsible for 35% of search queries in the US.

Yahoo, Bing, and AOL

Even though Yahoo and AOL were here long before Google, they’ve clearly fallen behind its powerful algorithm and don’t invest in crawling and indexing as much as Google does. One likely reason is that crawling and indexing the web is expensive relative to the share of searches these engines actually serve.

Google can freely invest millions of dollars in growing their computing power without worrying as much about return on investment, whereas Bing, AOL, and Ask only have a small percentage of the search market.

However, Microsoft-owned Bing isn’t out of the running. Its growth has been quite aggressive over the last 8 years:

Unfortunately, we can’t say the same about one of the market pioneers: AOL. Do you remember the days before Google? This video will surely bring back some memories from a simpler time.

If you want to learn more about search engine history, I highly recommend watching Marcus Tandler’s spectacular TEDx talk.

Ask.com

What about Ask.com? How is it possible that Ask, with less than 1% of the market, can invest in crawling and indexing JavaScript? It makes me suspect that the Ask network is powered by Google’s algorithm and crawlers, which is all the more interesting given Ask’s apparent aversion towards Google. There was already speculation about Ask’s relationship with Google after Google Penguin in 2012, and these results strongly suggest that Ask’s crawling is using Google’s technology.

DuckDuckGo and Yandex

Both DuckDuckGo and Yandex had no problem indexing all the URLs within http://jsseo.expert, but unfortunately, the only content that was indexed properly was the 100% HTML page (http://jsseo.expert/html/).

Baidu

Despite my best efforts, I didn’t manage to get http://jsseo.expert indexed in Baidu.com; it turns out you need a mainland China phone number to submit a site there. I don’t have any previous experience with Baidu, so any and all help with indexing our experimental website would be appreciated. As soon as I succeed, I will update this article with the Baidu.com results.

Going beyond the search engines

What if you don’t really care about search engines other than Google? Even if your target market is heavily dominated by Google, JavaScript crawling and indexing is still in an early stage, as my JavaScript SEO experiment documented.

Additionally, even if your content is crawled and indexed properly, there is evidence that JavaScript reliance can affect your rankings. Will Critchlow saw a significant traffic improvement after shifting from JavaScript-driven pages to pages that don’t rely on JavaScript for their content.

Is there a JavaScript SEO silver bullet?

There is no search engine that can understand and process JavaScript at the level our modern browsers can. Even so, JavaScript isn’t inherently bad for SEO. JavaScript is awesome, but just like SEO, it requires experience and close attention to best practices.

If you want to enjoy all the perks of JavaScript without problems like Hulu.com’s JavaScript SEO issues, look into isomorphic JavaScript. It renders each page on the server first and then hands it over to the browser, so you can build dynamic, beautiful websites without sacrificing crawlability.
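
To make that concrete, here’s a minimal server-side rendering sketch using React and Express in TypeScript. It isn’t the setup used by any site mentioned in this article; the component, route, and bundle names are illustrative assumptions.

    // Minimal isomorphic/SSR sketch: render the same React component on the server,
    // so crawlers that don't execute JavaScript still receive the content as HTML.
    import express from "express";
    import React from "react";
    import { renderToString } from "react-dom/server";

    // Hypothetical component that would normally render in the browser.
    function ProductList(props: { products: string[] }) {
      return React.createElement(
        "ul",
        null,
        props.products.map((p) => React.createElement("li", { key: p }, p))
      );
    }

    const app = express();

    app.get("/", (_req, res) => {
      const markup = renderToString(
        React.createElement(ProductList, { products: ["Alpha", "Beta", "Gamma"] })
      );
      // The client bundle later "hydrates" #root to make the page interactive.
      res.send(
        `<!doctype html><html><head><title>SSR sketch</title></head>` +
          `<body><div id="root">${markup}</div>` +
          `<script src="/client-bundle.js"></script></body></html>`
      );
    });

    app.listen(3000);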

If you’ve already developed a client-rendered website and can’t go back to the drawing board, you can always use pre-rendering services or enable server-side rendering. They often aren’t ideal solutions, but they can definitely mitigate the JavaScript crawling and indexing problem until you can put a better long-term fix in place.
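
As a rough illustration of how pre-rendering is usually wired up, here’s a hedged TypeScript/Express sketch of dynamic rendering: known crawlers get a pre-built HTML snapshot, everyone else gets the normal client-rendered shell. The bot pattern and file paths are assumptions made for the example, and the snapshots themselves would come from a headless browser or a pre-rendering service.

    // Minimal dynamic-rendering sketch: serve a pre-rendered snapshot to bots only.
    import express from "express";
    import { readFileSync } from "fs";

    const BOT_PATTERN = /googlebot|bingbot|yandex|duckduckbot|baiduspider|slurp/i;

    const app = express();

    app.get("*", (req, res) => {
      const userAgent = req.get("user-agent") ?? "";
      if (BOT_PATTERN.test(userAgent)) {
        res.send(readFileSync("snapshots/index.html", "utf8")); // crawler: static HTML snapshot
      } else {
        res.send(readFileSync("public/index.html", "utf8")); // human: client-rendered shell
      }
    });

    app.listen(3000);

The key design point is that both audiences ultimately see the same content; this is a stopgap for crawlability, not a way to show crawlers something different.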

Regardless of the search engine, yet again we come back to testing and experimenting as a core component of technical SEO.

The future of JavaScript SEO

I highly recommend you follow along with how http://jsseo.expert/ is indexed in Google and other search engines. Even if some of the other search engines are a little behind Google, they’ll need to improve how they deal with JavaScript-rich websites to meet the exponentially growing demand for what JavaScript frameworks offer, both to developers and end users.

For now, stick to HTML & CSS on your front-end. :)

