Tag Archive | "Tests"

5 Ways You Might Mess up When Running SEO Split Tests

Posted by sam.nemzer

SEO split testing is a relatively new concept, but it’s becoming an essential tool for any SEO who wants to call themselves data-driven. People have been familiar with A/B testing in the context of Conversion Rate Optimisation (CRO) for a long time, and applying those concepts to SEO is a logical next step if you want to be confident that what you’re spending your time on is actually going to lead to more traffic.

At Distilled, we’ve been in the fortunate position of working with our own SEO A/B testing tool, which we’ve been using to test SEO recommendations for the last three years. Throughout this time, we’ve been able to hone our technique in terms of how best to set up and measure SEO split tests.

In this post, I’ll outline five mistakes that we’ve fallen victim to over the course of three years of running SEO split tests, and that we commonly see others making.

What is SEO split testing?

Before diving into how it’s done wrong (and right), it’s worth stopping for a minute to explain what SEO split testing actually is.

CRO testing is the obvious point of comparison. In a CRO test, you’re generally comparing a control and variant version of a page (or group of pages) to see which performs better in terms of conversion. You do this by assigning your users into different buckets, and showing each bucket a different version of the website.

In SEO split testing, we’re trying to ascertain which version of a page will perform better in terms of organic search traffic. If we were to take a CRO-like approach of bucketing users, we would not be able to test the effect, as there’s only one version of Googlebot, which would only ever see one version of the page.

To get around this, SEO split tests bucket pages instead. We take a section of a website in which all of the pages follow a similar template (for example the product pages on an eCommerce website), and make a change to half the pages in that section (for all users). That way we can measure the traffic impact of the change across the variant pages, compared to a forecast based on the performance of the control pages.
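The arithmetic behind that comparison can be sketched in a few lines. All numbers below are hypothetical, and the real ODN forecast is more sophisticated than a simple trend ratio, but the core idea is the same: use the control bucket's trend to estimate what the variant bucket would have done without the change.

```python
# A minimal sketch of the page-bucket measurement, with made-up session counts.
# Pages are split into control and variant buckets; the control bucket's
# week-over-week change forecasts what the variant bucket would have done
# if it had been left alone.

control_before, control_after = 52_000, 54_600   # weekly organic sessions
variant_before, variant_after = 50_000, 55_500

# Forecast the variant bucket by applying the control bucket's trend.
control_trend = control_after / control_before          # 1.05
variant_forecast = variant_before * control_trend       # 52,500

uplift = variant_after / variant_forecast - 1
print(f"Estimated uplift: {uplift:+.1%}")               # roughly +5.7%
```

Because the forecast moves with the control pages, site-wide effects (seasonality, algorithm updates) are factored out of the uplift estimate.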

For more details, you can read my colleague Craig Bradford’s post here.

Common SEO Split Testing Mistakes

1. Not leaving split tests running for long enough

As SEOs, we know that it can take a while for the changes we make to take effect in the rankings. When we run an SEO split test, this is borne out in the data. As you can see in the graph below, it takes a week or two for the variant pages (in black) to start outstripping the forecast based on the control pages (in blue).


A typical SEO split test — it often takes a couple of weeks for the uplift to show.

After a week or so, it’s tempting to panic that the test might not be making a difference and call it off as a neutral result. However, we’ve seen over and over again that things often change after a week or two, so don’t call it too soon!

The other factor to bear in mind here is that the longer you leave it after this initial flat period, the more likely it is that your results will be significant, so you’ll have more certainty in the result you find.

A note for anyone reading with a CRO background — I imagine you’re shouting at your screen that it’s not OK to leave a test running longer to try and reach significance and that you must pre-determine your end date in order for the results to be valid. You’d be correct for a CRO test measured using standard statistical models. In the case of SEO split tests, we measure significance using Bayesian statistical methods, meaning that it’s valid to keep a test running until it reaches significance and you can be confident in your results at that point.
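To make the Bayesian framing concrete, here is a deliberately simplified toy model. This is not the ODN's actual statistical model; it just illustrates the kind of quantity a Bayesian approach reports: given some noisy daily uplift estimates (the numbers below are invented), what is the probability that the true uplift is positive? That probability remains meaningful however often you check it, which is why monitoring until significance is valid here.

```python
# Illustrative only: NOT the ODN's real model. With a flat prior and normal
# noise, the posterior for the true uplift is approximately
# Normal(sample mean, standard error), so P(uplift > 0) is one erf away.
import math
import statistics

daily_uplift = [0.01, -0.02, 0.03, 0.05, 0.04, 0.06, 0.05, 0.07]  # made up

mean = statistics.mean(daily_uplift)
sem = statistics.stdev(daily_uplift) / math.sqrt(len(daily_uplift))

z = mean / sem
prob_positive = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # posterior P(uplift > 0)
print(f"P(uplift > 0) = {prob_positive:.1%}")
```

As more days of data arrive, the standard error shrinks and the posterior probability moves toward 0% or 100%, which is the Bayesian version of "leave it running until you are confident."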

2. Testing groups of pages that don’t have enough traffic (or are dominated by a small number of pages)

The sites we’ve been able to run split tests on using Distilled ODN have ranged in traffic levels enormously, as have the site sections on which we’ve attempted to run split tests. Over the course of our experience with SEO split testing, we’ve generated a rule of thumb: if a site section of similar pages doesn’t receive at least 1,000 organic sessions per day in total, it’s going to be very hard to measure any uplift from your split test. If you have less traffic than that to the pages you’re testing, any signal of a positive or negative test result would be overtaken by the level of uncertainty involved.

Beyond 1,000 sessions per day, in general, the more traffic you have, the smaller the uplift you can detect. So far, the smallest effect size we’ve managed to measure with statistical confidence is a few percent.

On top of having a good amount of traffic in your site section, you need to make sure that the traffic is well distributed across a large number of pages. If more than 50 percent of the site section’s organic traffic goes to three or four pages, your test is vulnerable to fluctuations in those pages’ performance that have nothing to do with the test. This may lead you to conclude that the change you are testing is having an effect when the result is actually being swayed by an irrelevant factor. By having the traffic well distributed across the site section, you ensure that these page-specific fluctuations even themselves out, and you can be more confident that any effect you measure is genuine.

3. Bucketing pages arbitrarily

In CRO tests, the best practice is to assign every user randomly to either the control or the variant group. This works to ensure that both groups are essentially identical, because of the large number of users that tends to be involved.

In an SEO split test, we need to apply more nuance to this approach. For site sections with a very large number of pages, where the traffic is well distributed across them, the purely random approach may well lead to a fair bucketing, but most websites have some pages that get more traffic, and some that get less. As well as that, some pages may have different trends and spikes in traffic, especially if they serve a particular seasonal purpose.

In order to ensure that the control and variant groups of pages are statistically similar, we create them in such a way that they have:

  • Similar total traffic levels
  • Similar distributions of traffic between pages within them
  • Similar trends in traffic over time
  • Similarity in a range of other statistical measures
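A simplified version of such a bucketing can be sketched as follows. The ODN's actual procedure is more sophisticated (it also balances trends over time and other statistical measures); this toy version only balances totals and distribution, by ranking pages by traffic and dealing them out in an A-B-B-A "snake" pattern. The page URLs and the Zipf-like traffic numbers are invented for illustration:

```python
# Simplified bucketing sketch (not the ODN's real algorithm): rank pages by
# traffic, then deal them out A-B-B-A so that each pair of buckets gets one
# higher-traffic and one lower-traffic page from every quartet.

pages = {f"/product/{i}": 10_000 / (i + 1) for i in range(200)}  # fake sessions/day

ranked = sorted(pages, key=pages.get, reverse=True)
control, variant = [], []
for i, page in enumerate(ranked):
    # Ranks 0, 3, 4, 7, ... go to control; 1, 2, 5, 6, ... go to variant.
    (control if i % 4 in (0, 3) else variant).append(page)

c_total = sum(pages[p] for p in control)
v_total = sum(pages[p] for p in variant)
print(f"Control: {c_total:,.0f} sessions/day, Variant: {v_total:,.0f} sessions/day")
```

Notice that even this careful dealing leaves the buckets a few percent apart here, because the single biggest page dominates its quartet. That is the bucketing-side symptom of mistake #2 above: heavily skewed sections are hard to split fairly.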

4. Running SEO split tests using JavaScript

For a lot of websites, it’s very hard to make changes, and harder still to split test them. A workaround that a lot of sites use (and that I have recommended in the past), is to deploy changes using a JavaScript-based tool such as Google Tag Manager.

Aside from the fact that we’ve seen pages that rely on JavaScript perform worse overall, another issue with this is that Google doesn’t consistently pick up changes that are implemented through JavaScript. There are two primary reasons for this:

  • Crawling, indexing, and rendering is a multi-phase process: once Googlebot has discovered a page, it first indexes the content in the raw HTML, and there is often a delay before any content or changes that rely on JavaScript are considered.
  • Even when Googlebot has rendered the JavaScript version of the page, it has a cut-off of five seconds after which it will stop processing any JavaScript. A lot of JavaScript changes to web pages, especially those that rely on third-party tools and plugins, take longer than five seconds, which means that Google has stopped paying attention before the changes have had a chance to take effect.

This can lead to inconsistency within tests. For example, if you are changing the format of your title tags using a JavaScript plugin, it may be that only a small number of your variant pages have that change picked up by Google. This means that whatever change you think you’re testing doesn’t have a chance of demonstrating a significant effect.
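One rough pre-flight check is to look at what the server actually sends before any JavaScript runs. The sketch below, using only Python's standard library, extracts the title from raw HTML; the sample markup is hypothetical. If the raw HTML still shows the old title, your tested change is JavaScript-only and Google may never pick it up consistently:

```python
# Check whether a tested change (here, a title-tag change) exists in the raw,
# pre-JavaScript HTML that Googlebot indexes first. Sample HTML is made up;
# in practice you would fetch the page with JavaScript disabled.
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text inside the <title> element of raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

raw_html = "<html><head><title>Old Title</title></head><body></body></html>"

parser = TitleExtractor()
parser.feed(raw_html)
print("Raw HTML title:", parser.title)
# If this still shows the pre-test title, the change depends on JavaScript
# that Googlebot may never execute within its rendering budget.
```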

5. Doing pre/post tests instead of A/B tests

When people talk colloquially about SEO testing, often what they mean is making a change to an individual page (or across an entire site) and seeing whether their traffic or rankings improve. This is not a split test. If you’re just making a change and seeing what happens, your analysis is vulnerable to any external factors, including:

  • Seasonal variations
  • Algorithm updates
  • Competitor activity
  • Your site gaining or losing backlinks
  • Any other changes you make to your site during this time

The only way to really know if a change has an effect is to run a proper split test — this is the reason we created the ODN in the first place. In order to account for the above external factors, it’s essential to use a control group of pages from which you can model the expected performance of the pages you’re changing, and know for sure that your change is what’s having an effect.
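A toy example (all numbers invented) makes the danger concrete. Suppose a seasonal lift raises traffic 10 percent across the whole site during your test window. A pre/post comparison credits your change with that lift; a control group cancels it out:

```python
# Why pre/post comparisons mislead: a site-wide seasonal lift of 10%
# hits every page, changed or not. (All numbers are hypothetical.)

changed_before, changed_after = 10_000, 11_000   # weekly sessions, changed pages
control_before, control_after = 40_000, 44_000   # weekly sessions, untouched pages

# Pre/post view: looks like the change "worked".
naive_uplift = changed_after / changed_before - 1          # +10%

# Control-adjusted view: the control pages grew by the same 10%,
# so the expected traffic already includes the seasonal lift.
expected = changed_before * (control_after / control_before)
true_uplift = changed_after / expected - 1                 # 0%

print(f"Pre/post says {naive_uplift:+.0%}; control-adjusted says {true_uplift:+.0%}")
```

The same cancellation protects you against algorithm updates, competitor moves, and anything else that hits control and variant pages alike.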

And now, over to you! I’d love to hear what you think — what experiences have you had with split testing? And what have you learned? Tell me in the comments below! 

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

Search Buzz Video Recap: Widespread Google Indexing Issues, Algorithm Shifts, Icon Tests, AMP Changes & Google+ Dead

Today was an interesting day: a possible Google bug is dropping pages out of the Google index like flies. No word from Google yet on what the issue is. There was another potential Google update on March 29th…


Search Engine Roundtable

Posted in IM News | Comments Off

Google Image Search Tests New Preview Screen Again

Google is once again testing a new user interface for the image search preview window. This is similar to previous tests, but this one takes the right-hand window look and keeps the black background interface rather than the white one.


Search Engine Roundtable

Posted in IM News | Comments Off

Not just for auto anymore: Google tests giant image search ads in new verticals

The ads feature a carousel of images.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in IM News | Comments Off

Google Tests Minimalist Local Pack Again

We’ve seen it before: a more subtle, minimalist layout and design for the Google local pack in the mobile search results. Here is another screenshot of it, this time from Mike Blumenthal.


Search Engine Roundtable

Posted in IM News | Comments Off

Google Tests Navigation Slider For Mobile Search Results

Google is testing a scroll navigation slider for the mobile search results. Valentin Pletzer spotted this one, sharing screenshots of the slider in action. He told me the navigation tool slides in from the right once you begin scrolling and disappears after a few seconds of inactivity.


Search Engine Roundtable

Posted in IM News | Comments Off

SearchCap: Google tests AMP labels, AdWords personalization & understanding user intent

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in IM News | Comments Off

Google Tests New User Interface In Search For Google Posts

Google seems to be testing multiple variations of how Google Posts show up in the Google search results recently. In the past week or so…


Search Engine Roundtable

Posted in IM News | Comments Off

Digital Marketing News: CMO Diversity Shortfalls, Goo.gl Retirement, Facebook’s New A/B Tests

Brands Fail to Meet the ANA’s Diversity Goals, Too
Progress has been strong in CMO gender balance while ethnic diversity continues to face significant shortfalls, according to new research from the Association of National Advertisers and its inaugural CMO scorecard. While 45 percent of top marketer positions examined in the ANA member data were female, only 13 percent were people of color. AdWeek

Instagram Makes Stories Advertising Easier with Automatic Full-screen Support
Instagram advertisers can now have square or landscape ad photos or videos automatically reformatted for full-screen utilization, one of several new features the firm recently announced as part of an effort to improve Instagram Stories. Marketing Land

YouTube Launches Reach-Based Pricing for User-Skippable Ads
YouTube advertisers can now buy spots skippable after five seconds, priced on a CPM basis, the firm has announced. With TrueView for Reach, YouTube now offers an ad option alongside its in-stream non-skippable “bumper” ads and its traditional TrueView ads. Variety

Goo.gl Shutting Down – These are Your Options
Google’s popular URL shortener goo.gl is being phased out over the next year, with the Internet giant supporting a move to Firebase Dynamic Links (FDL), its newer take on short and persistent links. Google has noted, however, that existing goo.gl links will continue to function. Search Engine Journal

Advertisers on Facebook Have Some New Ways to Conduct A/B Tests
Facebook advertisers can now use split A/B tests in its Ads Manager’s Quick Creation system, the company announced Monday, a new option to augment the creative split testing it launched in October. The option to easily duplicate split tests while keeping them separate from existing settings was also among several new features Facebook rolled out this week. AdWeek

Snapchat Lays Off 100 From Advertising Division in Department Restructure
Three percent of Snapchat’s workforce has been cut in layoffs, with 100 workers in the firm’s advertising department being the latest affected in a series of downsizing that has followed lukewarm quarterly earnings results, Snapchat announced this week. AdWeek


Facebook Will No Longer Allow Third-Party Data for Targeting Ads
Facebook has begun disabling its popular Partner Categories, as part of a continued recent effort to combat potentially vulnerable advertising practices, the company has announced. The Verge

Twitter’s Timestamps Lets You Share Live Videos from Any Specific Moment
Twitter has announced a new Timestamps feature that lets users share live videos from any specific moment, part of a new set of tools that also allows video replays to begin at any point. The Verge

Snapchat is Testing ‘Connected Apps’ for Sharing Information
Snapchat has made way for the possibility of offering connected apps in its latest beta version, a move which could eventually mean a similar feature in its widely-used release version. Mashable

Google Lets Businesses Post Offers to Organic Search Results
Google is testing a new feature that allows businesses to present offers from their Google My Business pages, both in Maps and directly in SERPs, including offer photos, text, links, dates, and times. Search Engine Journal

Facebook Restricts APIs, Axes Old Instagram Platform Amidst Scandals
Facebook is shutting down portions of the Instagram API for developers months ahead of a previously scheduled July 31 deprecation, in the wake of Facebook’s much-publicized recent privacy concerns. TechCrunch

Bing Adds More Intelligent Search Features
Bing has launched several new intelligent search features, including facts aggregated from multiple sources, hover-over definitions for uncommon words, object-detection and zoom enhancements in image search, and updated handling of how-to questions, the company announced. Search Engine Roundtable

ON THE LIGHTER SIDE:

Marketoonist Personal Data Simplicity Comic

A lighthearted look at product proliferation, non-universal USB frustration, and Steve Jobs’ product matrix – Marketoonist

April Fools’ the Day After: Our Roundup of Every Brand Stunt You Missed the First Time Around – AdWeek

Google Rickrolls SEOs With Recrawl Now Button – SEO Roundtable

‘Stolen office lunch’ drama has Twitter gripped – BBC

TOPRANK MARKETING & CLIENTS IN THE NEWS:

  • LinkedIn (client) – How to Ignite Your LinkedIn Marketing Strategy [Infographic] — MarketingProfs
  • Lee Odden – 47 Quotes about content marketing from top content marketers — Medium
  • Steve Slater – Search Marketing Scoop with David Bain #5 [podcast] — SEM Rush
  • Ashley Zeckman – Romancing B2B Influencers: How to Attract, Engage and Persuade Influencers to Co-Create — AMA Iowa
  • DivvyHQ (client) – [Interactive Guide] Take Your Content Marketing Program Back to the Future with DivvyHQ — DivvyHQ

Don’t miss next week, when we’ll be sharing all new marketing news stories, and in the meantime you can follow us at @toprank on Twitter for even more timely daily news. Also, don’t miss the full video summary on our TopRank Marketing TV YouTube Channel.


Email Newsletter
Gain a competitive advantage by subscribing to the
TopRank® Online Marketing Newsletter.

© Online Marketing Blog – TopRank®, 2018. |
Digital Marketing News: CMO Diversity Shortfalls, Goo.gl Retirement, Facebook’s New A/B Tests | http://www.toprankblog.com

The post Digital Marketing News: CMO Diversity Shortfalls, Goo.gl Retirement, Facebook’s New A/B Tests appeared first on Online Marketing Blog – TopRank®.

Online Marketing Blog – TopRank®

Posted in IM News | Comments Off

Google Tests ‘Smart Reply,’ Sends Contextual Replies to All Your Favorite Chat Apps

Replying to common messages received via your Android device will soon be a lot easier. Google is developing an app that will give you a selection of preformatted responses allowing you to reply with just one click of a button.

The new project is aptly named “Reply,” which can be viewed as the mobile version of Google’s Smart Reply feature that is available in Gmail and Allo. The upcoming app, which will be initially available to Android users, will use artificial intelligence to automatically create response suggestions to inbound messages.

The “Reply” app aims to enable users to make faster responses to simple questions instead of typing out the entirety of their replies. For instance, users will be given the reply options “Yes,” “No,” or “I am here” when they receive questions such as “Are you at the restaurant?” or “When can you be home?” The AI-powered app will also take into account your current location when crafting an appropriate response.

[Image via Android Police]

The app is currently in development by Google’s Area 120 team. However, the company does not plan on limiting the useful feature only to its messaging apps. The team announced that the plan is for the upcoming app to work with other mainstream messaging apps.

In fact, it’s not necessary to change apps to enjoy the convenience of the upcoming “Reply” app at all. The Area 120 team is aiming for the app to have support among major messaging apps such as Hangouts, Allo, Whatsapp, Facebook Messenger, Android Messages, Skype, Twitter DMs, and Slack.

Aside from offering reply suggestions, the “Reply” app will also introduce other smart features. It comes with a Do Not Disturb mode, which can be particularly useful when you are driving, as it will silence your smartphone and automatically send a response message saying that you can’t chat at the moment.

At the moment, the Area 120 team is not disclosing any launch date estimate.

[Featured image via Pixabay]

The post Google Tests 'Smart Reply,' Sends Contextual Replies to All Your Favorite Chat Apps appeared first on WebProNews.


WebProNews

Posted in IM News | Comments Off
