Tag Archive | "Surprising"

Surprising SEO A/B Test Results – Whiteboard Friday

Posted by willcritchlow

You can make all the tweaks and changes in the world, but how do you know they’re the best choice for the site you’re working on? Without data to support your hypotheses, it’s hard to say. In this week’s edition of Whiteboard Friday, Will Critchlow explains a bit about what A/B testing for SEO entails and describes some of the surprising results he’s seen that prove you can’t always trust your instinct in our industry.


Video Transcription

Hi, everyone. Welcome to another British Whiteboard Friday. My name is Will Critchlow. I’m the founder and CEO at Distilled. At Distilled, one of the things that we’ve been working on recently is building an SEO A/B testing platform. It’s called the ODN, the Optimization Delivery Network. We’re now deployed on a bunch of big sites, and we’ve been running these SEO A/B tests for a little while. I want to tell you about some of the surprising results that we’ve seen.

What is SEO A/B testing?

We’re going to link to some resources that will show you more about what SEO A/B testing is. But very quickly, the general principle is that you take a site section, so a bunch of pages that have a similar structure and layout and template and so forth, and you split those pages into control and variant, so a group of A pages and a group of B pages.

Then you make the change that you’re hypothesizing is going to make a difference just to one of those groups of pages, and you leave the other set unchanged. Then, using your analytics data, you build a forecast of what would have happened to the variant pages if you hadn’t made any changes to them, and you compare what actually happens to the forecast. Out of that you get some statistical confidence intervals, and you get to say, yes, this is an uplift, or there was no difference, or no, this hurt the performance of your site.
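Mechanically, the split-and-forecast logic can be sketched in a few lines. This is a toy model with invented traffic numbers and a deliberately naive confidence interval, not the ODN's actual statistics:

```python
import hashlib

def assign_bucket(url: str) -> str:
    """Deterministically split a site section 50/50 by hashing the URL,
    so the same page always lands in the same group."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

def uplift_interval(variant_sessions, forecast_sessions, forecast_stderr):
    """Compare observed variant traffic to the counterfactual forecast.
    Returns a rough 95% confidence interval on the difference."""
    diff = sum(variant_sessions) - sum(forecast_sessions)
    margin = 1.96 * forecast_stderr  # normal approximation
    return diff - margin, diff + margin

pages = [f"/category/widgets-{i}" for i in range(6)]
groups = {page: assign_bucket(page) for page in pages}
low, high = uplift_interval([1200, 1250, 1180], [1100, 1120, 1090], forecast_stderr=80.0)
# If the whole interval is above zero, the change is a significant uplift;
# if it straddles zero, the change made no measurable difference.
print(f"uplift between {low:.0f} and {high:.0f} sessions")
```

The hash-based bucketing matters: assignment has to be stable across crawls, or the control and variant groups blur together.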

This is data that we’ve never really had in SEO before, because this is very different to running a controlled experiment in a kind of lab environment or on a test domain. This is in the wild, on real, actual, live websites. So let’s get to the material. The first surprising result I want to talk about is based off some of the most basic advice that you’ve ever seen.

Result #1: Targeting higher-volume keywords can actually result in traffic drops

I’ve stood on stage and given this advice. I have recommended this stuff to clients. Probably you have too. You know that process where you do some keyword research and you find that there’s one particular way of searching for whatever it is that you offer that has more search volume than the way that you’re talking about it on your website right now, so higher search volume for a particular way of phrasing?

You make the recommendation, “Let’s talk about this stuff on our website the way that people are searching for it. Let’s put this kind of phrasing in our title and elsewhere on our pages.” I’ve made those recommendations. You’ve probably made those recommendations. They don’t always work. We’ve now tested this kind of process a few times, and we’ve seen some dramatic drops.

We saw up to 20-plus-percent drops in organic traffic after updating meta information in titles and so forth to target the more commonly-searched-for variant. Various different reasons for this. Maybe you end up with a worse click-through rate from the search results. So maybe you rank where you used to, but get a worse click-through rate. Maybe you improve your ranking for the higher volume target term and you move up a little bit, but you move down for the other one and the new one is more competitive.
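That trade-off is easy to put in numbers. Every search volume, rank, and click-through rate below is invented, but the sketch shows how moving up on a competitive, higher-volume term while slipping on the original one can net out as roughly the 20% drop described above:

```python
def expected_clicks(volume: int, rank: int) -> float:
    """Monthly clicks from a keyword, using a crude, made-up CTR-by-position curve."""
    ctr_by_rank = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}
    return volume * ctr_by_rank.get(rank, 0.02)

# Before: ranking #2 for the lower-volume phrasing you currently use.
before = expected_clicks(volume=1_000, rank=2)
# After retargeting: #8 for the higher-volume phrase (still out of the running),
# and slipped to #9 for the original phrase.
after = expected_clicks(volume=5_000, rank=8) + expected_clicks(volume=1_000, rank=9)
drop = (before - after) / before
print(f"{before:.0f} -> {after:.0f} clicks ({drop:.0%} drop)")
```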

So yes, you’ve moved up a little bit, but you’re still out of the running, and so it’s a net loss. Or maybe you end up ranking for fewer variations of key phrases on these pages. However it happens, you can’t be certain that just putting the higher-volume keyword phrasing on your pages is going to perform better. So that’s surprising result number one. Surprising result number two is possibly not that surprising, but pretty important I think.

Result #2: 30–40% of common tech audit recommendations make no difference

So this is that we see as many as 30% or 40% of the common recommendations in a classic tech audit make no difference. You do all of this work auditing the website. You follow SEO best practices. You find a thing that, in theory, makes the website better. You go and make the change. You test it.

Nothing happens. It flatlines. You get the same performance as the forecast, as if you had made no change. This is a big deal, because making these kinds of recommendations is what damages trust with engineers and product teams. You’re constantly asking them to do stuff. They feel like it’s pointless. They do all this stuff, and there’s no difference. That is what burns authority with engineering teams too often.

This is one of the reasons we built the platform: we can take our 20 recommendations and hypotheses, test them all, find the 5 or 6 that move the needle, and only go to the engineering team to build those ones. That builds so much trust in the relationship over time, and they get to work on stuff that moves the needle on the product side.

So the big deal there is really to be a bit skeptical about some of this stuff. The best practices, at the limit, probably make a difference. If everything else is equal and you make that one tiny, little tweak to the alt attribute of a particular image somewhere deep on the page, maybe that would have made the difference.

But is it going to move you up in a competitive ranking environment? That’s what we need to be skeptical about.

Result #3: Many lessons don’t generalize

So surprising result number three is how many lessons don’t generalize. We’ve seen this broadly across different sections on the same website, and even across different industries. Some of this is about the competitive dynamics of the industry.

Some of it is probably just the complexity of the ranking algorithm these days. But we see this in particular with things like this. Who’s seen SEO text on a category page? That’s the kind of page where you’ve got all of your products, and then somebody says, “You know what? We need 200 or 250 words that mention our key phrase a bunch of times down at the bottom of the page.” Sometimes, helpfully, your engineers will even put this in an SEO-text div for you.

So we see this pretty often, and we’ve tested removing it. We said, “You know what? No users are looking at this. We know that overstuffing the keyword on the page can be a negative ranking signal. I wonder if we’ll do better if we just cut that div.” So we remove it, and the first time we did it, plus 6% result. This was a good thing.

The pages are better without it. They’re now ranking better. We’re getting better performance. So we say, “You know what? We’ve learnt this lesson. You should remove this really low-quality text from the bottom of your category pages.” But then we tested it on another site, and we see there’s a drop, a small one admittedly, but it was helping on these particular pages.

So I think what that’s telling us is that we need to be testing these recommendations every time, and we need to build testing into our core methodologies. I think this trend is only going to continue, because the ranking algorithms are getting more complex, with more machine learning baked in, so they’re not as deterministic as they used to be. And as markets get more competitive and the gap between you and your competitors narrows, all of this stuff gets less stable, the differences get smaller, and there’s a bigger chance that something that works in one place will be null or negative in another.

So I hope I have inspired you to check out some SEO A/B testing. We’re going to link to some of the resources that describe how you do it, how you can do it yourself, and how you can build a program around this as well as some other of our case studies and lessons that we’ve learnt. But I hope you enjoyed this journey on surprising results from SEO A/B tests.


Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Blog

Posted in IM News | Comments Off

3 Surprising Benefits of Chatbots (No Creep Factor Required)

When you hear the word “bot,” what goes through your head? For me, it’s a toss-up: Election sabotage, death threats on Twitter, or the Cybermen. Not an awesome list of associations. But late last week, I happened to catch a session with Andrew Warner at Social Media Marketing World on how to use chatbots to
Read More…

The post 3 Surprising Benefits of Chatbots (No Creep Factor Required) appeared first on Copyblogger.



3 Simple Ways to Overcome Surprising Challenges of Working from Home

“Wow, you have the best job ever, getting to work from home.” “You’re so lucky. I wish I had that option.” Those are some of the comments I hear when I mention to others I work from home. Typically, I just nod and say, “Yes, it’s awesome.” I love working from home because I get
Read More…

The post 3 Simple Ways to Overcome Surprising Challenges of Working from Home appeared first on Copyblogger.



Evidence of the Surprising State of JavaScript Indexing

Posted by willcritchlow

Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.

In this article, I want to explore some things we’ve seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I’ve drawn about how it must be working.

A brief introduction to JS indexing

At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.

There are some complexities even in this basic definition (answers in brackets as I understand them):

  • What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
  • What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
  • What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
  • What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
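The timeout answer above can be illustrated with a toy simulation. The 5-second budget is the figure reported in third-party experiments; everything else here is an invented model, not how any search engine actually works:

```python
RENDER_BUDGET_SECONDS = 5.0  # approximate limit reported in third-party experiments

def indexable_content(base_html: str, scheduled_mutations) -> str:
    """Toy model of timeout-bounded rendering: apply only the DOM mutations
    that fire before the budget expires, then snapshot the result."""
    content = base_html
    for fires_at, extra_html in sorted(scheduled_mutations):
        if fires_at <= RENDER_BUDGET_SECONDS:
            content += extra_html
    return content

page = indexable_content(
    "<p>server-rendered copy</p>",
    [(0.5, "<p>ajax-loaded content</p>"),       # async request: fast enough
     (9.0, "<p>late setTimeout content</p>")],  # fires after the budget: dropped
)
print("ajax-loaded" in page, "late setTimeout" in page)
```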

For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.

A high-level overview of my view of JavaScript best practices

Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me), the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual, and it required a concerted effort to ensure that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.

Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that’s too susceptible to silent failures and falling out of date. We’ve seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.

These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).

Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it’s served in response to any fresh request.

I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:

Resources for auditing JavaScript

If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).

To do that, here are some resources I’ve found useful:

Some surprising/interesting results

There are likely to be timeouts on JavaScript execution

I already linked above to the Screaming Frog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).

It may be more complicated than that, however. This segment of a thread is interesting. It’s from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):

“Actually, we did care about this content. I’m not at liberty to explain the details, but we did execute setTimeouts up to some time limit.

If they’re smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”

What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).

It matters how your JS is executed

I referenced this recent study earlier. In it, the author found:

Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot

The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they’re called, with a range of performance from passing every test to failing almost every test. For example here’s the chart for Angular:


It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There’s more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.

CRO tests are getting indexed

When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:

  • For users:
    • CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
    • Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
    • A cookie is then set to make sure that the user sees the same version if they revisit that page later
  • For Googlebot:
    • The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
    • With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as jQuery), Google is able to index the variant, and hence we see CRO experiments sometimes being indexed
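The user-side flow in those first bullets looks roughly like this. It's a generic sketch rather than any particular platform's code, and the cookie and experiment names are invented:

```python
import random

def assign_variant(cookies: dict, experiment: str = "exp_homepage_cta") -> str:
    """Sticky bucketing: reuse the stored assignment if the cookie exists,
    otherwise assign A or B at random and persist the choice."""
    key = f"cro_{experiment}"  # invented cookie naming scheme
    if key not in cookies:
        cookies[key] = random.choice(["A", "B"])
    return cookies[key]

cookie_jar = {}                  # stands in for the visitor's browser cookies
first_visit = assign_variant(cookie_jar)
# Every later visit reads the cookie back, so the visitor sees a stable variant.
repeat_visits = {assign_variant(cookie_jar) for _ in range(10)}
print(first_visit, repeat_visits)
```

A crawler that accepts no cookies re-rolls the assignment on every fetch, which is exactly why bucketing used to be invisible to search engines until the external JS started being executed.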

I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don’t do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.

Split tests show SEO improvements from removing a reliance on JS

Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.


A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.

Conclusion: How JavaScript indexing might work from a systems perspective

Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of running JavaScript over the entire web would be wasted given that many, many pages don’t need it. In detail, I think:

  • Googlebot crawls and caches HTML and core resources regularly
  • Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
    • Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
    • Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
    • Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
  • The JS rendering also, in addition to adding pages to the index:
    • May make modifications to the link graph
    • May add new URLs to the discovery/crawling queue for Googlebot
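As a systems sketch, that two-queue model might look like the following. This is speculation rendered as code, not Google's architecture; the heuristic and its thresholds are invented:

```python
from collections import deque

def needs_rendering(page: dict) -> bool:
    """Invented stand-in for whatever heuristics/ML decide that a page's
    content depends on JavaScript (here: scripts present, little HTML text)."""
    return page["script_bytes"] > 0 and page["html_text_bytes"] < 500

crawl_queue = deque([
    {"url": "/plain-article", "script_bytes": 0, "html_text_bytes": 9_000},
    {"url": "/js-heavy-spa", "script_bytes": 40_000, "html_text_bytes": 120},
])
render_queue = deque()
index = {}

while crawl_queue:                       # fast path: crawl and index raw HTML
    page = crawl_queue.popleft()
    index[page["url"]] = "html-only"
    if needs_rendering(page):
        render_queue.append(page)        # rendering happens later, asynchronously

while render_queue:                      # slow path: JS rendering updates the index
    page = render_queue.popleft()
    index[page["url"]] = "rendered"

print(index)
```

The key property of the model is that every page gets a fast, unrendered indexing pass, and only the pages flagged by the heuristic ever consume rendering resources.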

The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:

“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.

Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”

This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.

My best guess is that they’re using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.

Run a test, get publicity

I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
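If you want to set that test up, generating the assets is straightforward. The token prefixes, the /inject.js filename, and the helper below are all made up for illustration; the important part is that one nonsense word is in the served HTML while the other only exists after the external script runs:

```python
import secrets

def make_test_assets():
    """Build a test page plus the external script that injects the second token."""
    html_token = "zq" + secrets.token_hex(6)  # present in the raw HTML
    js_token = "xv" + secrets.token_hex(6)    # only added by JavaScript
    page = (
        "<html><body>"
        f"<p>{html_token}</p>"
        '<script src="/inject.js"></script>'
        "</body></html>"
    )
    inject_js = f'document.body.insertAdjacentHTML("beforeend", "<p>{js_token}</p>");'
    return page, inject_js, html_token, js_token

page, inject_js, html_token, js_token = make_test_assets()
# Then: serve both files, get the page crawled, and search for each token over
# the following days. Ranking for html_token first and for js_token only later
# would measure the delay of the JS rendering queue.
print(html_token in page, js_token in page)
```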



3 Surprising Steps to Help You Think Outside the Content Marketing Box


Let me tell you a fascinating story about John. 

He’s a fictional character but a good example of the type of people I help every day. In fact, if you pay close attention, you might realize you relate to him in more ways than one.

John is a developer. He writes code day in and day out for a large company.

He doesn’t think of himself as a copywriter. He essentially just codes, all the time. But John also has a strategy up his sleeve. In fact, he uses content marketing every day.

What John might lack in overt marketing skills, he makes up for with an ongoing list of small side projects. He spends hours of his free time building small code snippets and software. Then, he makes them free to download on the popular code-sharing site GitHub.

One day, John wakes up to hundreds of emails in his inbox — job offers, questions, and positive feedback all yell at him through his still-groggy eyelids. He might not be on his third cup of coffee yet, but John is smiling wide.

However, he isn’t surprised by his now-packed inbox; it was just a matter of time.

So, what really happened?

Recently, a big-time developer noticed one of John’s code snippets and decided to share it with his own audience.

This led to thousands of visits and additional links to John’s work, which exposed him to a larger audience, drove traffic to his website, and then resulted in a plethora of potential clients, fans, and leads.

John simply utilized content marketing in a fundamental way:

He provided free solutions to common problems.

Content marketing is more than just writing

Content marketing and copywriting are certainly a powerful combination.

But you can also find content marketing in unlikely places — produced by people who don’t touch the marketing department of a company.

When you offer a free solution to a problem and invite further dialogue by providing a way to contact you, you have the possibility of attracting previously unattainable business opportunities.

And this illustrates one wonderful truth: content marketing strategies can be applied by anyone.

How problem-solving creates profitable opportunities

While the above story of John the developer is inspirational, it might be difficult to understand how it relates to you and your own endeavors — especially if you’re not a developer!

In light of that, I’ve broken down this principle into three steps. Use this process to leverage content marketing to your advantage and grow your audience.

Step #1: Seek out problems

First, develop a list of problems you know people struggle with.

You could survey customers, run polls, or research your target market.

But there’s another tried-and-true method: take note of the problems you face each day.

I know, you’ve already got 50 problems in mind. It seems we humans have a surprisingly efficient ability to complain.

Once you’ve clearly defined a problem, ask your audience for feedback — even if that’s just your family members.

Here’s an example:

I recently had an idea for an application I wanted to build. It was a perfect opportunity to both solve a repetitive task I found myself doing and learn a new JavaScript framework.

So, the first thing I did was jump on Twitter and tweet out a poll. I asked if anyone would find the idea useful.

The answer? Eighty percent thought the idea had already been done.

Sure, my idea was likely not worth pursuing — but I got instant feedback and saved myself a lot of trouble.

Keep doing this, all the while keeping these ideas in a safe place. Eventually, you’ll stumble upon a few problems that get a resounding, “Yes, please!”

Step #2: Eliminate the expensive and time-consuming

The second step is to review your new list of problems and decide which ones you have the ability to solve.

Throw away the ones you don’t know how to solve (or save them for later) and create a revised list with the ones you do know how to solve.

With this process, you need to decide which problems you can solve absolutely free of charge.

At this point, you might be thinking about all of the free time you don’t have to produce free solutions.

But if you are strategic and smart with your time, you’ll be surprised by the value you can provide — it just takes focus and diligence.

Now, select one problem that:

  • You have the ability to solve
  • Doesn’t require an unreasonable amount of time and resources to solve
  • You can give away for free

This will be your baby. You’ll nurture it at every free moment you can spare. (I have three hours before work every morning dedicated to side projects like my newsletter for web designers.)

This problem, which you hate, should now be your favorite thing in the world.

Step #3: Solve the problem and provide the solution for free

Developing a solution to your problem is the shortest step in the process but undoubtedly the hardest and most crucial.

Solving problems is hard. Solving problems with excellence is even more difficult, as any entrepreneur will tell you.

But solving problems is the essential ingredient to success, and the quality of your solution will be what markets your capabilities.

Finally, once you’ve solved the problem — and double-checked that the solution is excellent — you’re ready to provide it for free.

With this process, don’t ask for anything in return, but don’t be a stranger either — always offer ways to connect and an easy way to get in touch with you.

Remember the inviting further dialogue part I mentioned earlier?

Make yourself available. Inviting further dialogue is your call to action.

Content marketing anyone can do

By following this system, you not only benefit your industry and community, you will also indirectly build authority.

John was just a typical developer before that morning of email bliss. He was regarded as the “company guy,” rather than a content marketer or entrepreneur.

Yet, over time and as a result of consistency, prospects recognized him as a trustworthy resource they wanted to do business with.

By following this process again and again, you’ll not only benefit everyone who cares about the problems you can solve, you’ll also gain loyal customers who trust you. They’ll respect you, support you, and market your expertise and products for free for years to come.

The best part of it all? Absolutely anyone in any industry can do it.

You just have to start.

Additional reading: If you found this article useful, you may also like How to Decide Which Content to Sell and What to Give Away for Free.

The post 3 Surprising Steps to Help You Think Outside the Content Marketing Box appeared first on Copyblogger.



The Surprising Spooky Secret to Enduring Success Habits


Are you addicted to productivity advice?

I was, for a long time. I bought every system, book, and blueprint out there.

I had a very spiffy David Allen-inspired GTD process that was only 642 steps long and took a mere 3 hours a day to implement (during which time I wasn’t actually, you know, getting anything done).

That wasn’t David Allen’s fault, by the way, it was mine. But I don’t think I was alone.

Every person who has a long to-do list also has a desire to do more.

And most of us are quite good at doing certain things. We don’t have a problem getting out of bed every day (even if we grumble), brushing our teeth, driving to work, or finding some lunch. As Seth Godin likes to say, “No one ever gets Talker’s Block.”

Why? Because those things are just ingrained habits. We don’t think about doing them, or need to find motivation to do them … we just do them.

Where we do tend to procrastinate and stumble is on the activities that we feel resistance around. Anything creative is a major one. Writing, in particular, is one of the few forms of procrastination that has its own name: Writer’s Block.

You might have made a million resolutions to write every day, or publish two blog posts a week, or finally get your damned autoresponder up and running. And a million times, you might have failed.

Today, I’d like to let you know what works for me. Because I believe it will work for you, too.

First things first.

Big resolutions don’t work

We all know it, and I don’t know why we keep doing it. Resolutions for massive, sweeping habit change just don’t work.

(They probably work for a few people. But those people aren’t reading this post, because they’re too busy climbing Everest while writing their best-selling memoir and running their four-hour-workweek business. Bless their hearts.)

Everyone I know who believes that sugar is a deadly poison is also stuffing donuts into their face every time I see them.

Everyone I know who absolutely, positively is going to have their novel done in 30 days has been working on that novel for 25 years.

Big change is scary, and we avoid it. With all the creativity and energy we can muster.

Maybe I just know more than my share of flakes, but I don’t think so. I think that massive change sounds like a good idea while we’re making those impassioned vows to ourselves. But once the real world hits, the part of our brains that actually does things wants nothing to do with it.

What works better

There’s an intriguing (and growing) body of work that suggests that instead, itsy bitsy habit change is the thing that works.

There’s Robert Maurer’s excellent book, One Small Step Can Change Your Life: The Kaizen Way, which everyone should go read right now.

There’s BJ Fogg’s well-known Tiny Habits site, and accompanying TED talk.

There’s Stephen Guise’s book on Mini Habits, which lays out a stupidly easy plan to develop these stupidly easy small habit tweaks. You should go read that one right now if you’re not picking up the Maurer, or even if you are.

So if you want to get your book written? Commit to a ridiculously tiny habit of writing 50 words on it a day. Once the micro habit is in place, it’s funny how often you find yourself sticking around for a lot more than those 50 words. And on the days that you only do 50 — you still win.

Getting started on anything new or uncomfortable — writing, working out, improving your website — is always the hardest part. But once you’re in motion, you’ll tend to stay in motion. And once you have a solid habit formed, you’ll think of yourself as “the kind of person who” does that thing. You’ll be surprised at how much productivity that will spur.

Here are a few of my thoughts on how to get a micro habit started, how to best benefit from it, and some ideas about productive micro habits you might want to get rolling for yourself.

Getting started

I’ve read a few books on this (apparently I’m still addicted to productivity advice), and Stephen Guise’s Mini Habits is the best one I’ve found to just get you going. It’s a quick, easy read that lays out the process, as well as the benefits, succinctly.

Or if you’d rather start right now (an excellent idea), just pick one of the habits I’ve listed in this post. Do it every day. If you aren’t doing it every day, try my advice below.

One nice thing about these teeny habit changes is that you can do more than one at a time, if you like. I’m currently doing four, and will add a fifth in the next day or two. But start with just one for at least a week, to get yourself used to the new plan.

Plan for your crazy days

Your micro habit needs to work on your absolutely most insane days.

Think about your nuttiest day of the week — when you work late, your dog has swim practice, and your kid has obedience lessons. Or think about what your day looks like when you’re traveling for business. Or family. Or anything else that tends to be disruptive.

These little habits need to be so little that they’ll fit into your day, even when things are a zoo. Don’t be tempted to skip your micro habits on zoo days — that’s just when you most need them.

(If you or a loved one goes to the hospital for something serious, you have my permission to slack off. Anything short of that, the habit should be small enough to fit.)

The right timing

When I can, I like to time my little habits so that I have some free time afterward.

Why? Because that’s how 50 words on a key project turns into 2,000 words. That’s how completing your warm-up turns into a 40-minute workout.

Important, though: If you can’t time your teeny habit for that kind of time slot, do it anyway. If you have four habits and you do all of them right before bed, you still win.

Don’t unconsciously make your “real” habit Write 2,000 Words and start putting it off because you don’t have that much time or energy. Your habit is 50 words. If you do that, you win.

The value of fanatic consistency

Guise makes an excellent point about the need for rigid consistency with your micro habits.

“Self-efficacy,” or the belief in your ability to influence an outcome, plays a big part in mustering the willpower to do things. Getting a truly daily habit in place, even a tiny one, skyrockets your confidence in that ability to beat procrastination and do the things you want to do. It trains your willpower “muscle.”

… a problem many people develop is an expectation of failing to reach their goals. Over time, this crushes their self-efficacy because it’s hard to believe that next time will be different (especially if you’re using the same strategy that failed last time). ~ Stephen Guise

A little tiny habit is a surprisingly easy way to retrain your brain — but only if you do it daily.

If it’s not working

If it’s not working, your habit is probably a little too big. “Write one page” is small, but it’s not small enough to be tiny — it’s too much to handle on a day that’s crazy, or a travel day.

Trim it down until it is stupidly easy and quick to complete.

Reminding yourself how embarrassingly easy and quick they are is also a good tool if you’re tempted to skip a day.

Some habit ideas you can swipe

Here are some ideas you can steal for micro habits of your own to develop. I like to have a mix of professional and personal — two for my business, and two for my personal life. (If you want to know what my habits are, swing by the Google+ conversation and I’ll let you know.)

Try one of these, or make up your own. Remember, start with one for the first week, and if you want to, you can add a few more later.

  • Meditate for five minutes (or two minutes, if you find resistance to five)
  • Read or re-read two pages of a classic copywriting resource
  • Write 50 words on your Big Project
  • Do the warm-up for that workout you’ve been trying to do more often
  • Write three headlines for content you might write some day
  • Hand-copy out a paragraph of writing you admire
  • Walk for ten minutes (or less, if this feels too big)
  • Outline a post idea (it’s okay if these are very silly — they’re not to publish, just to warm up your writing brain)
  • Participate in your favorite online writing or business group (Only do this one if you don’t have this habit already)
  • Read two pages on a topic that has nothing to do with writing or your business

Got more? Join us over on Google+ with your suggestions — we’d love to hear them!

And I’ll leave you with one final quote from Guise, to push you over into trying this out for yourself. I think you’ll be happy when you see the results.

We’re quick to blame ourselves for lack of progress, but slow to blame our strategies. Then we repeat them over and over again, trying to make them work. But here’s the thing — if you fail using a strategy more than a few times, you need to try another one. ~ Stephen Guise

Flickr Creative Commons Image via Alexander C. Kafka.

About the author

Sonia Simone

Sonia Simone is co-founder and Chief Content Officer of Copyblogger Media. Get more from Sonia on Twitter.

The post The Surprising Spooky Secret to Enduring Success Habits appeared first on Copyblogger.


Posted in IM News | Comments Off

The Surprising Effect of Freshness and Authority on Search Results

Image of Google Query

I want to let you in on a little secret about Copyblogger.

We are very competitive. Not in a “we win, you lose” sense; more of a simple “we want to be the best.”

And one area we tend to be a little obsessive about is search rankings — especially when it comes to our cornerstone content.

So, it may come as no surprise that one of the terms we obsess about is “content marketing.” In fact, our content marketing landing page ranks very well on Google.

But recently, it didn’t.

Oh sure, that landing page has more than 21,000 links and many social media shares. But on December 22, 2012, all of that SEO goodness no longer mattered.

On that day, another site ranked higher than our page … and they did it with a minuscule number of backlinks and no social media shares.

They accomplished it by being fresh and authoritative.

The content marketing ranking battle begins …

So who was this competitor that set their sights on our prized position in Google for the term “content marketing”? A group of black hat SEOs? No. In fact, it was much more benign.

You see, Mashable decided to create a category page on their site to aggregate all of their content marketing articles.

From all appearances, this was more of a year-end housekeeping job to help properly classify some of their content. In no way did it appear that they were actively targeting the term, since the category page had earned fewer than 30 links and no social media shares.

But when this page was indexed by Google, it suddenly ranked higher than all Copyblogger content marketing posts.

A quick review of Ahrefs.com clearly showed the vast difference in the core SEO metrics for each page.


Image of Copyblogger Ahrefs SEO Metrics


Image of Mashable Ahrefs SEO Metrics

And yet, while our content marketing landing page clearly had very strong links and shares, the page at Mashable outranked us.

How is this possible in the land of SEO — where links and shares are the very currency of the trade?

The answer lies in two core principles we’ve taught over and over — authority and freshness.

Query Deserves Freshness. Huh?

Unless you’re steeped in SEO terminology, you may not have heard of the acronym “QDF” or Query Deserves Freshness.

QDF, simply stated, is the principle that for certain queries (“search terms”), the search results should include one or more pieces of recently published content.

As we experienced firsthand at Copyblogger, the sheer act of Mashable creating a category page on their site temporarily negated all of the SEO advantages we had earned.

It was as if links, shares, and the age of the page didn’t matter. Because, in this particular case, they didn’t. What mattered more than anything else was the fact that Mashable.com had published the page.

A quick check at SEOMoz shows that the Domain Authority of Mashable is 97 out of 100, with Copyblogger coming in at 92 out of 100. Domain Authority is a feature of SEOMoz that “Predicts this domain’s ranking potential in the search engines based on an algorithmic combination of all link metrics.”

In essence, because Mashable had a higher Domain Authority than Copyblogger, Google determined that (for the term Content Marketing) the landing page at Mashable had higher relevance than our page.

This is the true power of QDF. For sites that have a strong Domain Authority, the simple act of publishing content around a particular term could supersede the benefits of in-bound links and social media shares.

But only for a little while …

Losing the battle, but winning the war

Inside Copyblogger, we were a little perturbed by this development.

Not so much that we were going to declare war to gain back ground on the term “content marketing,” but enough to wonder what was going to happen.

You see, while QDF is a powerful benefit to those with authoritative domains, it does not last.

Check for yourself.

Within 30 days of the Mashable page first getting indexed by Google, the listing had fallen off the front page and our content marketing landing page resumed its previous position.
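The rise-and-fall pattern we observed can be sketched as a toy scoring model in Python. To be clear, everything here is an illustrative assumption: the weights, the exponential half-life, and the idea of a single additive score are invented for demonstration, since Google’s real ranking factors are not public.

```python
# A toy model of how a freshness boost ("QDF") can temporarily let a page
# outrank one with far stronger link metrics. All numbers are illustrative
# assumptions, not real Google ranking factors.

import math

def ranking_score(link_score, domain_authority, days_since_publish,
                  freshness_weight=60.0, half_life_days=10.0):
    """Base relevance from links plus a freshness boost, scaled by
    Domain Authority, that decays exponentially as the page ages."""
    freshness_boost = (domain_authority / 100.0) * freshness_weight * \
        math.exp(-days_since_publish * math.log(2) / half_life_days)
    return link_score + freshness_boost

# An established page: strong links, but its freshness boost is long gone.
established = ranking_score(link_score=60, domain_authority=92,
                            days_since_publish=365)

# A brand-new page on a higher-authority domain: weak links, big boost.
fresh_day_1 = ranking_score(link_score=15, domain_authority=97,
                            days_since_publish=1)
fresh_day_30 = ranking_score(link_score=15, domain_authority=97,
                             days_since_publish=30)

print(fresh_day_1 > established)   # the new page outranks at first
print(fresh_day_30 < established)  # the boost has decayed within ~30 days
```

With these made-up numbers, the fresh page wins on day one and has fallen back behind the established page well before day 30, mirroring what we saw happen with the Mashable category page.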

Write with authority, and for the long term

We’ve often shared the principles and value of building authority at Copyblogger.

And, as content marketers, we use tools like our own Scribe content marketing software to research and analyze our content — ensuring that our keyword strategies align with the content and site we are publishing on.

This is one reason why Copyblogger.com ranks so well for a variety of search terms.

But as our experience shows, a quick spike in a search ranking is not enough. Establishing connections with other authoritative online sources is crucial for long-term content viability, helping you build links and social media shares to your content from authoritative sources.

While tools like Scribe can help identify these connections, it takes time and patience to build those quality links and social media shares.

And in the long run, these temporary ranking spikes from factors like QDF will give way to the authoritative content.

Now, if we could just outrank Wikipedia ;-)

About the Author: Sean Jackson is CFO and Partner in Copyblogger Media. Get more from him on Twitter, LinkedIn, and Google+.

Posted in IM News | Comments Off