Tag Archive | "Google’s"

Google’s August 1st Core Update: Week 1

Posted by Dr-Pete

On August 1, Google (via Danny Sullivan’s @searchliaison account) announced that they released a “broad core algorithm update.” Algorithm trackers and webmaster chatter confirmed multiple days of heavy ranking flux, including our own MozCast system:

Temperatures peaked on August 1-2 (both around 114°F), with a 4-day period of sustained rankings flux (purple bars are all over 100°F). While this has settled somewhat, yesterday’s data suggests that we may not be done.

August 2nd set a 2018 record for MozCast at 114.4°F. Keep in mind that, while MozCast was originally tuned to an average temperature of 70°F, 2017-2018 average temperatures have been much higher (closer to 90° in 2018).

Temperatures by Vertical

There’s been speculation that this algo update targeted so-called YMYL queries (Your Money or Your Life) and disproportionately impacted health and wellness sites. MozCast is broken up into 20 keyword categories (roughly corresponding to Google Ads categories). Here are the August 2nd temperatures by category:

At first glance, the “Health” category does appear to be the most impacted. Keywords in that category had a daily average temperature of 124°F. Note, though, that all categories showed temperatures over 100°F on August 1st – this isn’t a situation where one category was blasted and the rest were left untouched. It’s also important to note that this pattern shifted during the other three days of heavy flux, with other categories showing higher average temperatures. The multi-day update impacted a wide range of verticals.

Top 30 winners

So, who were the big winners (so far) of this update? I always hesitate to do a winners/losers analysis – while useful, especially for spotting patterns, there are plenty of pitfalls. First and foremost, a site can gain or lose SERP share for many reasons that have nothing to do with algorithm updates. Second, any winners/losers analysis is only a snapshot in time (and often just one day).

Since we know that this update spanned multiple days, I’ve decided to look at the percentage increase (or decrease) in SERP share between July 31st and August 7th. In this analysis, “Share” is a raw percentage of page-1 rankings in the MozCast 10K data set. I’ve limited this analysis to only sites that had at least 25 rankings across our data set on July 31 (below that the data gets very noisy). Here are the top 30…

The first column is the percentage increase across the 7 days. The final column is the overall share – this is very low for all but mega-sites (Wikipedia hovers in the colossal 5% range).

Before you over-analyze, note the second column – this is the percent change from the highest July SERP share for that site. What the 7-day share doesn’t tell us is whether the site is naturally volatile. Look at Time.com (#27) for a stark example. Time Magazine saw a +19.5% lift over the 7 days, which sounds great, except that they landed on a final share that was down 54.4% from their highest point in July. As a news site, Time’s rankings are naturally volatile, and it’s unclear whether this has much to do with the algorithm update.

Similarly, LinkedIn, AMC Theaters, OpenTable, World Market, MapQuest, and RE/MAX all show highs in July that were near or above their August 7th peaks. Take their gains with a grain of salt.
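
If you want to run a similar before-and-after comparison on your own rank-tracking data, the calculation itself is simple. Here’s a minimal Python/pandas sketch, assuming a hypothetical CSV export of daily page-1 rankings with illustrative date, keyword, and site columns (not MozCast’s actual schema):

import pandas as pd

# Hypothetical rank-tracking export: one row per page-1 ranking per day
# Columns (illustrative, not MozCast's actual schema): date, keyword, site
df = pd.read_csv("page1_rankings.csv", parse_dates=["date"])

def serp_share(day):
    # Share = a site's page-1 rankings as a percentage of all page-1 rankings that day
    counts = df[df["date"] == day].groupby("site").size()
    return counts / counts.sum() * 100

before = serp_share("2018-07-31")
after = serp_share("2018-08-07")

# Noise filter: only keep sites with at least 25 rankings on the start date
start_counts = df[df["date"] == "2018-07-31"].groupby("site").size()
eligible = start_counts[start_counts >= 25].index

pct_change = ((after - before) / before * 100).reindex(eligible).dropna()
print(pct_change.sort_values(ascending=False).head(30))  # top 30 "winners"
print(pct_change.sort_values().head(30))                 # top 30 "losers"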

Top 30 losers

We can run the same analysis for the sites that lost the most ground. In this case, the “Max %” is calculated against the July low. Again, we want to be mindful of any site where the 7-day drop looks a lot different than the drop from that site’s July low-point…

Comparing the first two columns, Verywell Health immediately stands out. While the site ended the 7-day period down 52.3%, it was up just over 200% from July lows. It turns out that this site was sitting very low during the first week of July and then saw a jump in SERP share. Interestingly, Verywell Family and Verywell Fit also appear on our top 30 losers list, suggesting that there’s a deeper story here.

Anecdotally, it’s easy to spot a pattern of health and wellness sites in this list, including big players like Prevention and LIVESTRONG. Whether this list represents the entire world of sites hit by the algorithm update is impossible to say, but our data certainly seems to echo what others are seeing.

Are you what you E-A-T?

There’s been some speculation that this update is connected to Google’s recent changes to their Quality Rater Guidelines. While it’s very unlikely that manual ratings based on the new guidelines would drive major ranking shifts (especially so quickly), it’s entirely plausible that the guideline updates and this algorithm update reflect a common philosophical view of quality and Google’s latest thinking on the subject.

Marie Haynes’ post theorizing the YMYL connection also raises the idea that Google may be looking more closely at E-A-T signals (Expertise, Authoritativeness and Trust). While certainly an interesting theory, I can’t adequately address that question with this data set. Declines in sites like Fortune, IGN and Android Central pose some interesting questions about authoritativeness and trust outside of the health and wellness vertical, but I hesitate to speculate based only on a handful of outliers.

If your site has been impacted in a material way (including significant traffic gains or drops), I’d love to hear more details in the comments section. If you’ve taken losses, try to isolate whether those losses are tied to specific keywords, keyword groups, or pages/content. For now, I’d advise that this update could still be rolling out or being tweaked, and we all need to keep our eyes open.



Moz Blog


Google’s John Mueller Shares His SEO Related Podcast List

A Reddit thread asks folks to share their favorite SEO-related podcasts. I spotted John Mueller of Google sharing his list of favorite SEO podcasts as well…


Search Engine Roundtable


Google’s Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?

Posted by Dr-Pete

Early search engines were built on an unspoken transaction — a pact between search engines and website owners — you give us your data, and we’ll send you traffic. While Google changed the game of how search engines rank content, they honored the same pact in the beginning. Publishers, who owned their own content and traditionally were fueled by subscription revenue, operated differently. Over time, they built walls around their gardens to keep visitors in and, hopefully, keep them paying.

Over the past six years, Google has crossed this divide, building walls around their content and no longer linking out to the sources that content was originally built on. Is this the inevitable evolution of search, or has Google forgotten their pact with the people whose backyards their garden was built on?

I don’t think there’s an easy answer to this question, but the evolution itself is undeniable. I’m going to take you through an exhaustive (yes, you may need a sandwich) journey of the ways that Google is building in-search experiences, from answer boxes to custom portals, and rerouting paths back to their own garden.


I. The Knowledge Graph

In May of 2012, Google launched the Knowledge Graph. This was Google’s first large-scale attempt at providing direct answers in search results, using structured data from trusted sources. One incarnation of the Knowledge Graph is Knowledge Panels, which return rich information about known entities. Here’s part of one for actor Chiwetel Ejiofor (note: this image is truncated)…

The Knowledge Graph marked two very important shifts. First, Google created deep in-search experiences. As Knowledge Panels have evolved, searchers have access to rich information and answers without ever going to an external site. Second, Google started to aggressively link back to their own resources. It’s easy to overlook those faded blue links, but here’s the full Knowledge Panel with every link back to a Google property marked…

Including links to Google Images, that’s 33 different links back to Google. These two changes — self-contained in-search experiences and aggressive internal linking — represent a radical shift in the nature of search engines, and that shift has continued and expanded over the past six years.

More recently, Google added a sharing icon (on the right, directly below the top images). This provides a custom link that allows people to directly share rich Google search results as content on Facebook, Twitter, Google+, and by email. Google no longer views these pages as a path to a destination. Search results are the destination.

The Knowledge Graph also spawned Knowledge Cards, more broadly known as “answer boxes.” Take any fact in the panel above and pose it as a question, and you’re likely to get a Knowledge Card. For example, “How old is Chiwetel Ejiofor?” returns the following…

For many searchers, this will be the end of their journey. Google has answered their question and created a self-contained experience. Note that this example also contains links to additional Google searches.

In 2015, Google launched Medical Knowledge Panels. These gradually evolved into fully customized content experiences created with partners in the medical field. Here’s one for “cardiac arrest” (truncated)…

Note the fully customized design (these images were created specifically for these panels), as well as the multi-tabbed experience. It is now possible to have a complete, customized content experience without ever leaving Google.


II. Live Results

In some specialized cases, Google uses private data partnerships to create customized answer boxes. Google calls these “Live Results.” You’ve probably seen them many times now on weather, sports and stock market searches. Here’s one for “Seattle weather”…

For the casual information seeker, these are self-contained information experiences with most or all of what we care about. Live Results are somewhat unique in that, unlike the general knowledge in the Knowledge Graph, each partnership represents a disruption to an industry.

These partnerships have branched out over time into even more specialized results. Consider, for example, “Snoqualmie ski conditions”…

Sports results are incredibly disruptive, and Google has expanded and enriched these results quite a bit over the past couple of years. Here’s one for “Super Bowl 2018”…

Note that clicking any portion of this Live Result leads to a customized portal on Google that can no longer be called a “search result” in any traditional sense (more on portals later). Special sporting events, such as the 2018 Winter Olympics, have even more rich features. Here are some custom carousels for “Olympic snowboarding results”…

Note that these are multi-column carousels that ultimately lead to dozens of smaller cards. All of these cards click to more Google search results. This design choice may look strange on desktop and marks another trend — Google’s shift to mobile-first design. Here’s the same set of results on a Google Pixel phone…

Here, the horizontal scrolling feels more intuitive, and the carousel is the full width of the screen, instead of feeling like a free-floating design element. These features are not only rich experiences on mobile screens, but they also dominate mobile results much more than they do two-column desktop results.


III. Carousels

Speaking of carousels, Google has been experimenting with a variety of horizontal result formats, and many of them are built around driving traffic back to Google searches and properties. One of the older styles of carousels is the list format, which runs across the top of desktop searches (above other results). Here’s one for “Seattle Sounders roster”…

Each player links to a new search result with that player in a Knowledge Panel. This carousel expands to the width of the screen (which is unusual, since Google’s core desktop design is fixed-width). On my 1920×1080 screen, you can see 14 players, each linking to a new Google search, and the option to scroll for more…

This type of list carousel covers a wide range of topics, from “cat breeds” to “types of cheese.” Here’s an interesting one for “best movies of 1984.” The image is truncated, but the full result includes drop-downs to select movie genres and other years…

Once again, each result links to a new search with a Knowledge Panel dedicated to that movie. Another style of carousel is the multi-row horizontal scroller, like this one for “songs by Nirvana”…

In this case, not only does each entry click to a new search result, but many of them have prominent featured videos at the top of the left column (more on that later). My screen shows at least partial information for 24 songs, all representing in-Google links above the traditional search results…

A search for “laptops” (a very competitive, commercial term, unlike the informational searches above) has a number of interesting features. At the bottom of the search is this “Refine by brand” carousel…

Clicking on one of these results leads to a new search with the brand name prepended (e.g. “Apple laptops”). The same search shows this “Best of” carousel…

The smaller “Mentioned in:” links go to articles from the listed publishers. The main product links go to a Google search result with a product panel. Here’s what I see when I click on “Dell XPS 13 9350” (image is truncated)…

This entity lives in the right-hand column and looks like a Knowledge Panel, but is commercial in nature (notice the “Sponsored” label in the upper right). Here, Google is driving searchers directly into a paid/advertising channel.


IV. Answers & Questions

As Google realized that the Knowledge Graph would never scale at the pace of the wider web, they started to extract answers directly from their index (i.e. all of the content in the world, or at least most of it). This led to what they call “Featured Snippets”, a special kind of answer box. Here’s one for “Can hamsters eat cheese?” (yes, I have a lot of cheese-related questions)…

Featured Snippets are an interesting hybrid. On the one hand, they’re an in-search experience (in this case, my basic question has been answered before I’ve even left Google). On the other hand, they do link out to the source site and are a form of organic search result.

Featured Snippets also power answers on Google Assistant and Google Home. If I ask Google Home the same question about hamsters, I hear the following:

On the website TheHamsterHouse.com, they say “Yes, hamsters can eat cheese! Cheese should not be a significant part of your hamster’s diet and you should not feed cheese to your hamster too often. However, feeding cheese to your hamster as a treat, perhaps once per week in small quantities, should be fine.”

You’ll see the answer is identical to the Featured Snippet shown above. Note the attribution (which I’ve bolded) — a voice search can’t link back to the source, posing unique challenges. Google does attempt to provide attribution on Google Home, but as they use answers extracted from the web more broadly, we may see the way original sources are credited change depending on the use case and device.

This broader answer engine powers another type of result, called “Related Questions” or the “People Also Ask” box. Here’s one on that same search…

These questions are at least partially machine-generated, which is why the grammar can read a little oddly — that’s a fascinating topic for another time. If you click on “What can hamsters eat list?” you get what looks a lot like a Featured Snippet (and links to an outside source)…

Notice two other things that are going on here. First, Google has included a link to search results for the question you clicked on (see the purple arrow). Second, the list has expanded. The two questions at the end are new. Let’s click “What do hamsters like to do for fun?” (because how can I resist?)…

This opens up a second answer, a second link to a new Google search, and two more answers. You can continue this to your heart’s content. What’s especially interesting is that this isn’t just some static list that expands as you click on it. The new questions are generated based on your interactions, as Google tries to understand your intent and shape your journey around it.

My colleague, Britney Muller, has done some excellent research on the subject and has taken to calling these infinite PAAs. They’re probably not quite infinite — eventually, the sun will explode and consume the Earth. Until then, they do represent a massively recursive in-Google experience.


V. Videos & Movies

One particularly interesting type of Featured Snippet is the Featured Video result. Search for “umbrella” and you should see a panel like this in the top-left column (truncated):

This is a unique hybrid — it has Knowledge Panel features (that link back to Google results), but it also has an organic-style link and large video thumbnail. While it appears organic, all of the Featured Videos we’ve seen in the wild have come from YouTube (Vevo is a YouTube partner), which essentially means this is an in-Google experience. These Featured Videos consume a lot of screen real-estate and appear even on commercial terms, like Rihanna’s “umbrella” (shown here) or Kendrick Lamar’s “swimming pools”.

Movie searches yield a rich array of features, from Live Results for local showtimes to rich Knowledge Panels. Last year, Google completely redesigned their mobile experience for movie results, creating a deep in-search experience. Here’s a mobile panel for “Black Panther”…

Notice the tabs below the title. You can navigate within this panel to a wealth of information, including cast members and photos. Clicking on any cast member goes to a new search about that actor/actress.

Although the search results eventually continue below this panel, the experience is rich, self-contained, and incredibly disruptive to high-ranking powerhouses in this space, including IMDB. You can even view trailers from the panel…

On my phone, Google displayed 10 videos (at roughly two per screen), and nine of those were links to YouTube. Given YouTube’s dominance, it’s difficult to say if Google is purposely favoring their own properties, but the end result is the same — even seemingly “external” clicks are often still Google-owned clicks.


VI. Local Results

A similar evolution has been happening in local results. Take the local 3-pack — here’s one on a search for “Seattle movie theaters”…

Originally, the individual business links went directly to each of those business’s websites. As of the past year or two, these instead go to local panels on Google Maps, like this one…

On mobile, these local panels stand out even more, with prominent photos, tabbed navigation and easy access to click-to-call and directions.

In certain industries, local packs have additional options to run a search within a search. Here’s a pack for Chicago taco restaurants, where you can filter results (from the broader set of Google Maps results) by rating, price, or hours…

Once again, we have a fully embedded search experience. I don’t usually vouch for any of the businesses in my screenshots, but I just had the pork belly al pastor at Broken English Taco Pub and it was amazing (this is my personal opinion and in no way reflects the taco preferences of Moz, its employees, or its lawyers).

The hospitality industry has been similarly affected. Search for an individual hotel, like “Kimpton Alexis Seattle” (one of my usual haunts when visiting the home office), and you’ll get a local panel like the one below. Pardon the long image, but I wanted you to have the full effect…

This is an incredible blend of local business result, informational panel, and commercial result, allowing you direct access to booking information. It’s not just organic local results that have changed, though. Recently, Google started offering ads in local packs, primarily on mobile results. Here’s one for “tax attorneys”…

Unlike traditional AdWords ads, these results don’t go directly to the advertiser’s website. Instead, like standard pack results, they go to a Google local panel. Here’s what the mobile version looks like…

In addition, Google has launched specialized ads for local service providers, such as plumbers and electricians. These appear carousel-style on desktop, such as this one for “plumbers in Seattle”…

Unlike AdWords advertisers, local service providers buy into a specialized program and these local service ads click to a fully customized Google sub-site, which brings us to the next topic — portals.


VII. Custom Portals

Some Google experiences have become so customized that they operate as stand-alone portals. If you click on a local service ad, you get a Google-owned portal that allows you to view the provider, check to see if they can handle your particular problem in your zip code, and (if not) view other, relevant providers…

You’ve completely left the search result at this point, and can continue your experience fully within this Google property. These local service ads have now expanded to more than 30 US cities.

In 2016, Google launched their own travel guides. Run a search like “things to do in Seattle” and you’ll see a carousel-style result like this one…

Click on “Seattle travel guide” and you’ll be taken to a customized travel portal for the city of Seattle. The screen below is a desktop result — note the increasing similarity to rich mobile experiences.

Once again, you’ve been taken to a complete Google experience outside of search results.

Last year, Google jumped into the job-hunting game, launching a 3-pack of job listings covering all major players in this space, like this one for “marketing jobs in Seattle”…

Click on any job listing, and you’ll be taken to a separate Google jobs portal. Let’s try Facebook…

From here, you can view other listings, refine your search, and even save jobs and set up alerts. Once again, you’ve jumped from a specialized Google result to a completely Google-controlled experience.

Like hotels, Google has dabbled in flight data and search for years. If I search for “flights to Seattle,” Google will automatically note my current location and offer me a search interface and a few choices…

Click on one of these choices and you’re taken to a completely redesigned Google Flights portal…

Once again, you can continue your journey completely within this Google-owned portal, never returning back to your original search. This is a trend we can expect to continue for the foreseeable future.


VIII. Hard Questions

If I’ve bludgeoned you with examples, then I apologize, but I want to make it perfectly clear that this is not a case of one or two isolated incidents. Google is systematically driving more clicks from search to new searches, in-search experiences, and other Google-owned properties. This leads to a few hard questions…

Why is Google doing this?

Right about now, you’re rushing to the comments section to type “For the money!” along with a bunch of other words that may include variations of my name, “sheeple,” and “dumb-ass.” Yes, Google is a for-profit company that is motivated in part by making money. Moz is a for-profit company that is motivated in part by making money. Stating the obvious isn’t insight.

In some cases, the revenue motivation is clear. Suggesting the best laptops to searchers and linking those to shopping opportunities drives direct dollars. In traditional walled gardens, publishers are trying to produce more page-views, driving more ad impressions. Is Google driving us to more searches, in-search experiences, and portals to drive more ad clicks?

The answer isn’t entirely clear. Knowledge Graph links, for example, usually go to informational searches with few or no ads. Rich experiences like Medical Knowledge Panels and movie results on mobile have no ads at all. Some portals have direct revenues (local service providers have to pay for inclusion), but others, like travel guides, have no apparent revenue model (at least for now).

Google is competing directly with Facebook for hours in our day — while Google has massive traffic and ad revenue, people on average spend much more time on Facebook. Could Google be trying to drive up their time-on-site metrics? Possibly, but it’s unclear what this accomplishes beyond being a vanity metric to make investors feel good.

Looking to the long game, keeping us on Google and within Google properties does open up the opportunity for additional advertising and new revenue streams. Maybe Google simply realizes that letting us go so easily off to other destinations is leaving future money on the table.

Is this good for users?

I think the most objective answer I can give is — it depends. As a daily search user, I’ve found many of these developments useful, especially on mobile. If I can get an answer at a glance or in an in-search entity, such as a Live Result for weather or sports, or the phone number and address of a local restaurant, it saves me time and the trouble of being familiar with the user interface of thousands of different websites. On the other hand, if I feel that I’m being run in circles through search after search or am being given fewer and fewer choices, that can feel manipulative and frustrating.

Is this fair to marketers?

Let’s be brutally honest — it doesn’t matter. Google has no obligation to us as marketers. Sites don’t deserve to rank and get traffic simply because we’ve spent time and effort or think we know all the tricks. I believe our relationship with Google can be symbiotic, but that’s a delicate balance and always in flux.

In some cases, I do think we have to take a deep breath and think about what’s good for our customers. As a marketer, local packs linking directly to in-Google properties is alarming — we measure our success based on traffic. However, these local panels are well-designed, consistent, and have easy access to vital information like business addresses, phone numbers, and hours. If these properties drive phone calls and foot traffic, should we discount their value simply because it’s harder to measure?

Is this fair to businesses?

This is a more interesting question. I believe that, like other search engines before it, Google made an unwritten pact with website owners — in exchange for our information and the privilege to monetize that information, Google would send us traffic. This is not altruism on Google’s part. The vast majority of Google’s $95B in 2017 advertising revenue came from search advertising, and that advertising would have no audience without organic search results. Those results come from the collective content of the web.

As Google replaces that content and sends more clicks back to themselves, I do believe that the fundamental pact that Google’s success was built on is gradually being broken. Google’s garden was built on our collective property, and it does feel like we’re slowly being herded out of our own backyards.

We also have to consider the deeper question of content ownership. If Google chooses to pursue private data partnerships — such as with Live Results or the original Knowledge Graph — then they own that data, or at least are leasing it fairly. It may seem unfair that they’re displacing us, but they have the right to do so.

Much of the Knowledge Graph is built on human-curated sources such as Wikidata (i.e. Wikipedia). While Google undoubtedly has an ironclad agreement with Wikipedia, what about the people who originally contributed and edited that content? Would they have done so knowing their content could ultimately displace other content creators (including possibly their own websites) in Google results? Are those contributors willing participants in this experiment? The question of ownership isn’t as easy as it seems.

If Google extracts the data we provide as part of the pact, such as with Featured Snippets and People Also Ask results, and begins to wall off those portions of the garden, then we have every right to protest. Even the concept of a partnership isn’t always black-and-white. Some job listing providers I’ve spoken with privately felt pressured to enter Google’s new jobs portal (out of fear of cutting off the paths to their own gardens), but they weren’t happy to see the new walls built.

Google is also trying to survive. Search has to evolve, and it has to answer questions and fit a rapidly changing world of device formats, from desktop to mobile to voice. I think the time has come, though, for Google to stop and think about the pact that built their nearly hundred-billion-dollar ad empire.



Moz Blog


Google’s Project Zero Team Exposes Microsoft Edge Bug

Microsoft has been pretty aggressive in marketing its Edge browser and even launched two commercials earlier this year specifically pointing out its advantages over rival Chrome. After staying silent for a while, Google appears to have counterattacked by disclosing an Edge security flaw.

Google’s Project Zero, which found the vulnerability last November, has released the technical details of its discovery. The flaw theoretically allows hackers to bypass Edge’s security features and insert their own malicious code into a target’s computer. That said, there has been no reported instance of the problem being successfully exploited so far.


Google’s New Map Pin Design

Google has updated and rolled out the new map pin design for the local pack search results. Sergey Alakov spotted the test and then the rollout, and I can now see the new map pins myself…


Search Engine Roundtable


What Do Google’s New, Longer Snippets Mean for SEO? – Whiteboard Friday

Posted by randfish

Snippets and meta descriptions have brand-new character limits, and it’s a big change for Google and SEOs alike. Learn about what’s new, when it changed, and what it all means for SEO in this edition of Whiteboard Friday.

What do Google's new, longer snippets mean for SEO?

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about Google’s big change to the snippet length.

This is the display length of the snippet for any given result in the search results that Google provides. This is on both mobile and desktop. It sort of impacts the meta description, which is how many snippets are written. They’re taken from the meta description tag of the web page. Google essentially said just last week, “Hey, we have officially increased the length, the recommended length, and the display length of what we will show in the text snippet of standard organic results.”

So I’m illustrating that for you here. I did a search for “net neutrality bill,” something that’s on the minds of a lot of Americans right now. You can see here that this article from The Hill, which is a recent article — it was two days ago — has a much longer text snippet than what we would normally expect to find. In fact, I went ahead and counted this one and then showed it here.

So basically, at the old 165-character limit, which is what you would have seen prior to the middle of November on most every search result, occasionally Google would have a longer one for very specific kinds of search results, but more than 90%, according to data from SISTRIX, which put out a great report and I’ll link to it here, more than 90% of search snippets were 165 characters or less prior to the middle of November. Then Google added basically a few more lines.

So now, on mobile and desktop, instead of an average of two or three lines, we’re talking three, four, five, sometimes even six lines of text. So this snippet here is 266 characters that Google is displaying. The next result, from Save the Internet, is 273 characters. Again, this might be because Google sort of realized, “Hey, we almost got all of this in here. Let’s just carry it through to the end rather than showing the ellipsis.” But you can see that 165 characters would cut off right here. This one actually does a good job of displaying things.

So imagine a searcher is querying for something in your field and they’re just looking for a basic understanding of what it is. So they’ve never heard of net neutrality. They’re not sure what it is. So they can read here, “Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast, and Verizon from speeding up, slowing down, or blocking any . . .” And that’s where it would cut off. Or that’s where it would have cut off in November.

Now, if I got a snippet like that, I need to visit the site. I’ve got to click through in order to learn more. That doesn’t tell me enough to give me the data to go through. Now, Google has tackled this before with things, like a featured snippet, that sit at the top of the search results, that are a more expansive short answer. But in this case, I can get the rest of it because now, as of mid-November, Google has lengthened this. So now I can get, “Any content, applications, or websites you want to use. Net neutrality is the way that the Internet has always worked.”

Now, you might quibble and say this is not a full, thorough understanding of what net neutrality is, and I agree. But for a lot of searchers, this is good enough. They don’t need to click any more. This extension from 165 to 275 or 273, in this case, has really done the trick.

What changed?

So this can mean a bunch of changes for SEO too. The change that happened here is that Google updated basically two things. One, they updated the snippet length, and two, they updated their guidelines around it.

So Google’s had historic guidelines that said, well, you want to keep your meta description tag between about 160 and 180 characters. I think that was the number. They’ve updated that to where they say there’s no official meta description recommended length. But on Twitter, Danny Sullivan said that he would probably not make that greater than 320 characters. In fact, we and other data providers that collect a lot of search results didn’t find many that extended beyond 300. So I think that’s a reasonable thing.

When?

When did this happen? It was starting at about mid-November. November 22nd is when SISTRIX’s dataset starts to notice the increase, and it was over 50%. Now it’s sitting at about 51% of search results that have these longer snippets in at least 1 of the top 10 as of December 2nd.

Here’s the amazing thing, though — 51% of search results have at least one. Many of those, because they’re still pulling old meta descriptions or meta descriptions that SEOs have optimized for the 165-character limit, are still very short. So if you’re the person to go update your important pages right now (especially since it’s holiday time, with lots of ecommerce action), you might be able to get more real estate in the SERPs than any of your competitors, because they’re not updating theirs.

How will this affect SEO?

So how is this going to really change SEO? Well, three things:

A. It changes how marketers should write and optimize the meta description.

We’re going to be writing a little bit differently because we have more space. We’re going to be trying to entice people to click, but we’re going to be very conscientious that we want to try and answer a lot of this in the search result itself, because if we can, there’s a good chance that Google will rank us higher, even if we’re actually sort of sacrificing clicks by helping the searcher get the answer they need in the search result.

B. It may impact click-through rate.

We’ll be looking at Jumpshot data over the next few months and year ahead. We think there are two likely ways it could go. Probably negatively, meaning fewer clicks on less complex queries. But conversely, it’s possible it will get more clicks on some more complex queries, because people are more enticed by the longer description. Fingers crossed, that’s kind of what you want to do as a marketer.

C. It may lead to lower click-through rate further down in the search results.

If you think about the fact that the real estate that was taken up by three results a month ago is now taken up by two, well, maybe people won’t scroll as far down. Maybe the ones that are higher up will in fact draw more of the clicks, and thus being further down on page one will have less value than it used to.

What should SEOs do?

What are things that you should do right now? Number one, make a priority list — you should probably already have this — of your most important landing pages by search traffic, the ones that receive the most search traffic on your website, organic search. Then I would go and reoptimize those meta descriptions for the longer limits.

Now, you can judge as you will. My advice would be go to the SERPs that are sending you the most traffic, that you’re ranking for the most. Go check out the limits. They’re probably between about 250 and 300, and you can optimize somewhere in there.

The second thing I would do is if you have internal processes or your CMS has rules around how long you can make a meta description tag, you’re going to have to update those probably from the old limit of somewhere in the 160 to 180 range to the new 230 to 320 range. It doesn’t look like many are smaller than 230 now, at least limit-wise, and it doesn’t look like anything is particularly longer than 320. So somewhere in there is where you’re going to want to stay.
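
To make that reoptimization pass concrete, here’s a rough audit sketch (assuming the requests and beautifulsoup4 libraries) that reports the current meta description length for a list of priority pages. The URLs are placeholders, and the 230 to 320 character window is just the range discussed above; adjust it to the limits you actually see in your SERPs.

import requests
from bs4 import BeautifulSoup

# Replace with your priority landing pages (the ones earning the most organic traffic)
urls = [
    "https://www.example.com/",
    "https://www.example.com/important-page/",
]

MIN_LEN, MAX_LEN = 230, 320  # rough window based on the limits observed in the SERPs

for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    tag = soup.find("meta", attrs={"name": "description"})
    desc = (tag.get("content") or "").strip() if tag else ""

    if not desc:
        status = "MISSING"
    elif len(desc) < MIN_LEN:
        status = "TOO SHORT (written for the old limit?)"
    elif len(desc) > MAX_LEN:
        status = "LIKELY TO BE TRUNCATED"
    else:
        status = "OK"

    print(f"{len(desc):>4} chars  {status:<35}  {url}")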

Good luck with your new meta descriptions and with your new snippet optimization. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog


Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP protocols are the basic set of standards allowing the World Wide Web to exchange information. They are the reason a web browser can display data stored on another server. The first was initiated back in 1989, which means, just like everything else, HTTP protocols are getting outdated. HTTP/2 is the latest version of the protocol, created to replace these aging standards.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter or can even be done without a basic understanding of HTTP protocol. Robert knew that if his site wasn’t indexing correctly, his client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

  • Multiplexing: The ability to send multiple requests through a single TCP connection.
  • Server push: When a client requires some resource (let’s say, an HTML document), a server can push CSS and JS files to a client cache. It reduces network latency and round-trips.
  • One connection per origin: With HTTP/2, only one connection is needed to load the website.
  • Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
  • Binary framing layer: HTTP/2 is easier to parse (for both the server and the client).
  • Header compression: This feature reduces overhead from plain text in HTTP/1.1 and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangout in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients had started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which was at the time supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I was following this online debate, I still received questions from our clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website (https://goralewicz.com) and make it HTTP/2 only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating an HTTP/2 connection:

1. Over HTTP (insecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method to which John Mueller was referring. However, it doesn’t apply to my website (because it’s served via HTTPS). What is more, this is an old-fashioned way of negotiating, not supported by modern browsers. Below is a screenshot from Caniuse.com:

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
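
If you’d like to check which protocol a given server ends up negotiating over HTTPS, you can inspect the ALPN handshake with nothing more than Python’s standard library. This is a minimal client-side sketch; the hostname is an example to replace with the site you want to test:

import socket
import ssl

host = "goralewicz.com"  # replace with the host you want to test

# Offer both h2 and http/1.1 via ALPN and see which one the server picks
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((host, 443), timeout=10) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        negotiated = tls_sock.selected_alpn_protocol()

print(host, "negotiated:", negotiated or "no ALPN protocol (HTTP/1.1 fallback)")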

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio, and asked him to see if there were anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he provide us with help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot does support the SPDY protocol and Next Protocol Negotiation (NPN). And thus, it can’t support HTTP/2.

Below is Peter’s response:


I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

    1. This protocol is vulnerable
    2. Google Chrome no longer supports SPDY in favor of HTTP/2
    3. Servers have been neglecting to support SPDY. Let’s examine the NGINX example: from version 1.9.5, they no longer support SPDY.
    4. Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands. In short, “-WWW” emulates a simple web server that serves files from the current directory, while “-www” just sends back an HTML status page about the connection. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console Fetch and Render or in the Google mobile tester.

As I wrote above, there is no logical reason why Googlebot supports SPDY. This protocol is vulnerable; no modern browser supports it. Additionally, servers (including NGINX) neglect to support it. It’s just a matter of time until Googlebot will be able to crawl using HTTP/2. Just implement HTTP 1.1 + HTTP/2 support on your own server (your users will notice due to faster loading) and wait until Google is able to send requests using HTTP/2.


Summary

In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. We recommended that Robert (and you readers as well) enable HTTP/2 on your websites for better performance, but continue optimizing for HTTP/1.1. Your visitors will notice and thank you.



Moz Blog


Google’s DeepMind Starts Ethics Group to Examine AI’s Impact on Society

Google is finally taking steps to ensure that its rapid development in the field of AI will only bring about positive change for the whole of humanity. London-based company DeepMind, a subsidiary of Google parent firm Alphabet, has formed a new research unit called “Ethics & Society,” tasked to steer the group’s AI efforts.

“Our intention is always to promote research that ensures AI works for all,” DeepMind explains in a blog post. Promising to “help technologists put ethics into practice,” the DeepMind Ethics & Society group outlined the principles that will guide its future endeavors: social benefit, rigorous and evidence-based research, transparency, and diversity.

The group is composed of thinkers and experts from a variety of disciplines. They include Nick Bostrom (Oxford University philosopher), Diane Coyle (economist from the University of Manchester), Edward W. Felten (computer scientist from Princeton University), and Christiana Figueres (Mission 2020 convener), to name a few, Gizmodo reported. The group lists some of the key issues it will address, including AI risk management, setting up standards of AI morality and values, and lessening the economic disruption AI will likely bring when it replaces real people in the workforce.

It remains to be seen just how much sway DeepMind Ethics & Society will have over Google’s AI ambitions. A clash between the two groups is likely to happen in the future, considering that Google’s drive to churn out potentially profitable AI-powered products may run counter to Ethics & Society’s goals and principles.

The rapid development of artificial intelligence is a rather divisive issue even among industry titans. One of the most vocal opponents of unregulated research on AI is Tesla CEO Elon Musk, who views artificial intelligence as a potential threat to mankind and has called for a proactive stance on its regulation.

“AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said earlier this year. “Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization.”




WebProNews


How to Determine if a Page is "Low Quality" in Google’s Eyes – Whiteboard Friday

Posted by randfish

What are the factors Google considers when weighing whether a page is high or low quality, and how can you identify those pages yourself? There’s a laundry list of things to examine to determine which pages make the grade and which don’t, from searcher behavior to page load times to spelling mistakes. Rand covers it all in this episode of Whiteboard Friday.

How to identify low quality pages

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about how to figure out if Google thinks a page on a website is potentially low quality and if that could lead us to some optimization options.

So as we’ve talked about previously here on Whiteboard Friday, and I’m sure many of you have been following along with experiments that Britney Muller from Moz has been conducting about removing low-quality pages, you saw Roy Hinkis from SimilarWeb talk about how they had removed low-quality pages from their site and seen an increase in rankings on a bunch of stuff. So many people have been trying this tactic. The challenge is figuring out which pages are actually low quality. What does that constitute?

What constitutes “quality” for Google?

So Google has some ideas about what’s high quality versus low quality, and a few of those are pretty obvious and we’re familiar with, and some of them may be more intriguing. So…

  • Google wants unique content.
  • They want to make sure that the value to searchers from that content is actually unique, not that it’s just different words and phrases on the page, but the value provided is actually different. You can check out the Whiteboard Friday on unique value if you have more questions on that.
  • They like to see lots of external sources linking editorially to a page. That tells them that the page is probably high quality because it’s reference-worthy.
  • They also like to see high-quality pages, not just sources, domains but high-quality pages linking to this. That can be internal and external links. So it tends to be the case that if your high-quality pages on your website link to another page on your site, Google often interprets that that way.
  • The page successfully answers the searcher’s query.

This is an intriguing one. So if someone performs a search, let’s say here I type in a search on Google for “pressure washing.” I’ll just write “pressure wash.” This page comes up. Someone clicks on that page, and they stay here and maybe they do go back to Google, but then they perform a completely different search, or they go to a different task, they visit a different website, they go back to their email, whatever it is. That tells Google, great, this page solved the query.

If instead someone searches for this and they go, they perform the search, they click on a link, and they get a low-quality mumbo-jumbo page and they click back and they choose a different result instead, that tells Google that page did not successfully answer that searcher’s query. If this happens a lot, Google calls this activity pogo-sticking, where you visit this one, it didn’t answer your query, so you go visit another one that does. It’s very likely that this result will be moved down and be perceived as low quality in Google.

  • The page has got to load fast on any connection.
  • They want to see high-quality accessibility with intuitive user experience and design on any device, so mobile, desktop, tablet, laptop.
  • They want to see actually grammatically correct and well-spelled content. I know this may come as a surprise, but we’ve actually done some tests and seen that by having poor spelling or bad grammar, we can get featured snippets removed from Google. So you can have a featured snippet, it’s doing great in the SERPs, you change something in there, you mess it up, and Google says, “Wait, no, that no longer qualifies. You are no longer a high-quality answer.” So that tells us that they are analyzing pages for that type of information.
  • Non-text content needs to have text alternatives. This is why Google encourages use of the alt attribute. This is why on videos they like transcripts. Here on Whiteboard Friday, as I’m speaking, there’s a transcript down below this video that you can read and get all the content without having to listen to me, if you don’t want to or aren’t able to for whatever technical or accessibility reasons. (There’s a small script for auditing alt text sketched just below this list.)
  • They also like to see content that is well-organized and easy to consume and understand. They interpret that through a bunch of different things, but some of their machine learning systems can certainly pick that up.
  • Then they like to see content that points to additional sources for more information or for follow-up on tasks or to cite sources. So links externally from a page will do that.

This is not an exhaustive list. But these are some of the things that can tell Google high quality versus low quality and start to get them filtering things.
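
As a small illustration of the text-alternatives point above, here’s a minimal sketch of an alt-text audit. It assumes Python with the requests and beautifulsoup4 packages installed, and the URL is just a placeholder for a page you’d want to check:

```python
# Minimal sketch: flag images with missing or empty alt text on a single page.
# Assumes requests and beautifulsoup4 are installed; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-page"  # hypothetical page to audit
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

missing_alt = [
    img.get("src", "(no src)")
    for img in soup.find_all("img")
    if not (img.get("alt") or "").strip()
]

print(f"{len(missing_alt)} image(s) missing alt text on {url}")
for src in missing_alt:
    print(" -", src)
```

This only covers one accessibility signal, of course; it’s a quick automated pass, not a substitute for reviewing pages by hand.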

How can SEOs & marketers filter pages on sites to ID high vs. low quality?

As a marketer, as an SEO, there’s a process that we can use. We don’t have access to every single one of these components that Google can measure, but we can look at some things that will help us determine this is high quality, this is low quality, maybe I should try deleting or removing this from my site or recreating it if it is low quality.

In general, I’m going to urge you NOT to use things like:

A. Time on site, raw time on site

B. Raw bounce rate

C. Organic visits

D. Assisted conversions

Why not? Because by themselves, all of these can be misleading signals.

So a long time on your website could be because someone’s very engaged with your content. It could also be because someone is immensely frustrated and they cannot find what they need. So they’re going to return to the search result and click something else that quickly answers their query in an accessible fashion. Maybe you have lots of pop-ups and they have to click close on them and it’s hard to find the x-button and they have to scroll down far in your content. So they’re very unhappy with your result.

Bounce rate works similarly. A high bounce rate could be a fine thing if you’re answering a very simple query or if the next step is to go somewhere else or if there is no next step. If I’m just trying to get, “Hey, I need some pressure washing tips for this kind of treated wood, and I need to know whether I’ll remove the treatment if I pressure wash the wood at this level of pressure,” and it turns out no, I’m good. Great. Thank you. I’m all done. I don’t need to visit your website anymore. My bounce rate was very, very high. Maybe you have a bounce rate in the 80s or 90s percent, but you’ve answered the searcher’s query. You’ve done what Google wants. So bounce rate by itself, bad metric.

Same with organic visits. You could have a page that is relatively low quality that receives a good amount of organic traffic for one reason or another, and that could be because it’s still ranking for something or because it ranks for a bunch of long tail stuff, but it is disappointing searchers. This one is a little bit better in the longer term. If you look at this over the course of weeks or months as opposed to just days, you can generally get a better sense, but still, by itself, I don’t love it.

Assisted conversions is a great example. This page might not convert anyone. It may be an opportunity to drop cookies. It might be an opportunity to remarket or retarget to someone or get them to sign up for an email list, but it may not convert directly into whatever goal conversions you’ve got. That doesn’t mean it’s low-quality content.

THESE can be a good start:

So what I’m going to urge you to do is think of these as a combination of metrics. Any time you’re analyzing for low versus high quality, apply a combination-of-metrics approach.

1. That could be a combination of engagement metrics. I’m going to look at…

  • Total visits
  • External and internal visits
  • I’m going to look at the pages per visit after landing. So if someone gets to the page and then they browse through other pages on the site, that is a good sign. If they browse through very few, that’s not as good a sign, but it’s not to be taken by itself. It needs to be combined with things like time on site and bounce rate and total visits and external visits.

2. You can combine some offsite metrics. So things like…

  • External links
  • Number of linking root domains
  • Page Authority (PA) and your social shares, like Facebook, Twitter, and LinkedIn share counts, can also be applicable here. If you see something that’s getting social shares, well, maybe it doesn’t match up with searchers’ needs, but it could still be high-quality content.

3. Search engine metrics. You can look at…

  • Indexation by typing a URL directly into the search bar or the browser bar and seeing whether the page is indexed.
  • You can also check whether pages rank for their own titles.
  • You can look in Google Search Console and see click-through rates.
  • You can look at unique versus duplicate content. So if I type in a URL here and I see multiple pages come back from my site, or if I type in the title of a page that I’ve created and I see multiple URLs come back from my own website, I know that there are some uniqueness problems there. (A quick offline version of this check is sketched just after this list.)

4. You are almost definitely going to want to do an actual hand review of a handful of pages.

  • Review pages from subsections or subfolders or subdomains, if you have them, and ask, “Oh, hang on. Does this actually help searchers? Is this content current and up to date? Is it meeting our organization’s standards?”
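
For the uniqueness check in #3, here’s a minimal sketch that runs the same idea offline against a crawl export, assuming a CSV (for example, from Screaming Frog) with “url” and “title” columns; the filename and column names are placeholders:

```python
# Minimal sketch: find duplicate titles in a crawl export to spot uniqueness problems.
# Assumes a CSV export (e.g. from Screaming Frog) with "url" and "title" columns;
# the filename and column names are placeholders.
import pandas as pd

crawl = pd.read_csv("crawl_export.csv")

# Group URLs by title and keep only titles that appear on more than one URL.
dupes = (
    crawl.groupby("title")["url"]
    .apply(list)
    .loc[lambda urls: urls.str.len() > 1]
)

for title, urls in dupes.items():
    print(f'"{title}" appears on {len(urls)} URLs:')
    for u in urls:
        print("  -", u)
```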

Make 3 buckets:

Using these combinations of metrics, you can build some buckets. You can do this in a pretty easy way by exporting all your URLs. You could use something like Screaming Frog or Moz’s crawler or DeepCrawl to export all your pages into a spreadsheet with metrics like these, and then you can start to sort and filter. You can create some sort of algorithm, some combination of the metrics that you determine is pretty good at ID’ing things, and you double-check that with your hand review (a rough version of that scoring is sketched after the bucket descriptions below). I’m going to urge you to put them into three kinds of buckets.

I. High importance. So high importance, high-quality content, you’re going to keep that stuff.

II. Needs work. The second bucket is stuff that needs work but is still good enough to stay in the search engines. It’s not awful. It’s not harming your brand, and it’s certainly not what search engines would call low quality and be penalizing you for. It’s just not living up to your expectations or your hopes. That means you can republish it, or work on it and improve it.

III. Low quality. The third bucket is stuff that really doesn’t meet the standards you’ve got here, but don’t just delete it outright. Do some testing. Take a sample set of the worst junk that you put in the low bucket, remove it from your site, make sure you keep a copy, and see if, by removing a few hundred or a few thousand of those pages, you see an increase in crawl budget and indexation and rankings and search traffic. If so, you can start to be more liberal with what you’re cutting out of that low-quality bucket, and a lot of times you’ll see some great results from Google.
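
To make the bucketing concrete, here’s a minimal sketch that combines a few of the metrics above into a rough composite score and assigns each URL to one of the three buckets. The column names, weights, and cut points are all placeholders you’d calibrate against your own hand review:

```python
# Minimal sketch: combine several per-URL metrics into a rough quality score and
# sort pages into three buckets. Column names, weights, and cut points are
# placeholders; calibrate them against a hand review of sample pages.
import pandas as pd

# Hypothetical export with columns: url, organic_visits, pages_per_visit,
# linking_root_domains, gsc_ctr
pages = pd.read_csv("pages_with_metrics.csv")

metrics = ["organic_visits", "pages_per_visit", "linking_root_domains", "gsc_ctr"]

# Normalize each metric to 0-1 so no single metric dominates the composite score.
for m in metrics:
    rng = pages[m].max() - pages[m].min()
    pages[m + "_norm"] = (pages[m] - pages[m].min()) / rng if rng else 0.0

# Equal-weight average of the normalized metrics; adjust weights as you see fit.
pages["score"] = pages[[m + "_norm" for m in metrics]].mean(axis=1)

# Rough cut points for the three buckets; adjust after spot-checking pages by hand.
pages["bucket"] = pd.cut(
    pages["score"],
    bins=[-0.01, 0.25, 0.60, 1.01],
    labels=["low quality", "needs work", "high importance"],
)

print(pages["bucket"].value_counts())
pages.sort_values("score").to_csv("pages_bucketed.csv", index=False)
```

The scoring here is deliberately naive; the point is the workflow of scoring, bucketing, and then spot-checking each bucket by hand before you remove anything.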

All right, everyone. Hope you’ve enjoyed this edition of Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in IM News | Comments Off

RankRanger: 99% Of Google’s Page One Results Have At Least One HTTPS Result

HTTPS in the Google search results continues to rise. In fact, according to RankRanger’s Google feature tracker tool, 99% of the page-one results they track have at least one HTTPS result within them…


Search Engine Roundtable

Posted in IM News | Comments Off
