Tag Archive | "Site"

Redirects: One Way to Make or Break Your Site Migration – Whiteboard Friday

Posted by KameronJenkins

Correctly redirecting your URLs is one of the most important things you can do to make a site migration go smoothly, but there are clear processes to follow if you want to get it right. In this week’s Whiteboard Friday, Kameron Jenkins breaks down the rules of redirection for site migrations to make sure your URLs are set up for success.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey, guys. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins, and I work here at Moz. What we’re going to be talking about today is redirects and how they’re one way that you can make or break your site migration. Site migration can mean a lot of different things depending on your context.

Migrations?

I wanted to go over quickly what I mean before we dive into some tips for avoiding redirection errors. When I talk about migration, I’m coming from the experience of these primary activities.

CMS moving/URL format

One example of a migration I might be referring to is taking on a client whose previous CMS had a default URL format that was date-based.

So it was something like /2018/May/ and then the post. Then we change the CMS, we have more flexibility with how our URLs are structured, and we move it to just /post or something like that. In that case a lot of URLs are going to be moving around because we're changing the way those URLs are structured.

“Keywordy” naming conventions

Another instance is when clients come to us with dated or keywordy URLs, and we want to make them a lot cleaner, shorten them where possible, and just make them more human-readable.

An example of that would be maybe the client used URLs like /best-plumber-dallas, and we want to change it to something a little bit cleaner, more natural, and not as keywordy, to just /plumbers or something like that. So that can be another example of lots of URLs moving around if we’re taking over a whole site and we’re kind of wanting to do away with those.

Content overhaul

Another example is if we’re doing a complete content overhaul. Maybe the client comes to us and they say, “Hey, we’ve been writing content and blogging for a really long time, and we’re just not seeing the traffic and the rankings that we want. Can you do a thorough audit of all of our content?” Usually what we notice is that you have maybe even thousands of pages, but four of them are ranking.

So there are a lot of just redundant pages, pages that are thin and would be stronger together, some pages that just don’t really serve a purpose and we want to just let die. So that’s another example where we would be merging URLs, moving pages around, just letting some drop completely. That’s another example of migrating things around that I’m referring to.

Don’t we know this stuff? Yes, but…

That’s what I’m referring to when it comes to migrations. But before we dive in, I kind of wanted to address the fact that like don’t we know this stuff already? I mean I’m talking to SEOs, and we all know or should know the importance of redirection. If there’s not a redirect, there’s no path to follow to tell Google where you’ve moved your page to.

It’s frustrating for users if they click on a link that no longer works, that doesn’t take them to the proper destination. We know it’s important, and we know what it does. It passes link equity. It makes sure people aren’t frustrated. It helps to get the correct page indexed, all of those things. So we know this stuff. But if you’re like me, you’ve also been in those situations where you have to spend entire days fixing 404s to correct traffic loss or whatever after a migration, or you’re fixing 301s that were maybe done but they were sent to all kinds of weird, funky places.

Mistakes still happen even though we know the importance of redirects. So I want to talk about why really quickly.

Unclear ownership

Unclear ownership is something that can happen, especially if you're on a scrappier, smaller team and maybe don't handle these things often enough to have a defined process. I've been in situations where I assumed the tech was going to do it, and the tech assumed the project assistant was going to do it.

We’re all kind of pointing fingers at each other with no clear ownership, and then the ball gets dropped because no one really knows whose responsibility it is. So just make sure that you designate someone to do it and that they know and you know that that person is going to be handling it.

Deadlines

Another thing is deadlines. Internal and external deadlines can affect this. So one example that I encountered pretty often is the client would say, “Hey, we really need this project done by next Monday because we’re launching another initiative. We’re doing a TV commercial, and our domain is going to be listed on the TV commercial. So I’d really like this stuff wrapped up when those commercials go live.”

So those kinds of external deadlines can affect how quickly we have to work. A lot of times redirects just get left by the wayside because they're not a very visible thing. If you don't know the importance of redirects, you might focus on the visible things, like the content, making sure the buttons all work, and making sure the template looks nice, and assume that redirects are just a backend thing that can be taken care of later. Unfortunately, redirects usually fall into that category if the person doing them doesn't really know their importance.

Another thing with deadlines is internal deadlines. Sometimes you might have a quarterly or monthly goal: we have to have all of our projects done by this date. The same thing happens there. The redirects are unfortunately something that tends to miss the cutoff for those types of things.

Non-SEOs handling the redirection

Then another situation that can cause site migration errors and 404s is non-SEOs handling the redirects. Now you don't usually have to be a really experienced SEO to handle these types of things. It depends on your CMS and how complicated your redirect implementation is. But if your CMS makes redirection easy, it can be treated as a data entry-type job and delegated to someone who maybe doesn't know the importance of doing all of them, formatting them properly, or pointing them to the places they're supposed to go.

The rules of redirection for site migrations

Those are all situations where I've encountered issues. So now that we know what I mean by migrations and why these errors sometimes still happen, I'm going to launch into some rules that will hopefully help prevent site migration errors caused by failed redirects.

1. Create one-to-one redirects

Number one, always create one-to-one redirects. This is super important. What I've seen sometimes is, "Oh man, it could save me tons of time if I just used a wildcard and redirected all of these pages to the homepage or to the blog homepage or something like that." But what that tells Google is that Page A has moved to Page B, whereas that's not the case. You're not moving all of these pages to the homepage. They haven't actually moved there. So it's an irrelevant redirect, and Google has even said, I think, that they treat those essentially as soft 404s. They don't even count. So make sure you don't do that. Make sure you're always mapping each URL to its new location, one-to-one, every single time, for every URL that's moving.
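If you keep the old-to-new mapping in a spreadsheet, you can also verify it programmatically once the redirects are live. Here's a minimal sketch in Python using the requests library; the CSV file name and column names are hypothetical, so adjust them to however you store your mapping.

```python
import csv

import requests  # third-party: pip install requests


def check_redirect_map(csv_path):
    """Verify that each old URL 301s directly to its mapped new URL.

    Expects a CSV with 'old_url' and 'new_url' columns (hypothetical format).
    """
    problems = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            old, new = row["old_url"], row["new_url"]
            # Some servers reject HEAD; swap in requests.get if needed.
            resp = requests.head(old, allow_redirects=False, timeout=10)
            location = resp.headers.get("Location", "")
            if resp.status_code not in (301, 308):  # permanent redirects only
                problems.append((old, f"expected 301, got {resp.status_code}"))
            elif location.rstrip("/") != new.rstrip("/"):
                problems.append((old, f"redirects to {location}, expected {new}"))
    return problems


if __name__ == "__main__":
    for url, issue in check_redirect_map("redirect_map.csv"):  # hypothetical file
        print(f"{url}: {issue}")
```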

2. Watch out for redirect chains

Two, watch out for chains. I think Google has said something oddly specific here, like no more than three redirects in a chain, five at the very most. Just try to limit them as much as possible. By chains, I mean you have URL A, and then you redirect it to B, and then later you decide to move it to a third location. Instead of going through a middleman, A to B to C, shorten it if you can. Go straight from the source to the destination, A to C.

3. Watch out for loops

Three, watch out for loops. Similarly, what can happen is you redirect URL A to URL B, then to another version C, and then back to A. What happens is the redirect is chasing its tail. It will never resolve, because you're redirecting it in a loop. So watch out for things like that. One nifty way to check for those issues is Screaming Frog's redirect chains report, which shows whether you're encountering any of them after you've implemented your redirects.
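This kind of hop-by-hop check is also easy to script yourself if you want a second opinion alongside Screaming Frog. A minimal sketch in Python; the example URL and the three-hop comfort threshold are my own assumptions, not an official Google number.

```python
import requests  # third-party: pip install requests

MAX_HOPS = 3  # assumed comfort threshold, not an official Google number


def trace_redirects(url, max_hops=10):
    """Follow redirects one hop at a time; return the hop list and whether a loop was hit."""
    seen, hops, current = {url}, [url], url
    for _ in range(max_hops):
        resp = requests.head(current, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 307, 308):
            return hops, False  # resolved (or errored) without looping
        current = requests.compat.urljoin(current, resp.headers["Location"])
        if current in seen:
            return hops + [current], True  # we've been here before: loop
        seen.add(current)
        hops.append(current)
    return hops, False


url = "https://www.example.com/old-page"  # hypothetical URL
hops, looped = trace_redirects(url)
if looped:
    print("Redirect loop: " + " -> ".join(hops))
elif len(hops) - 1 > MAX_HOPS:
    print(f"Chain of {len(hops) - 1} hops; consider redirecting straight to {hops[-1]}")
```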

4. 404 strategically

Number four, 404 strategically. The presence of 404s on your site alone is not going to hurt your rankings. What causes issues is letting pages die that were ranking and bringing your site traffic. Obviously, if a page is 404ing and you don't redirect it to its new location, Google is eventually going to take it out of the index. If that page was ranking really well and bringing your site traffic, you're going to lose those benefits. If it had links pointing to it, you're going to lose the benefits of those backlinks if it dies.

So if you’re going to 404, just do it strategically. You can let pages die. Like in these situations, maybe you’re just outright deleting a page and it has no new location, nothing relevant to redirect it to. That’s okay. Just know that you’re going to lose any of the benefits that URL was bringing your site.

5. Prioritize “SEO valuable” URLs

Number five, prioritize "SEO valuable" URLs. I put it that way because, obviously, I prefer to redirect everything that's legitimately moving.

But because of situations like deadlines and things like that, when we’re down to the wire, I think it’s really important to at least have started out with your most important URLs. So those are URLs that are ranking really well, giving you a lot of good traffic, URLs that you’ve earned links to. So those really SEO valuable URLs, if you have a deadline and you don’t get to finish all of your redirects before this project goes live, at least you have those most critical, most important URLs handled first.

Again, obviously, it’s not ideal, I don’t think in my mind, to save any until after the launch. Obviously, I think it’s best to have them all set up by the time it goes live. But if that’s not the case and you’re getting rushed and you have to launch, at least you will have handled the most important URLs for SEO value.

6. Test!

Number six, just to end it off, test. It's super important to monitor these things, because you could think you've set everything up right, but maybe there were some formatting errors, or maybe you mistakenly redirected something to the wrong place. So what you can do is run a site:domain.com search and start clicking on all the results that come up to see if any are redirecting to the wrong place or 404ing.

Just check all of those indexed URLs to make sure they're going to the proper new destination. Moz's Site Crawl is another huge benefit here for testing purposes. If you have a domain set up in a campaign in Moz Pro, it checks this every week, and you can force another run if you want it to.

It will scan your site for errors like this, namely 404s. If there are any issues like that, 500- or 400-type errors, Site Crawl will catch them and notify you. If you're not managing the domain you're working on in a campaign in Moz Pro, there's On-Demand Crawl too, which you can run on any domain to test for the same things.

There are plenty of other ways you can test and find errors. But the most important thing to remember is just to do it, just to test and make sure that even once you’ve implemented these things, that you’re checking and making sure that there are no issues after a launch. I would check right after a launch and then a couple of days later, and then just kind of taper off until you’re absolutely positive that everything has gone smoothly.
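Beyond the tools mentioned above, a small script can re-check your most important URLs in that same post-launch window. A minimal sketch, assuming you've exported the URLs you care about to a plain text file, one per line; the file name is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests


def final_status(url):
    """Return the URL plus the status code reached after following any redirects."""
    try:
        return url, requests.get(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        return url, f"request failed: {exc}"


with open("important_urls.txt") as f:  # hypothetical file: one URL per line
    urls = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=10) as pool:
    for url, status in pool.map(final_status, urls):
        if status != 200:  # anything else deserves a second look post-launch
            print(f"{url} -> {status}")
```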

So those are my tips, those are my rules for how to implement redirects properly, why you need to, when you need to, and the risks that can happen with that. If you have any tips of your own that you’d like to share, pop them in the comments and share it with all of us in the SEO community. That’s it for this week’s Whiteboard Friday.

Come back again next week for another one. Thanks, everybody.

Video transcription by Speechpad.com


How to avoid a site migration disaster

Undertaking a site migration can seem like a daunting task, but it can be relatively painless if you follow the right steps.



Please visit Search Engine Land for the full article.



SearchCap: Google My Business app, Bing ads insights & web site trust

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.



What Does It Take To Launch And Sustain A Profitable Membership Site?

 [ Download MP3 | Transcript | iTunes | Soundcloud | Raw RSS ] In this solo podcast, I dive into all the various experiences I have had selling membership and subscription-based products online, including insights I have gained from interviews and coaching other successful membership site owners. My first ever product, although it was […]

The post What Does It Take To Launch And Sustain A Profitable Membership Site? appeared first on Yaro.Blog.

Entrepreneurs-Journey.com by Yaro Starak


Internal Linking & Mobile First: Large Site Crawl Paths in 2018 & Beyond

Posted by Tom.Capper

By now, you’ve probably heard as much as you can bear about mobile-first indexing. For me, there’s been one topic that’s been conspicuously missing from all this discussion, though, and that’s the impact on internal linking and previous internal linking best practices.

In the past, there have been a few popular methods for providing crawl paths for search engines — bulky main navigations, HTML sitemap-style pages that exist purely for internal linking, or blocks of links at the bottom of indexed pages. Larger sites have typically used at least two or often three of these methods. I’ll explain in this post why all of these are now looking pretty shaky, and what I suggest you do about it.

Quick refresher: WTF are “internal linking” & “mobile-first,” Tom?

Internal linking is and always has been a vital component of SEO — it’s easy to forget in all the noise about external link building that some of our most powerful tools to affect the link graph are right under our noses. If you’re looking to brush up on internal linking in general, it’s a topic that gets pretty complex pretty quickly, but there are a couple of resources I can recommend to get started:

I’ve also written in the past that links may be mattering less and less as a ranking factor for the most competitive terms, and though that may be true, they’re still the primary way you qualify for that competition.

A great example I’ve seen recently of what happens if you don’t have comprehensive internal linking is eflorist.co.uk. (Disclaimer: eFlorist is not a client or prospective client of Distilled, nor are any other sites mentioned in this post)

eFlorist has local landing pages for all sorts of locations, targeting queries like “Flower delivery in [town].” However, even though these pages are indexed, they’re not linked to internally. As a result, if you search for something like “flower delivery in London,” despite eFlorist having a page targeted at this specific query (which can be found pretty much only through use of advanced search operators), they end up ranking on page 2 with their “flowers under £30” category page:

¯\_(ツ)_/¯

If you’re looking for a reminder of what mobile-first indexing is and why it matters, these are a couple of good posts to bring you up to speed:

In short, though, Google is increasingly looking at pages as they appear on mobile for all the things it was previously using desktop pages for — namely, establishing ranking factors, the link graph, and SEO directives. You may well have already seen an alert from Google Search Console telling you your site has been moved over to primarily mobile indexing, but if not, it’s likely not far off.

Get to the point: What am I doing wrong?

If you have more than a handful of landing pages on your site, you’ve probably given some thought in the past to how Google can find them and how to make sure they get a good chunk of your site’s link equity. A rule of thumb often used by SEOs is how many clicks a landing page is from the homepage, also known as “crawl depth.”
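Crawl depth is straightforward to compute yourself if you can export your site's internal link graph from a crawler: treat pages as nodes, links as edges, and run a breadth-first search from the homepage. A minimal sketch in Python; the toy link graph here is hypothetical.

```python
from collections import deque


def crawl_depths(links, homepage):
    """Breadth-first search over an internal link graph.

    `links` maps each URL to the URLs it links to; the result maps each
    reachable URL to its minimum click depth from the homepage. URLs missing
    from the result are unreachable internally, i.e. orphan pages.
    """
    depths = {homepage: 0}
    queue = deque([homepage])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths


# Toy graph with hypothetical URLs; in practice, build this from a crawl export.
links = {
    "/": ["/flights", "/hotels"],
    "/flights": ["/flights/london", "/flights/australia"],
    "/flights/london": ["/"],
}
print(crawl_depths(links, "/"))
# {'/': 0, '/flights': 1, '/hotels': 1, '/flights/london': 2, '/flights/australia': 2}
```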

Mobile-first indexing impacts this on two fronts:

  1. Some of your links aren't present on mobile (as is common), so your internal linking simply won't work in a world where Google is going primarily with the mobile version of your page
  2. If your links are visible on mobile, they may be hideous or overwhelming to users, given the reduced on-screen real estate vs. desktop

If you don’t believe me on the first point, check out this Twitter conversation between Will Critchlow and John Mueller:

In particular, that section I’ve underlined in red should be of concern — it’s unclear how much time we have, but sooner or later, if your internal linking on the mobile version of your site doesn’t cut it from an SEO perspective, neither does your site.

And for the links that do remain visible, an internal linking structure that can be rationalized on desktop can quickly look overbearing on mobile. Check out this example from Expedia.co.uk’s “flights to London” landing page:

Many of these links are part of the site-wide footer, but they vary according to what page you’re on. For example, on the “flights to Australia” page, you get different links, allowing a tree-like structure of internal linking. This is a common tactic for larger sites.

In this example, there’s more unstructured linking both above and below the section screenshotted. For what it’s worth, although it isn’t pretty, I don’t think this is terrible, but it’s also not the sort of thing I can be particularly proud of when I go to explain to a client’s UX team why I’ve asked them to ruin their beautiful page design for SEO reasons.
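To make the tree-like tactic concrete, here's a minimal sketch of how a page's link block might be generated from a destination taxonomy: each page links up to its parent, across to its siblings, and down to its children, so equity flows through the tree without one enormous navigation. The taxonomy and the cap on links are illustrative assumptions, not how Expedia actually does it.

```python
def link_block(page, parent_of, children_of, max_links=20):
    """Pick internal links for one landing page from a category tree.

    Links to the parent, then siblings, then children, capped at `max_links`;
    both the cap and the taxonomy below are illustrative assumptions.
    """
    parent = parent_of.get(page)
    siblings = [p for p in children_of.get(parent, []) if p != page]
    children = children_of.get(page, [])
    block = ([parent] if parent else []) + siblings + children
    return block[:max_links]


# Toy taxonomy of destination landing pages (hypothetical URLs)
children_of = {
    "/flights": ["/flights/europe", "/flights/oceania"],
    "/flights/europe": ["/flights/london", "/flights/paris"],
    "/flights/oceania": ["/flights/sydney"],
}
parent_of = {child: parent for parent, kids in children_of.items() for child in kids}

print(link_block("/flights/london", parent_of, children_of))
# ['/flights/europe', '/flights/paris']
```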

I mentioned earlier that there are three main methods of establishing crawl paths on large sites: bulky main navigations, HTML-sitemap-style pages that exist purely for internal linking, or blocks of links at the bottom of indexed pages. I’ll now go through these in turn, and take a look at where they stand in 2018.

1. Bulky main navigations: Fail to scale

The most extreme example I was able to find of this is from Monoprice.com, with a huge 711 links in the sitewide top-nav:

Here’s how it looks on mobile:

This is actually fairly usable, but you have to consider the implications of having this many links on every page of your site — this isn’t going to concentrate equity where you need it most. In addition, you’re potentially asking customers to do a lot of work in terms of finding their way around such a comprehensive navigation.

I don’t think mobile-first indexing changes the picture here much; it’s more that this was never the answer in the first place for sites above a certain size. Many sites have tens of thousands of landing pages (or more), not hundreds, to worry about. So simply using the main navigation is not a realistic option, let alone an optimal option, for creating crawl paths and distributing equity in a proportionate or targeted way.

2. HTML sitemaps: Ruined by the counterintuitive equivalence of noindex,follow & noindex,nofollow

This is a slightly less common technique these days, but still used reasonably widely. Take this example from Auto Trader UK:

The idea is that this page is linked to from Auto Trader’s footer, and allows link equity to flow through into deeper parts of the site.

However, there’s a complication: this page would, in an ideal world, be “noindex,follow.” However, it turns out that over time, Google ends up treating “noindex,follow” like “noindex,nofollow.” It’s not 100% clear what John Mueller meant by this, but it does make sense that given the low crawl priority of “noindex” pages, Google could eventually stop crawling them altogether, causing them to behave in effect like “noindex,nofollow.” Anecdotally, this is also how third-party crawlers like Moz and Majestic behave, and it’s how I’ve seen Google behave with test pages on my personal site.

That means that at best, Google won’t discover new links you add to your HTML sitemaps, and at worst, it won’t pass equity through them either. The jury is still out on this worst case scenario, but it’s not an ideal situation in either case.

So, you have to index your HTML sitemaps. For a large site, this means you’re indexing potentially dozens or hundreds of pages that are just lists of links. It is a viable option, but if you care about the quality and quantity of pages you’re allowing into Google’s index, it might not be an option you’re so keen on.

3. Link blocks on landing pages: Good, bad, and ugly, all at the same time

I already mentioned that example from Expedia above, but here’s another extreme example from the Kayak.co.uk homepage:

Example 1

Example 2

It’s no coincidence that both these sites come from the travel search vertical, where having to sustain a massive number of indexed pages is a major challenge. Just like their competitor, Kayak have perhaps gone overboard in the sheer quantity here, but they’ve taken it an interesting step further — notice that the links are hidden behind dropdowns.

This was covered in the post from Bridget Randolph that I mentioned above, and I agree so much that I’m just going to quote her verbatim:

Note that with mobile-first indexing, content which is collapsed or hidden in tabs, etc. due to space limitations will not be treated differently than visible content (as it may have been previously), since this type of screen real estate management is actually a mobile best practice.

Combined with a more sensible quantity of internal linking, and taking advantage of the significant height of many mobile landing pages (i.e., this needn’t be visible above the fold), this is probably the most broadly applicable method for deep internal linking at your disposal going forward. As always, though, we need to be careful as SEOs not to see a working tactic and rush to push it to its limits — usability and moderation are still important, just as with overburdened main navigations.

Summary: Bite the on-page linking bullet, but present it well

Overall, the most scalable method for getting large numbers of pages crawled, indexed, and ranking on your site is going to be on-page linking — simply because you already have a large number of pages to place the links on, and in all likelihood a natural “tree” structure, by very nature of the problem.

Top navigations and HTML sitemaps have their place, but lack the scalability or finesse to deal with this situation, especially given what we now know about Google’s treatment of “noindex,follow” tags.

However, the more we emphasize mobile experience, while simultaneously relying on this method, the more we need to be careful about how we present it. In the past, as SEOs, we might have been fairly nervous about placing on-page links behind tabs or dropdowns, just because it felt like deceiving Google. And on desktop, that might be true, but on mobile, this is increasingly going to become best practice, and we have to trust Google to understand that.

All that said, I’d love to hear your strategies for grappling with this — let me know in the comments below!


SearchCap: Ask an SMXpert, Google HTTPS site migration, GSC news & more

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.



Google Confirms Chrome Usage Data Used to Measure Site Speed

Posted by Tom-Anthony

During a discussion with Google’s John Mueller at SMX Munich in March, he told me an interesting bit of data about how Google evaluates site speed nowadays. It has gotten a bit of interest from people when I mentioned it at SearchLove San Diego the week after, so I followed up with John to clarify my understanding.

The short version is that Google is now using performance data aggregated from Chrome users who have opted in as a datapoint in the evaluation of site speed (and as a signal with regards to rankings). This is a positive move (IMHO) as it means we don’t need to treat optimizing site speed for Google as a separate task from optimizing for users.

Previously, it has not been clear how Google evaluates site speed, and it was generally believed to be measured by Googlebot during its visits — a belief enhanced by the presence of speed charts in Search Console. However, the onset of JavaScript-enabled crawling made it less clear what Google is doing — they obviously want the most realistic data possible, but it’s a hard problem to solve. Googlebot is not built to replicate how actual visitors experience a site, and so as the task of crawling became more complex, it makes sense that Googlebot may not be the best mechanism for this (if it ever was the mechanism).

In this post, I want to recap the pertinent data around this news quickly and try to understand what this may mean for users.

Google Search Console

Firstly, we should clarify our understanding of what the “time spent downloading a page” metric in Google Search Console is telling us. Most of us will recognize graphs like this one:

Until recently, I was unclear about exactly what this graph was telling me. But handily, John Mueller comes to the rescue again with a detailed answer [login required] (hat tip to James Baddiley from Chillisauce.com for bringing this to my attention):

John clarified what this graph is showing:

It’s technically not “downloading the page” but rather “receiving data in response to requesting a URL” – it’s not based on rendering the page, it includes all requests made.

And that it is:

this is the average over all requests for that day

Because Google may be fetching a very different set of resources every day when it’s crawling your site, and because this graph does not account for anything to do with page rendering, it is not useful as a measure of the real performance of your site.

For that reason, John points out that:

Focusing blindly on that number doesn’t make sense.

With which I quite agree. The graph can be useful for identifying certain classes of backend issues, but there are also probably better ways for you to do that (e.g. WebPageTest.org, of which I’m a big fan).

Okay, so now that we understand that graph and what it represents, let’s look at the next option: the Google WRS.

Googlebot & the Web Rendering Service

Google’s WRS is their headless browser mechanism based on Chrome 41, which is used for things like “Fetch as Googlebot” in Search Console, and is increasingly what Googlebot is using when it crawls pages.

However, we know that this isn’t how Google evaluates pages because of a Twitter conversation between Aymen Loukil and Google’s Gary Illyes. Aymen wrote up a blog post detailing it at the time, but the important takeaway was that Gary confirmed that WRS is not responsible for evaluating site speed:

Twitter conversation with Gary Illyes

At the time, Gary was unable to clarify what was being used to evaluate site performance (perhaps because the Chrome User Experience Report hadn’t been announced yet). It seems as though things have progressed since then, however. Google is now able to tell us a little more, which takes us on to the Chrome User Experience Report.

Chrome User Experience Report

Introduced in October last year, the Chrome User Experience Report “is a public dataset of key user experience metrics for top origins on the web,” whereby “performance data included in the report is from real-world conditions, aggregated from Chrome users who have opted-in to syncing their browsing history and have usage statistic reporting enabled.”

Essentially, certain Chrome users allow their browser to report back load time metrics to Google. The report currently has a public dataset for the top 1 million+ origins, though I imagine they have data for many more domains than are included in the public data set.

In March I was at SMX Munich (amazing conference!), where along with a small group of SEOs I had a chat with John Mueller. I asked John about how Google evaluates site speed, given that Gary had clarified it was not the WRS. John was kind enough to shed some light on the situation, but at that point, nothing was published anywhere.

However, since then, John has confirmed this information in a Google Webmaster Central Hangout [15m30s, in German], where he explains they’re using this data along with some other data sources (he doesn’t say which, though notes that it is in part because the data set does not cover all domains).

At SMX John also pointed out how Google’s PageSpeed Insights tool now includes data from the Chrome User Experience Report:

The public dataset of performance data for the top million domains is also available in a public BigQuery project, if you’re into that sort of thing!
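If you want to poke at that dataset yourself, here's a minimal sketch using the google-cloud-bigquery Python client to pull the share of real-user page loads with a first contentful paint under one second for a single origin. The table naming convention (chrome-ux-report.all.YYYYMM) reflects how the dataset was published at the time, and the origin is hypothetical, so verify both against the current documentation.

```python
from google.cloud import bigquery  # third-party: pip install google-cloud-bigquery

# Share of real-user page loads with first contentful paint under one second
# for a single origin. Table naming (chrome-ux-report.all.YYYYMM) and the
# origin are assumptions to verify against the current dataset docs.
QUERY = """
SELECT SUM(bin.density) AS fast_fcp_share
FROM `chrome-ux-report.all.201805`,
     UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE origin = 'https://www.example.com'
  AND bin.start < 1000
"""

client = bigquery.Client()  # requires Google Cloud credentials and a billing project
for row in client.query(QUERY).result():
    print(f"Loads with FCP under 1s: {(row.fast_fcp_share or 0):.1%}")
```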

We can’t be sure what all the other factors Google is using are, but we now know they are certainly using this data. As I mentioned above, I also imagine they are using data on more sites than are perhaps provided in the public dataset, but this is not confirmed.

Pay attention to users

Importantly, this means that there are changes you can make to your site that Googlebot is not capable of detecting, which are still detected by Google and used as a ranking signal. For example, we know that Googlebot does not support HTTP/2 crawling, but now we know that Google will be able to detect the speed improvements you would get from deploying HTTP/2 for your users.
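As a quick illustration of that gap, you can check which protocol an HTTP/2-capable client actually negotiates with your server, which is roughly what real Chrome users get even though Googlebot falls back to HTTP/1.1. A minimal sketch using the third-party httpx library, installed with its HTTP/2 extra; the URL is hypothetical.

```python
import httpx  # third-party: pip install "httpx[http2]"


def negotiated_protocol(url):
    """Report which HTTP version an HTTP/2-capable client negotiates with the server."""
    with httpx.Client(http2=True) as client:
        return client.get(url).http_version  # e.g. "HTTP/2" or "HTTP/1.1"


print(negotiated_protocol("https://www.example.com"))  # hypothetical URL
```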

The same is true if you were to use service workers for advanced caching behaviors — Googlebot wouldn’t be aware, but users would. There are certainly other such examples.

Essentially, this means that there’s no longer a reason to worry about pagespeed for Googlebot, and you should instead just focus on improving things for your users. You still need to pay attention to Googlebot for crawling purposes, which is a separate task.

If you are unsure where to look for site speed advice, then you should look at:

That’s all for now! If you have questions, please comment here and I’ll do my best! Thanks!


Announcing 5 NEW Feature Upgrades to Moz Pro’s Site Crawl, Including Pixel-Length Title Data

Posted by Dr-Pete

While Moz is hard at work on some major new product features (we’re hoping for two more big launches in 2017), we’re also working hard to iterate on recent advances. I’m happy to announce that, based on your thoughtful feedback, and our own ever-growing wish lists, we’ve recently launched five upgrades to Site Crawl.

1. Mark Issues as Fixed

It’s fine to ignore issues that don’t matter to your site or business, but many of you asked for a way to audit fixes or just let us know that you’ve made a fix prior to our next data update. So, from any issues page, you can now select items and “Mark as fixed” (screens below edited for content).

Fixed items will immediately be highlighted and, like Ignored issues, can be easily restored…

Unlike the “Ignore” feature, we’ll also monitor these issues for you and warn you if they reappear. In a perfect world, you’d fix an issue once and be done, but we all know that real web development just doesn’t work out that way.

2. View/Ignore/Fix More Issues

When we launched the “Ignore” feature, many of you were very happy (it was, frankly, long overdue), until you realized you could only ignore issues in chunks of 25 at a time. We have heard you loud and clear (seriously, Carl, stop calling) and have taken two steps. First, you can now view, ignore, and fix issues 100 at a time. This is the default – no action or extra clicks required.

3. Ignore Issues by Type

Second, you can now ignore entire issue types. Let’s say, for example, that Moz.com intentionally has 33,000 Meta Noindex tags. We really don’t need to be reminded of that every week. So, once we make sure none of those are unintentional, we can go to the top of the issue page and click “Ignore Issue Type”:

Look for this in the upper-right of any individual issue page. Just like individual issues, you can easily track all of your ignored issues and start paying attention to them again at any time. We just want to help you clear out the noise so that you can focus on what really matters to you.

4. Pixel-length Title Data

For years now, we’ve known that Google cuts display titles by pixel length. We’ve provided research on this subject and have built our popular title tag checker around pixel length, but providing this data at product scale proved to be challenging. I’m happy to say that we’ve finally overcome those challenges, and “Pixel Length” has replaced Character Length in our title tag diagnostics.

Google currently uses a 600-pixel container, but you may notice that you receive warnings below that length. Due to making space to add the “…” and other considerations, our research has shown that the true cut-off point that Google uses is closer to 570 pixels. Site Crawl reflects our latest research on the subject.
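If you want a rough local approximation of that check, you can measure a title's rendered width with the Pillow library. The font file and 20px size below are assumptions on my part, and Google's actual SERP rendering will differ slightly, so treat the output as an estimate rather than a definitive verdict.

```python
from PIL import ImageFont  # third-party: pip install Pillow

CUTOFF_PX = 570  # approximate cutoff from the research described above


def title_pixel_width(title, font_path="arial.ttf", font_size=20):
    """Approximate the rendered pixel width of a display title.

    The font file and size are assumptions; point font_path at a font on your
    system, and expect Google's actual SERP rendering to differ slightly.
    """
    font = ImageFont.truetype(font_path, font_size)
    return font.getlength(title)


title = "Announcing 5 NEW Feature Upgrades to Moz Pro's Site Crawl - Moz"
width = title_pixel_width(title)
print(f"{width:.0f}px {'(likely truncated)' if width > CUTOFF_PX else '(fits)'}")
```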

As with other issues, you can export the full data to CSV, to sort and filter as desired:

Looks like we’ve got some work to do when it comes to brevity. Long title tags aren’t always a bad thing, but this data will help you much better understand how and when Google may be cutting off your display titles in SERPs and decide whether you want to address it in specific cases.

5. Full Issue List Export

When we rebuilt Site Crawl, we were thrilled to provide data and exports on all pages crawled. Unfortunately, we took away the export of all issues (choosing to divide those up into major issue types). Some of you had clearly come to rely on the all issues export, and so we’ve re-added that functionality. You can find it next to “All Issues” on the main “Site Crawl Overview” page:

We hope you’ll try out all of the new features and report back as we continue to improve on our Site Crawl engine and UI over the coming year. We’d love to hear what’s working for you and what kind of results you’re seeing as you fix your most pressing technical SEO issues.

Find and fix site issues now


Canada’s Supreme Court orders Google to de-index site globally, opening door to censorship

Decision is dangerous to free speech and the free flow of online information.

The post Canada’s Supreme Court orders Google to de-index site globally, opening door to censorship appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.



Google: Moving A Site Won’t Help A Site Impacted By Panda

It has been some time since we talked about Panda here and I am sure most of you are happy about that. But since Gary Illyes from Google brought it up yesterday on Twitter…


Search Engine Roundtable
