Tag Archive | "Handle"

All About Fraggles (Fragment + Handle) – Whiteboard Friday

Posted by Suzzicks

What are “fraggles” in SEO and how do they relate to mobile-first indexing, entities, the Knowledge Graph, and your day-to-day work? In this glimpse into her 2019 MozCon talk, Cindy Krum explains everything you need to understand about fraggles in this edition of Whiteboard Friday.



Video Transcription

Hi, Moz fans. My name is Cindy Krum, and I’m the CEO of MobileMoxie, based in Denver, Colorado. We do mobile SEO and ASO consulting. I’m here in Seattle, speaking at MozCon, but also recording this Whiteboard Friday for you today, and we are talking about fraggles.

So “fraggle” is obviously a name that I’m borrowing from Jim Henson, who created “Fraggle Rock.” But it’s also a combination of words: a combination of “fragment” and “handle.” I talk about fraggles as a new element, a new kind of thing, that Google is indexing.

Fraggles and mobile-first indexing

Let’s start with the idea of mobile-first indexing, because you have to kind of understand that before you can go on to understand fraggles. So I believe mobile-first indexing is about a little bit more than what Google says. Google says that mobile-first indexing was just a change of the crawler.

They had a desktop crawler that was primarily crawling and indexing, and now they have a mobile crawler that’s doing the heavy lifting for crawling and indexing. While I think that’s true, I think there’s more going on behind the scenes that they’re not talking about, and we’ve seen a lot of evidence of this. So what I believe is that mobile-first indexing was also about indexing, hence the name.

Knowledge Graph and entities

So I think that Google has reorganized their index around entities, specifically entities in the Knowledge Graph. This is my rough diagram of a very simplified Knowledge Graph. The Knowledge Graph is all about persons, places, things, and ideas.

Nouns are entities. The Knowledge Graph has nodes for all of the major person, place, thing, or idea entities out there. But it also organizes the relationships of one idea to another idea, or one thing to another thing. What’s useful to Google is that these concepts and relationships stay true in all languages; that’s how entities work, because entities happen before keywords.

This can be a hard concept for SEOs to wrap their brains around, because we’re so used to dealing with keywords. But if you think about an entity as something that’s described by a keyword yet is language agnostic, that’s how Google thinks about entities. Entities in the Knowledge Graph aren’t written up per se; their unique identifier isn’t a word, it’s a number, and numbers are language agnostic.

But if we think about an entity like mother, mother is a concept that exists in all languages, but we have different words to describe it. Regardless of what language you’re speaking, mother is related to father, is related to daughter, is related to grandfather, all in the same ways, even if we’re speaking different languages. So if Google can use what they call the “topic layer” and entities as a way to filter in information and understand the world, then they can do it in languages where they’re strong and say, “We know that this is true absolutely 100% of the time.”

Then they can apply that understanding to languages that they have a harder time indexing or understanding, where they’re just not as strong or the algorithm isn’t built to handle the complexities of the language, like German, with its very long compound words, or languages that use lots of short words to mean different things or to modify other words.

Languages all work differently. But if they can use their translation API and their natural language APIs to build out the Knowledge Graph in places where they’re strong, then they can use it with machine learning to also build it and do a better job of answering questions in places or languages where they’re weak. So when you understand that, then it’s easy to think about mobile-first indexing as a massive Knowledge Graph build-out.
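
As a concrete aside (my illustration, not something from the talk): you can see these language-agnostic identifiers yourself through Google’s Knowledge Graph Search API, which returns entities with opaque machine IDs rather than words. Here’s a minimal TypeScript sketch; it assumes Node 18+ for the global fetch and a valid API key in the KG_API_KEY environment variable, and the endpoint and fields shown are the documented ones, but verify against the current docs before relying on it.

```typescript
// Minimal sketch: look up an entity and print its language-agnostic ID.
// Assumes Node 18+ (global fetch) and an API key in KG_API_KEY; endpoint and
// fields are from Google's Knowledge Graph Search API docs; verify before use.

interface KgEntity {
  "@id": string;        // e.g. "kg:/m/0jg24": an opaque machine ID, not a word
  name?: string;        // the label changes with language; the ID does not
  description?: string;
}

interface KgResponse {
  itemListElement: { result: KgEntity; resultScore: number }[];
}

async function lookupEntity(query: string, language = "en"): Promise<void> {
  const url =
    "https://kgsearch.googleapis.com/v1/entities:search" +
    `?query=${encodeURIComponent(query)}` +
    `&languages=${language}&limit=1&key=${process.env.KG_API_KEY}`;
  const res = await fetch(url);
  const data = (await res.json()) as KgResponse;
  for (const item of data.itemListElement) {
    // Querying "mother" in English or "madre" in Spanish should resolve to
    // the same "@id", which is the point being made above.
    console.log(item.result["@id"], item.result.name);
  }
}

lookupEntity("mother").catch(console.error);
```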

We’ve seen this happening statistically. There are more Knowledge Graph results and more of the other features that seem to be related to the Knowledge Graph: people also ask, people also search for, related searches. Those all describe different elements or nodes of the Knowledge Graph. So when you see those things in a search, I want you to think, “Hey, this is the Knowledge Graph showing me how this topic is related to other topics.”

So when Google launched mobile-first indexing, I think the reason it took two and a half years is that they were reindexing the entire web and organizing it around the Knowledge Graph. If you think back to the AMA that John Mueller did right around that time, he answered a lot of questions about JavaScript and hreflang.

When you put this in that context, it makes more sense. He wants the entity understanding, or rather he knows that the entity understanding is really important, so hreflang is also really important. So that’s enough of that. Now let’s talk about fraggles.

Fraggles = fragment + handle

So fraggles, as I said, are a fragment plus a handle. It’s important to know that lots of things out there have fragments. Think of native apps, databases, websites, podcasts, and videos. Those can all be fragmented.

Even though they don’t have URLs, they might be useful content, because Google says its goal is to organize the world’s information, not to organize the world’s websites. I think that, historically, Google has been locked into crawling and indexing websites, and that that’s bothered it. It wants to be able to show other stuff, but it couldn’t, because everything needed a URL.

But with fragments, potentially they don’t have to have a URL. So keep these things in mind — apps, databases and stuff like that — and then look at this. 

So this is a traditional page. If you think about a page, Google has kind of been forced, historically by their infrastructure, to surface pages and to rank pages. But pages sometimes struggle to rank if they have too many topics on them.

So for instance, what I’ve shown you here is a page about vegetables. This page may be the best page about vegetables, and it may have the best information about lettuce, celery, and radishes. But because it’s got those topics and maybe more topics on it, they all kind of dilute each other, and this great page may struggle to rank because it’s not focused on the one topic, on one thing at a time.

Google wants to rank the best things. But historically they’ve pushed us to put the best things on one page at a time, to break them out. What that’s created is this “content is king, I need more content, build more pages” mentality in SEO. The problem is that everyone can keep building more and more pages for every keyword or keyword group they want to rank for, but only one page is going to rank number one.

Google still has to crawl all of those pages that it told us to build, and that creates this character over here: Marjory the Trash Heap. If you remember the Fraggles, Marjory the Trash Heap was the all-knowing oracle. When we’re all creating low- to mid-quality content just to have a separate page for every topic, that makes Google’s life harder, and of course it makes our lives harder too.

So why are we doing all of this work? The answer is because Google can only index pages, and if a page is too long or covers too many topics, Google gets confused. We’ve been enabling Google to do this. But go with me on this, because this is a theory; I can’t prove it. If Google didn’t have to index a full page, wasn’t locked into that, and could just index a piece of a page, then it would be easier for Google to understand the relationships of different topics to one page, and also to organize the bits of the page under different pieces of the Knowledge Graph.

So this page about vegetables could be indexed and organized under the vegetable node of the Knowledge Graph. But that doesn’t mean that the lettuce part of the page couldn’t be indexed separately under the lettuce portion of the Knowledge Graph and so on, celery to celery and radish to radish. Now I know this is novel, and it’s hard to think about if you’ve been doing SEO for a long time.

But let’s think about why Google would want to do this. Google has been moving towards all of these new kinds of search experiences where we have voice search, we have the Google Home Hub kind of situation with a screen, or we have mobile searches. If you think about what Google has been doing, we’ve seen the increase in people also ask, and we’ve seen the increase in featured snippets.

They’ve actually been making fragments, or indexing fragments and showing them in featured snippets, for a long time. The difference with fraggles is that when a fraggle ranks in a search result and you click through, Google scrolls to that portion of the page automatically. That’s the handle portion.

So handles you may have heard of before. They’re old-school web building. We call them bookmarks, anchor links, or jump links. It’s when the browser automatically scrolls to the right portion of the page. What we’ve seen with fraggles is that Google is lifting bits of text, and when you click on one, Google scrolls directly to that piece of text on the page.

We already see this happening in some results. What’s interesting is that Google is overlaying the link. You don’t have to program the jump link; Google finds it and puts it there for you. Google is already doing this, especially with AMP featured snippets. If you have an AMP featured snippet, a featured snippet lifted from an AMP page, then when you click through, Google actually scrolls to and highlights the featured snippet so that you can read it in context on the page.
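
To make the mechanics concrete (my example, not Cindy’s): the old-school handle is an anchor id the page author supplies, while the overlay behavior described here matches what browsers now expose as text fragments, the #:~:text= URL syntax, which scrolls to and highlights a snippet with no markup on the target page. A quick sketch, with hypothetical URLs:

```typescript
// Two kinds of "handles" pointing into the middle of a page.

// 1. Old-school anchor/jump link: the page author must add markup such as
//    <h2 id="lettuce">, and the URL targets that id.
const jumpLink = "https://example.com/vegetables#lettuce";

// 2. Text fragment: no markup needed on the target page. Supporting browsers
//    scroll to and highlight the quoted text, which matches the overlay
//    behavior described above.
function textFragmentUrl(pageUrl: string, snippet: string): string {
  return `${pageUrl}#:~:text=${encodeURIComponent(snippet)}`;
}

console.log(jumpLink);
console.log(
  textFragmentUrl("https://example.com/vegetables", "lettuce grows in cool weather")
);
// -> https://example.com/vegetables#:~:text=lettuce%20grows%20in%20cool%20weather
```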

But it’s also happening in other, more nuanced situations, especially with forums and conversations where they can pick a best answer. The difference between a fraggle and something like a jump link is that Google is overlaying the scrolling portion. The difference between a fraggle and a sitelink is that sitelinks link to other pages, while fraggles link to multiple pieces of the same long page.

So we want to avoid continuing to build up the low-quality or mid-quality pages that might go to Marjory the Trash Heap. We want to start thinking in terms of: can Google find and identify the right portion of the page about a specific topic, and are these topics related enough that they’ll be understood when they’re indexed against the Knowledge Graph?

Knowledge Graph build-out into different areas

So I personally think that we’re seeing the build-out of the Knowledge Graph in a lot of different things. I think featured snippets are facts or ideas that are looking for a home or validation in the Knowledge Graph. People also ask seems to be the related nodes; people also search for, same thing; related searches, same thing. And then there’s found on the web, where Google groups expanders by topic and then gives you a carousel of featured snippets to click through.



So we’re seeing all of those things, and some SEOs are getting upset that Google is lifting so much content into the search results and that you’re not getting the click. We know that 61% of mobile searches don’t get a click anymore, because people are finding the information they want directly in the SERP.

That’s tough for SEOs but great for Google, because it means Google is providing exactly what the user wants, so they’re probably going to continue doing it. I think SEOs are going to change their minds and want to be in that windowed, lifted content. When Google starts doing this kind of thing for native apps, databases, websites, podcasts, and other content, those become new competitors that you didn’t have to deal with when only websites could rank. And those will be more engaging kinds of content that Google can lift and show in a SERP even though they don’t have URLs, because Google can just window them and show them.

So you’d rather be lifted than not shown at all. So that’s it for me and featured snippets. I’d love to answer your questions in the comments, and thanks very much. I hope you like the theory about fraggles.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


How To Hire Someone To Handle All Your Email For You

[ Download MP3 | Transcript | iTunes | Soundcloud | Raw RSS ] Can you imagine going two weeks without checking your email? How about after waiting two weeks, you log into your inbox, and it’s empty… because someone has handled your email for you! By far the single most…

The post How To Hire Someone To Handle All Your Email For You appeared first on Yaro.blog.

Entrepreneurs-Journey.com by Yaro Starak


How Does Google Handle CSS + JavaScript "Hidden" Text? – Whiteboard Friday

Posted by randfish

Does Google treat text kept behind “read more” links with the same importance as non-hidden text? The short answer is “no,” but there’s more nuance to it than that. In today’s Whiteboard Friday, Rand explains just how the search engine giant weighs text hidden from view using CSS and JavaScript.

How Google handles CSS and JavaScript


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about hidden text, hidden text of several kinds. I don’t mean the spammy, black-text-on-a-black-background, white-on-white type of hidden-text keyword stuffing from the ’90s and early 2000s. I’m talking about what we do with CSS and JavaScript, with overlays and with folders inside a page: that kind of hidden text.

It’s become very popular in modern web design to basically use CSS or to use JavaScript to load text after a user has taken some action on a page. So perhaps they’ve clicked on a separate section of your e-commerce page about your product to see other information, or maybe they’ve clicked a “read more” link in an article to read the rest of the article. This actually creates problems with Google and with SEO, and they’re not obvious problems, because when you use something like Google’s fetch and render tool or when you look at Google’s cache, Google appears to be able to crawl and parse all of that text. But they’re not treating all of it equally.

So here’s an example. I’ve got this text about coconut marble furnishings, which is just a ridiculous test phrase that I’m going to use for this purpose. But let’s say I’ve got page A, which essentially shows the first paragraph of this text, and then I have page B, which only shows part of the first sentence and then a “read more” link, which is very common in lots of articles.

Many folks do this, by the way, because they want to get engagement data about how many people actually read the rest of the piece. Others are using it for serving advertising, or they’re using it to track something, and some people are using it just because of the user experience it provides. Maybe the page is crowded with other types of content. They want to make sure that if someone wants to display this particular piece or that particular piece, that it’s available to them in as convenient a format as possible for design purposes or what have you.

What’s true in these instances is that Google is not going to treat the text that appears after the “read more” link is clicked (the rest of this text becoming visible here) with the same weight that it otherwise would.

All other things being equal

So they’re on similar domains, they have similar link profiles, all that other kind of stuff.

  • A is going to outrank B for “coconut marble furnishings,” even though the phrase appears in B’s title. Because the visible text is relevant to that keyword and serves to create greater relevance, Google is going to weight page A higher.
  • It’s also true that for the content hidden behind the “read more” link, the mechanism doesn’t matter. Whether it’s CSS-based or JavaScript-based, loaded after the fact or together with the HTML, it’s going to be weighted less by Google and treated as though the text were not as important.
  • Interestingly, Bing and Yahoo do not appear to discern between these, so they’ll treat them more equally. Google is the only one that, at least right now, based on some test data (more on that in a second), treats them differently and weights the hidden content less. A minimal sketch of the pattern being tested appears below.
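
For readers who want to see the pattern being tested, here’s a minimal sketch of a typical “read more” implementation; the class names are hypothetical, and the point is simply that the tail text is present in the HTML but hidden until a click:

```typescript
// "Read more" pattern: the tail of the article is in the HTML source, so
// Google can see it, but it is hidden until the user clicks. Class names
// here are hypothetical.

function initReadMore(article: HTMLElement): void {
  const tail = article.querySelector<HTMLElement>(".article-tail");
  const toggle = article.querySelector<HTMLButtonElement>(".read-more");
  if (!tail || !toggle) return;

  tail.hidden = true; // hidden by default, yet still present in the DOM

  toggle.addEventListener("click", () => {
    tail.hidden = false; // revealed only after a user action
    toggle.remove();
    // An engagement-tracking call would typically go here, which is one
    // common reason sites use this pattern in the first place.
  });
}

document.querySelectorAll<HTMLElement>("article").forEach(initReadMore);
```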

Best practices for SEO and “hidden” text

So what can we discern from this? What should SEOs do when we’re working with our web design teams and with our content teams around these types of issues?

I. We have to expect that any time we hide text with CSS, with JavaScript, or what have you, it will have less ranking influence. It’s not that it won’t be counted at all. If I were to search for “hardwood-like material creates beautiful shine” (that exact phrase, in quotes, in Google), both of these pages would come up, this one almost certainly first, but both would come up.

So Google knows the text is there. It just isn’t counting it as highly; it’s content that isn’t carrying the same weight as it would if it were visible by default. Given that, we have to decide, in the tradeoff, whether whatever we gain from having this element is worth losing that ranking value and those potential visitors.

II. We’ve got to consider some creative alternatives. It is possible to make the text visible by default and instead use something like an overlay element: a brief, easily closeable overlay with a message. That could give us the same types of engagement statistics, because 95% of people are going to close it before they scroll down. Granted, as we’ve discussed previously on Whiteboard Friday, overlays have their own issues to be aware of, but they are possible. We can also measure scroll depth with some JavaScript tracking; there’s lots of software that does that by default, and plenty of open-source GitHub repositories we could use (a bare-bones sketch follows). So there may be other ways to reach the same goals.
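
Here’s what that scroll-depth alternative might look like, as a minimal TypeScript sketch; the /analytics/scroll endpoint and the threshold values are assumptions, not a reference implementation:

```typescript
// Scroll-depth tracking: all text stays visible by default, and we measure
// read-through instead. The endpoint "/analytics/scroll" is an assumption.

const thresholds = [25, 50, 75, 100];
const reported = new Set<number>();

window.addEventListener(
  "scroll",
  () => {
    const pctSeen =
      ((window.scrollY + window.innerHeight) /
        document.documentElement.scrollHeight) *
      100;
    for (const t of thresholds) {
      if (pctSeen >= t && !reported.has(t)) {
        reported.add(t); // report each threshold only once per page view
        navigator.sendBeacon("/analytics/scroll", JSON.stringify({ depth: t }));
      }
    }
  },
  { passive: true }
);
```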

III. If you do have to use “read more” or any other text-hiding elements, I would urge you to place the crucial information, including the keyword phrases and the most related terms and phrases you know are important to rankings, in the most visible top portion of the page. That way you maximize the ranking weight of the most important pieces rather than losing them below or behind whatever post-loading setup you’ve got. Make those the default-visible portions of text.

I do want to give special thanks, because one of the reasons we know this goes beyond Google mentioning it on occasion. Over the last few years there’s been a lot of skepticism, especially from folks in the web design community, who have said, “Look, it seems like Google can see this. It doesn’t seem to be a problem. When I search in quotes for this text, Google brings it back.” And that has been correct.

But thanks to Shai Aharony (I’m sorry if I mispronounce your name) from Reboot Online, RebootOnline.com, we have real data; I’ll link to the specific test they performed. They ran wonderful, large-scale, long-term tests of CSS hiding, text areas, visible text, and JavaScript hiding across many domains over a long period, and they basically proved that what Google says is in fact true: text hidden like this is treated with less weight. We really appreciate the efforts of folks who go through that kind of intense effort to give us the truth about how Google works.

That said, we will hopefully see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog


How Should You Handle Expired Content?

Posted by Stephanie Chang

Introduction

Handling expired content can be an overwhelming experience for any SEO in charge of a dynamic website, whether it’s an e-commerce site, a classifieds site (example: job search, real estate listings), or a seasonal/promotional site (example: New York Fashion Week). Even something as fundamental as glancing at the site’s Google Webmaster Tools account can evoke gut-wrenching emotions, especially if the site has amassed tens of thousands of 404 errors. How are you supposed to come up with a process to manage this? What should the process even look like?

What Qualifies as Expired Content?

Expired content is content on a website that is only relevant for a limited period of time. Below are examples of different scenarios that produce expired content.

Job Search/Real Estate Listings: Job listings routinely expire, especially when positions become filled. The same is true for real estate when property is sold. 

  1. What is the best way to handle expired listings, especially if the content is only available for a very limited amount of time?

E-commerce: Products sold on the site routinely change for one reason or another, which raises questions such as:

  1. What happens when the site no longer sells a product? 
  2. What happens if the product becomes temporarily out-of-stock?
  3. What about seasonal products that are only sold during limited times of the year? 

Perhaps most importantly, sites that have to worry about expired content tend to be enormous, often comprising hundreds of thousands of pages. Thus, recommendations need to be manageable and clear. Looking individually at every expired and out-of-stock product is unrealistic. Start thinking: is there a way we can build a process for these types of changes?

The Options

Like most SEO solutions, there isn't necessarily one right answer. We need to take a look at each individual situation on a case-by-case basis and take into consideration the current back-end of the site, as well as the resources and the technological capabilities of the site's team. There is a time and a place to use each of these options for expired content. Identifying the right scenario for each situation is very powerful. 

I. The 404 Error

It makes sense for webmasters to think that 404ing expired content is the approach to take. After all, isn't that the very definition of a 404 page?

(Distilled's 404 Page)

In most situations, a page on the site should not be 404ed. Why?

Disadvantages of 404 pages

404ing pages that used to be live on the site is just not beneficial for SEO because it alerts search engines that there are errors on the site. Essentially, you're wasting the site's crawl allowance on crawling/indexing pages that no longer exist.

Another issue with 404 pages is that they tend to bounce: users land on the page, see that it no longer exists, and quickly leave. Users are vital to the site, and our goal as SEOs is not only to ensure that the site gains organic traffic, but that users stay, browse through the site, and ultimately convert.

Custom 404 Page

If you must 404 pages for one reason or another, consider creating a custom 404 page, so that in the chance that a visitor lands on the page, there is an opportunity for them to convert. A custom 404 page can also include keyword-rich links to other pages on the site (for instance: see Crate and Barrel's 404 page). 
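
As an illustration of that advice (my sketch, not from the original post), here’s a minimal custom 404 handler written with Express in TypeScript; the routes and category links are hypothetical:

```typescript
// Custom 404: return the correct status code, but give stray visitors
// somewhere useful to go. Routes and links are illustrative.
import express from "express";

const app = express();

// ...the site's real routes are registered above this point...

app.use((_req, res) => {
  res.status(404).send(`
    <h1>Sorry, we couldn't find that page.</h1>
    <p>Try one of our most popular sections:</p>
    <ul>
      <li><a href="/furniture">Furniture</a></li>
      <li><a href="/outdoor">Outdoor</a></li>
      <li><a href="/sale">Sale</a></li>
    </ul>
  `);
});

app.listen(3000);
```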

Determining the Right Approach for Expired Content

Now that we know the disadvantages of 404ing pages, what is the right approach in dealing with expired content? To determine this, multiple considerations need to be taken into account, such as:

  1. Was there significant traffic (and not just organic, but also consider direct) coming to this page?
  2. How can we provide the best user experience?
  3. Has this page received external links? How is this page currently internally linked to?
  4. Is there content/resources on the page that users would still find useful?

II. The 301 Permanent Redirect

Advantages of 301 Redirects 

For the vast majority of scenarios, I'd suggest 301 redirecting your expired content to another page. This is usually the best option for SEO and can also be customized to enhance the user experience via dynamically-generated messages. For instance, if a product page had garnered external links, you're able to retain most of the link equity from those links via a 301 redirect (whereas with a 404, that link equity is lost). Why would you want to lose the link equity that you had worked so hard to obtain? Furthermore, it demonstrates to search engines that your site is well-maintained and up-to-date or "fresh".

(Screenshot of infographic from Dr. Pete's epic status code post)

Where should you 301 redirect these pages?

Consider what would result in the best user experience. You want to redirect these pages to the most relevant page. One suggestion is to look at the breadcrumbs and redirect the page based on the internal navigation of the site. For instance, a product page can be redirected to the most relevant sub-category page. Be careful to redirect to a page that is likely to stay on the site for the foreseeable future; otherwise you run the risk of having to deal with this issue again (not to mention that chaining one 301 redirect to another and then another is not good SEO practice). A safe bet is to redirect these pages to the most relevant category page, as category pages are the least likely to change.

Dynamically-Generated Messages

You can customize and improve the user experience by implementing a dynamically-generated message via cookies during a 301 redirect. This would result in users who have landed on expired products receiving a message letting them know that the original product they were seeking is no longer available. This enhances the user experience because it informs users on why they are being redirected. 
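
To make this concrete (my sketch, not Stephanie’s implementation), here is roughly how the redirect-plus-message pattern could look with Express and cookie-parser in TypeScript; the product catalog, cookie name, and markup are all illustrative:

```typescript
// Expired product -> 301 to its category page, with a short-lived cookie
// that lets the destination page explain the redirect to the user.
import express from "express";
import cookieParser from "cookie-parser";

// Toy catalog standing in for a real product lookup.
const products: Record<string, { name: string; category: string; expired: boolean }> = {
  "red-swimsuit": { name: "Red Swimsuit", category: "swimwear", expired: true },
};

const app = express();
app.use(cookieParser());

app.get("/products/:slug", (req, res, next) => {
  const product = products[req.params.slug];
  if (product?.expired) {
    res.cookie("redirect_notice", `${product.name} is no longer available.`, {
      maxAge: 60_000, // one minute: just long enough to show the message once
    });
    return res.redirect(301, `/category/${product.category}`);
  }
  next();
});

app.get("/category/:slug", (req, res) => {
  const notice = req.cookies.redirect_notice; // undefined on normal visits
  res.clearCookie("redirect_notice");
  res.send(`${notice ? `<p>${notice}</p>` : ""}<h1>${req.params.slug}</h1>`);
});

app.listen(3000);
```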

Disadvantages of 301 Redirects 

For some sites, implementing multiple 301 redirects might affect server performance (though for a well-designed site, this should not be an issue). However, if it is true for your site, knowing that site speed is a search engine ranking factor, we want to be wary of the impact we may have by implementing this strategy. If this is the case for your site, consider only 301 redirecting the pages that have gained external links or have received significant amounts of traffic and directing the remaining pages to a customized 404 page. Please bear in mind that this is not an ideal scenario and is just a workaround. 

III. Leave the Page on the Site

Advantages of Leaving the Page As Is

Sometimes product pages still garner significant amounts of traffic, or are rich in unique content and information that is still useful to visitors. It can be worth leaving the original product page up, especially if it has unique, high-quality, evergreen content, with a message that the product has been discontinued. Done well, this provides the best user experience because it pairs the content with a strong call-to-action.

 How Could You Set Up the Message?

Implement a JS overlay that includes products similar to the discontinued one and drives users to those new products. Consider incorporating keyword-targeted internal links to drive traffic to those pages. This provides a positive user experience and is especially important for repeat customers.

Example: Real Estate

For this niche, expired listings bring tons of traffic since people are curious about what has been sold and what the market looks like. Thus, consider leaving these pages on the site, but include additional information on the top of the page, such as "contact us to see similar listings" or "here are some other houses in the area that have similar selling prices."

Disadvantages of Leaving the Page As Is

You want to be wary of leaving old pages up, especially if they aren't enhancing the value of the site. Why? Because it requires more bandwidth from search engine bots to crawl your site as you continue adding new product pages. You don't want to waste your crawl allowance having bots crawl pages that are thin on unique content and value. Having search engines crawl such pages also indicates to them that the site is not "fresh."

Oftentimes, new products also contain the same content as an older variation of the product. For instance, the names of new products may vary only slightly from their previous versions, and the product descriptions can be close duplicates. Having all of these pages live on the site can result in massive duplicate content issues.

How to Deal with Out-of-Stock Products

If a product is out-of-stock and is expected to be restocked, the page should remain on the site, but an out-of-stock notice should be added to it. Bear in mind that out-of-stock pages do tend to generate high bounce rates. To combat that and improve the overall user experience, make sure users know that similar products are still sold on the site, or let them sign up to be notified when the product becomes available again.
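
One way to implement that notice (a sketch; the product data is made up) is to keep the page live and mark the temporary state in structured data alongside the visible out-of-stock message, using schema.org's standard availability vocabulary:

```typescript
// Out-of-stock notice plus schema.org availability, sketched as the data a
// page template might render into a JSON-LD script tag. Product details
// are hypothetical.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Red Swimsuit",
  offers: {
    "@type": "Offer",
    price: "39.99",
    priceCurrency: "USD",
    // Signals that the product is temporarily unavailable rather than
    // expired, matching the visible out-of-stock message on the page.
    availability: "https://schema.org/OutOfStock",
  },
};

console.log(JSON.stringify(productJsonLd, null, 2));
```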

How to Deal with Seasonal Products – at the Category/Sub-Category Level

If a product is seasonal, as is the case for fashion products (example: swimsuits), you might want to leave the page on the site permanently. Why? Because over time these pages retain their link equity year after year. If the swimsuit page garnered 3 links this year and 5 links the next, it continues to accumulate those links. Over time you've developed a page with a significant amount of link equity, making it much more difficult for competitors to keep up and giving your site a huge advantage.

And if you don't want the page to be indexed in the off-season, add a meta tag to noindex/follow the page. Users will no longer be able to get to that link from search results (and hopefully from internal results as well), but only through direct links or bookmarks. Once the season starts again, remove the noindex/follow meta tag to an index/follow. 

Building Processes/Checklists

Based on the specific needs of your site, it would be helpful to develop a checklist for your technical team. For example, if my site had seasonal products, I would compile a checklist that would include the following:

  1. Remove noindex/follow tag from the [products] page in [month]
  2. Update and resubmit XML site map
  3. Submit this page to "Fetch as Googlebot" in Google Webmaster Tools

Consider creating separate checklists for the steps that you, as an SEO, would take to determine which pages to 301 redirect, which to 404 (if you absolutely must), and which to leave as-is. Checklists should also be created to help develop the framework for how your technical team will implement these changes. After a while, an overall framework should emerge for how your site handles its expired content, which will make the entire process run much more smoothly.



SEOmoz Daily SEO Blog


