Tag Archive | "User"

5 Ways We Improved User Experience and Organic Reach on the New Moz Help Hub

Posted by jocameron

We’re proud to announce that we recently launched our brand-new Help Hub! This is the section of our site where we store all our guides and articles on how to use Moz Pro, Moz Local, and our research tools like Link Explorer.

Our Help Hub contains in-depth guides, quick and easy FAQs, and some amazing videos like this one. The old Help Hub served us very well over the years, but with time it became a bit dusty and increasingly difficult to update, in addition to looking a bit old and shabby. So we set out to rebuild it from scratch, and we’re already seeing some exciting changes in the search results — which will impact the way people self-serve when they need help using our tools.

I’m going to take you through 5 ways we improved the accessibility and reach of the Help Hub with our redesign. If you write software guides, work in customer experience, or simply write content that answers questions, then this post is worth a look.

If you’re thinking this is just a blatant excuse to inject some Mozzy news into an SEO-style blog post, then you’re right! But if you stick with me, I’ll make sure it’s more fun than switching between the same three apps on your phone with a scrunched-up look of despair etched into your brow. :)

Research and discovery

To understand what features we needed to implement, we decided to ask our customers how they search for help when they get stuck. The results were fascinating, and they helped us build a new Help Hub that serves our customers and suits the way they actually search.

We discovered that 78% of people surveyed search for an answer first before reaching out:

This is a promising sign, and it’s perhaps no surprise that people working in digital marketing and search are very much in the habit of searching for the answers to their questions. However, we also discovered that a staggering 36% couldn’t find a sufficient answer when they searched:

We also researched industry trends and dug into lots of knowledge bases and guides for popular tools like Slack and Squarespace. With this research in our back pockets we felt sure of our goal: to build a Help Hub that reduces the length of the question-search-answer journey and gets answers in front of people with questions.

Let’s not hang about — here are 5 ways we improved organic reach with our beautiful new Help Hub.

#1: Removing features that hide content

Tabbed content used to be a super cool way of organizing a long, wordy guide. Tabs digitally folded the content up like an origami swan. The tabs were all on one page and on one URL, and they worked like jump links to teleport users to that bit of content.

Our old Help Hub design had tabbed content that was hard to find and wasn’t being correctly indexed

The problem: searchers couldn’t easily find this content. There were two reasons for this: one, no one expected to have to click on tabs for discovery; and two (and most importantly), only the first page of content was being linked to in the SERPs. This decimated our organic reach. It was also tricky to link directly to the tabbed content. When our help team members were chatting with our lovely community, it was nearly impossible to quickly send a link to a specific piece of information in a tabbed guide.

Now, instead of having all that tabbed content stacked away like a Filofax, we’ve got beautifully styled and designed content that’s easy to navigate. We pulled previously hidden content onto unique pages that we could link people to directly. And at the top of the page, we added breadcrumbs so folks can orient themselves within the guide and continue self-serving answers to their heart’s content.

Our new design uses breadcrumbs to help folks navigate and keep finding answers

What did we learn?

Don’t hide your content. Features that were originally built in an effort to organize your content can become outdated and get between you and your visitors. Make your content accessible to both search engine crawlers and human visitors; your customer’s journey from question to answer will be more straightforward, making navigation between content more natural and less of a chore. Your customers and your help team will thank you.

#2: Proudly promote your FAQs

This follows on from the point above, and you have had a sneak preview in the screenshot above. I don’t mind repeating myself because our new FAQs more than warrant their own point, and I’ll tell you why. Because, dear reader, people search for their questions. Yup, it’s this new trend and gosh darn it the masses love it.

I mentioned in the point above that tabbed content was proving hard to locate and to navigate, and it wasn’t showing up in the search results. Now we’re displaying common queries where they belong, right at the top of the guides:

FAQ placement, before and after

This change comprises two huge improvements. Firstly, questions our customers are searching, either via our site or in Google, are proudly displayed at the top of our guides, accessible and indexable. Secondly, when our customers search for their queries (as we know they love to do), they now have a good chance of finding the exact answer just a click away.

Address common issues at the top of the page to alleviate frustration

I’ve run a quick search in Keyword Explorer and I can see we’re now in position 4 for this keyword phrase — we weren’t anywhere near that before.

SERP analysis from Keyword Explorer

This is what it looks like in the organic results — the answer is there for all to see.

Our FAQ answer showing up in the search results

And when people reach out? Now we can send links with the answers listed right at the top. No more messing about with jump links to tabbed content.

What did we learn?

In addition to making your content easily accessible, you should address common issues head-on. It can sometimes feel uncomfortable to highlight issues right at the top of the page, but you’ll alleviate frustration for people encountering errors and reduce the workload for your help team.

You can always create specific troubleshooting pages to store questions and answers to common issues.

#3: Improve article quality and relevance to build trust

This involves using basic on-page optimization techniques when writing or updating your articles. This is bread and butter for seasoned SEOs, although often overlooked by creators of online guides and technical writers.

It’s no secret that we love to inject a bit of Mozzy fun into what we do, and the Help Hub is no exception. It’s a challenge that we relish: to explain the software in clear language that is, hopefully, a treat to explore. However, it turns out we’d become too preoccupied with fun, and our basic on-page optimization sadly lagged behind.

Mirroring customers’ language

Before we started work on our beautiful new Help Hub, we analyzed our most frequently asked questions and commonly searched topics on our site. Next, we audited the corresponding pages on the Help Hub. It was immediately clear that we could do a better job of integrating the language our customers used when writing in to us. By using relevant language in our Help Hub content, we’d help searchers find the right guides and videos before they needed to reach out.

Using the MozBar guide as an example, we tried a few different things to improve the CTR over a period of 12 months. We added more content, we updated the meta tags, we added jump links. Around 8 weeks after the guide was made more relevant and specific to searchers’ troubleshooting queries, we saw a massive uptick in traffic for that MozBar page, with pageviews increasing from around 2.5k per month to around 10k between February 2018 and July 2018. Traffic from organic searches doubled.

Updates to the Help Hub content and the increased traffic over time from Google Analytics

It’s worth noting that traffic to troubleshooting pages can spike if there are outages or bugs, so you’ll want to track this over an 8–12 month period to get the full picture.

What we’re seeing in the chart above is a steady and consistent increase in traffic for a few months. In fact, we started performing too well, ranking for more difficult, higher-volume keywords. This wasn’t exactly what we wanted to achieve, as the content wasn’t relevant to people searching for help for any old plugin. As a result, we’re seeing a drop in August. There’s a sweet spot for traffic to troubleshooting guides. You want to help people searching for answers without ranking for more generic terms that aren’t relevant, which leads us to searcher intent.

Focused on searcher intent

If you had a chance to listen to Dr. Pete’s MozCon talk, you’ll know that while it may be tempting to try to rank well for head vanity keywords, it’s most helpful to rank for keywords where your content matches the needs and intent of the searcher.

While it may be nice to think our guide can rank for “SEO toolbar for chrome” (which we did for a while), we already have a nice landing page for MozBar that was optimized for that search.

When I saw a big jump in our organic traffic, I entered the MozBar URL into Keyword Explorer to hunt down our ranking keywords. I then added these keywords in my Moz Pro campaign to see how we performed over time.

You can see that after our big jump in organic traffic, our MozBar troubleshooting guide dropped 45 places right out of the top 5 pages for this keyword. This is likely because it wasn’t getting very good engagement, as people either didn’t click or swiftly returned to search. We’re happy to concede to the more relevant MozBar landing page.

The troubleshooting guide dropped in the results for this general SEO toolbar query, and rightly so

It’s more useful for our customers and our help team for this page to rank for something like “why wont moz chrome plugin work.” Though this keyword has slightly fewer searches, there we are in the top spot consistently week after week, ready to help.

We want to retain this position for queries that match the nature of the guide

10x content

Anyone who works in customer experience will know that supporting a free tool is a challenge, and I must say our help team does an outstanding job. But we weren’t being kind to ourselves. We found that we were repeating the same responses, day in and day out.

This is where 10x content comes into play. We asked ourselves a very important question: why are we replying individually to one hundred people when we can create content that helps thousands of people?

We tracked common queries and created a video troubleshooting guide. This gave people the hand-holding they required without having to supply it one-to-one, on demand.

The videos for our SEO tools that offer some form of free access attract high views and engagement as folks who are new to them level up.

Monthly video views for tools that offer some free access

To put this into context, if you add up the views every month for these top 4 videos, they outperform all the other 35 videos on our Help Hub put together:

Video views for tools with some free access vs all the other 35 videos on the Help Hub

What did we learn?

By mirroring your customers’ language and focusing on searcher intent, you can get your content in front of people searching for answers before they need to reach out. If your team is answering the same queries daily, figure out where your content is lacking and think about what you can do in the way of a video or images to assist searchers when they get stuck.

Most SEO work doesn’t have an immediate impact, so track when you’ve made changes and monitor your traffic to draw correlations between visitors arriving on your guides and the changes you’ve made. Try testing updates on a portion of pages and tracking the results, then roll out updates to the rest of your pages.

More traffic isn’t always a good thing; it could indicate an outage or an issue with your tool. Analyzing traffic data is the start of the journey to understanding the needs of people who use your tools.

#4: Winning SERP features by reformatting article structure

While we ramped up our relevance, we also reviewed our guide structure in preparation for migration to the new Help Hub CMS. We took paragraphs of content and turned them into clearly labeled step-by-step guides.

Who is this helping? I’m looking at you, 36% of people who couldn’t find what they were looking for! We’re coming at you from two angles here: people who never found the page they were searching for, and people who did, but couldn’t digest the content.

Here is an example from our guide on adding keywords to Moz Pro. We started with blocks of paragraphed content interspersed with images. After reformatting, we have a video right at the top and then a numbered list which outlines the steps.

Before: text and images. After: clearly numbered step-by-step guides.
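As a minimal sketch of that markup change (the steps here are illustrative, not the guide’s actual copy), the reformatting boils down to trading a paragraph for an ordered list:

    <!-- Before: steps buried in a paragraph -->
    <p>To add keywords, open your Campaign, head to the Keywords
    section, enter your keywords, and click Add Keywords.</p>

    <!-- After: clearly numbered steps that search engines can lift
         straight into a list-style rich snippet -->
    <ol>
      <li>Open your Campaign.</li>
      <li>Head to the Keywords section.</li>
      <li>Enter your keywords.</li>
      <li>Click Add Keywords.</li>
    </ol>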

When researching the results for this blog post, I searched for a few common questions to see how we were looking in the search results. And what did I find? Just a lovely rich snippet with our newly formatted steps! Magic!

Our new rich snippet with the first 4 steps and a screenshot of our video

We’ve got all the things we want in a rich snippet: the first 4 steps with the “more items” link (hello, CTR!), a link to the article, and a screenshot of the video. The image of the video looks kind of strange, but it also clearly labels the result as a Moz guide, which could prove rather tempting for people clicking through from the results. We’ll watch how this performs over time to figure out if we can improve on it in future.

Let’s go briefly back in time and see what the original results were for this query, pre-reformatting. Not quite so helpful, now, is it?

Search results before we reformatted the guide

What did we learn?

By clearly arranging your guide’s content into steps or bullet points, you’re improving the readability for human visitors and for search engines, who may just take it and use it in a rich snippet. The easier it is for people to comprehend and follow the steps of a process, the more likely they are to succeed — and that must feel significantly better than wading through a wall of text.

#5: Helping people at the end of the guide

At some point, someone will be disappointed by the guide they ended up on. Maybe it doesn’t answer their question to their satisfaction. Maybe they ended up in the wrong place.

That’s why we have two new features at the end of our guides: Related Articles and Feedback buttons.

The end of the guides, before and after

Related Articles

Related Articles help people to continue to self-serve, homing in on more specific guides. I’m not saying that you’re going to buckle down and binge-read ALL the Moz help guides — I know it’s not exactly Netflix. But you never know — once you hit a guide on Keyword Lists, you may think to yourself, “Gosh, I also want to know how to port my lists over to my Campaign. Oh, and while I’m here, I’m going to check on my Campaign Settings. And ohh, a guide about setting up Campaigns for subdomains? Don’t mind if I do!” Guide lovers around the world, rejoice!

Feedback buttons

I know that feedback buttons are by no means a new concept in the world of guides. It seems like everywhere you turn there’s a button, a toggle, or a link to let some mysterious entity somewhere know how you felt about this, that, and the other.

Does anyone ever actually use this data? I wondered. The trick is to gather enough information that you can analyze trends and respond to feedback, but not so much that wading through it is a major time-wasting chore.

When designing this feature, our aim was to gather actionable feedback from the folks we’re looking to help. Our awesome design, UX, and engineering teams built us something pretty special that we know will help us keep improving efficiently, without any extra noise.

Our new feedback buttons gather the data we need from the people we want to hear from

To leave feedback on our guides, you have to be logged in to your Moz account, so we know we’re helping people who engage with our tools. Simple but effective. Clicking “Yes, thank you!” ends the journey there: job done, no need for more information for us to sift through. Clicking “No, not really” opens up a feedback box to let us know how we can improve.

People are already happily sending through suggestions, which we can turn into content and FAQs in a very short space of time:

Comments from visitors on how we can improve our guides

If you find yourself on a guide that helps (or not so much), then please do let us know!

The end of an article isn’t the end of the line for us — we want to keep moving forward and building on our content and features.

What did we learn?

We discovered that we’re still learning! Feedback can be tough to stomach and laborious to analyze, so spend some time figuring out who you want to hear from and how you can process that information.


If you have any other ideas about what you’d like to see on the Help Hub, whether it’s a topic, an FAQ, or a snazzy feature to help you find the answers to your questions, please do let us know in the comments below.



Moz Blog


Google Plus Announces It Will Shut Down After Reportedly Compromising 500,000 User Accounts

Google recently announced that it is shutting down Google+, with the service expected to cease operating by Nov. 2019. The announcement came on the heels of a report that an API bug exposed the profile data of 500,000 Google users using 438 different apps. However, Google claims the issue had been resolved back in March.

The decision to phase out Google+ came after Google launched a review of third-party developer access at the start of the year. The review apparently proved what the company had already known—that consumers and developers are not that interested in the platform. The service reportedly has “low usage and engagement,” with the majority of user sessions lasting less than five seconds.

What Happens to Google+ Now?

Google+ users will have ample time to transition. The phase-out is expected to be completed by August 2019 and the company will be releasing additional information in the next few months on how to migrate data.

However, Google intends to keep Google+ open for enterprise customers, and it will roll out new features to make the enterprise version more secure and effective.

Aside from announcing the phase-out of Google+, the company also said its other services will receive privacy adjustments. Some of these adjustments include API changes that will curtail developers’ access to user data on Gmail and Android. The changes will also ensure that developers won’t receive call log and SMS permissions. Contact and basic interaction data from the Android Contacts API will also be blocked.

Keeping Things Quiet

While the security vulnerability occurred several months ago, it was only revealed recently in a Wall Street Journal report which said the breach exposed information like name, age, gender, occupation, and email address of users who listed their profile as private.

In a blog post, Google explained its decision not to reveal the issue to users.

According to Ben Smith, Google’s Vice President of Engineering, the company did not find any evidence of anyone accessing the profile data. There was also no evidence that the API was abused or that any developer was aware of the bug. Google’s “Privacy & Data Protection Office” also evaluated the issue and decided that none of the “thresholds” they were looking for were met.

Experts say there’s no legal requirement obliging Google to reveal the security vulnerability. However, Google’s decision to keep things quiet, along with a memo shared with the Journal warning senior executives against disclosing the existence of the bug, will undoubtedly raise privacy and security questions again.

[Feature image via Google]



WebProNews


Popular Google, Firefox Extension Is Secretly Tracking User Activity

A popular browser extension that helps personalize how a website looks has been found to be tracking user activity. The revelation has pushed Google and Mozilla to remove the Stylish browser extension from their app stores. However, the extension’s official website remains active.

Software engineer Robert Heaton claimed in a blog post that the Stylish extension steals a user’s internet history, sending details of the person’s browsing along with unique identifiers to SimilarWeb, the extension’s owner. According to Heaton, this will allow the company to “connect all of an individual’s actions into a single profile.”

Heaton further explained that Stylish account holders typically have a unique identifier that can be linked to a login cookie. This will then provide SimilarWeb with enough information to “theoretically tie these histories to email addresses and real-world identities.”

Stylish is an open-source browser extension that gives users the capability to change how a website appears on their browser. With it, users can make websites look brighter and campier. They can also go for a brooding, darker theme or choose popular manga or cartoon characters to add to the website.

SimilarWeb’s 2017 formal policy does indicate that the extension collates anonymous data. But what Heaton is protesting is the identifier that the extension attaches to the said information before it’s sent to the company servers. He said this leaves the account holder vulnerable to hackers.

SimilarWeb has already denied these allegations and claimed that they are “not aware of and cannot determine the identity of the users from whom the non-personal information is collected.”

Google and Mozilla have since removed the extension from their Chrome and Firefox browsers. The former has not explained its decision to cut off Stylish, while the latter said it blocked the extension due to a violation of its data practices.

Users running Stylish in their web browsers will no longer be able to access its features. However, the extension remains active online.

[Featured image via Pixabay]



WebProNews


SearchCap: Google tests AMP labels, AdWords personalization & understanding user intent

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Google Tests New User Interface In Search For Google Posts

Google seems to be testing multiple variations of how Google Posts show up in the Google search results recently. In the past week or so…


Search Engine Roundtable


Google Chrome, Mozilla Firefox Leaked Facebook User Data Due to Browser Vulnerability

Google Chrome and Mozilla Firefox might have inadvertently leaked the Facebook usernames, profile pictures and even the likes of their users because of a side-channel vulnerability.

A side-channel vulnerability was discovered in a CSS3 feature dubbed the “mix-blend-mode.” This allowed a hacker to discover the identity of a Facebook account holder using Chrome or Firefox by getting them to visit a specially-designed website.

This critical flaw was discovered in 2017 by security researchers Dario Weißer and Ruslan Habalov and also by independent researcher Max May.

The researchers created a proof-of-concept (POC) exploit to show how the vulnerability could be misused. Weißer and Habalov’s concept showed how they were able to visually harvest data like username, profile picture, and “like” status of a user. What’s more, this insidious hack could be accomplished in the background when the user visits a malicious website.

The visual leak could happen on sites using iFrames that connect to Facebook via login buttons and social plugins. Due to a security feature called the “same-origin policy,” sites can’t directly access iFrame content. But the researchers were able to get the information by developing an overlay on the cross-origin iFrame in order to work with the underlying pixels.

It took Habalov and Weißer’s POC about 20 seconds to get the username and about five minutes to create a vague copy of the profile picture. The program also took about 500 milliseconds to check the “like” status. Keep in mind, however, that for this vulnerability to work, the user should be logged into their Facebook account.

Habalov and Weißer privately notified both Google and Mozilla and steps were taken to contain the threat. Google was able to fix the flaw on their end when version 63 was released last December. On Firefox’s end, a patch was made available 14 days ago with the release of the browser’s version 60. The delay was due to the researchers’ late disclosure of their findings to Mozilla.

IE and Edge browsers weren’t exposed to the side-channel exploit as they don’t support the needed feature. Safari was also safe from the flaw.

[Featured image via Pixabay]



WebProNews


Facebook ‘Weaponized’ User Data, Says Bikini Photo-Finding App Developer

Facebook is facing accusations of gathering more user data than disclosed. According to court filings, former start-up Six4Three claimed that the social media company conducted mass surveillance on its users and their friends alike.

Based on the lawsuit documents, Facebook reportedly had access to its users’ text messages, photos, and microphones. It can even track their locations by remotely activating the Bluetooth on mobile devices without permission. All of these accusations were detailed in Six4Three’s fifth version of the complaint, initially filed in 2015.

The court document read, in part:

“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls.”

In response, Facebook refuted the claims, saying they “have no merit and we will continue to defend ourselves vigorously.” The company had already addressed rumors back in March that it was monitoring calls and messages of its users, clarifying that it only collected call and text message history as part of an opt-in feature in Facebook Lite and Messenger on Android.

The former start-up also contended that Facebook had access to several photos on iPhones. But the social media company pointed out that users can opt in to the app’s photo-syncing feature for easier uploading.

Allegations of breaching user privacy and data collection remain touchy subjects for Facebook, following its involvement in the Cambridge Analytica fiasco. Prior to the scandal, the social media giant had removed third-party developers’ access to personal information. This policy change reportedly led to the failure of Six4Three’s controversial paid app Pikinis, which let users find their Facebook friends’ swimsuit photos.

Along with accusing Facebook of causing its financial ruin, Six4Three claimed that Facebook ‘weaponized’ its ability to access user data, sometimes without explicit consent, to earn billions of dollars. Six4Three also alleged a mass surveillance scheme, details of which were redacted from the latest filings per Facebook’s request. These documents, such as email correspondence among senior executives, contain confidential business matters and were sealed from public view until further notice.

Facebook has continued to deny the claims, filing a motion to have the case dismissed by invoking the free speech defense under California law. Six4Three, on the other hand, is trying to stop the social media giant from getting the case thrown out. As the legal battle rages on, Facebook still faces continued scrutiny over users’ concerns about weak data privacy and protection controls.



WebProNews


Search Buzz Video Recap: Google Search Console News, Google User Interface Changes & AdWords Report Editor

This week in search, we saw a preview of the new Google Search Console that is in beta. Google also changed how they report in Search Analytics the impressions and average…


Search Engine Roundtable


JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?


Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

  1. HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It is the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines static content.
  2. CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
  3. JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.
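As a minimal sketch, here’s a hypothetical page combining all three layers:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- CSS: the presentation layer -->
        <style>h1 { color: navy; }</style>
      </head>
      <body>
        <!-- HTML: the structure and static content -->
        <h1>Hello, world</h1>
        <!-- JavaScript: the interactivity -->
        <script>
          document.querySelector('h1')
            .addEventListener('click', function () { alert('Clicked!'); });
        </script>
      </body>
    </html>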


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
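A minimal sketch of one of those mini server calls, using the modern fetch API in place of a raw XMLHttpRequest (the /api/headline endpoint and #news element are hypothetical):

    // Ask the server for fresh data in the background...
    fetch('/api/headline')
      .then(function (response) { return response.json(); })
      .then(function (data) {
        // ...and update just one part of the page, with no full reload.
        document.getElementById('news').textContent = data.headline;
      });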


What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript:

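A minimal sketch of the idea: the HTML source ships an empty <title>, and the DOM ends up with a populated one after the script runs.

    <!-- HTML source: the title element ships empty -->
    <title></title>
    <script>
      // After this runs, the DOM contains <title>Hello, DOM!</title>
      // even though the HTML source never did.
      document.title = 'Hello, DOM!';
    </script>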

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
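For illustration, a minimal PhantomJS script that fetches a page headlessly and prints the rendered DOM might look like this (the URL is a placeholder):

    var page = require('webpage').create();
    page.open('https://example.com/', function (status) {
      if (status === 'success') {
        // page.content is the rendered DOM serialized back to HTML,
        // effectively a static snapshot of what a browser would build.
        console.log(page.content);
      }
      phantom.exit();
    });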


Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three primary reasons to be concerned about JavaScript on your site:

  1. Crawlability: Bots’ ability to crawl your site.
  2. Obtainability: Bots’ ability to access information and parse your content.
  3. Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

  1. Blocking search engines from your JavaScript (even accidentally).
  2. Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help identify resources that Googlebot is blocked from crawling.

The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
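A quick contrast, with a placeholder path:

    <!-- Not recommended: an onclick event standing in for a link. The URL
         may still be discovered, but it sends no architectural signal. -->
    <span onclick="window.location.href='/guides/keywords'">Keywords guide</span>

    <!-- Recommended: a regular anchor tag within the HTML or DOM -->
    <a href="/guides/keywords">Keywords guide</a>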

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.

  • Not recommended:
    • The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links). These are the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server and will cause the page to automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
    • Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (which Google now wants to avoid, and only Bing supports). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL with the UX co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences here:
      • Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name=”fragment” content=”!”>).
      • Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.


  • Recommended:
    • pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs. PushState is currently supported by Google, when supporting browser navigation for client-side or hybrid rendering.
      • A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar (a minimal sketch follows this list).
      • Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back button functionality as pushState.
      • Read more: Mozilla PushState History API Documents
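Here’s that minimal sketch of pushState driving infinite scroll (getNextPageNumber and appendPageContent are hypothetical helpers):

    window.addEventListener('scroll', function () {
      // When the user nears the bottom of the page...
      if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 200) {
        var nextPage = getNextPageNumber();  // hypothetical helper
        appendPageContent(nextPage);         // hypothetical helper
        // ...append the new content and update the clean URL in the
        // address bar without triggering a reload.
        history.pushState({ page: nextPage }, '', '/list/page/' + nextPage);
      }
    });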

Obtainability

Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on the page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

  • If the user must interact for something to fire, search engines probably aren’t seeing it (see the sketch after this list).
    • Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
  • If the JavaScript occurs after the JavaScript load event fires plus ~5 seconds*, search engines may not be seeing it.
    • *John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
    • *Screaming Frog tests show a correlation to five seconds to render content.
    • *The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
  • If there are errors within the JavaScript, both browsers and search engines won’t be able to go through and potentially miss sections of pages if the entire code is not executed.
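To make the first bullet concrete, a sketch (loadReviews is a hypothetical function that injects review content into the page):

    // Risky: the reviews only enter the DOM after a click, and bots don't click.
    document.getElementById('show-reviews').addEventListener('click', loadReviews);

    // Safer: load the same content when the page loads, so it lands in the
    // rendered DOM that headless crawlers evaluate.
    document.addEventListener('DOMContentLoaded', loadReviews);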

How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to JavaScript problems is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 test, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.

adamaudette - I don't always JavaScript, but when I do, I know google can crawl the dom and dynamically generated HTML

Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., are they indexing URL/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

  1. Confirm that your content is appearing within the DOM.
  2. Test a subset of pages to see if Google can index content.
    • Manually check quotes from your content.
    • Fetch with Google and see if content appears.
    • Fetch with Google supposedly occurs around the load event or before timeout. It’s a great test to check to see if Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
    • Note: If you aren’t verified in GSC, try TechnicalSEO.com’s Fetch and Render As Any Bot Tool.

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only decision.

2. HTML SNAPSHOTS
What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
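A minimal sketch of that server-side detection, assuming a Node/Express stack (renderSnapshot is a hypothetical function that returns the pre-rendered HTML; the snapshot must mirror what users see, or it risks cloaking):

    const express = require('express');
    const app = express();

    // Very rough bot detection by User-Agent (patterns are illustrative).
    const BOTS = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

    app.get('*', (req, res) => {
      if (BOTS.test(req.get('User-Agent') || '')) {
        // Bots receive the pre-rendered HTML snapshot of this URL.
        res.send(renderSnapshot(req.path)); // hypothetical renderer
      } else {
        // Regular users receive the JavaScript-driven experience.
        res.sendFile('index.html', { root: 'public' });
      }
    });

    app.listen(3000);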

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).

Considerations

When considering HTML snapshots, you must remember that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience more true to the user experience.

A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
Google Developer AJAX Crawling FAQs

Benefits

Despite the considerations, HTML snapshots have powerful advantages:

  1. Knowledge that search engines and crawlers will be able to understand the experience.
    • Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).
  2. Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
    • Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.

"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – optimized rendering loads progressively, ASAP.

However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like the PageSpeed Insights tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions, sketched in markup after the list:

  1. Inline: Add the JavaScript in the HTML document.
  2. Async: Make JavaScript asynchronous (i.e., add “async” attribute to HTML tag).
  3. Defer: Place the JavaScript lower within the HTML.
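The three options in markup (file names are placeholders):

    <!-- 1. Inline: embed small, critical JavaScript directly in the HTML -->
    <script>/* critical above-the-fold logic */</script>

    <!-- 2. Async: fetched in parallel, executed as soon as it arrives -->
    <script async src="analytics.js"></script>

    <!-- 3. Defer: executed only after the document has been parsed
         (the declarative cousin of placing scripts lower in the HTML) -->
    <script defer src="widgets.js"></script>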

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.

Read more: Google Developer’s Speed Documentation


TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.



Moz Blog


Google Expands User Search With ‘Personal’ Tab

Google has rolled out a new feature in their search engine portal, allowing users to track their own online footprints.

Dubbed “Personal,” the new feature will expand the user’s search to include the whole Google portfolio. So if you have a Gmail or Google+ account, or have saved a photo or video to the cloud, chances are you can find them by filtering your search further.

Accessing Google Personal is quite straightforward. You just have to type your query in the search box like you ordinarily would. After the results are shown, you can scroll to the top right to find “More,” and click on the drop-down list where you can find “Personal.” You can then access your own online history.

If you search for “Kentucky” for instance, any photos, clips, or references you have made using that word will turn up in the search results page. Even your email messages that contain that particular keyword are extracted and laid out for you.

Of course, you need to be logged in to your account to do this. The message, “Only you can see these results,” is right there for you to read after accessing this feature.

Google has not formally announced the launch of this feature, but it seems like it’s going to be a staple in the search box. However, it’s not available for Android or iOS, although analysts think it’s only a matter of time before you can use the feature on mobile platforms. It also doesn’t support Google Drive for now.

Google Personal is another way for the search engine company to data mine your personal information, which makes it easier for targeted ads to find you. This seems to be in line with the announcement of the company during the I/O conference for the Google Lens.

The lens converts information search from text to visual. By training the camera on an object, the user will be able to find the species of an unknown insect, for instance. They can also read up on the reviews or menu of a restaurant when they focus their camera on the establishment before going in. It’s supposed to be equipped with machine-learning that allows you to translate menus written in a foreign language.

In the same vein, Google Personal will allow users to relinquish more information about their search patterns, preferences, and biases. Again, privacy issues are being called to question, although the company seems to be simply testing the waters at this point.



WebProNews

