Tag Archive | "Page"

Page Speed Optimization: Metrics, Tools, and How to Improve

Posted by BritneyMuller

Page speed is an important consideration for your SEO work, but it’s a complex subject that tends to be very technical. What are the most crucial things to understand about your site’s page speed, and how can you begin to improve? In this week’s edition of Whiteboard Friday, Britney Muller goes over what you need to know to get started.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we’re going over all things page speed and really getting to the bottom of why it’s so important for you to be thinking about and working on as you do your work.

At the very fundamental level I’m going to briefly explain just how a web page is loaded. That way we can sort of wrap our heads around why all this matters.

How a webpage is loaded

A user goes to a browser, puts in your website, and there is a DNS request. This points at your domain name provider, maybe GoDaddy, and that points to your server where your files are located. Here is where it gets interesting: the DOM starts to load all of your HTML, your CSS, and your JavaScript. But very rarely does that first pull fetch all of the scripts and code needed to render or load the web page.

Typically the DOM will need to request additional resources from your server to make everything happen, and this is where things really start to slow down your site. Having that background knowledge will, I hope, help us triage some of these issues.
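If you want to see these phases for yourself, the browser exposes them through the standard Navigation Timing API. Here is a minimal sketch you can paste into the DevTools console on any page (the numbers will vary per load):

```js
// Navigation Timing Level 2: one entry describing the current page load.
const [nav] = performance.getEntriesByType('navigation');

console.log('DNS lookup (ms):', nav.domainLookupEnd - nav.domainLookupStart);
console.log('TCP connect (ms):', nav.connectEnd - nav.connectStart);
console.log('Time to first byte (ms):', nav.responseStart - nav.requestStart);
console.log('DOM content loaded (ms):', nav.domContentLoadedEventEnd);

// Every additional resource the DOM requested (CSS, JS, images, fonts):
console.log('Extra resource requests:',
  performance.getEntriesByType('resource').length);
```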

Issues that could be slowing down your site

What are some of the most common culprits?

  1. First and foremost: images. Large images are the biggest culprit of slow-loading web pages.
  2. Hosting can cause issues.
  3. Plugins, apps, and widgets (basically any third-party script) can slow down load time.
  4. Your theme and any large files beyond that can really slow things down as well.
  5. Redirects: the number of hops needed to get to a web page will slow things down.
  6. Then JavaScript, which we’ll get into in a second.

But all of these things can be a culprit. So we’re going to go over some resources, some of the metrics and what they mean, and then what are some of the ways that you can improve your page speed today.

Page speed tools and resources

The primary resources I have listed here are Google tools and Google suggested insights. I think what’s really interesting about these is we get to see what their concerns are as far as page speed goes and really start to see the shift towards the user. We should be thinking about that anyway. But first and foremost, how is this affecting people that come to your site, and then secondly, how can we also get the dual benefit of Google perceiving it as higher quality?

We know that Google suggests that a website load in two to three seconds. The faster the better, obviously, but that’s roughly the range. I also highly suggest you take a competitive view of that: put your competitors into some of these tools and benchmark your speed goals against what’s competitive in your industry. I think that’s a cool way to go into this.

Chrome User Experience Report

This is real user metrics from Chrome. Unfortunately, it’s only available for larger, popular websites, but you get some really good data out of it. It’s housed on BigQuery, so some basic SQL knowledge is needed.
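As an illustration, here is a minimal sketch of a CrUX query on BigQuery. It follows the public `chrome-ux-report` dataset’s schema; the month (`201901`) and the origin are placeholders you would swap for your own:

```sql
-- Distribution of first contentful paint for one origin in one month.
-- Each bin's density is the share of page loads that landed in that bin.
SELECT
  bin.start AS fcp_bin_start_ms,
  SUM(bin.density) AS share_of_loads
FROM
  `chrome-ux-report.all.201901`,
  UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
  origin = 'https://www.example.com'
GROUP BY
  bin.start
ORDER BY
  bin.start;
```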

Lighthouse

Lighthouse, one of my favorites, is available right in Chrome DevTools. If you are on a web page and you click Inspect Element to open Chrome DevTools, the far right tab, Audits, lets you run a Lighthouse report right in your browser.

What I love about it is that it gives you very specific examples and fixes that you can do. A fun fact to know: it defaults to simulated fast 3G, and notice they’re focused on mobile users on 3G. I like to switch that to applied fast 3G, because it has Lighthouse do an actual throttled run of the load. It takes a little bit longer, but it seems to be a little more accurate. Good to know.
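If you’d rather script these audits, Lighthouse also ships as a Node module. Below is a minimal sketch assuming the `lighthouse` and `chrome-launcher` npm packages; option names can shift between versions, so check the current docs:

```js
// Run a Lighthouse performance audit programmatically.
// npm install lighthouse chrome-launcher
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://www.example.com', {
    port: chrome.port,
    // 'simulate' is the default (simulated throttling); 'devtools'
    // performs an actual throttled run, like the "applied" setting above.
    throttlingMethod: 'devtools',
  });
  // The performance score is reported on a 0-1 scale.
  console.log('Performance score:', result.lhr.categories.performance.score);
  await chrome.kill();
})();
```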

PageSpeed Insights

PageSpeed Insights is really interesting. It has now incorporated the Chrome User Experience Report. But if you’re not one of those large sites, it’s not even going to measure your actual page speed. It’s going to look at how your site is configured, provide feedback accordingly, and score it. Just something good to be aware of. It still provides good value.

Test your mobile website speed and performance

I don’t know what the official title of this one is. If you do, please comment down below. But it’s located at testmysite.thinkwithgoogle.com. This one is really cool because it tests the mobile speed of your site, and if you scroll down, it ties that directly into ROI for your business or website. We see Google leveraging real-world metrics and tying them back to the percentage of visitors you’re losing because your site is slow. It’s a brilliant way to get us all on board and fighting for some of these improvements.

Pingdom and GTmetrix are non-Google tools, but they’re super helpful as well.

Site speed metrics

So what are some of the metrics?

First paint

We’re going to go over first paint, which is basically just the first non-blank paint on a screen. It could be just the first pixel change. That initial change is first paint.

First contentful paint

First contentful paint is when the first content appears. This might be part of the nav or the search bar or whatever it might be. That’s the first contentful paint.

First meaningful paint

First meaningful paint is when primary content is visible. When you sort of get that reaction of, “Oh, yeah, this is what I came to this page for,” that’s first meaningful paint.

Time to interactive

Time to interactive is when it’s visually usable and engage-able. So we’ve all gone to a web page and it looks like it’s done, but we can’t quite use it yet. That’s where this metric comes in. So when is it usable for the user? Again, notice how user-centric even these metrics are. Really, really neat.

DOM content loaded

DOM content loaded is when the HTML is completely loaded and parsed. These are some really good metrics to keep an eye on and to be aware of in general.
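Of these metrics, first paint and first contentful paint are exposed directly in the browser through the Paint Timing API (first meaningful paint and time to interactive come from tools like Lighthouse instead). A minimal sketch of observing them, plus DOM content loaded:

```js
// Log first paint and first contentful paint as the browser reports them.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'.
    console.log(entry.name, Math.round(entry.startTime), 'ms');
  }
}).observe({ type: 'paint', buffered: true });

// DOM content loaded: the HTML has been completely loaded and parsed.
document.addEventListener('DOMContentLoaded', () => {
  console.log('DOMContentLoaded at', Math.round(performance.now()), 'ms');
});
```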

Ways to improve your page speed

HTTP/2

HTTP/2 can definitely speed things up. As to what extent, you’ll have to research and test that for your own site.
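On many servers, turning HTTP/2 on is a small configuration change once you’re serving over HTTPS. Here is a hedged sketch for nginx; the `http2` keyword on the `listen` directive is what enables it (nginx 1.9.5+), while the server name and certificate paths are placeholders:

```nginx
server {
    # "http2" on the listen directive enables HTTP/2 for this server block.
    listen 443 ssl http2;
    server_name www.example.com;

    # Placeholder paths; browsers only speak HTTP/2 over TLS in practice.
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
}
```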

Preconnect, prefetch, preload

Preconnect, prefetch, and preload are really interesting and important in speeding up a site. We see Google doing this on their SERPs: if you inspect an element, you can see Google prefetching some of the URLs so that they’re ready faster if you click on one of those results. You can do the same thing on your site; it helps load and speed up that process.
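A minimal sketch of all three hints as they’d appear in a page’s `<head>`; the origins and file paths are illustrative:

```html
<!-- Preconnect: open the connection to a third-party origin early. -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<!-- Preload: fetch a critical resource for THIS page at high priority. -->
<link rel="preload" href="/css/main.css" as="style">

<!-- Prefetch: fetch a resource the NEXT page will likely need, at idle. -->
<link rel="prefetch" href="/js/next-page.js">
```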

Enable caching & use a content delivery network (CDN)

Caching is so, so important. Definitely do your research and make sure it’s set up properly. The same goes for CDNs: so valuable in speeding up a site, but you want to make sure your CDN is set up properly.
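As one example of what “set up properly” can mean, here is a hedged nginx sketch that sends long-lived Cache-Control headers for static assets. It assumes your asset file names change when their contents do, so a year-long cache is safe:

```nginx
# Let browsers and CDNs cache static assets for a year.
location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```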

Compress images

The easiest and probably quickest way for you to speed up your site today is to compress those images. It’s such an easy thing to do, and there are all sorts of free tools available for compressing them. Optimizilla is one. You can even use tools on your computer, like Save for Web, to compress them properly.

Minify resources

You can also minify resources. It’s really good to be aware of what minification, bundling, and compression do, so you can have some of these more technical conversations with developers or with anyone else working on the site.
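To make the vocabulary concrete: minification strips whitespace, comments, and long names out of code without changing what it does. A tiny illustrative example:

```js
// Before minification: readable, but every byte ships to the user.
function addNumbers(firstValue, secondValue) {
  // Add two numbers together.
  return firstValue + secondValue;
}

// After minification: same behavior, far fewer bytes.
function a(n,r){return n+r}
```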

So this is sort of a high-level overview of page speed. There’s a ton more to cover, but I would love to hear your input and your questions and comments down below in the comment section.

I really appreciate you checking out this edition of Whiteboard Friday, and I will see you all again soon. Thanks so much. See you.

Video transcription by Speechpad.com



Moz Blog


SearchCap: EU search results preview, Google URL inspection tool update & Bing Ads page feeds

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Bing Ads launches page feeds for easier Dynamic Search Ads management

Use page feeds to manage URL groupings for auto targets in DSA campaigns.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


12 Methods to Get from Blank Page to First Draft

If you’re like me, after taking some time off from writing, you’re refreshed and champing at the bit to translate…

The post 12 Methods to Get from Blank Page to First Draft appeared first on Copyblogger.


Copyblogger


Using a New Correlation Model to Predict Future Rankings with Page Authority

Posted by rjonesx.

Correlation studies have been a staple of the search engine optimization community for many years. Each time a new study is released, a chorus of naysayers seems to come magically out of the woodwork to remind us of the one thing they remember from high school statistics: that “correlation doesn’t mean causation.” They are, of course, right in their protestations, and, to their credit, an unfortunate number of times it seems that those conducting the correlation studies have forgotten this simple aphorism.

Here’s how a typical study works. We collect a search result. We then order the results based on different metrics, like the number of links. Finally, we compare the order of the original search results with the orders produced by the different metrics. The closer they are, the higher the correlation between the two.

That being said, correlation studies are not altogether fruitless simply because they don’t necessarily uncover causal relationships (i.e., actual ranking factors). What correlation studies discover or confirm are correlates.

Correlates are simply measurements that share some relationship with the independent variable (in this case, the order of search results on a page). For example, we know that backlink counts are correlates of rank order. We also know that social shares are correlates of rank order.

Correlation studies also provide us with direction of the relationship. For example, ice cream sales are positive correlates with temperature and winter jackets are negative correlates with temperature — that is to say, when the temperature goes up, ice cream sales go up but winter jacket sales go down.

Finally, correlation studies can help us rule out proposed ranking factors. This is often overlooked, but it is an incredibly important part of correlation studies. Research that provides a negative result is often just as valuable as research that yields a positive result. We’ve been able to rule out many types of potential factors — like keyword density and the meta keywords tag — using correlation studies.

Unfortunately, the value of correlation studies tends to end there. In particular, we still want to know whether a correlate causes the rankings or is spurious. Spurious is just a fancy-sounding word for “false” or “fake.” A good example of a spurious relationship would be that ice cream sales cause an increase in drownings. In reality, the heat of summer increases both ice cream sales and the number of people who go for a swim, and more swimming means more drownings. So while ice cream sales are a correlate of drownings, the relationship is spurious: ice cream does not cause the drownings.

How might we go about teasing out the difference between causal and spurious relationships? One thing we know is that a cause happens before its effect, which means that a causal variable should predict a future change. This is the foundation upon which I built the following model.

An alternative model for correlation studies

I propose an alternate methodology for conducting correlation studies. Rather than measure the correlation between a factor (like links or shares) and a SERP, we can measure the correlation between a factor and changes in the SERP over time.

The process works like this:

  1. Collect a SERP on day 1
  2. Collect the link counts for each of the URLs in that SERP
  3. Look for any URL pairs that are out of order with respect to links; for example, if position 2 has fewer links than position 3
  4. Record that anomaly
  5. Collect the same SERP 14 days later
  6. Record whether the anomaly has been corrected (i.e., position 3 now outranks position 2)
  7. Repeat across ten thousand keywords and test a variety of factors (backlinks, social shares, etc.)
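To make the bookkeeping concrete, here is a minimal sketch of steps 3 through 6 in JavaScript. The data shapes and function names are hypothetical, not Moz’s actual pipeline:

```js
// serp: array of { url, metrics } objects in rank order (day 1).
// Returns URL pairs that are out of order with respect to one metric.
function findAnomalies(serp, metric) {
  const anomalies = [];
  for (let i = 0; i < serp.length - 1; i++) {
    const upper = serp[i];
    const lower = serp[i + 1];
    // Anomaly: the lower-ranked URL has more of the metric.
    if (lower.metrics[metric] > upper.metrics[metric]) {
      anomalies.push([upper.url, lower.url]);
    }
  }
  return anomalies;
}

// laterSerp: the same SERP re-collected 14 days later.
// Returns the percent of anomalies Google "corrected" in the interim.
function percentCorrected(anomalies, laterSerp) {
  const rank = new Map(laterSerp.map((result, i) => [result.url, i]));
  let corrected = 0;
  for (const [upperUrl, lowerUrl] of anomalies) {
    // Corrected: the previously lower URL now outranks the other.
    if (rank.has(upperUrl) && rank.has(lowerUrl) &&
        rank.get(lowerUrl) < rank.get(upperUrl)) {
      corrected++;
    }
  }
  return anomalies.length ? (100 * corrected) / anomalies.length : 0;
}
```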

So what are the benefits of this methodology? By looking at change over time, we can see whether the ranking factor (correlate) is a leading or lagging feature. A lagging feature can automatically be ruled out as causal, since it happens after the rankings change. A leading factor has the potential to be a causal factor, although it could still be spurious for other reasons.

We collect a search result. We record where the search result differs from the expected predictions of a particular variable (like links or social shares). We then collect the same search result 2 weeks later to see if the search engine has corrected the out-of-order results.

Following this methodology, we tested 3 common correlates produced by ranking factor studies: Facebook shares, number of root linking domains, and Page Authority. The first step involved collecting 10,000 SERPs from randomly selected keywords in our Keyword Explorer corpus. We then recorded Facebook shares, root linking domains, and Page Authority for every URL. We noted every example where 2 adjacent URLs (like positions 2 and 3, or 7 and 8) were flipped with respect to the expected order predicted by the correlating factor. For example, if the #2 position had 30 shares while the #3 position had 50 shares, we noted that pair; you would expect the page with more shares to outrank the one with fewer. Finally, 2 weeks later, we captured the same SERPs and identified the percent of times that Google rearranged the pair of URLs to match the expected correlation. We also randomly selected pairs of URLs to get a baseline percent likelihood that any 2 adjacent URLs would switch positions. Here were the results…

The outcome

It’s important to note that it is incredibly rare to expect a leading factor to show up strongly in an analysis like this. While the experimental method is sound, it’s not as simple as a factor predicting future rankings; it assumes that in some cases we will know about a change in a factor before Google does. The underlying assumption is that in some cases we will have seen a ranking factor change (like an increase in links or social shares) before Googlebot has, and that in the 2-week window Google will catch up and correct the incorrectly ordered results. As you might expect, this is a rare occurrence, as Google crawls the web faster than anyone else. However, with a sufficient number of observations, we should be able to see a statistically significant difference between lagging and leading results. Nevertheless, the methodology only detects cases where a factor is leading and Moz Link Explorer discovered the change in that factor before Google did.

Factor                                  Percent Corrected   P-Value    95% Min     95% Max
Control                                 18.93%              0          n/a         n/a
Facebook Shares (controlled for PA)     18.31%              0.00001    -0.6849     -0.5551
Root Linking Domains                    20.58%              0.00001    0.016268    0.016732
Page Authority                          20.98%              0.00001    0.026202    0.026398

Control

In order to create a control, we randomly selected adjacent URL pairs from the first SERP collection and determined the likelihood that the second would outrank the first in the final SERP collection. Approximately 18.93% of the time, the worse-ranking URL would overtake the better-ranking URL. By setting this control, we can determine whether any of the potential correlates are leading factors, that is to say, potential causes of improved rankings, because they better predict future changes than a random selection.

Facebook Shares

Facebook shares performed the worst of the three tested variables. Facebook shares actually performed worse than random (18.31% vs. 18.93%), meaning that randomly selected pairs were more likely to switch than those where the shares of the second URL were higher than the first. This is not altogether surprising, as the general industry consensus is that social signals are lagging factors: traffic from higher rankings drives higher social shares, not the other way around. Accordingly, we would expect to see the ranking change first, before the increase in social shares.

Root Linking Domains (RLDs)

Raw root linking domain counts performed substantially better than shares and the control, at ~20.6%. As I indicated before, this type of analysis is incredibly subtle because it only detects cases where a factor is leading and Moz Link Explorer discovered the change in that factor before Google did. Nevertheless, this result was statistically significant, with a P-value < 0.0001 and a 95% confidence interval that RLDs predict future ranking changes around 1.5% better than random.

Page Authority

By far the highest-performing factor was Page Authority. At 20.98%, PA correctly predicted changes in SERPs roughly 2.6% better than random (per the confidence interval above). This is a strong indication of a leading factor, greatly outperforming social shares and outperforming the best raw metric, root linking domains. This is not surprising: Page Authority is built to predict rankings, so we should expect it to outperform raw metrics in identifying when a shift in rankings might occur. Now, this is not to say that Google uses Moz Page Authority to rank sites, but rather that Moz Page Authority is a relatively good approximation of whatever link metrics Google is using to rank sites.

Concluding thoughts

There are so many different experimental designs we can use to help improve our research industry-wide, and this is just one of the methods that can help us tease out the differences between causal ranking factors and lagging correlates. Experimental design does not need to be elaborate and the statistics to determine reliability do not need to be cutting edge. While machine learning offers much promise for improving our predictive models, simple statistics can do the trick when we’re establishing the fundamentals.

Now, get out there and do some great research!



Moz Blog


Are You Making These 7 Mistakes with Your About Page?

Good old Google. They do like to keep life interesting for web publishers. You may have heard rumblings about a recent update that wreaked havoc on a lot of “your money or your life” sites — the ones that talk about health, fitness, finances, or happiness. That update appeared to look at the credibility of
Read More…

The post Are You Making These 7 Mistakes with Your About Page? appeared first on Copyblogger.


Copyblogger


SearchCap: Google update still rolling out, Bing adds hotel booking, page speed performance & more

Here’s our recap of what happened in online marketing today, as reported on Marketing Land and other places across the web



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Fourth of July Google & Bing Home Page For 2018

Happy Fourth of July, everyone! As usual, I won’t be posting stories here on Independence Day, mostly because most of the readers here are off and don’t want….


Search Engine Roundtable


4 Reasons Why People Stop Reading Before the End of a Page

Every page you create has a purpose. It doesn’t matter whether it’s a sales page, a subscription page, an about page, a blog post, or any other kind of page. You publish it for a reason. You want something to happen. Maybe you want someone to share the page on social media. Or you want
Read More…

The post 4 Reasons Why People Stop Reading Before the End of a Page appeared first on Copyblogger.


Copyblogger


Google: There Is No Fixed Timeout For JavaScript Page Rendering

So with all this JavaScript code and high-end JS-based web development going on, pages need to render, and the heavier the code and the slower the server and computer accessing the page, the longer the wait time for the page to render…


Search Engine Roundtable

