Tag Archive | "Data"

Roughly 100 Developers May Have Improperly Accessed Facebook Groups Data

The last few weeks have seen the news go from bad to worse for Facebook, especially on the privacy front. Now the company is admitting that roughly 100 developers may have improperly accessed Groups member data.

In April 2018, Facebook made changes to the Groups API to limit what information administrators could access. Prior to the change, admins could see identifiable information, such as member names and profile pictures. Following the change, group members would have to opt-in for an admin to see that information—at least in theory.

According to Konstantinos Papamiltiadis, Facebook’s Platform Partnerships Head, an ongoing review discovered that some 100 developers had retained access to member information. Papamiltiadis said the company had taken steps to address the issues.

“We have since removed their access. Today we are also reaching out to roughly 100 partners who may have accessed this information since we announced restrictions to the Groups API, although it’s likely that the number that actually did is smaller and decreased over time. We know at least 11 partners accessed group members’ information in the last 60 days. Although we’ve seen no evidence of abuse, we will ask them to delete any member data they may have retained and we will conduct audits to confirm that it has been deleted.”

The post also made a point of promising that the company would continue to improve moving forward.

“We aim to maintain a high standard of security on our platform and to treat our developers fairly. As we’ve said in the past, the new framework under our agreement with the FTC means more accountability and transparency into how we build and maintain products. As we continue to work through this process we expect to find more examples of where we can improve, either through our products or changing how data is accessed. We are committed to this work and supporting the people on our platform.”

Given the current political climate, with politicians on both sides of the aisle increasingly looking at Facebook as a threat to privacy—and some even calling for its breakup—the company will need to do better to convince authorities and users alike that it can be trusted.

The post Roughly 100 Developers May Have Improperly Accessed Facebook Groups Data appeared first on WebProNews.


WebProNews


Toyota and Weathernews Partner to Use IoT and Big Data to Increase Driver Safety

Toyota and Weathernews have announced a partnership to improve weather forecast accuracy and driver safety.

The two companies will use data derived from Toyota’s connected vehicles, such as windshield wiper operations. The data will then be visualized as a map, showing where there are pockets of vehicles with running wipers. These areas of activity can be compared with data provided by Weathernews. Since precipitation does not always show up on radar, this will help Weathernews improve the accuracy of the data it has.

“It is said that the rate of accidents during rainy weather is four times that of sunny days, so the presence of precipitation has a large effect on safe driving for vehicles. However, raincloud radar, which is often used to detect and predict rainy areas, has the disadvantage of being unable to detect rain resulting from rainclouds in the lower layer of the troposphere (an altitude of 2 km or below) or small raindrops such as those that form during drizzles. In such cases, it has been difficult to accurately determine areas where it is raining.

“In the verification test that begins on November 1, as part of Toyota’s and Weathernews’ joint research initiatives, the wiper operating status of Toyota’s connected vehicles being driven in the designated regions is visualized as a map and compared with actual weather data. In past cases where low rainclouds produced rain that did not show up on raincloud radar in the Kanto area, rain was instead reported by users of the ‘Weathernews’ app. The user reports received matched closely with the areas where wipers were being operated, showing that wiper data can detect rain that cannot be detected using raincloud radar. We will also perform a detailed analysis of the relationship between the wiper data and weather data, and in addition to accurate detection of rainy areas, we plan to make efforts to estimate the strength of the precipitation based on the power at which the wipers are being operated, and consider using wiper data for weather forecasting.

“By using wiper data to accurately ascertain the conditions of roads and their surroundings, such as precipitation and actual precipitation strength, undetectable using radar, we aim to contribute to driver safety by issuing warnings to drivers according to the situation.”

The post Toyota and Weathernews Partner to Use IoT and Big Data to Increase Driver Safety appeared first on WebProNews.


WebProNews


Why businesses should implement structured data

It goes without saying that the world of SEO is becoming ever more technical, and over the past decade, webmasters, SEOs, and in-house teams have been widening their knowledge and skillsets to help their sites compete in search engine results pages.

One of these areas, which has seen the most development since its launch in 2011, is, of course, schema.org markup.

Although it has been eight years since the schema was introduced, many popular brands, whether through a lack of development capability or technical knowledge, have yet to implement structured data on their websites.

In this article, we’re going to take a look at what structured data is, and the benefits that the markup can provide for websites.

A brief introduction to structured data

Put simply, structured data is a form of markup that is implemented in the code of a website and provides search engines with specific pieces of information about a page, site, or organization.

By improving a search engine’s knowledge of a particular page or site, structured data helps it provide users with the information they need when conducting a search.

It also means that if a business invests in structured data throughout its site, it could enjoy higher and more relevant levels of traffic.

But how does this happen?

Structured data can enhance AMP pages

Although structured data is not a direct ranking factor, it can influence other elements of your website that are ranking factors.

In a world where the majority of searches are made on mobile devices, site speed has never been more important, especially when you consider that many users will abandon a page that takes longer than three seconds to load.

For this reason, many businesses have implemented Accelerated Mobile Pages (AMP) on their sites, which can help overcome critical mobile speed issues and improve the usability of pages.

But most people don’t realize that AMPs can actually be enhanced via structured data markup.

Google states that implementing structured data on AMP pages can enhance their appearance in mobile search results and make them eligible to appear within rich results.

If a site earns the opportunity to appear within rich results for an important search term, it could see a significant increase in search traffic as a result.

You can learn a little more about how structured data enhances AMP pages in this handy Google guide.

Structured data helps sites appear in Google’s Knowledge Graph

For sites that appear in highly competitive verticals, getting the edge over your competition is critical, and one way to do this is by establishing your site presence with Google and appearing in the Knowledge Graph.

Knowledge Graph cards appear on the right-hand side of search results and provide users with functional and visual elements of your site, making it far easier for them to familiarise themselves with it.

To enable your business Knowledge Graph card, you need to add the necessary Corporate Contact markup on the homepage of your website.

Structured data knowledge graph

Like all types of markup, however, there are important guidelines and rules that you must follow, such as ensuring that the markup is not blocked from crawling by robots.txt directives.

You can find more information on how to properly implement Corporate Contact markup in this Google Developers Guide.
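Purely for illustration, here is a minimal sketch of the kind of information Corporate Contact markup carries, written as a Python dictionary and serialized to JSON-LD. Every organization detail below is a placeholder, not a template to copy verbatim; the printed JSON is the sort of thing that would sit in a script tag of type application/ld+json on your homepage.

```python
import json

# Placeholder organization details; replace with your own brand's information.
corporate_contact = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    "contactPoint": [{
        "@type": "ContactPoint",
        "telephone": "+1-800-555-0199",
        "contactType": "customer service",
    }],
}

# Serialize to the JSON-LD string you would embed on the homepage.
print(json.dumps(corporate_contact, indent=2))
```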

Structured data can be vital for improving a site’s click-through rates

A website’s CTR matters for its rankings, and according to Neil Patel, the best way to increase it is to research and use the right keywords, especially long-tail keywords. A tool such as Serpstat can help you conduct in-depth keyword research and improve your rankings as well.

The whole point of structured data is to provide clean, concise parcels of information to search engines, clarifying the purpose of your site and its pages so that users are quickly given the accurate information they require.

By implementing well-written, relevant structured data on your pages, your site should be shown to a more relevant audience, and your click-through rates should improve as a result.

In fact, some sites that have implemented structured data have reported CTR improvements of 10% or more.

How to implement structured data

We’ve covered what structured data is and the value it brings to a site. Now, we’ll explore two of the main approaches for adding schema markup to your website.

How to add Schema.org micro-markup with the Schema plugin

The easiest way to add micro-markup to a site is to use the Schema plugin. It works with all of the available schema types and integrates with the Yoast SEO plugin.

To install, go to Plugins – Add New in the WordPress console and find “Schema.” Activate it and go to Settings.

Structured data schema plugin

 

Fill in basic information, such as the locations of your About Us and Contact pages, and upload your website logo.

By filling out the additional sections (content, knowledge graph, and search results), you can optimize your site for each of these areas.

Then, you can go to Schema – Types and add the selected schema type or publication category.

Types of schema plugins

If the above-mentioned plugin doesn’t suit you, there are plenty of alternative WordPress schema markup plugins to choose from.

How to add Schema.org markup manually

This approach means working more directly with the code, but it lets you add schema markup individually to any page or post.

With custom schema markup, you can also include several different types of markup on the same page. For example, if you have an event page and you also want to place review or feedback markup on it, you can easily do so.

The most efficient way to manually add schema to your site is JSON-LD, which is also the method recommended by Google. It’s based on JavaScript object notation: you add the schema markup to your site as a script, which makes it much easier to read and debug.

Remember to follow all Google structured data guidelines while creating the code for your markup.

If you don’t know how to write markup code, you can use Google’s Structured Data Markup Helper or a JSON-LD generator to create your code.

To use this approach, go to any post or page where you want to put the markup. Click Screen Options at the top of the page and check the "Custom Fields" box. Now, scroll down to the "Custom Fields" settings and press "Enter new" to create a new field. Name it "Schema" and enter your markup code, for example, for the LocalBusiness data type:
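Purely as an illustration, here is a minimal sketch of a LocalBusiness structure; every business detail below is a placeholder. Python is used only as a convenient way to build and print the JSON, and the printed JSON is the kind of markup you would store in the "Schema" custom field (it ends up on the page wrapped in a script tag of type application/ld+json).

```python
import json

# Placeholder business details; swap in your own name, address, and hours.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee House",
    "url": "https://www.example-coffee.com/",
    "telephone": "+1-555-0134",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 High Street",
        "addressLocality": "Nottingham",
        "postalCode": "NG1 1AA",
        "addressCountry": "GB",
    },
    "openingHours": "Mo-Sa 08:00-18:00",
}

# The printed JSON is what goes into the "Schema" custom field.
print(json.dumps(local_business, indent=2))
```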

Custom fields box

 


Next, you’ll need to edit your header.php file. Open it and paste the following code before the closing </head> tag:

 

Header.php file

With these changes in place, the schema code stored in the custom field is output in the page head along with your other metadata. You can add any kind of custom schema markup to your WordPress website using this approach.

Just remember to run your page or post through the Google Structured Data Testing Tool to check your markup for errors. The validator understands the following formats:

  • Schema.org
  • Microdata
  • RDFa
  • JSON-LD

Using it, you can check the page in two ways:

  • Copy in HTML format
  • Specify a link to the page

If the site is still being developed locally, or if you need to test a few options, use the first method. The second is suitable for the final verification of the finished markup. It’s also worth checking pages built on ready-made CMS templates here, as these may contain markup errors of their own.

For example, let’s check the Phase 5 Analytics page. After pasting in the URL and clicking the "Run test" button, the results of the verification appear on screen: the HTML code on the left and the detected markup on the right, along with any errors that were found.

Google structured data testing tool

 

Final word

Adding structured data to your site will not take a lot of time, and it can improve the appearance of your snippets in the search results and increase traffic to the site.

The process may seem a little technical, but even the manual approach is not as hard as you’d assume. In addition, the many available plugins make implementing structured data very simple.

Inna Yatsyna is a Brand and Community Development Specialist at Serpstat. She can be found on Twitter @erin_yat.

The post Why businesses should implement structured data appeared first on Search Engine Watch.

Search Engine Watch


Google rolling out ‘Incognito Mode’ for Maps, activity data deletion via Assistant

These privacy controls promote trust and won’t impact targeting and analytics.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


The Data You’re Using to Calculate CTR is Wrong and Here’s Why

Posted by Luca-Bares

Click-through rate (CTR) is an important metric that’s useful for making a lot of calculations about your site’s SEO performance, from estimating revenue opportunity and prioritizing keyword optimization to assessing the impact of SERP changes within the market. Most SEOs know the value of creating custom CTR curves for their sites to make those projections more accurate. The only problem with custom CTR curves built from Google Search Console (GSC) data is that GSC is known to be a flawed tool that can give out inaccurate data. This muddies the data we get from GSC and can make it difficult to accurately interpret the CTR curves we create from this tool. Fortunately, there are ways to help control for these inaccuracies so you get a much clearer picture of what your data says.

By carefully cleaning your data and thoughtfully implementing an analysis methodology, you can calculate CTR for your site much more accurately using 4 basic steps:

  1. Extract your site’s keyword data from GSC — the more data you can get, the better.
  2. Remove biased keywords — Branded search terms can throw off your CTR curves so they should be removed.
  3. Find the optimal impression level for your data set — Google samples data at low impression levels so it’s important to remove keywords that Google may be inaccurately reporting at these lower levels.
  4. Choose your rank position methodology — No data set is perfect, so you may want to change your rank classification methodology depending on the size of your keyword set.

Let’s take a quick step back

Before getting into the nitty gritty of calculating CTR curves, it’s useful to briefly cover the simplest way to calculate CTR since we’ll still be using this principle. 

To calculate CTR, download the keywords your site ranks for, along with click, impression, and position data. Then divide the sum of clicks by the sum of impressions at each rank level in the GSC data, and you’ll come out with a custom CTR curve. For more detail on actually crunching the numbers for CTR curves, you can check out this article by SEER if you’re not familiar with the process.
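As a rough sketch of that arithmetic, assuming a GSC keyword export loaded into pandas (the file and column names are assumptions, not a prescribed format):

```python
import pandas as pd

# Assumed GSC export with query, clicks, impressions, and position columns.
df = pd.read_csv("gsc_keywords.csv")

# Bucket keywords by rounded rank and keep the top 10 positions only.
df["rank"] = df["position"].round().astype(int)
top10 = df[df["rank"].between(1, 10)]

# CTR per rank = sum of clicks / sum of impressions at that rank.
ctr_curve = (
    top10.groupby("rank")
         .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
         .assign(ctr=lambda t: t["clicks"] / t["impressions"])
)
print(ctr_curve)
```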

Where this calculation gets tricky is when you start trying to control for the bias that inherently comes with CTR data. Even though we know GSC gives us flawed data, we don’t really have many alternatives, so the best we can do is eliminate as much bias as possible from our data set and stay aware of the problems that come with using it.

Without controlling and manipulating the data that comes from GSC, you can get results that seem illogical. For instance, you may find your curves show position 2 and 3 CTRs with wildly larger averages than position 1. If you don’t know that the data you’re using from Search Console is flawed, you might accept it as truth and a) try to come up with hypotheses as to why the CTR curves look that way based on incorrect data, and b) create inaccurate estimates and projections based on those CTR curves.

Step 1: Pull your data

The first part of any analysis is actually pulling the data. This data ultimately comes from GSC, but there are several platforms you can pull it from that are better than GSC’s web interface export.

Google Search Console — The easiest place to get the data is GSC itself. You can go into GSC and pull all your keyword data for the last three months, and Google will automatically download a .csv file for you. The downside to this method is that GSC only exports 1,000 keywords at a time, making your data size much too small for analysis. You can try to get around this by using the keyword filter for the head terms that you rank for and downloading multiple 1k files to get more data, but this process is an arduous one. Besides, the other methods listed below are better and easier.

Google Data Studio — For any non-programmer looking for an easy way to get much more data out of Search Console for free, this is definitely your best option. Google Data Studio connects directly to your GSC account data, but there are no limitations on the data size you can pull. For the same three-month period where GSC would give me 1k keywords (its maximum), Data Studio would give me back 200k keywords!

Google Search Console API — This takes some programming know-how, but one of the best ways to get the data you’re looking for is to connect directly to the source using their API. You’ll have much more control over the data you’re pulling and get a fairly large data set. The main setback here is you need to have the programming knowledge or resources to do so.

Keylime SEO Toolbox — If you don’t know how to program but still want access to Google’s impression and click data, then this is a great option to consider. Keylime stores historical Search Console data pulled directly from the Search Console API, so it’s as good an option as connecting to the API directly, if not better. It does cost $49/mo, but that’s pretty affordable considering the value of the data you’re getting.

The platform you get your data from matters because each one listed gives out different amounts of data. I’ve listed them here in order from the tool that gives the least data to the one that gives the most. Using GSC’s UI directly gives by far the least, while Keylime can connect to GSC and Google Analytics and combine that data to give you more information than the Search Console API would on its own. The more data you can get, the more likely it is that the CTR averages you build for your site will be accurate.
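For anyone taking the API route described above, a query might look roughly like the sketch below. It assumes the google-api-python-client and google-auth libraries, plus a placeholder service account file, property URL, and date range.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file and property URL; adjust to your own setup.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2019-05-01",
        "endDate": "2019-07-31",
        "dimensions": ["query"],
        "rowLimit": 25000,  # page through with startRow for bigger keyword sets
    },
).execute()

# Each returned row carries keys, clicks, impressions, ctr, and average position.
rows = response.get("rows", [])
```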

Step 2: Remove keyword bias

Once you’ve pulled the data, you have to clean it. Because this data ultimately comes from Search Console we have to make sure we clean the data as best we can.

Remove branded search & knowledge graph keywords

When you create general CTR curves for non-branded search, it’s important to remove all branded keywords from your data. These keywords should have high CTRs, which will throw off the averages of your non-branded searches, which is why they should be removed. In addition, if you’re aware of any SERP features like knowledge graph that you rank for consistently, you should try to remove those as well, since we’re only calculating CTR for positions 1–10 and SERP feature keywords could throw off your averages.
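A small sketch of that cleaning step, continuing with the same assumed keyword export (the brand terms here are made up):

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")

# Made-up brand terms; include every spelling and misspelling of your brand.
brand_pattern = r"\b(?:acme|acme\s?corp)\b"

# Keep only non-branded queries before building the CTR curves.
non_branded = df[~df["query"].str.contains(brand_pattern, case=False, regex=True, na=False)]
```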

Step 3: Find the optimal impression level in GSC for your data

The largest bias in Search Console data appears to come from keywords with low search impressions, which is the data we need to try and remove. It’s not surprising that Google doesn’t accurately report low-impression data, since we know that Google doesn’t even include data for very low-volume searches in GSC. For some reason, Google drastically over-reports CTR for these low-impression terms. As an example, here’s an impression distribution graph I made with data from GSC for keywords that have only 1 impression, showing the CTR at every position.

If that doesn’t make a lot of sense to you, I’m right there with you. This graph says that a majority of the keywords with only one impression have 100 percent CTR. No matter how good your site’s CTR is, it’s extremely unlikely that most one-impression keywords genuinely earn a 100 percent CTR, and this is especially true for keywords that rank below #1. This gives us pretty solid evidence that low-impression data is not to be trusted, and we should limit the number of keywords in our data with low impressions.

Step 3 a): Use normal curves to help calculate CTR

For more evidence of Google giving us biased data we can look at the distribution of CTR for all the keywords in our data set. Since we’re calculating CTR averages, the data should adhere to a Normal Bell Curve. In most cases CTR curves from GSC are highly skewed to the left with long tails which again indicates that Google reports very high CTR at low impression volumes.

If we change the minimum number of impressions for the keyword sets that we’re analyzing, we end up getting closer and closer to the center of the graph. Here’s an example: below is the distribution of a site’s CTR in increments of .001.

The graph above shows keywords at a very low impression level, around 25 impressions. The distribution of data is mostly on the right side of this graph, with a small, high concentration on the left, which implies that this site has a very high click-through rate. However, by increasing the impression filter to 5,000 impressions per keyword, the distribution of keywords gets much, much closer to the center.

This graph most likely would never be centered around 50% CTR, because that’d be a very high average CTR to have, so the graph should be skewed to the left. The main issue is that we don’t know by how much, because Google gives us sampled data. The best we can do is guess. But this raises the question: what’s the right impression level to filter my keywords by to get rid of faulty data?

One way to find the right impression level to create CTR curves is to use the above method to get a feel for when your CTR distribution is getting close to a normal distribution. A Normally Distributed set of CTR data has fewer outliers and is less likely to have a high number of misreported pieces of data from Google.
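One rough way to eyeball that, again assuming a GSC-style export, is to watch how the CTR distribution’s mean and skew change as you raise the minimum impression floor; skew nearer zero suggests you are closer to a normal curve. The floors below are illustrative.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")
df["ctr"] = df["clicks"] / df["impressions"]

# Watch how the CTR distribution changes as the impression floor rises;
# skew closer to 0 means the distribution is closer to a normal curve.
for floor in (1, 25, 250, 1000, 5000):
    sample = df[df["impressions"] >= floor]
    print(floor, len(sample), round(sample["ctr"].mean(), 4), round(sample["ctr"].skew(), 2))
```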

3 b): Finding the best impression level to calculate CTR for your site

Instead of normal curves, you can also create impression tiers to see where there’s less variability in the data you’re analyzing. The less variability in your estimates, the closer you’re getting to an accurate CTR curve.

Tiered CTR tables

Creating tiered CTR needs to be done for every site because the sampling from GSC for every site is different depending on the keywords you rank for. I’ve seen CTR curves vary as much as 30 percent without the proper controls added to CTR estimates. This step is important because using all of the data points in your CTR calculation can wildly offset your results. And using too few data points gives you too small of a sample size to get an accurate idea of what your CTR actually is. The key is to find that happy medium between the two.

In the tiered table above, there’s huge variability from All Impressions to >250 impressions. After that point, though, the change per tier is fairly small. Greater than 750 impressions is the right level for this site because the variability among curves is fairly small as we increase impression levels in the other tiers, and >750 impressions still gives us plenty of keywords in each ranking level of our data set.

When creating tiered CTR curves, it’s important to also count how much data is used to build each data point throughout the tiers. For smaller sites, you may find that you don’t have enough data to reliably calculate CTR curves, but that won’t be apparent from just looking at your tiered curves. So knowing the size of your data at each stage is important when deciding what impression level is the most accurate for your site.
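A sketch of building such a tiered table, with the keyword count behind each data point included (the tier cut-offs and column names are illustrative, not a recommendation for any particular site):

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")
df["rank"] = df["position"].round().astype(int)
top10 = df[df["rank"].between(1, 10)]

# Illustrative impression tiers; pick floors that make sense for your site.
for label, floor in [("all", 0), (">250", 250), (">750", 750), (">2500", 2500)]:
    tier = top10[top10["impressions"] > floor]
    curve = tier.groupby("rank").agg(
        clicks=("clicks", "sum"),
        impressions=("impressions", "sum"),
        keywords=("query", "count"),  # how much data sits behind each point
    )
    curve["ctr"] = curve["clicks"] / curve["impressions"]
    print(label)
    print(curve[["ctr", "keywords"]])
```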

Step 4: Decide which position methodology to use when analyzing your data

Once you’ve figured out the correct impression level to filter your data by, you can start actually calculating CTR curves using impression, click, and position data. The problem with position data is that it’s often inaccurate, so if you have great keyword tracking, it’s far better to use the data from your own tracking numbers than Google’s. Most people can’t track that many keyword positions, though, so it’s necessary to use Google’s position data. That’s certainly possible, but it’s important to be careful with how we use their data.

How to use GSC position

One question that may come up when calculating CTR curves using GSC average positions is whether to use rounded positions or exact positions (i.e., only positions from GSC that are whole numbers, so ranks of 1.0 or 2.0 count as exact positions, while 1.3 or 2.1, for example, do not).

Exact position vs. rounded position

The reasoning behind using exact position is that we want data that’s most likely to have been ranking in position 1 for the time period we’re measuring, so using exact position will give us the best idea of what CTR is at position 1. Exact-rank keywords are more likely to have been ranking in that position for the duration of the time period you pulled keywords from. The problem is that Average Rank is an average, so there’s no way to know whether a keyword ranked solidly in one place for the full time period or whether the average just happens to come out to an exact rank.

Fortunately, if we compare exact position CTR vs rounded position CTR, they’re directionally similar in terms of actual CTR estimations with enough data. The problem is that exact position can be volatile when you don’t have enough data. By using rounded positions we get much more data, so it makes sense to use rounded position when not enough data is available for exact position.

The one caveat is position 1 CTR estimates. For every other position, rankings above and below can pull a keyword’s average position up or down; a keyword with an average ranking of 3 could have ranked at #1 and at #5 at different points and still average out to 3. For #1 rankings, however, the average can only be pulled down, which means that if you use rounded position, the reported CTR for a position 1 keyword is always going to be lower than reality.

A rank position hybrid: Adjusted exact position

So if you have enough data, only use exact position for position 1. For smaller sites, you can use adjusted exact position. Since Google gives averages up to two decimal points, one way to get more “exact position” #1s is to include all keywords which rank below position 1.1. I find this gets a couple hundred extra keywords which makes my data more reliable.

And this also shouldn’t pull down our average much at all, since GSC is somewhat inaccurate in how it reports Average Ranking. At Wayfair, we use STAT as our keyword rank tracking tool, and after comparing GSC average rankings with average rankings from STAT, the rankings near the #1 position are close, but not 100 percent accurate. Once you start going farther down in the rankings, the difference between STAT and GSC becomes larger, so watch how far down in the rankings you go to include more keywords in your data set.

I’ve done this analysis for all the rankings tracked on Wayfair and I found the lower the position, the less closely rankings matched between the two tools. So Google isn’t giving great rankings data, but it’s close enough near the #1 position, that I’m comfortable using adjusted exact position to increase my data set without worrying about sacrificing data quality within reason.
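A small sketch of that adjusted-exact-position idea, once more assuming a GSC-style keyword export with the same illustrative column names:

```python
import pandas as pd

df = pd.read_csv("gsc_keywords.csv")

# Adjusted exact position for #1: treat any average position below 1.1 as rank 1.
position_one = df[df["position"] < 1.1]
ctr_position_one = position_one["clicks"].sum() / position_one["impressions"].sum()
print("Adjusted exact position 1 CTR:", round(ctr_position_one, 4))

# For positions 2-10, rounded position usually leaves enough data to be stable.
df["rank"] = df["position"].round().astype(int)
rounded = df[df["rank"].between(2, 10)]
curve = (
    rounded.groupby("rank")
           .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
           .assign(ctr=lambda t: t["clicks"] / t["impressions"])
)
print(curve["ctr"])
```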

Conclusion

GSC is an imperfect tool, but it gives SEOs the best information we have to understand an individual site’s click performance in the SERPs. Since we know that GSC is going to throw us a few curveballs with the data it provides, it’s important to control as many pieces of that data as possible. The main ways to do so are to choose your ideal data extraction source, get rid of low-impression keywords, and use the right rank-rounding methods. If you do all of these things, you’re much more likely to get accurate, consistent CTR curves for your own site.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Yes, you can add JSON structured data to the body of your pages

Head or body, Mueller explains that Google can read both just fine.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


How to improve SEO using data science

Gone are the days when a single tweak in the content or the title tag was able to get your site to the top of the search results. 

Google’s algorithm is now much harder to crack than before. Besides, 75 percent of online users do not scroll past the first page of the search engine results.

As you can imagine, this makes the SEO space highly competitive right now and companies can no longer rely on basic techniques.

However, you can always make sure that the odds are in your favor by using data science.

What is data science?

Data science refers to the combination of tools, algorithms, and machine learning principles designed to unveil hidden patterns in raw data.

Data science is making its mark across every domain. As cited by Maryville University, around 1.7 megabytes of data will be generated every second for every person on the planet by the end of 2020.

Why do you need it?

Data science provides valuable insights about a website’s performance and these insights can help you improve your SEO campaigns.

Data science is used to make predictions about upcoming trends and customer behavior using analytics and machine learning. For example, Netflix uses insight from data science to produce its original series that drives user interest.

Apart from identifying opportunities, data science also handles high volumes of data and helps in making better decisions. Businesses can easily gauge the effectiveness of a marketing campaign with the help of data science.

How does data science help SEO?

Data science helps you make concrete decisions by letting you:

  • Visualize which combinations have the potential to make the biggest impact
  • Create marketing campaigns aligned with the needs of their audience
  • Understand buyer’s preferences and identify pain points
  • Identify referral sources of converting traffic
  • Verify loading time, indexing, bounce rate, response errors, and redirects
  • Verify the most and least crawled URLs
  • Identify pages that crawlers aren’t supposed to index
  • Identify sources of unusual traffic

How do you apply data science to your SEO data?

Follow the below ways to apply data science to your SEO campaigns:

1. Select your data sources

Understand that the quality of your data sources directly impacts your data insights. You need the right tools to track important metrics more precisely. Tools such as Google Analytics, SEMrush, and Ahrefs can help you gather the right data and make better decisions.

2. Think “ecosystem” instead of “data” and “tools” 

Do not rely on a single solution if your SEO is complex and integrates with various other digital marketing areas like content marketing, CX management, CRO, and sales. The “data science” approach to SEO is about integrating methods, tools, and practices in a way that draws deep and accurate insights from the cumulative data mix. Consider the SEMrush example mentioned above: the traffic stats it presents work on the assumption that all traffic is genuine. What if there are bad bots at play? It makes a lot of sense to bring a traffic quality checking tool into the mix, something like what Finteza does.

Example of using Finteza to improve SEO using data science

Source: Finteza

It offers advanced bot detection tech, along with a whole suite of conversion funnel optimization modules, to help you not only make more sense of your data but also put those insights into action to drive your business KPIs.

3. Align SEO with marketing initiatives 

Backing your SEO with other marketing initiatives makes it stronger. Collaborate with sales, developers, UX designers, and customer support teams to optimize for all search ranking factors.

Use data science to determine a universal set of SEO best practices each team can follow to achieve your goal. Try tracking the evolving relationships between independent and dependent variables to get a better idea of what actions are important to your business. To fully understand how your SEO affects other channels, capture and analyze data from:

  • Top conversion paths
  • Conversions and assisted conversions

Gain a clear understanding of your customers’ journeys to establish a stronger alignment between various marketing activities and attribute the outcomes to separate campaigns easily.

4. Visualize with data science

Find it hard to digest numbers piled onto a spreadsheet? Taking a hierarchical approach to your data can cause you to miss important insights hidden between the lines. On the other hand, data visualizations offer benefits like:

  • Compare and contrast
  • Process large data volumes at scale
  • Accelerate knowledge discovery
  • Reveal hidden questions
  • Spot common patterns and trends

Test it out yourself. Leverage data science during a technical SEO audit to get insights about your site’s health and performance. Use that data to learn more about your page authority, rankings, number of outbound/inbound links per page, and other factors. On its own, however, that won’t tell you why some pages perform better in the search results while others lag behind. Visualizing the site’s internal link structure and scoring the authority of individual pages on a scale of one to ten (much like Google does) allows you to see the areas for improvement and adopt proactive measures.
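As a hedged illustration of that internal-link-scoring idea (not a description of any particular tool), here is a sketch using networkx on a made-up internal link edge list:

```python
import networkx as nx

# Made-up internal link edges from a site crawl: (source page, target page).
edges = [
    ("/", "/services/"),
    ("/", "/blog/"),
    ("/blog/", "/services/seo/"),
    ("/services/", "/services/seo/"),
    ("/services/seo/", "/contact/"),
]

graph = nx.DiGraph(edges)
scores = nx.pagerank(graph)

# Rescale internal PageRank onto a one-to-ten scale, loosely mirroring the idea above.
low, high = min(scores.values()), max(scores.values())
for url, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    scaled = 1 + 9 * (score - low) / (high - low) if high > low else 10.0
    print(f"{url}: {scaled:.1f}")
```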

On-page SEO optimization is just a single example of how SEO experts combine visualizations with data science to provide better results to clients. Make your SEO data more actionable with visualizations.

5. Take advantage of A/B testing

LinkedIn carried out an experiment using its XLNT platform, focused on the redesign of the “Premium Subscription” payment flow. The LinkedIn UX team reduced the number of payment checkout pages and added an FAQ. The results were impressive: an increase in annual bookings worth millions of dollars, a 30% reduction in refund orders, and a 10% increase in free trial subscriptions.

Concluding remarks

Data science focuses on eliminating guesswork from SEO. Rather than presuming what works and how a specific action affects your goals, use data science to know what’s bringing you the desired results and how you’re able to quantify your success. Brands like Airbnb are already doing it and so can you.

The post How to improve SEO using data science appeared first on Search Engine Watch.

Search Engine Watch


The convergence of data quality to consumer discoverability

Data cleanliness and accuracy must be a priority for brands because it will affect marketing performance at the local level.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


We Built the Data Platform For AI To Enable Safe Self-Driving Cars, Says Scale AI CEO

“What we’ve done at Scale is built the data platform for AI,” says Scale AI’s 22-year-old CEO, Alexandr Wang. “AI is really built on top of data and these algorithms require billions and billions of examples of labeled data to be able to perform in a safe or reliable way. What we’ve done is built a platform that allows these companies to get the data they need to be able to build these algorithms in a safe and reliable way. Then they use the data to build their self-driving cars.”

Alexandr Wang, Scale AI co-founder and CEO, discusses how his company has built the data platform for AI that enables safe and reliable autonomous vehicles. Wang was interviewed on Bloomberg Technology.

We Built the Data Platform For AI To Enable Safe Autonomous Vehicles

What we’ve done at Scale is built the data platform for AI. AI is really built on top of data and these algorithms require billions and billions of examples of labeled data to be able to perform in a safe or reliable way. What we’ve done is built a platform that allows these companies to get the data they need to be able to build these algorithms in a safe and reliable way. Then they use the data to build their self-driving cars. I think it’s very exciting that all these companies have really incredible technology and it’s getting better and better every single year. We’re really getting closer and closer to solving the problem. 

One of the big problems in machine learning is perception, being able to fully understand the environment around you using machine learning. So we process a lot of image data, LIDAR data, radar data, map data, etc. for some of these companies. Then for other companies, we process tax data or tabular data or speech data. The work we do is critical to building safe autonomous vehicles, for example, because without the data that we’re able to provide to these companies they actually wouldn’t even be able to build algorithms that could perform in any manner that is safe and reliable. 

AI Is Really About Augmenting Humans With Technology

AI is really about augmenting humans with technology and making them more effective and more efficient using technology. In particular, I think for a lot of the problems that we work on where AI plays a really critical role in self-driving or medical imagery, etc., you really want to make sure that humans are a part of the process to ensure that these systems are performing very safely and reliably. 

One view that we really take in is, how do we solve this in the most tech-enabled way as possible? How do we use as much machine learning and technology on our side to make the process as efficient and high quality as possible? That’s a very differentiated view actually. Many of these other efforts are much more human-powered than technology powered.

You Don’t Need a Degree To Be Able To Accomplish Your Goals

I was really lucky I grew up in Los Alamos, New Mexico, but after high school, I was lucky to be able to come out here to the Valley to work as a software engineer. That really exposed me to a lot of these problems where AI and machine learning are really core. I went back to school for a year and then after that year at school, I dropped out and started this company.

I think if you know what you want to do, more and more these days, you don’t need a degree to be able to accomplish what you need to do. I think people care a lot more about what can you accomplish and what are your skills.

We Built the Data Platform For AI To Enable Safe Self-Driving Cars, Says Scale AI CEO Alexandr Wang

The post We Built the Data Platform For AI To Enable Safe Self-Driving Cars, Says Scale AI CEO appeared first on WebProNews.


WebProNews


Using STAT: How to Uncover Additional Value in Your Keyword Data

Posted by TheMozTeam

Changing SERP features and near-daily Google updates mean that single keyword strategies are no longer viable. Brands have a lot to keep tabs on if they want to stay visible and keep that coveted top spot on the SERP.

That’s why we asked Laura Hampton, Head of Marketing at Impression, to share some of the ways her award-winning team leverages STAT to surface all kinds of insights to make informed decisions.

Snag her expert tips on how to uncover additional value in your keyword data — including how Impression’s web team uses STAT’s API to improve client reporting, how to spot quick wins with dynamic tags, and what new projects they have up their sleeves. Take it away, Laura!

Spotting quick wins 

We all remember the traditional CTR chart. It suggests that websites ranking in position one on the SERPs can expect roughly 30 percent of the clicks available, with position two getting around 12 percent, position three seeing six percent, and so on (disclaimer: these may not be the actual numbers but, let’s face it, this formula is way outdated at this point anyway).

Today, the SERP landscape has changed, so we know that the chances of any of the above-suggested numbers being correct are minimal — especially when you consider the influence of elements like featured snippets on click-through rates.

But the practical reality remains that if you can improve your ranking position, it’s highly likely you’ll get at least some uplift in traffic for that term. This is where STAT’s dynamic tags can really help. Dynamic tags are a special kind of tag that automatically populates keywords based on changeable filter criteria.

We like to set up dynamic tags based on ranking position. We use this to flag keywords which are sitting just outside of the top three, top five, or top 10 positions. Layer into this some form of traffic benchmark, and you can easily uncover keywords with decent traffic potential that just need an extra bit of work to tip them into a better position.

Chasing position zero with featured snippets and PAAs 

There’s been a lot of chat in our industry about the growing prevalence of SERP features like featured snippets and “People also ask” (PAA) boxes. In fact, STAT has been instrumental in leading much of the research into the influence of these two SERP features on brand visibility and CTRs.

If your strategy includes a hunt for the coveted position zero, you’re in luck. We like to use STAT’s dynamic tagging feature to monitor the keywords that result in featured snippets. This way, we can track keywords where our client owns the snippet and where they don’t. We can also highlight new opportunities to create optimized content and attempt to capture the spot from their competitors.

This also really helps guide our overall content strategy, since STAT is able to provide quick feedback on the type of content (and, therefore, the assumed intent) that will perform best amongst a keyword set.

Making use of data views 

Data views are one of the most fundamental elements of STAT. They are tools that allow you to organize your data in ways that are meaningful to you. Holding multiple keyword segments (tags) and producing aggregate metrics, they make it possible for us to dissect keyword information and then implement strategically driven decisions.

For us at Impression, data views are essential. They reflect the tactical aspirations of the client. While you could create a single templated dashboard for all your clients with the same data views, our strategists will often set up data views that mirror the way each client and account work.

Even if we’re not yet actively working on a keyword set, we usually create data views to enable us to quickly spot opportunities and report back on the strategic progression.

Here are just some of the data views we’ve grouped our keyword segments into:

The conversion funnel

Segmenting keywords into the stages of the conversion funnel is a fairly common strategy for search marketers — it makes it possible to focus in on and prioritize higher intent queries and then extrapolate out.

Many of our data views are set up to monitor keywords tagged as “conversion,” “education,” and “awareness.”

Client goals

Because we believe successful search marketing is only possible when it integrates with wider business goals, we like to spend time getting to know our clients’ audiences, as well as their specific niches and characteristics.

This way, we can split our keywords into those which reflect the segments that our clients wish to target. For example, in some cases, this is based on sectors, such as our telecommunications client who targets audiences in finance, marketing, IT, and general business. In others, it’s based on locations, in which case we’ll leverage STAT’s location capabilities to track the visibility of our clients to different locales.

Services and/or categories

For those clients who sell online — whether it’s products or services — data views are a great way to track their visibility within each service area or product category.

Our own dashboard (for Impression) uses this approach to split out our service-based keywords, so our data view is marked “Services” and the tags we track within are “SEO,” “PPC,” “web,” and so on. For one of our fashion clients, the data view relates to product categories, where the tracked tags include “footwear,” “accessories,” and “dresses.”

At-a-glance health monitoring

A relatively new feature in STAT allows us to see the performance of tags compared to one another: the Tags tab.

Because we use data views and tags a lot, this has been a neat addition for us. The ability to quickly view those tags and how the keywords within are progressing is immensely valuable.

Let’s use an example from above. For Impression’s own keyword set, one data view contains tags that represent different service offerings. When we click on that data view and choose “Tags” in the tabbed options, we can see how well each service area is performing in terms of its visibility online.

This means we can get very quick strategic insights that say our ranking positions for SEO are consistently pretty awesome, while those around CRO (which we are arguably less well known for) tend to fluctuate more. We can also make a quick comparison between them thanks to the layout of the tab.

Identifying keyword cannibalization risk through duplicate landing pages 

While we certainly don’t subscribe to any notion of a content cannibalization penalty per se, we do believe that having multiple landing pages for one keyword or keyword set is problematic.

That’s where STAT can help. We simply filter the keywords table to show a given landing page and we’re able to track instances where it’s ranking for multiple keywords.

By exporting that information, we can then compare the best and worst ranking URLs. We can also highlight where the ranking URL for a single keyword has changed, signaling internal conflict and, therefore, an opportunity to streamline and improve.
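Outside of STAT’s interface, one way to run that comparison on an exported keyword report might look like the sketch below (the file and column names are assumptions, not STAT’s actual export format):

```python
import pandas as pd

# Assumed export columns: keyword, ranking_url, rank.
df = pd.read_csv("keyword_export.csv")

# Keywords ranking with more than one landing page are cannibalization candidates.
url_counts = df.groupby("keyword")["ranking_url"].nunique()
candidates = url_counts[url_counts > 1].index

conflicts = (
    df[df["keyword"].isin(candidates)]
    .sort_values(["keyword", "rank"])[["keyword", "ranking_url", "rank"]]
)
print(conflicts)
```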

Monitoring the competitive landscape 

No search strategy is complete without an understanding of the wider search landscape. Specifically, this means keeping track of your and/or your client’s rankings when compared to others ranking around them.

We like to use STAT’s Competitive Landscape tab to view this information for a specific data view, or across the whole account. In particular, the Share of Voice: Current Leaders board tells us very quickly who we’re up against for a keyword set.

This leads to insights such as the competitiveness of the keyword set, which makes it easier to set client expectations. It also surfaces relevance of the keywords tracked, where, if the share of voice is going to brands that aren’t your own, it may indicate the keywords you’re targeting are not that relevant to your own audience.

You can also take a look at the Share of Voice: Top 10 Trending to see where competitors are increasing or decreasing their visibility. This can be indicative of changes on the SERPs for that industry, or in the industry as a whole.

Creating a custom connector for GDS 

Reporting is a fundamental part of agency life. Our clients appreciate formalized insights into campaign progression (on top of regular communications throughout the month, of course) and one of our main challenges in growing our agency lies in identifying the best way to display reports.

We’ll be honest here: There was a point where we had started to invest in building our own platform, with all sorts of aspirations of bespoke builds and highly branded experiences that could tie into a plethora of other UX considerations for our clients.

But at the same time, we’re also big believers that there’s no point in trying to reinvent the wheel if an appropriate solution already exists. So, we decided to use Google Data Studio (GDS) when it was released in beta and moved onto the platform in 2017.

Of course, ranking data — while we’d all like to reserve it for internal insight to drive bigger goals — is always of interest to clients. At the time, the STAT API was publicly available, but there was no way to pull data into GDS.

That’s why we decided to put some of our own time into creating a GDS connector for STAT. Through this connector, we’re able to pull in live data to our GDS reports, which can be easily shared with our clients. It was a relatively straightforward process and, because GDS caches the data for a short amount of time, it doesn’t hammer the STAT API for every request.

Though our clients do have access to STAT (made possible through their granular user permissions), the GDS integration is a simpler way for them to see top-level stats at a glance.

We’re in the process of building pipelines through BigQuery to feed into this and facilitate date specific tracking in GDS too — keep an eye out for more info and get access to the STAT GDS connector here.

Want more? 

Ready to learn how to get cracking and tracking some more? Reach out to our rad team and request a demo to get your very own tailored walkthrough of STAT. 

If you’re attending MozCon this year, you can see the ins and outs of STAT in person — grab your ticket before they’re all gone! 

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

