Tag Archive | "press"

How to Set Up Metrics to Optimize Your Digital PR Team’s Press Coverage

Posted by Ashley.Carlisle

Over the past six years, our team at Fractl has studied the art of mastering content marketing press coverage. Before moving into Agency Operations, I onboarded and trained over a dozen new associates for our digital PR team within a year as the Media Relations Manager. Scaling a team of that size in such a short period of time required hands-on training and clear communication of goals and expectations within the role — but what metrics are indicative of success in digital PR?

As a data-driven content marketing agency, we turned to the numbers for something a little different than our usual data-heavy campaigns — we used our own historical data to analyze and optimize our digital PR team’s outreach.

This post aims to provide better insight into defining measurable variables as key performance indicators, or KPIs, for digital PR teams and into understanding the implications and relationships of those KPIs. We’ll also go into the rationale for establishing baselines for these KPIs, which indicate the quality, efficiency, and efficacy of a team’s outreach efforts.

As a guide for defining success by analyzing your own team’s metrics (digital PR or otherwise), we’ll provide the framework for the research design that helped us establish a threshold for the single variable we identified as best measuring our effort and as most significantly correlated with the KPIs indicative of a successful digital PR team.

Determining the key performance indicators for digital PR outreach

The influx of available data for marketers and PR professionals to measure the impact of their work allows us to move away from vague metrics like “reach” and the even vaguer goal of “more publicity.” Instead, we are able to focus on the metrics most indicative of what we’re actually trying to measure: the effect of digital PR efforts.

We all have our theories and educated guesses about which metrics are most important and how they are related, but without further research, theories remain theories (or expert opinions, at best). Operational research allows businesses to use the scientific method to provide managers and their teams with a quantitative basis for decision making. Operationalization is the process of strictly defining variables to turn nebulous concepts (in this case, the effort and success of your digital PR team) into variables that can be measured empirically and quantitatively.

We identified one indicator that best measures the effort put into a campaign’s outreach, and it is a precursor to all of the indicators below: the volume of pitch emails sent for each campaign.

Because not all pitches are created equal, the indicators below gauge which factors best define the success of outreach, such as the quality of outreach correspondence, the efficiency of time to secure press, the efficacy of the campaign, and the media mentions secured. Each multi-faceted metric can be described by a variety of measurements, and all relate back to the independent variable of the volume of pitch emails sent for each campaign.

Some indicators are better measured using more than a single metric, so for the purposes of this post, we use three metrics to illustrate each of these three KPIs and offer a more holistic picture of your team’s performance (a sketch of how the pitch-quality rates can be computed follows the lists below):

Pitch quality and efficacy

  • Placement Rate: The percentage of placements (i.e., media mentions) secured per the number of total pitches sent.
  • Interest Rate: The percentage of interested publisher replies to pitches per the number of total pitches sent.
  • Decline Rate: The percentage of declining publisher replies to pitches per the number of total pitches sent.

Efficiency and capacity

  • Total days of outreach: The number of business days between the first and last pitch sent for a campaign, which is the sum of the two metrics below.
  • Days to first placement: The number of business days between the first pitch sent and first placement to be published for a campaign.
  • Days to syndication: The number of business days between the first placement to be published and the last pitch to be sent for a campaign.

Placement quality and efficacy

  • Total Links: The total number of backlinks from external linking domains of any attribution type (e.g. DoFollow, NoFollow) for a campaign’s landing page.
  • Total DoFollow Links: The total number of DoFollow backlinks from external linking domains for a campaign’s landing page.
  • Total Domain Authority of Links: The total domain authority of all backlinks from external linking domains of any attribution type (e.g. DoFollow, NoFollow) for a campaign’s landing page.
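To make these definitions concrete, here is a minimal sketch in Python of how the three pitch-quality rates could be computed from a simple campaign record. The field names (pitches_sent, placements, interested_replies, declined_replies) are illustrative assumptions, not Fractl’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    # Illustrative fields; the names are assumptions, not Fractl's actual schema.
    pitches_sent: int          # total pitch emails sent for the campaign
    placements: int            # media mentions secured
    interested_replies: int    # publisher replies expressing interest
    declined_replies: int      # publisher replies declining coverage

def placement_rate(c: Campaign) -> float:
    """Placements secured per total pitches sent, as a percentage."""
    return 100 * c.placements / c.pitches_sent

def interest_rate(c: Campaign) -> float:
    """Interested publisher replies per total pitches sent, as a percentage."""
    return 100 * c.interested_replies / c.pitches_sent

def decline_rate(c: Campaign) -> float:
    """Declining publisher replies per total pitches sent, as a percentage."""
    return 100 * c.declined_replies / c.pitches_sent

# Example: a campaign closed out after 60 pitches
example = Campaign(pitches_sent=60, placements=9, interested_replies=12, declined_replies=20)
print(f"Placement rate: {placement_rate(example):.1f}%")  # 15.0%
print(f"Interest rate: {interest_rate(example):.1f}%")    # 20.0%
print(f"Decline rate: {decline_rate(example):.1f}%")      # 33.3%
```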

Optimizing effort to yield the best KPIs

After identifying the metrics, we need to solve the next challenge: What are the relationships between your efforts and your KPIs? The practical application of these answers can help you establish a threshold or range for the input metric that is correlated with the highest KPIs. We’ll discuss that in a bit.

After identifying metrics to analyze, define the nature of their relationships to one another. Use a hypothesis test to verify an effect; in this case, we’re interested in the relationship between pitch count and each of the metrics we defined above as KPIs of successful outreach. This study hypothesizes that campaigns closed out in 70 pitches or fewer will have better KPIs than campaigns closed out with 71 or more pitches.

Analyzing the relationship and determining significance of the data

Next, determine whether the relationship is significant; when a relationship is statistically significant, the observed relationship has a high likelihood of holding in future samples. When it comes to claiming statistical significance, some may assume there must be a complex formula that only seasoned statisticians can calculate. In reality, statistical significance here is determined via a t-test, a simple statistical test that compares two samples to help us infer whether the same relationship will appear in future samples.

In this case, campaigns with 70 or fewer pitches are one group and campaigns with 71 or more pitches are the second group. The findings below show the percentage difference between the means of both groups (i.e., the campaigns from Q2 and Q3) to determine whether lower pitch counts have the desired effect for each metric; those that are asterisked are statistically significant, meaning there is less than a 5 percent chance that the observed results are due to chance.
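To illustrate the kind of test described above, the sketch below runs a two-sample t-test with SciPy on hypothetical placement rates for the two pitch-count groups. The values are invented for illustration and are not Fractl’s data:

```python
from scipy import stats

# Hypothetical placement rates (%) for two groups of campaigns.
# These values are invented for illustration, not Fractl's actual results.
low_pitch_group = [14.2, 11.8, 16.5, 13.0, 15.1, 12.4, 17.3, 14.8]   # 70 or fewer pitches
high_pitch_group = [8.9, 10.1, 7.4, 9.6, 8.2, 11.0, 7.9, 9.3]        # 71 or more pitches

# Welch's t-test compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(low_pitch_group, high_pitch_group, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in mean placement rate is statistically significant at the 5% level.")
else:
    print("The difference is not statistically significant at the 5% level.")
```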

How our analysis can optimize your digital PR team’s efforts

In practice, the relationships between these metrics help you establish a better standard of practice for your team’s outreach, with realistic expectations and goals. Further, the correlation between the specified range of pitch counts and all other KPIs gives you a reliable range of the values you can expect for pitch quality, timelines, and campaign performance when adhering to that range of pitches.

The original theory — that a threshold for pitch counts exists when the relationship between pitch count and all other metrics of performance is compared — is confirmed by the data. The sample with lower pitch counts (70 or fewer) sees a positive relationship with the KPIs we want to decrease (e.g. decline rates, total days) and a negative relationship with the KPIs we want to increase (e.g. placement rates, link counts). The sample with higher pitch counts (71 or more) saw the inverse — a negative relationship with the KPIs we want to decrease and a positive relationship with the KPIs we want to increase. Essentially, when campaigns with 70 or fewer pitches were isolated, the numbers improved in nearly every metric.

When this analysis is applied to each of the 74 campaigns from Q3, you’ll see nearly consistent results, with the exception again being Total Domain Authority. Campaigns with up to 70 pitches are correlated with better KPIs than campaigns with 71 or more pitches.

Vague or unrealistic expectations and goals will sabotage the success of any team and any project. When it comes to the effort put into each campaign, having objective, optimized procedures allows your team to work smarter, not harder.

So, what does that baseline range look like, and how do you calculate it?

Establishing realistic baseline metrics

A simple question helps answer what the baseline should be in this instance: What was the average of each KPI for the campaigns with 70 or fewer pitches?

We gathered all 70 campaigns with pitch counts of 70 or fewer that were closed out of our digital PR team’s pipelines in the second and third quarters of 2018 and determined the average of each metric. Then we calculated the standard deviation, which describes the spread of the data, to establish a range for each KPI — and that became our baseline range.

Examining historical data is among the best methods for determining realistic baselines. By gathering a broad, sizeable sample (more than 30 is usually ideal) that represents the full scope of projects your team works on, you can determine the average for each metric and the deviation from that average to establish a range.
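Here is a minimal sketch of that calculation for a single metric, assuming you have a list of placement rates from a representative sample of past campaigns (the values are invented):

```python
import statistics

# Hypothetical placement rates (%) from a representative sample of past campaigns.
placement_rates = [14.2, 11.8, 16.5, 13.0, 15.1, 12.4, 17.3, 14.8, 13.9, 15.6]

mean = statistics.mean(placement_rates)
std_dev = statistics.stdev(placement_rates)  # sample standard deviation

# Baseline range: one standard deviation on either side of the mean.
baseline_low, baseline_high = mean - std_dev, mean + std_dev
print(f"Baseline placement rate range: {baseline_low:.1f}% to {baseline_high:.1f}%")
```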

These reliable ranges let your digital PR team understand the baselines they should strive for during active outreach while complying with the standard of practice for pitch counts established by our research. Further, these baseline ranges allow you to set more realistic goals for future performance by increasing each range by a realistic percentage.

Deviations from that range act as indicators of potential issues with the quality, efficiency, or efficacy of the team’s outreach, with each of the metrics implying what specifically may be awry. Below, we offer context on each of the metrics that define our three KPIs, in terms of their implications and limitations.

Understanding how each metric can influence the productivity of your team

Pitch quality and efficacy

The purpose of a pitch is to tell a compelling and succinct story of why the campaign you’re pitching is newsworthy and fits the beat of the individual writer you’re pitching. Help your team succeed by enforcing tried-and-true best practices so they craft each pitch with personalization and compelling narratives top of mind. Placements act as a conversion rate to measure the efficacy of your team’s outreach, while interests and declines act as a combined response rate to measure the quality of outreach.

To help your team avoid the “spray and pray” mentality of blasting out as many pitches as possible and hoping one will yield a media mention (an approach that ultimately jeopardizes publisher relationships and is an inefficient use of time), focus on the rates at which your team secures responses and placements from publishers relative to the total volume of pitches sent. Prioritize this interpretation of the data over the individual counts alone to add context to the pitch count.

Campaigns with a high ratio of publisher interest and placements to pitches imply the quality of the pitch was sufficient, meaning it encompassed one or more of the factors known to be important in securing press coverage. This includes, but is not limited to, compelling and newsworthy narratives, personalized details, and/or relevancy to the writer. In some cases, campaigns may have a low ratio of interest but a high ratio of placements as a result of nonresponse bias — the occurrence where publishers do not respond to a pitch but still cover the campaign in a future article, yielding a placement. These “ghost posts” can skew interest rates, illustrating why three metrics compose this KPI.

Campaigns with a high ratio of declines to pitches imply the quality of the pitch may be subpar, which signals to the associate to re-evaluate their outreach strategy. Again, the inverse may not always be true, as campaigns with a low ratio of declines may be a result of nonresponse bias. In this case, if publishers do not respond at all, we can infer either that they did not open the email or that they opened the email and were not interested, therefore declining by default.

While confounding variables (such as the quality of the content itself, not just the quality of the pitch) may skew these metrics in either direction and remain the greatest limitation, holistically, these three metrics offer actionable insights during active outreach.

Efficiency and capacity

Similarly, ranges for timeline metrics can give your associates context for when they should be achieving milestones (i.e., the first placement) as well as for the total length of outreach. Deviating beyond the standard timeline to secure the first placement often indicates the outreach strategy needs re-evaluating, while extending beyond the range for total days of outreach indicates a campaign should be closed out soon.

Efficiency metrics help beyond advising outreach strategy; they also inform operations from a capacity standpoint. Juggling tens and sometimes hundreds of active campaigns at any given point relies on consistency for capacity — reducing the variance between the volume of campaigns entering production and the volume being closed out of the pipeline by staggering campaigns based on their average duration. This allows for more robust planning and reliable forecasting.

Awareness of the baselines for time to secure press enables you and your team to plan not just strategies and capacity, but also the content of your campaigns. You can keep content timely by allowing sufficient time for outreach when ideating your campaigns so the content does not become stale or outdated.

The biggest limitation of these metrics is a looming external variable often beyond our control — the editorial calendars and agendas of publishers. Publishers have their own deadlines and priorities to fill, so we cannot always plan for delays in publishing dates or, worse yet, coverage being scrapped altogether.

Placement quality and efficacy

Ultimately, your efforts are intended to yield placements to gain brand awareness and voice, as well as build a diverse link portfolio; the latter is arguably easier to quantify. Total external links pointing to the campaign’s landing page or client homepage along with the total Domain Authority of those links allow you to track both the quantity and quality of links.

Higher link counts built from your placements let you infer the syndication networks of the placements your outreach secured, while higher total Domain Authority measures the relative value of those linking domains as a gauge of quality. Along with further specifying the types of links (specifically DoFollow links, arguably the most valuable link type), these metrics have the potential to forecast the impact of the campaign on the website’s own overall authority.
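As a small illustration of tracking both the quantity and quality of links, this sketch aggregates the three placement metrics from a hypothetical list of backlink records; the fields and values are invented rather than pulled from any particular link index:

```python
# Hypothetical backlink records for one campaign landing page.
# The fields and values are invented for illustration.
backlinks = [
    {"domain": "news-site-a.com", "domain_authority": 72, "dofollow": True},
    {"domain": "blog-b.net",      "domain_authority": 45, "dofollow": False},
    {"domain": "magazine-c.org",  "domain_authority": 88, "dofollow": True},
]

total_links = len(backlinks)
total_dofollow_links = sum(1 for link in backlinks if link["dofollow"])
total_domain_authority = sum(link["domain_authority"] for link in backlinks)

print(f"Total links: {total_links}")                        # 3
print(f"Total DoFollow links: {total_dofollow_links}")      # 2
print(f"Total Domain Authority: {total_domain_authority}")  # 205
```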

Replicating our analysis to optimize your team’s press coverage

Oftentimes, historical research designs such as this one are limited in the cause-and-effect claims they can support. Still, this collection of data offers valuable insight into correlations to help us infer patterns and trends.

Our analysis utilized historical data representative of our entire agency in terms of scope of clients, campaign types, and associates, strengthening internal validity. So while the specific baseline metrics are tailored to our team, the framework we offer for establishing those baselines is transferable to any team.

Apply these methods with your digital PR team to help define KPIs, establish baselines, and test your own theories:

  • Track the ten metrics that compose the KPIs of digital PR outreach for each campaign or initiative to keep a running historical record.
  • Determine the average spread via the mean and standard deviation for each metric from a sizeable, representative sample of campaigns to establish your team’s baseline metrics.
  • Test any theories of trends in your team’s effort (i.e., pitch counts) in relation to KPIs with a simple hypothesis test to optimize your team and resources.

How does your team approach defining the most important metrics and establishing baseline ranges? How do you approach optimizing those efforts to yield the best press coverage? Uncovering these answers will help your team synergize more effectively and establish productive foundations for future outreach efforts.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Panda Pummels Press Release Websites: The Road to Recovery

Posted by russvirante

Many of us in the search industry were caught off guard by the release of Panda 4.0. It had become common knowledge that Panda was essentially “baked into” the algorithm now several times a month, so a pronounced refresh was a surprise. While the impact seemed reduced given that it coincided with other releases, including a payday loans update and a potential manual penalty on eBay, there were notable victims of the Panda 4.0 update, which included major press release sites. Both Search Engine Land and Seer Interactive independently verified a profound traffic loss on major press release sites following the Panda 4.0 update. While we can’t be certain that Google didn’t roll out a handful of simultaneous manual actions, or that these sites weren’t impacted by the payday loans algo update, Panda remains the inference to the best explanation for their traffic losses.


So, what happened?
Can we tease out why press release sites were seemingly singled out? Are they really that bad? And why are they particularly susceptible to the Panda algorithm? To answer these questions, we must first address a more basic one: what is the Panda algorithm?

Briefly: What is the Panda Algorithm?

The Panda algorithm was a ground-breaking shift in Google’s methodology for addressing certain search quality issues. Using patented machine learning techniques, Google had real, human reviewers determine the quality of a sample set of websites. We call this sample the “training set”. Examples of the questions they were asked are below:

  1. Would you trust the information presented in this article?
  2. Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  3. Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  4. Would you be comfortable giving your credit card information to this site?
  5. Does this article have spelling, stylistic, or factual errors?
  6. Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
  7. Does the article provide original content or information, original reporting, original research, or original analysis?
  8. Does the page provide substantial value when compared to other pages in search results?
  9. How much quality control is done on content?
  10. Does the article describe both sides of a story?
  11. Is the site a recognized authority on its topic?
  12. Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
  13. Was the article edited well, or does it appear sloppy or hastily produced?
  14. For a health related query, would you trust information from this site?
  15. Would you recognize this site as an authoritative source when mentioned by name?
  16. Does this article provide a complete or comprehensive description of the topic?
  17. Does this article contain insightful analysis or interesting information that is beyond obvious?
  18. Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  19. Does this article have an excessive amount of ads that distract from or interfere with the main content?
  20. Would you expect to see this article in a printed magazine, encyclopedia or book?
  21. Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
  22. Are the pages produced with great care and attention to detail vs. less attention to detail?
  23. Would users complain when they see pages from this site?

Once Google had these answers from real users, they built a list of variables that might potentially predict those answers, and applied their machine learning techniques to build a model that predicts low performance on these questions. For example, having an HTTPS version of your site might predict a high performance on the “trust with a credit card” question. This model could then be applied across their index as a whole, filtering out sites that would likely perform poorly on the questionnaire. This filter became known as the Panda algorithm.
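As a purely illustrative sketch of that general approach (not Google’s actual system, signals, or model), here is how a simple classifier could be trained on human quality ratings and then applied to unrated sites, using scikit-learn and invented features:

```python
from sklearn.linear_model import LogisticRegression

# Each row describes one human-reviewed site:
# [has_https, ad_density, avg_words_per_page, spelling_errors_per_1k_words]
# The feature choices are hypothetical examples, not Google's actual signals.
X_train = [
    [1, 0.10, 1200, 0.5],
    [0, 0.45,  300, 4.0],
    [1, 0.20,  800, 1.0],
    [0, 0.60,  250, 6.5],
    [1, 0.15,  950, 0.8],
    [0, 0.50,  400, 3.2],
]
# Label from the human questionnaire: 1 = rated low quality, 0 = rated acceptable.
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# Apply the fitted model to an unrated site to estimate its probability of being low quality.
unrated_site = [[0, 0.55, 280, 5.0]]
print(f"Predicted probability of low quality: {model.predict_proba(unrated_site)[0][1]:.2f}")
```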

How do press release sites perform on these questions?

First, Moz has a great tutorial on running your own Panda questionnaire on your own website, which is useful not just for Panda but for almost any kind of user survey. The graphs and data in my analysis come from PandaRisk.com, though. Full disclosure: Virante, Inc., the company for which I work, owns PandaRisk. The graphs were built by averaging the results from several pages on each press release site, so they represent a sample of pages from each PR distributor.

So, let’s dig in. In the interest of brevity, I have chosen to highlight just four of the major concerns that came from the surveys, question-by-question.

Q1. Does this site contain insightful analysis?

Google wants to send users to web pages that are uniquely useful, not just unique and not just useful. Unfortunately, press release sites uniformly fail on this front. On average, only 50% of reviewers found that BusinessWire.com content contained insightful analysis. Compare this to Wikipedia, EDU and Government websites which, on average, score 84%, 79% and 94% respectively, and you can see why Google might choose not to favor their content.

But does this have to be the case? Of course not. Press release websites like BusinessWire.com have first-mover status on important industry information. They should be the first to release insightful analysis. Now, press release sites do have to be careful about editorializing the content of their users, but there are clearly improvements that could be made. For example, we know that the use of structured data and visual aids (i.e., graphs and charts) improves performance on this question. BusinessWire could extract stock exchange symbols from press releases and include graphs and data related to the business right in the post. This would separate their content from other press release sites that simply reproduce the content verbatim. There are dozens of other potential improvements that could be added either programmatically or by an editor. So, what exactly would these kinds of changes look like?

In this case, we simply inserted a graph from stock exchange data and included, on the right-hand side, some data from Freebase on the Securities and Exchange Commission, which could easily be extracted as an entity from the document using, for example, Alchemy API. These modest improvements to the page increased the “insightful analysis” review score by 15%.
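As a rough sketch of the kind of programmatic enrichment described here (the pattern and exchange list are simplified assumptions, not BusinessWire’s or Alchemy API’s actual behavior), a release’s stock symbols could be pulled out with a regular expression and then handed to a market-data service to render a chart:

```python
import re

# Invented press-release excerpt for illustration.
release_text = (
    "Acme Holdings (NYSE: ACM) today announced a partnership with "
    "Globex Corporation (NASDAQ: GBX) to expand its services."
)

# Match common "(EXCHANGE: TICKER)" patterns in the release body.
ticker_pattern = re.compile(r"\((NYSE|NASDAQ|AMEX):\s*([A-Z]{1,5})\)")

for exchange, ticker in ticker_pattern.findall(release_text):
    # In a real pipeline, each ticker would be passed to a market-data
    # service so a chart could be rendered alongside the release.
    print(f"Found {ticker} on {exchange}")
```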

Q2. Would you trust this site with your credit card?

This is one of the most difficult ideals to measure up to. E-commerce sites, in general, perform better automatically, but there are clear distinctions between sites people trust and don’t trust. Press release websites do have an e-commerce component, so one would expect them to fare comparatively well against non-commercial sites. Unfortunately, this is just not the case. PR.com failed this question in what can only be described as epic fashion. 91% of users said they would not trust the site with their credit card details. This isn’t just a Panda issue for PR.com; this is a survival-of-the-business issue.

Luckily, there are some really clear, straightforward solutions to address this problem.

  • Extend HTTPS/SSL Sitewide
    Not every site needs to have HTTPS enabled, but if you have a 600,000+ page site with e-commerce functionality, let’s just go ahead and assume you do. Users will immediately trust your site more if they see that pretty little lock icon in their browser. 
  • Site Security Solutions
    Take advantage of solutions like Comodo Hacker Proof or McAfee SiteAdvisor to verify that your site is safe and secure. Include the badges and link to them so that both users and the bots know that you have a safe site.
  • Business Reputation Badges
    Use at least one trade group or business reputation group (like the Better Business Bureau) or, at minimum, employ some form of schema review markup that makes it clear to your users that at least some person or group of persons out there trusts your site. If you use a trade group membership or the BBB, make sure you link to them so that, once again, it is clear to the bots as well as your users.
  • Up-to-date Design
    This is a clear issue time and time again. In the technology world, old means insecure. The site PR.com looks old-fashioned in every sense of the word, especially in comparison to the other press release websites. It is no wonder that it performs so horribly.

It is worth pointing out here that Google doesn’t need to find markup on your site to conclude that your site is untrustworthy. Because the Panda algorithm likely takes into account engagement metrics and behaviors (like pogo-sticking), Google can use the behavior of users to predict performance on these questions. So, just because there isn’t a clear path between a change you make on your site and Googlebot’s ability to identify that change doesn’t mean the change cannot and will not have an impact on site performance in the search results. The days of thinking about your users and the bots as separate audiences are gone. The bots now measure both your site and your audience. Your impact on users can and will have an impact on search performance.

Q3. Do you consider this site an authority?

This question is particularly difficult for sites that both don’t control the content they create and have a wide variety of content. This places press release websites squarely in the bullseye of the Panda algorithm. How does a website that accepts thousands of press releases on nearly any topic dare claim to be an authority? Well, it generally doesn’t, and the numbers bear that out. 75% of respondents wouldn’t consider PRNewswire an authority. 

Notice, though, that Wikipedia performs poorly on this metric as well (at least compared to EDUs and GOVs). So what exactly is going on here? How can a press release site hope to escape from this authority vacuum? 

  • Topically Segment Content
    This was one of the very first reactions to Panda. Many of the sites that were hit with Panda 1.0 sub-domained their content into particular topic areas. This seemed to provide some relief but was never a complete or permanent solution. Whether you segment your content into sub-directories or sub-domains, what you are really doing here is helping make clear to your users that the specific content your users are reading is part of a bigger piece of the pie. It isn’t some random page on your site, it fits in nicely with your website’s stated aims. 
  • Create an Authority
    Just because you don’t write the content for your site doesn’t mean you can’t be authoritative. In fact, most major press release websites have some degree of editorial oversight sitting between the author and the website. That editorial layer needs to be bolstered and exposed to the end user, making it obvious that the website does more than simply regurgitate the writing of anyone with a few bucks. 

So, what exactly would this look like? Let’s return to the Businesswire press release we were looking at earlier. We started with a bland page comprised of almost nothing but the press release. We then added a graph and some structured data automagically. Now, we want to add in some editor creds and topic segmentation.

Notice in the new design that we have created the “Securities & Investment Division” and added an editor with a fancy title, “Business Desk Editor,” and a credentialed byline. You could even use authorship or publisher markup. The page no longer looks like a sparse press release but like an editorially managed piece of news content in a news division dedicated to this subject matter. Authority done.

Q4. Would you consider bookmarking/sharing this site?

When I look at this question, I am baffled. Seriously, how do you make a site in which you don’t control the content worth bookmarking or sharing? Furthermore, how do you do this with overtly commercial, boring content like press releases? As you could imagine, press release sites fare quite poorly on this. Over 85% of respondents said they weren’t interested at all in bookmarking or sharing content from PRWeb.com. And why should they? 

So, how exactly does a press release website encourage users to share? The most common recommendations are already in place on PRWeb. They are quite overt with the usage of social sharing and bookmarking buttons (placed right at the top of the content). Their content is constantly fresh because new press releases come out every day. If these techniques aren’t working, then what will?

The problem with bookmarking and sharing on press release websites is twofold. First, the content is overtly commercial, so users don’t want to share it unless the press release is about something truly interesting. Second, the content is ephemeral, so users don’t want to return to it. We have to solve both of these problems.

Unfortunately, I think the answer to this question is some tough medicine for press release websites. The solution is multi-faceted. It starts with putting a meta expires tag on press releases. Sorry, but there is no reason for PRWeb to maintain a 2009 press release about a business competition in the search results. In its place, though, should be company and/or categorical pages which thoughtfully index and organize archived content. While LumaDerm may lose their press release from 2009, they would instead have a page on the site dedicated to their press releases so that the content is still accessible, albeit one click away, and the search engines know to ignore it. With this solution, the pages that end up ranking in the long run for valuable words and phrases are the aggregate pages that truly do offer authoritative information on what is up-and-coming with the business. The page is sticky because it is updated as often as the business releases new information; you still get some of the shares out of new releases, but you don’t risk the problems of PR sprawl and crawl prioritization. Aside from the initial bump of fresh content, there is no good SEO reason to keep old press releases in the index.

So, I don’t own a press release site…

Most of us don’t run sites with thousands of pages of low quality content. But that doesn’t mean we shouldn’t be cognizant of Panda. Of all of Google’s search updates, Panda is the one I respect the most. I respect it because it is an honest attempt to measure quality. It doesn’t ask how you got to your current position in the search results (a classic genetic fallacy problem); it simply asks whether the page and site itself deserve that ranking based on human quality measures (as imperfect as it may be at doing so). Most importantly, even if Google didn’t exist at all, you should aspire to have a website that scores well on all of these metrics. Having a site that performs well on the Panda questions means more than insulation from a particular algorithm update; it means having a site that performs well for your users. That is a site you want to have.

Take a look again at the questionnaire. Does your site honestly meet these standards? Ask someone unbiased. If your site does, then congratulations – you have an amazing site. But if not, it is time to get to work building the site that you were meant to build.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Google vs the Press Release: Manipulation or Good Business?

Google thought it was time to remind everyone that link schemes are a violation of their webmaster guidelines and anyone caught trying to game the system will be severely dealt with.

Of course. We get it. We all know about black hat SEO and about Panda and Penguin and the rest of the zoo. We got the message and most of the marketers online have fallen in line and have nothing to worry about.

Or not.

Google’s latest update contains a few paragraphs that will make even the most by-the-book marketer lose a little sleep. For example:

Additionally, creating links that weren’t editorially placed or vouched for by the site’s owner on a page, otherwise known as unnatural links, can be considered a violation of our guidelines. Here are a few common examples of unnatural links that violate our guidelines:

    • Text advertisements that pass PageRank
    • Advertorials or native advertising where payment is received for articles that include links that pass PageRank
    • Links with optimized anchor text in articles or press releases distributed on other sites. For example: [I removed Google’s links, so imagine a link on every underlined phrase]
      There are many wedding rings on the market. If you want to have a wedding, you will have to pick the best ring. You will also need to buy flowers and a wedding dress.

Yes, they just took a shot at press releases – the workhorse of the business world. Now, I understand that a press release with twenty linked keywords is probably nothing more than an attempt to scam the system, but this paragraph makes it sound like linked text must be completely banned.

And can we talk about the phrase “distributed on other sites?” So if you post a press release to your own company blog, that’s okay but if you release it through a wire service you’re going to get dinged?

Even if we agree that the rules leave room for interpretation, Google ends their lesson on Link Schemes with this:

The best way to get other sites to create high-quality, relevant links to yours is to create unique, relevant content that can naturally gain popularity in the Internet community. Creating good content pays off: Links are usually editorial votes given by choice, and the more useful content you have, the greater the chances someone else will find that content valuable to their readers and link to it.

That’s pretty clear. It says, post your own content and hope that someone with influence comes along, finds it and decides to share it. Imagine if I, as an author, took that advice. I write a book and I publish it on Amazon. Now, I just sit and wait for someone to find it and talk about it to their friends. I’m not supposed to put out a press release or give a copy to a blogger to read and review. That would violate this policy:

Buying or selling links that pass PageRank. This includes exchanging money for links, or posts that contain links; exchanging goods or services for links; or sending someone a “free” product in exchange for them writing about it and including a link

Am I reading that wrong? I can’t ask a blogger to link to my website or my book when they review it? That’s crazy.

I understand that Google is trying to weed out the spammers from the people with helpful content, but even good content needs a shove to get started. And as a reader, a blog post or press release without links is useless. I guess I’m supposed to copy and paste the relevant phrases into Google and search for the link myself. Bizarre.

I’m hoping that Google’s real intent here is to stop people from trying to cheat the system. That’s fine. But if they knock down my page rank because I link to the Amazon page for the DVD I just reviewed, I’m going to get really angry.

I’m not trying to manipulate the system, Google, I’m just trying to make a living.

What do you think? Is this anything to worry about?

Marketing Pilgrim – Internet News and Opinion


Google: Press Release Links

So, Google have updated their Webmaster Guidelines.

Here are a few common examples of unnatural links that violate our guidelines: … Links with optimized anchor text in articles or press releases distributed on other sites.

For example: There are many wedding rings on the market. If you want to have a wedding, you will have to pick the best ring. You will also need to buy flowers and a wedding dress.

In particular, they have focused on links with optimized anchor text in articles or press releases distributed on other sites. Google being Google, these rules are somewhat ambiguous. “Optimized anchor text”? The example they provide includes keywords in the anchor text, so keywords in the anchor text are “optimized” and therefore a violation of Google’s guidelines.

Ambiguously speaking, of course.

To put the press release change in context, Google’s guidelines state:

Any links intended to manipulate PageRank or a site’s ranking in Google search results may be considered part of a link scheme and a violation of Google’s Webmaster Guidelines. This includes any behavior that manipulates links to your site or outgoing links from your site

So, links gained for SEO purposes – intended to manipulate ranking – are against Google’s guidelines.

Google vs Webmasters

Here’s a chat

In this chat, Google’s John Mueller says that if the webmaster initiated it, then it isn’t a natural link. If you want to be on the safe side, John suggests using no-follow on the links.

Google are being consistent, but what’s amusing is the complete disconnect on display from a few of the webmasters. Google have no problem with press releases, but if a webmaster wants to be on the safe side in terms of Google’s guidelines, the webmaster should no-follow the link.

Simple, right? If it really is a press release, and not an attempt to build links for SEO purposes, then why would a webmaster have any issue with adding a no-follow to a link?

He/she wouldn’t.
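For webmasters who do want to follow that advice at scale, here is a small sketch of one way to add rel="nofollow" to every link in a press release’s HTML using Python and BeautifulSoup; the markup is invented for illustration:

```python
from bs4 import BeautifulSoup

# Invented press-release snippet for illustration.
html = """
<p>There are many <a href="http://example.com/rings">wedding rings</a> on the market.
You will also need to buy <a href="http://example.com/flowers">flowers</a>.</p>
"""

soup = BeautifulSoup(html, "html.parser")

# Mark every anchor as nofollow so it does not pass PageRank.
for link in soup.find_all("a"):
    rel = link.get("rel", [])
    if "nofollow" not in rel:
        link["rel"] = rel + ["nofollow"]

print(soup)
```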

But because some webmasters appear to lack self-awareness about what it is they are actually doing, they persist with their line of questioning. I suspect what they really want to hear is “keyword links in press releases are okay.” Then, webmasters can continue to issue pretend press releases as a link building exercise.

They’re missing the point.

Am I Taking Google’s Side?

Not taking sides.

Just hoping to shine some light on a wider issue.

If webmasters continue to let themselves be defined by Google, they are going to get defined out of the game entirely. It should be an obvious truth – but it’s sadly lacking in much SEO punditry – that Google is not on the webmaster’s side. Google is on Google’s side. Google often say they are on the user’s side, and there is certainly some truth in that.

However, when it comes to the webmaster, the webmaster is a dime-a-dozen content supplier who must be managed, weeded out, sorted, and categorized. When it comes to the more “aggressive” webmasters, Google’s behaviour could be characterized as “keep your friends close, and your enemies closer”.

This is because some webmasters, namely SEOs, don’t just publish content for users; they compete with Google’s revenue stream. SEOs offer a competing service to click-based advertising that provides exactly the same benefit as Google’s golden goose, namely qualified click traffic.

If SEOs get too good at what they do, then why would people pay Google so much money per click? They wouldn’t – they would pay it to SEOs, instead. So, if I were Google, I would see SEO as a business threat, and manage it – down – accordingly. In practice, I’d be trying to redefine SEO as “quality content provision”.

Why don’t Google simply ignore press release links? It would be easy enough to do. Why make the change public instead? After all, Google are typically very secretive about algorithmic topics, unless the topic is something they want you to hear. And why do they want you to hear this? An obvious guess is that it’s meant to undermine link building, and SEOs.

Big missiles heading your way.

Guideline Followers

The problem with letting Google define the rules of engagement is that they can define you out of the SEO game.

If an SEO is not following the guidelines – guidelines that are always shifting – yet claims they do, then they may be opening themselves up to legal liability. In one recent example, a case is underway alleging lack of performance:

Last week, the legal marketing industry was aTwitter (and aFacebook and even aPlus) with news that law firm Seikaly & Stewart had filed a lawsuit against The Rainmaker Institute seeking a return of their $49,000 in SEO fees and punitive damages under civil RICO

… but it’s not unreasonable to expect that a somewhat easier route for litigants in the future might be “not complying with Google’s guidelines”, unless the SEO agency disclosed it.

SEO is not the easiest career choice, huh.

One group that is likely to be happy about this latest Google push: legitimate PR agencies, media-relations departments, and publicists. As a commenter on WMW pointed out:

I suspect that most legitimate PR agencies, media-relations departments, and publicists will be happy to comply with Google’s guidelines. Why? Because, if the term “press release” becomes a synonym for “SEO spam,” one of the important tools in their toolboxes will become useless.

Just as real advertisers don’t expect their ads to pass PageRank, real PR people don’t expect their press releases to pass PageRank. Public relations is about planting a message in the media, not about manipulating search results

However, I’m not sure that will mean press releases are seen as any more credible – press releases have never enjoyed a stellar reputation, even pre-SEO – but it may thin the crowd somewhat, which increases an agency’s chances of getting their client seen.

Guidelines Homing In On Target

One resource referred to in the video above was this article, written by Amit Singhal, who is head of Google’s core ranking team. Note that it was written in 2011, so it’s nothing new. Here’s how Google say they determine quality:

we aren’t disclosing the actual ranking signals used in our algorithms because we don’t want folks to game our search results; but if you want to step into Google’s mindset, the questions below provide some guidance on how we’ve been looking at the issue:

  • Would you trust the information presented in this article?
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  • Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  • Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
  • Does the article provide original content or information, original reporting, original research, or original analysis?
  • Does the page provide substantial value when compared to other pages in search results?
  • How much quality control is done on content?

….and so on. Google’s rhetoric is almost always about “producing high quality content”, because this is what Google’s users want, and what Google’s users want, Google’s shareholders want.

It’s not a bad thing to want, of course. Who would want poor quality content? But as most of us know, producing high quality content is no guarantee of anything. Great for Google, great for users, but often not so good for publishers as the publisher carries all the risk.

Take a look at the Boston Globe, sold along with a boatload of content at a 93% decline in value. Quality content, sure, but is it a profitable business? Emphasis on content without adequate marketing is not a sure-fire strategy. Bezos has just bought the Washington Post, of course, and we’re pretty sure that isn’t a content play, either.

High quality content often has a high upfront production cost attached to it, and given measly web advertising rates, the high possibility of invisibility, and the risk of having that content scraped and ripped off, it’s no wonder webmasters also push their high quality content in order to ensure it ranks. What other choice have they got?

To not do so is also risky.

Even eHow, well known for cheap factory line content, is moving toward subscription membership revenues.

The Somewhat Bigger Question

Google can move the goalposts whenever they like. What you’re doing today might be frowned upon tomorrow. One day, your content may be made invisible, and there will be nothing you can do about it, other than start again.

Do you have a contingency plan for such an eventuality?

Johnon puts it well:

“The only thing that matters is how much traffic you are getting from search engines today, and how prepared you are for when some (insert adjective here) Googler shuts off that flow of traffic.”

To ask about the minutiae of Google’s policies and guidelines is, indeed, to miss the point. The real question is this: how prepared are you for when Google shuts off your flow of traffic because they’ve reset the goalposts?

This is a question of risk management. What happens if your main site, or your client’s site, falls foul of a Google policy change and gets trashed? Do you run multiple sites? Run one site with no SEO strategy at all, whilst you run other sites that push hard? Do you stay well within the guidelines and trust that will always be good enough? If you stay well within the guidelines, but don’t rank, isn’t that effectively the same as a ban, i.e. you’re invisible? Do you treat search traffic as a bonus, rather than the main course?

Be careful about putting Google’s needs before your own. And manage your risk, on your own terms.

SEO Book


Google: Links In Press Releases Are Unnatural Links & Should Be Nofollowed

Yesterday we reported on Google’s link schemes update, and later on I had the opportunity to ask Google’s John Mueller specific questions about this update and what it means.
My main concern was how specific the example given was on this one…


Search Engine Roundtable


PRWeb Led To Fake Press Release. How Should Google React?

Earlier this week, I spotted a press release pushed out by PRWeb about Google acquiring WiFi provider ICOA for $400 million.

I sent it to Search Engine Land’s editors to review, but after a second glance it seemed weird…




Search Engine Roundtable


Does free press release distribution help link building?


Conducting primary research in your market and sharing the results can be a very effective link building tactic – especially if the market you research is actually 'link building' itself. On Friday evening, I picked up a retweet from @wilreynolds which pointed me to a detailed review of free press release sites from Vitispr.com.

The study set out to answer six questions that are important to any link builder using free press release distribution sites as part of their link building efforts. Namely:

1. Do journalists and bloggers actually use these press releases for stories?
2. Would the release appear on Google News?
3. Would the release appear in a Google web search?
4. How easy are the free release sites to use?
5. Do free release sites help with link building?
6. Could the sites be used to help 'own' the search results for a targeted phrase?


There are myriad free press release distribution services: I've tried some before but haven't been at all impressed with the results. As Vitispr says, "what's the use of writing and issuing news if no-one covers it?" The all-important part of media relations is "talking to journalists, editors and bloggers to understand what kinds of story would interest them" – and then gearing your pitches to what they want. This is obviously something they feel strongly about, as they spent a month testing 60 sites with four real press releases and carefully monitoring the results.

Here's what they found:

  • None of the reviewed sites succeeded in reaching key influencers.
  • Three out of 60 sites managed to get the story on Google News – Online PR News was the best.
  • Some press releases did appear on page one of a Google search on a targeted phrase – PR Fire was most successful.
  • None of the releases were picked up by a source that would count as a valuable link.
  • However, three of the sites did result in 'low value' links.

The results show that most free press release distribution sites provide little value. However, Vitispr did not test the paid-for versions of these services. If you want to check them out for yourself, Vitispr provide details on all 60 sites tested.

Public relations and link building

I think the processes of traditional PR and link building have much in common:

  • both depend on building good relationships
  • success cannot be guaranteed for each individual pitch – it's a percentage game, so you've got to pitch a list of quality targets to get x% success
  • persistence and polite follow-up pays off
  • once you've been successful with one journalist or editor or site, then you can build on that relationship in the future.

I don't use free distribution services when I release stories, and though I do use premium paid-for services, they're not the main focus of our campaigns – more of a 'sweeping up' process.

The main focus of our campaigns in using online PR for publicity and link building is to build our own lists of individual journalists, editors, bloggers and experts, and to strengthen our relationships with them. You can read more about our approach in Drive sales and link for SEO through online PR.

It's a lot of work but it pays dividends long term. This study shows that there is no easy way to be successful in online PR – it takes creative thinking, preparation and execution.

This is a great piece of work from Vitispr because they really put the effort into a survey that is of great interest and benefit to link builders. Here's the link to the survey again – a detailed review of free press release sites.

Wordtracker Blog


