Is social media A/B testing worth it?

Over the last two years, data-driven social media strategy has taken off. Upworthy collects data to track the virality of their videos and chooses their Facebook headlines to maximize sharing. Advocacy groups like CREDO Action, the Sierra Club, and Planned Parenthood run A/B tests on their social media language to increase the number of people signing their petitions. And in some cases, these optimizations have led to massive returns.

But is social media A/B testing really worth it for the average organization? You hear stories about epic successes, but are those just the outliers, with the average test yielding paltry results? And can you ever hope to collect enough data if you don’t have a huge audience like Upworthy?

To help answer these questions, we’ve compiled data from 373 social media A/B tests of Facebook posts that different organizations have run with ShareProgress share pages. Here’s what we found.

Does social media A/B testing make a difference?

The first thing to look at is whether the typical social media A/B test actually yielded useful results. To gauge this, we looked at the average difference in success rate* between the original version of the Facebook post (the “control version”) and the other version that it was tested against (the “test version”).

The average difference between control and test versions across Facebook A/B tests was 33.1%, and the median difference was 20.8% (meaning that half of the tests had a difference of 20.8% or higher). To provide some context, a difference of 20.8% means that if the losing version would have engaged 10,000 people, the winning version would engage 12,080 people. That’s a pretty sizable improvement from a single A/B test.
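
To make the math concrete, here is a minimal sketch (in Python) of how the relative difference between two versions can be computed. The share and success counts are made up purely for illustration; they're chosen so the difference works out to roughly the 20.8% median described above.

```python
# Hypothetical example: computing the relative difference ("lift")
# between the control and test versions of a Facebook post.
# All counts below are made up purely for illustration.

control_shares = 4_000     # people who shared the control version
control_successes = 480    # control shares that drove at least one click-through

test_shares = 4_000        # people who shared the test version
test_successes = 580       # test shares that drove at least one click-through

control_rate = control_successes / control_shares  # 12.0%
test_rate = test_successes / test_shares            # 14.5%

# Relative difference, measured against the losing (lower) rate,
# matching the "10,000 vs. 12,080 people" framing above.
lift = abs(test_rate - control_rate) / min(control_rate, test_rate)

print(f"Control success rate: {control_rate:.1%}")
print(f"Test success rate:    {test_rate:.1%}")
print(f"Relative difference:  {lift:.1%}")  # ~20.8%
```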

But wait a minute: maybe the difference was only this large because people did a lousy job writing the alternate test version of the Facebook post, so the control version always won by a wide margin. As it turns out, though, the split of which version won was almost even: the control version performed better than the test version in just 55% of Facebook A/B tests. That's only slightly more than half, which lends credence to the theory that people who claim they can consistently predict what will do best on social media are either naive or lying.

So there are some serious gains to be had from social media A/B testing. But that brings us to our next question…

Do I have a large enough audience for it to work?

Generally, an A/B test is only useful if you can collect enough data to see a statistically significant difference between your control and test versions. If you don't have enough data, the version that looks like it's winning will often actually be the worse one. It's easy to get the data you need if you have tens of thousands of people sharing your content, but what if you only have a few thousand, or even just a few hundred?

To answer that, we looked at how many shares were needed to obtain results that were statistically significant with 95% confidence in past Facebook A/B tests run with ShareProgress. In order to reach statistical significance half of the time, you’d need to have 1,883 shares each for the control and test versions. And to reach statistical significance 10% of the time, only 203 shares were needed for each Facebook post variant.
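
For readers who want to run this kind of check on their own data, here is a minimal sketch of one standard way to test whether the difference between two success rates is statistically significant at 95% confidence: a two-proportion z-test. The counts are hypothetical, and this isn't necessarily the exact method ShareProgress uses; it just illustrates the general approach.

```python
# A minimal sketch of a two-proportion z-test for an A/B test on success
# rates. Hypothetical counts; not ShareProgress's actual methodology.

from math import sqrt, erfc

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the two-sided p-value for the difference between two proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled success rate under the null hypothesis that both versions perform the same.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return erfc(abs(z) / sqrt(2))

# Hypothetical test with roughly 2,000 shares per variant,
# in the neighborhood of the 1,883-share figure above.
p_value = two_proportion_z_test(successes_a=240, n_a=2_000,
                                successes_b=300, n_b=2_000)
print(f"p-value: {p_value:.4f}")
print("Significant at 95% confidence" if p_value < 0.05 else "Not significant")
```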

So, if you expect only 100 people to share your content, you're probably out of luck. But if you can get 500 people or more to share, there's a decent chance you'll be able to get conclusive results and substantial gains from running a social media A/B test.

Want to start doing social media A/B testing at your organization? Sign up for a free trial of ShareProgress today.

* “Success rate” is defined as the percentage of user Facebook posts that result in at least one person clicking the link and engaging on the organization’s website.

Written By

Jim Pugh
