Tuesday, 28 May 2013

More Thoughts on Split Testing - or Why I Think Internet Marketing is an Art, Not a Science

A month or so ago I wrote about a split test that I'd conducted with my newsletter and how it had left me feeling rather confused.

I was talking about this recently with a friend and he told me a similar story.  This man, like me, is into internet marketing.  He has several Twitter accounts that he uses to try to get people onto his mailing list.  Like many internet marketers, when someone follows him on Twitter, he sends a direct message offering them some e-books.  The message gives a link to his website where the visitor is invited to give his or her name and email address in exchange for the books.

Because the website link is quite long, he uses a link-shortening service.  A couple of months ago, it occurred to him that, if he were to use a different shortened link for each Twitter account, he could test different messages and see which got the most conversions.

He arbitrarily divided his accounts into three groups and wrote three different messages.  After a few weeks, he checked how many direct messages he'd sent out with each link and how many clicks each one had received.  The results were interesting, but not in the way he had been hoping.

In the first group, account A had a larger percentage of clicks than account B or account C, and the difference was statistically significant.  Similarly, in the second group, there was a statistically significant difference between the results of two of the accounts.  And yet, comparing the top converter in each group with the top converters in the other groups, the difference was not significant.  The same was true of the lowest converters.
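For readers wondering how 'statistically significant' is decided in a case like this, comparisons of click-through rates are commonly done with a two-proportion z-test.  My friend's actual figures aren't given, so here is a minimal sketch with made-up numbers:

```python
import math

def two_proportion_z(clicks_a, sent_a, clicks_b, sent_b):
    """Two-proportion z-test comparing two click-through rates.

    Returns the z statistic; |z| > 1.96 is significant at the
    5% level (two-tailed).
    """
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical example: account A gets 60 clicks from 1,000 messages
# (6%), account B gets 40 from 1,000 (4%).
z = two_proportion_z(60, 1000, 40, 1000)
print(abs(z) > 1.96)  # significant at the 5% level
```

Note that 'significant' here only means the gap is unlikely to be pure chance under the test's assumptions; as the rest of this post argues, it says nothing about why the recipients behaved as they did.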

So what are we to conclude from this?  We could reach for the quotation frequently attributed to Mark Twain about 'lies, damned lies and statistics' and say that 'statistically significant' doesn't necessarily prove anything.  Or we could say - as is certainly the case - that every single person who was sent a direct message was an individual and, as such, could not logically be expected to act in the same way as anyone else.

All we know is that the people who clicked the links were interested in internet marketing.  And we have to assume that a fair proportion of the people to whom the messages were sent were also interested, because they had decided to follow my friend.  But here we run into the first difficulty.  My friend follows about 30 new people a day.  He tries to pick people with similar interests but it's impossible to be sure.  Some of the people who follow him may just be following back because that's what they do, rather than out of interest.  We can assume that anyone who decides to follow him (without first having been followed) must do so because of interest in his tweets.  But, even here, we cannot be sure that they would want the e-books.  They may, themselves, be established marketers or they may have downloaded a large number of e-books and not have time or energy to read any more.  Or, of course, they may not read their direct messages.

Looked at like this, it's little wonder that the results were confusing.  And it's the same - indeed, it must be the same - in any situation where individual choice is concerned.  If 100 people watch a movie, it's unlikely that all of them will enjoy it equally.  If 90 per cent love it, it will be a runaway success . . . but the other 10 per cent may hate it. 

What all this has proved to me is the vital importance of not losing sight of our target audience.  What is it that they (or, rather, most of them) want?  Can we understand their motivation?  If they're not buying our products, is it necessarily because we've written a bad sales letter?  Or is it because they've already got a similar product, or can't see the need for it - or even that they haven't got the money available to spend on it?  Whether they open our emails and whether they buy our products depend on many factors, and I think we need to keep this in mind.  What works one day may not work the next.  Obsessing too much over split testing may, in the end, be counterproductive.

