4 Tips for Guinea Pig Marketing*

by Michelle McGinnis on Wednesday, September 29, 2010 in Lead Nurturing

* No animals were harmed in the making of this blog post.

Guinea Pig Marketing (as I like to call it) is more commonly known as A/B testing (or, when you test several variables at once, multivariate testing). It's something I've always hailed as the holy grail of online marketing but only ever appreciated from afar...until recently.

Why? Automated Nurturing.

In my experience, lead nurturing is different from programs, events, email communications and direct mail. It is built once, automated, and lives on in infamy as the de facto set of communications for nearly EVERY person our company reaches. In the last six months alone we've automated 1.18 million emails and generated 30% of marketing-sourced deals! When you build something that this many of the world's smartest marketers are going to see, you want to get it right. So, it was time for this self-proclaimed "creative/analytical" marketer to learn about multivariate testing (see Wikipedia's definition) and apply the methods that website and SEO gurus consider best practice to Eloqua's lead nurturing programs.

I googled how to calculate relevant sample sizes, questioned how to set controls, agonized over what to test, bothered co-workers, and struggled over how to pick my guinea pigs. The biggest challenge, however, was the lack of useful guidelines for marketers (you know, non-PhD types), so I wanted to share a few tips I developed to help build and prioritize an email testing plan.
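For anyone who'd rather not google the sample-size math themselves, here's a minimal Python sketch of the standard two-proportion formula (95% confidence, 80% power). The conversion rates in the example are made up for illustration, not numbers from our programs:

```python
import math

def sample_size_per_group(p1, p2):
    """Contacts needed in EACH group to reliably detect a lift
    from conversion rate p1 to conversion rate p2.

    Uses the standard two-proportion sample-size formula with
    z-values hardcoded for 95% confidence and 80% power.
    """
    z_alpha = 1.96    # two-sided test at alpha = 0.05
    z_beta = 0.8416   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Example: detecting a lift from a 5% to a 7% click-through rate
n = sample_size_per_group(0.05, 0.07)
```

The takeaway: small expected lifts require surprisingly large groups, which is why it pays to run the math before picking your guinea pigs.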

4 Tips for Guinea Pig Marketing:

1) Use marketing data to prioritize the elements you will test. Response Reports showed that although we provided webinars and whiteboards for download in one offer, our audience was almost 2x more likely to view only the whiteboard. So, I'll test the whiteboard as the ONLY call to action. And although I eventually want to test things like email design, the data tells me that I can have an immediate impact on response by understanding which offers and calls to action increase conversions.

2) Set a control group and a test group. One challenge I had was that each nurturing program has a different set of profile criteria for entry, so I wasn't sure how to replicate this audience outside of the program itself. I got around this by creating a control and test group for each test variable. For example, in the test above, I took 10,000 contacts, split them into two groups, and will run the offer exactly the way it appears in the automated program to group A, then send the test variant to group B. This way I can compare responses between these two like groups instead of comparing against program members with different profiles.
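If you'd rather script the split than eyeball it, here's a minimal Python sketch: shuffle the contact list with a fixed seed (so the split is repeatable) and cut it in half. The contact IDs are placeholders, not anything pulled from Eloqua:

```python
import random

def split_control_test(contacts, seed=42):
    """Shuffle contacts deterministically, then split into two
    equal halves: (control group A, test group B)."""
    pool = list(contacts)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Example: split 10,000 placeholder contact IDs into A and B
contacts = [f"contact_{i}" for i in range(10_000)]
group_a, group_b = split_control_test(contacts)
```

Random assignment is what makes the comparison fair: each contact has the same chance of landing in either group, so differences in response can be attributed to the variant rather than to who got which email.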

3) Make sure your control and test groups are similar to each other. At Eloqua our audience is sales and marketing professionals, but as you may imagine, CMOs respond much differently than Marketing Managers, so I wanted to make sure that my sample group of 10,000 contacts wasn't skewed in favor of any specific job title or role level. Luckily, our data can be easily profiled by Normalized Title, which allowed me to spot check my sample group and verify that there was an even distribution of job titles and role levels.
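That spot check can be automated too. Here's a minimal Python sketch (the title values and threshold are illustrative assumptions, not our actual data): it compares each title's share of group A against its share of group B and reports the biggest gap.

```python
from collections import Counter

def title_skew(titles_a, titles_b):
    """Return the largest absolute difference in job-title share
    between two groups, e.g. 0.02 means a 2-point gap somewhere."""
    share_a = Counter(titles_a)
    share_b = Counter(titles_b)
    all_titles = set(share_a) | set(share_b)
    return max(abs(share_a[t] / len(titles_a) - share_b[t] / len(titles_b))
               for t in all_titles)

# Example: two 100-contact groups with nearly matched title mixes
group_a_titles = ["CMO"] * 50 + ["Marketing Manager"] * 50
group_b_titles = ["CMO"] * 48 + ["Marketing Manager"] * 52
skew = title_skew(group_a_titles, group_b_titles)  # 0.02, i.e. a 2-point gap
```

If the skew comes back larger than you're comfortable with (a couple of points, say), reshuffle and re-split before sending anything.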

4) Don't overthink it. At the end of the day - as with most things in marketing - it's always better to get started somewhere and learn as you go. I was initially so overwhelmed with the idea of making sure everything was done scientifically that I found it hard to make decisions. When I gave myself permission to not become multivariate tester of the year, I found I was able to build the plan much faster. I know I'll learn from each test I run so I'm executing 1-2 a week in order to be flexible and adjust based on my learnings.

I begin my testing scenarios next week and am looking forward to the results - wish me luck! I'll share my findings in a follow-up blog post. And, if you have any great testing tips or resources, please do share.
