4 Tips for Guinea Pig Marketing*

by Eloqua on September 29, 2010 in Lead Nurturing

* No animals were harmed in the making of this blog post.

Guinea Pig Marketing (as I like to call it) is commonly referred to as A/B or multivariate testing. It’s something I’ve always hailed as the holy grail of online marketing but only ever appreciated from afar…until recently.

Why? Automated Nurturing.

In my experience, lead nurturing is different from programs, events, email communications and direct mail. It is built once, automated, and lives on as the de facto set of communications for nearly EVERY person our company reaches. In the last six months alone we’ve automated 1.18 million emails and generated 30% of marketing-sourced deals! When you build something that this many of the world’s smartest marketers are going to see, you want to get it right. So, it was time for this self-proclaimed “creative/analytical” marketer to learn about multivariate testing (see wikipedia‘s definition) and apply the methods that are considered best practice among website and SEO gurus to Eloqua’s lead nurturing programs.

I googled how to calculate relevant sample sizes, questioned how to set controls, agonized over what to test, bothered co-workers, and struggled over how to pick my guinea pigs. The biggest challenge, however, was the lack of useful guidelines for marketers (you know, non-PhD types), so I wanted to share a few tips I developed to help build and prioritize an email testing plan.

4 Tips for Guinea Pig Marketing:

1) Use marketing data to prioritize the elements you will test. Response Reports showed that although we provided webinars and whiteboards for download in one offer, our audience was almost 2x more likely to view only the whiteboard. So, I’ll test the whiteboard as the ONLY call to action. And although I eventually want to test things like email design, the data tells me that I can have an immediate impact on response by understanding which offers and calls to action increase conversions.
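The “2x more likely” comparison comes straight out of a response report: divide each asset’s view count by total sends and compare the rates. A minimal Python sketch, with made-up counts (the post doesn’t publish the real numbers):

```python
# Hypothetical response-report counts; the actual figures aren't in the post.
responses = {"webinar": 420, "whiteboard": 810}
total_sends = 10_000

# Convert raw counts to response rates, then compare assets head to head.
rates = {asset: count / total_sends for asset, count in responses.items()}
lift = rates["whiteboard"] / rates["webinar"]

print(f"Whiteboard viewed {lift:.1f}x as often as the webinar")
```

Whichever asset shows the higher rate becomes the first call to action worth testing.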

2) Set a control group and a test group. One challenge I had was that each nurturing program has a different set of profile criteria for entry, so I wasn’t sure how to replicate this audience outside of the program itself. I got around this by creating a control and test group for each test variable. For example, in the test above, I took 10,000 contacts, split them into two groups, and will run the offer exactly as it appears in the automated program to Group A, and the variable test to Group B. This way I can compare responses between these two like groups instead of comparing against the program members with different profiles.
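A random split like the one described can be sketched in a few lines of Python (the contact IDs and seed are illustrative; an export from any marketing-automation system would work the same way):

```python
import random

def split_control_test(contacts, seed=42):
    """Randomly split contacts into an equal control group (A) and test group (B)."""
    shuffled = contacts[:]                 # copy so the original order is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# 10,000 contact IDs -> two groups of 5,000 with no overlap
contacts = [f"contact_{i}" for i in range(10_000)]
group_a, group_b = split_control_test(contacts)
```

Randomizing before the split is what keeps the two groups “like groups” rather than, say, alphabetical or chronological slices.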

3) Make sure your control and test groups are similar to each other. At Eloqua our audience is sales and marketing professionals, but as you may imagine, CMOs respond much differently than Marketing Managers do, so I wanted to make sure that my sample group of 10,000 contacts wasn’t skewed in favor of any specific job title or role level. Luckily, our data can be easily profiled by Normalized Title, which allowed me to spot-check my sample group and verify that there was an even distribution of job titles and role levels.
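The spot check described here amounts to counting the Normalized Title values in each group and eyeballing the shares. A rough Python sketch, using a hypothetical `normalized_title` field:

```python
from collections import Counter

def title_distribution(contacts):
    """Share of each normalized job title in a sample, for spot-checking skew."""
    counts = Counter(c["normalized_title"] for c in contacts)
    total = len(contacts)
    return {title: n / total for title, n in counts.items()}

# Hypothetical sample: 10% CMOs, 90% Marketing Managers
sample = ([{"normalized_title": "CMO"}] * 50
          + [{"normalized_title": "Marketing Manager"}] * 450)
dist = title_distribution(sample)
```

Running this over both Group A and Group B and comparing the resulting shares is enough to catch a sample skewed toward one title or role level.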

4) Don’t overthink it. At the end of the day – as with most things in marketing – it’s always better to get started somewhere and learn as you go. I was initially so overwhelmed with the idea of making sure everything was done scientifically that I found it hard to make decisions. When I gave myself permission to not become multivariate tester of the year, I found I was able to build the plan much faster. I know I’ll learn from each test I run so I’m executing 1-2 a week in order to be flexible and adjust based on my learnings.

I begin my testing scenarios next week and am looking forward to the results – wish me luck! I’ll share my findings in a sequel blog post. And, if you have any great testing tips or resources, please do share.


Comments


  • http://www.Zephyr47.com Brian Hansford

    Excellent blog Amber!

    2 Questions – based on your experience or research, what % of an overall database would you recommend for your test and control groups?

    Secondly, with your testing how much do you tweak layouts, design, graphics? I run into some agencies that seem to focus almost entirely on the graphical layout and less so on the content. Like you said, don’t overthink it. But I was wondering if there are 2 or 3 factors you recommend focusing on in each test (i.e. offers, call to action, layout, etc.).

    -Brian Hansford
    Zephyr 47

    • http://www.eloqua.com/ Amber Stevens

      Hi Brian –

      Those are two great questions – thanks for posting.

      Calculating the sample size can be pretty loaded – and it really depends on what you’re testing and testing for.

      The mathematical formula for calculating sample size looks a little something like this:

      Sample Size Needed = Z² × P(1 − P) / I²

      I approached it in a slightly less scientific manner. In my situation, I was testing a specific OFFER against a specific AUDIENCE. For the most part, my audience was defined by their job title, and my goal was to understand how other people in the same “job title buckets” would respond to the same offer (and then a variation of the offer). To set my sample size, I identified the total audience in our contact database for any given job title – say CMOs, for example. Once I had that number, I pulled a group of CMOs that represented ~10% of the total CMO population I had access to. I believe typical sample sizes are smaller than 10%, but my guideline was whether or not I’d feel confident making a change across the board based on the number of anticipated responses I’d receive. CMOs are a pretty small, elite segment of our database, so I felt a 10% sample would reflect the entire population pretty accurately. The bigger the total audience gets, the more variables are likely introduced into their profiles (type of company/industry, size, etc.) that could impact their behavior. For a broader role like “Marketing Manager”, I used a larger sample group.
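The textbook formula quoted above – n = Z² × P(1 − P) / I² – can also be computed directly. A small Python sketch with conventional defaults (a 1.96 z-score for 95% confidence, a conservative P of 0.5, and a ±5% margin of error; the function name and defaults are illustrative):

```python
import math

def sample_size(z=1.96, p=0.5, interval=0.05):
    """n = Z^2 * P(1 - P) / I^2

    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: expected response proportion (0.5 is the most conservative guess)
    interval: acceptable margin of error
    """
    return math.ceil(z ** 2 * p * (1 - p) / interval ** 2)

# 95% confidence with a +/-5% margin of error
n = sample_size()
```

With those defaults the formula calls for roughly 385 contacts, which is why percentage-of-database rules of thumb like the 10% above tend to be more than enough for small segments.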

      I think of testing in 3 categories: Creative, Content, Audience. All 3 can yield significant lift and they’re all equally important. I think agencies tend to focus on creative because that’s the part of the process they tend to own. Here are a couple of areas I’d suggest testing in each of these categories.

      1) CREATIVE
      - Text-only versus graphic-heavy HTML. (We’ve found that for webinars, sending the second email as text only from the sales rep has a positive impact.)
      - Button vs. text link only.
      - Sender – should it come from sales, marketing or the CEO? Consider the tone and whether or not a peer-to-peer focus would benefit the message.

      2) CONTENT
      - Subject lines! We’ve found including the type of offer (webinar, video, ebook) in the subject line can increase open rates significantly.
      - Length of copy – how much must you explain? In this testing scenario, I realized that when offering a demo, too much explanation of what they’re going to learn is a click-through killer!

      3) AUDIENCE – Think of engagement level, how long they’ve been in your database, job titles, industries, company size, etc.

      Hope this helps, and if you ever want to swap ideas directly, I’m at amber.stevens@eloqua.com

