If you've tried your hand at conversion rate optimization, you're probably familiar with many of the possible mistakes you could make.

Plenty of marketers have written dire warnings about the tragic oversights and costly blunders that demolished their conversion rates.

It might leave you feeling a bit nervous, especially if you're new at conversion rate optimization.

Sure, there are plenty of mistakes you can make in CRO. That's true in any area of marketing.

But there are some mistakes that are totally fine to make.

Yep, you heard me right. I'm giving you a free pass on a handful of mistakes.

What kinds of mistakes? Next time you're feeling jittery about your conversion rate efforts, or you just need a little encouragement, here's some friendly advice -- three mistakes that are actually just fine.

1. You are taking "way too much time" to set up your test.

Many CROs are known to repeat the mantra, "Always be testing."

It's great advice, but it might make you feel guilty if you're not testing for some reason.

I think it's much better to plan and prepare for a test than to rush heedlessly into it.

Here's how to prepare for an accurate and reliable split test.

  • Make sure you've run an A/A test.
  • Decide exactly where you will test (e.g., landing page, home page, CTA).
  • Determine exactly what you will test (e.g., copy, image).
  • Develop a good variation, and enlist the help of a designer, UX specialist, or copywriter if necessary.
  • Create a hypothesis of what will happen.
  • Determine the start date/time and the end date/time.

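If it helps to keep yourself honest, you can even write that checklist down as a simple record before you flip the switch. Here's a minimal sketch in Python -- the field names and example values are mine, not from any particular testing tool:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestPlan:
    ran_aa_test: bool   # sanity check: did an A/A test come back flat?
    location: str       # where you'll test (landing page, home page, CTA...)
    element: str        # what you'll test (copy, image...)
    variation: str      # short description of the challenger
    hypothesis: str     # what you expect to happen, and why
    start: datetime     # planned start date/time
    end: datetime       # planned end date/time

# Hypothetical example plan
plan = TestPlan(
    ran_aa_test=True,
    location="landing page",
    element="headline copy",
    variation="benefit-led headline instead of feature-led",
    hypothesis="Leading with the benefit will lift signups",
    start=datetime(2016, 4, 4, 9, 0),
    end=datetime(2016, 4, 18, 9, 0),
)
print(plan.hypothesis)
```

Writing the plan down before the test starts keeps you from quietly moving the goalposts once the numbers start rolling in.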
That can take a lot of time. But it's time well spent. As important as it is to be testing on a regular basis, it is more important to be running the right tests on a regular basis -- tests that are reliable, careful and intentional.

Take as much time as you need to.

2. You ended a test too early.

If you're familiar with the CRO literature, then you've read this: "Don't stop your A/B test part-way through!"

Now, this advice is both good and bad.

It's good, because stopping your A/B test part-way through will give you skewed results. For a test to be reliable -- that is, for you to call a winner and change your website with confidence -- the test must reach statistical significance.

How do you achieve statistical significance? By using the right baselines, by testing the right sample size and by allowing your test to run long enough.
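If you want to check significance yourself rather than trust a dashboard, the standard approach for conversion rates is a two-proportion z-test. Here's a minimal sketch using only Python's standard library (the numbers in the example are made up for illustration):

```python
from math import sqrt, erfc

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B test.

    conv_a, conv_b: conversions for control and variation
    n_a, n_b: visitors for control and variation
    Returns (z statistic, p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))           # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical data: 200/10,000 conversions on control, 260/10,000 on the variation
z, p = ab_significance(200, 10_000, 260, 10_000)
print("significant at 95%" if p < 0.05 else "keep running")
```

Note that a significant p-value only means something if you decided the sample size and run length up front -- peeking at this number every day and stopping the moment it dips below 0.05 is exactly the early-stopping mistake described above.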

But sometimes you come up with a useless test and just want to scrap it. Early. Quickly.

So, I'm telling you to go ahead and end the test as soon as you realize it's screwed up. Why? Because the less time you spend running a useless test, the more time you have to run a useful one.

And then what?

Throw out your results. They are inaccurate, irrelevant and misleading.

3. You are only running one test at a time.

The split testing "experts" might claim to be running "dozens of tests at a time!"

But what about little ol' you? You're only running one test at a time.

Should you be running multiple tests simultaneously? Is it important to have dozens of tests at a time? Are you making a mistake?

Running one test at a time is just fine!

In fact, as active and as high-traffic as my own sites are, I try to limit my testing to one test at a time.

  • Running one test at a time helps you focus your efforts.
  • Running one test at a time prevents confusion from multiple overlapping variables.
  • Running one test at a time prevents lost revenue from lowered conversion rates when several losing variations run at once.
  • Running one test at a time allows you to keep that test running long enough to achieve statistical significance.
  • Running one test at a time lets you improve the quality of each individual test, rather than gambling on sheer quantity of tests.
  • Running one test at a time gives you the opportunity to learn as much as you can about your website, your audience and your conversion potential from each test.

Obviously, I'm not a big fan of the throw-enough-mud-at-the-wall approach to split testing.

Slow and steady wins the optimization race, so don't at all feel bad if you're making the "mistake" of running a single test at a time.


Split testing is one of the best things you can do for your website.

  • Split tests allow you to learn about your audience.
  • Split tests allow you to learn about your website.
  • Split tests allow you to uncover gems of knowledge in customer psychology.
  • Split tests allow you to iteratively improve your web experience.
  • Split tests allow you to improve your website using data.

Split testing furnishes you with benefits and tangible improvements that you couldn't achieve any other way.

I wouldn't advise you to purposely make stupid mistakes. But if you're making the three "mistakes" I listed above, you're probably doing just fine.

Don't sweat it. Just keep testing.

What "mistakes" -- good or bad -- have you made with your split testing?

Published on: Mar 31, 2016
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.