August 11, 2008

Why Copy Testing Is Bogus

"My friend shot a 158," Joe said.
“That’s the worst round of golf I’ve ever heard of,” said Mike.

“No it’s not. It’s a great round of golf,” said Joe.

“You’re crazy,” Mike said. “How can you say that?”

“He only has one arm.”


I made up that little story to make a point. The point is about copy testing, and we'll get back to it in a minute.

Copy testing is despised in the ad industry for a number of good reasons. The MBAs hate it because it's unreliable and doesn't often correlate with real-world effectiveness. The creatives hate it because it rarely rewards non-linear thinking.

I hate it for a whole other reason. Having once been a science teacher, I have a healthy regard for the difference between science and bullshit. And copy testing ain't science.

Copy tests almost always fail to include controls. Controls are the one essential of all experimentation. Research without controls is like baseball without an umpire.

A control serves two purposes:
  1. To make sure you’re studying what you think you’re studying.
  2. To give you a factual basis for comparison.

A perfect example of a control is the placebo in medical research. When studying a new drug, scientists always give some of the subjects sugar pills. Sugar pills have no pharmacological effect. If the drug being tested seems to provide benefits, but those benefits are no greater than the benefits of taking the sugar pills, the scientists know the new drug is worthless. The apparent benefits are not the result of the drug, but simply of giving the subject a pill, any pill.

By testing a known quantity (the control/placebo) along with the unknown quantity (the test drug), you have a factual basis for comparison. A scientist who did a study without controls would be laughed out of any lab in the world.
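
To make that concrete, here's a little sketch in Python (the improvement rates, sample size, and threshold are all invented purely for illustration) of why the placebo arm matters. The drug's raw score means nothing on its own; only the comparison against a control measured under the same conditions tells you anything.

    import random

    random.seed(0)

    def trial_outcome(base_rate, n):
        """Simulate the fraction of n subjects who report improvement."""
        return sum(random.random() < base_rate for _ in range(n)) / n

    # Hypothetical rates: taking any pill produces some apparent benefit
    # (the placebo effect); this drug secretly adds nothing on top of it.
    PLACEBO_RATE = 0.30
    DRUG_RATE = 0.30

    drug_score = trial_outcome(DRUG_RATE, n=500)
    placebo_score = trial_outcome(PLACEBO_RATE, n=500)

    # Without a control, a ~30% improvement rate looks like success.
    print(f"Drug alone: {drug_score:.0%} improved (looks impressive)")

    # With a control, the comparison tells the real story.
    print(f"Placebo:    {placebo_score:.0%} improved")
    if drug_score - placebo_score < 0.05:  # arbitrary threshold for this sketch
        print("No benefit beyond giving the subject a pill, any pill.")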

Most copy tests do not use controls. They use norms. It’s important to understand that norms are not controls. A norm is just a historical average compiled from past tests of other ads; it cannot tell you how ads in your category perform at this time, with these people, under these circumstances.

In order to evaluate a new toothpaste campaign, for example, an advertiser needs to test it along with a campaign in the toothpaste category that is known to have done well in the real world. This known campaign should be tested with the same people, at the same time, under the same circumstances as the campaign in question. Only then can you know what success will look like.

Without that, if you get a lousy result, you don’t know why. Is the new campaign lousy? Or was the tester irritating? Or were the M&M's stale? Or would any ad about toothpaste do poorly at this time, with these people, under these circumstances?

If the control campaign did well at the same time under the same conditions, then you know the new campaign must be lousy. But if the control campaign also tested poorly, then maybe ads about toothpaste only have one arm.
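
If you like, that whole decision rule fits in a few lines of Python. Everything here, the function name, the scores, the thresholds, is made up for illustration; the point is only that the verdict on the new campaign depends entirely on how a real control performed with the same people at the same time.

    def judge_campaign(test_score, control_score, did_well=0.5, margin=0.05):
        """Interpret a copy test that included a known-good control tested
        with the same people, at the same time, under the same circumstances."""
        if control_score < did_well:
            # Even the proven campaign bombed, so blame the circumstances
            # (the irritating tester, the stale M&M's), not the new work.
            return "Inconclusive: the whole test environment is suspect."
        if test_score < control_score - margin:
            return "The control did fine, so the new campaign really is lousy."
        return "The new campaign held its own against a proven one."

    # A norm is just a historical average from other tests; it can't play
    # the control's role because it wasn't measured under these circumstances.
    print(judge_campaign(test_score=0.30, control_score=0.75))
    print(judge_campaign(test_score=0.30, control_score=0.35))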


By The Way:
One reason direct marketing people think of us as idiots is that when they test, they always use controls.
