Three Mini-Rules of Thumb
In prior columns I’ve looked at several of the “big” rules of thumb in cataloging—truths large enough to fill a whole column with their ins and outs.
But this month, as a change of pace, I thought we’d take a look at three “mini” rules of thumb, each too small for a whole column, yet each important in its area, and well worth knowing for the cautious cataloger. So let’s get started.
Mini Rule of Thumb #1:
“List rollouts never perform as well as list tests.”
This little rule sounds like pure pessimism at first (“Ahhh, nothin’ ever works right.”) But it’s far more than that: List brokers have long been familiar with this frustrating rule, and so have careful catalogers.
In fact, this rule comes as a surprise primarily to new catalogers. It’s so counterintuitive that many newcomers simply refuse to accept it, until black-and-white results force them to rethink. (And if they don’t track their results very well, as many newcomers don’t, it can elude them for years.)
It’s an easy rule to apply in practice: If a list earns a 2-percent response on a test, you should project a lower response rate on rollout, perhaps 1.75 percent, perhaps less.
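The projection above is simple enough to sketch in a few lines. This is just an illustration of the arithmetic: the 0.875 discount factor is a hypothetical haircut that turns the column's 2 percent example into 1.75 percent, not an industry standard, and in practice you'd set it from your own tracked results.

```python
def project_rollout(test_response_pct, discount=0.875):
    """Project a rollout response rate from a list-test response rate.

    The discount factor is hypothetical; 0.875 reproduces the
    2.00% -> 1.75% example, but each cataloger should calibrate
    it against their own test-vs-rollout history.
    """
    return test_response_pct * discount

# A list that earned 2% on test projects to 1.75% on rollout.
print(project_rollout(2.0))  # -> 1.75
```

A more cautious mailer might keep several discount factors on hand (say, one for small tests and a steeper one for marginal lists) rather than a single number.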
What’s harder is to understand why it works this way. Nobody knows the exact reason, but here are two possibilities:
1. “Evil list managers.” This explanation darkly suggests that list managers provide “better” names on list tests, to encourage rental income, then deliver the “real” names on rollout. This explanation has a certain paranoid appeal to it, but it lacks credibility in practice—even if a list manager wanted to do this, he/she would have a hard time getting the data processing house to figure out how to make it happen.
2. “It’s all luck.” This explanation says that all lists are actually about average, so any unusually good (or bad) performance by a specific list on a specific mailing is mostly just a fluke, which on subsequent mailings will go away, letting the list’s true averageness shine through.
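The "it's all luck" explanation is, in statistical terms, regression to the mean: if you pick the best performer from a batch of noisy tests, its standout result is partly luck, and the luck doesn't repeat on rollout. A toy simulation makes this visible. All numbers here are made up for illustration; every simulated list has the same true 1 percent response rate, yet the apparent "winner" of the tests usually looks well above 1 percent, while its rollout falls back toward the true average.

```python
import random

random.seed(42)

TRUE_RATE = 0.01   # every list is secretly just average: 1% true response
N_LISTS = 50       # number of lists tested
TEST_SIZE = 5000   # names mailed per test

def mail(n, p):
    """Simulate mailing n names where each responds with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

# Test all the lists and pick the apparent winner.
test_results = [mail(TEST_SIZE, TRUE_RATE) for _ in range(N_LISTS)]
best = max(range(N_LISTS), key=lambda i: test_results[i])

# Roll out the "winner": same true rate, fresh names.
rollout = mail(TEST_SIZE, TRUE_RATE)

print(f"winning test: {test_results[best]:.2%}, rollout: {rollout:.2%}")
```

Running this, the winning test typically lands noticeably above 1 percent while the rollout hovers near it, which is exactly the pattern the rule of thumb describes, without any evil list managers involved.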