Strategy: Find the Right Lists
Maximize prospecting results by mailing during the holiday season. For example, assume a particular prospect list generates a response rate of 1.88 percent and $1.17 per catalog mailed (your revenue per catalog, or RPC) during the holiday season. Also assume your incremental break-even point is $1 per book. Mailed during the summer, this same list will generate a response rate of approximately 1.13 percent with an expected RPC of $0.70 per book, well below your incremental break-even point. So you can prospect to this list above the incremental break-even point during the holiday season, but if you use the same list in summer, expect an incremental loss.
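The seasonal comparison above can be sketched as a quick calculation. This is a minimal illustration using only the figures from the example; the variable names and the assumption that "mail if RPC exceeds break-even" is the decision rule are mine, not part of any specific circulation-planning tool.

```python
# Figures taken from the example above (hypothetical list data).
INCREMENTAL_BREAK_EVEN = 1.00  # break-even revenue per catalog mailed, in dollars

seasons = {
    "holiday": {"response_rate": 0.0188, "rpc": 1.17},
    "summer":  {"response_rate": 0.0113, "rpc": 0.70},
}

for season, stats in seasons.items():
    # Margin per catalog mailed: RPC minus the incremental break-even point.
    margin = stats["rpc"] - INCREMENTAL_BREAK_EVEN
    verdict = "prospect" if margin > 0 else "skip (incremental loss)"
    print(f"{season}: RPC ${stats['rpc']:.2f}, margin ${margin:+.2f} -> {verdict}")
```

Running this shows a positive $0.17 margin per book at holiday and a $0.30 loss per book in summer, which is the whole argument for timing the mailing.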
Should you test new lists or continue to use the proven winners? Always test new lists; continuously plant seeds to expand your prospecting universe. Out of 10 “new” test lists, two or three will be worthy of continuation. It’s really a matter of when to test, not necessarily what to test. I recommend testing new lists during your “best” season. If the holiday season, for example, represents the 100 percent baseline for the results you’ll achieve, test during the holiday season.
If you test new lists during the off season, chances are you’ll never roll out a single list because the results will not justify doing so. If response rates and the revenue per catalog mailed are maximized in October, test new lists in October. This becomes a true test and will yield the kind of results you can read and roll out with confidence.
Effective Rollout: An Example
Say you tested a list of 10,000 names for the first time and it generated $1.50 per catalog mailed, well above your $1 incremental break-even point. Assume the universe for this list is 100,000 names. On a remail, how many names should you take the next time? 50,000? 75,000? Or the full universe of 100,000? My rule of thumb is to double the quantity with each reuse: if 10,000 did well, retest 20,000 names, then go to 40,000, and so on. Mind you, the rollout will never perform at the same level as the initial test; there’s always some fallout as a result of statistical differences between the sample sizes.
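The doubling rule above can be sketched as a short schedule generator. This is a simple illustration of the rule of thumb, assuming each reuse doubles the previous quantity and the final mailing is capped at the list's universe; the function name is mine.

```python
def rollout_schedule(test_quantity, universe):
    """Double the mail quantity on each reuse, capped at the list universe."""
    schedule = []
    quantity = test_quantity
    while quantity < universe:
        quantity = min(quantity * 2, universe)  # never exceed the universe
        schedule.append(quantity)
    return schedule

# The example from the text: a 10,000-name test against a 100,000-name universe.
print(rollout_schedule(10_000, 100_000))  # -> [20000, 40000, 80000, 100000]
```

Note that each step remains small enough relative to the prior mailing that a disappointing result limits the loss, which is the point of doubling rather than jumping straight to the full universe.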