What to Consider Before Finalizing Your 2011 Catalog Circulation Plan
Direct marketers love data, and direct marketing is a statistician's dream. Mailing millions of catalogs yields a rich pool of data to drive decision making. Direct marketers must learn to use statistics to analyze that data properly, and to recognize its pitfalls and shortcomings.
Statistics are the foundation of circulation planning. Knowing the truths and fallacies of your statistics can show you how to plan circulation. Circulation planning is based on these simple statistical truths:
- Past results from mailings are the best predictor of future results. How a specific mailing list or list segment performed in the past gives an accurate prediction of how it will respond the next time you mail it.
- The more recent the previous mailing, the more reliable the data. That's why a record of how a list has performed over time is the basic metric for planning future circulation.
- Mail lists that responded above breakeven in the past; don't mail lists that responded below breakeven. When you prospect for new customers at or above breakeven, find ways to make your profitable lists more profitable and mail those rewarding lists more frequently. Mail deeper into your profitable lists so that zero- to 12-month buyers respond above breakeven. Test mailing to buyers older than 12 months, as well as mailing stronger offers to your more lucrative lists.
- Identify households that aren't responsive and suppress them. Catalogers rely on co-op databases that aggregate transactions from thousands of catalogs. These databases can tell you which households have stopped buying. If a household isn't buying from any other catalogs, it's not going to buy from yours. Suppressing nonresponsive households saves the cost of printing, paper and postage (roughly 50 cents to $1.00 for each household you don't mail).
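The breakeven logic and suppression savings above come down to simple arithmetic. Here is a minimal sketch; the per-piece cost and per-order margin are hypothetical assumptions for illustration, while the 50-cent-to-$1.00 suppression range comes from the article:

```python
# Hypothetical illustration of the breakeven arithmetic described above.
# cost_per_piece and profit_per_order are assumed example figures,
# not numbers from the article.

def breakeven_response_rate(cost_per_piece, profit_per_order):
    """Response rate at which a mailing exactly covers its cost."""
    return cost_per_piece / profit_per_order

cost_per_piece = 0.75    # print, paper and postage per catalog (assumed)
profit_per_order = 60.0  # contribution margin per order (assumed)

rate = breakeven_response_rate(cost_per_piece, profit_per_order)
print(f"Breakeven response rate: {rate:.2%}")

# Savings from suppressing nonresponsive households, using the
# article's 50-cent-to-$1.00 cost range per unmailed piece:
suppressed = 20_000  # hypothetical count of suppressed households
print(f"Suppression savings: ${suppressed * 0.50:,.0f} to ${suppressed * 1.00:,.0f}")
```

A list segment responding above this breakeven rate earns its place in the mail plan; one below it does not.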
These statistics drive circulation planning. However, you can easily get tripped up in analyzing the vast sea of data that's available. Here are some things to consider when analyzing and testing catalog data:
What sample size do you need, and can you afford to test whether a list will yield a profitable response? Mailers typically test with a sample size of 5,000 or 10,000 mailing pieces. With response rates averaging 1 percent, a test of 5,000 pieces would yield 50 orders. If a piece costs 50 cents to print and mail, a test of 5,000 pieces costs $2,500. Smaller samples yield so few orders that it's difficult to project how a bigger mailing of the same list will perform. List brokers will often suggest much larger tests of unproven mailing lists, but a test of 50,000 names costs $25,000. One of the keys to profitable mailing is learning as much as possible as cheaply as possible.
What should the sample size be? One consideration is the cost of running a test. Most mailers can’t afford to bet a Volvo to test a single mailing list. Figure out the smallest test possible that will yield good data on whether a list will respond profitably. If a catalog costs $1.00 for print and postage, mailing 30,000 may be a great sample size, but it costs $30,000. Would a test of 5,000 be adequate?
You might be able to test a mailing list using a sample mailing of 2,500 names, knowing that you'll be able to read if the list has any potential, but also knowing the test is too small to gauge how responsive a list is going to be.
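A quick way to see why small tests only reveal "potential" is to put a confidence interval around the observed response rate. The sample sizes and the 1 percent response rate come from the article; the normal-approximation interval is a standard statistical treatment, not the author's method:

```python
# Sketch of how test-size affects confidence in a response rate,
# using a normal-approximation 95% confidence interval.
import math

def response_ci(orders, mailed, z=1.96):
    """Approximate 95% confidence interval for a test's response rate."""
    p = orders / mailed
    se = math.sqrt(p * (1 - p) / mailed)
    return p - z * se, p + z * se

for mailed in (2_500, 5_000, 50_000):
    orders = round(mailed * 0.01)  # assume a 1% observed response
    lo, hi = response_ci(orders, mailed)
    print(f"{mailed:>6} pieces: {lo:.2%} to {hi:.2%}")
```

At 2,500 pieces the interval spans roughly half a point to a point and a half, which is the difference between an unprofitable list and a strong one; at 50,000 pieces the estimate tightens considerably. That is the trade-off between test cost and test certainty.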
Catalogers use “stage prove methodology” to test larger and larger quantities. The rule of thumb is if you have a successful test of 5,000 names, then you can increase your test quantity to 15,000. If the test of 15,000 proves profitable, increase the test size again by a factor of three to, say, 45,000 or 50,000. What you should never do is take a small successful test of 5,000 names and roll it out to 100,000 names or 250,000 names without an intermediate test. The increase in quantity is too risky. It's always preferable to have several data points when you want to increase the size of a successful mailing list.
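The tripling rule above can be sketched as a simple rollout schedule. The starting quantity and number of stages are example assumptions; the roughly-triple-per-stage factor is the article's rule of thumb:

```python
# Sketch of the "stage prove" rollout rule described above: roughly
# triple the mailing quantity after each successful test stage.

def rollout_schedule(start=5_000, factor=3, stages=4):
    """Return the test quantity at each stage of a rollout."""
    schedule = []
    qty = start
    for _ in range(stages):
        schedule.append(qty)
        qty *= factor
    return schedule

print(rollout_schedule())  # [5000, 15000, 45000, 135000]
```

Note that the jump the article warns against, 5,000 straight to 100,000 or 250,000, skips two intermediate data points in this schedule.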
Why a list is only as good as its last mailing. First, lists get fatigued as they're mailed over and over because you've successfully harvested buyers from the lists. Second, economic conditions were much different in the past. Mailers have had to either heavily discount or throw out results from old tests conducted during the downturn in response that was caused by the recession. The economic downturn rendered many old tests almost meaningless.
Why future tests get worse. With initial and small tests, the list owners providing the names may give you the best names possible so that you’ll continue to use the names. You may not get the very best names when you recycle a list. Lists rarely, if ever, do better than they have responded in the past.
If a list has performed poorly in the past, it will probably perform poorly in the future. The list industry is designed to sell mailing lists. List owners and brokers will often encourage mailers to retest a list because “it should work.” Be very conservative in testing lists that have responded poorly in past tests. There's a bias toward mailing more rather than mailing less because all suppliers (e.g., printers, list brokers, company management) want growth.
When a promotion ends, expect your sales from that catalog to also end. Catalogs deliver sales over a long period of time, and this order curve is usually a very stable number. Today's catalogs, however, rely increasingly on promotions to drive sales. If a promotion expires before the end of a catalog's natural order curve, sales will also be cut off and the normal order curve won’t apply.
Don’t use an early expiration date of a promotion to pull sales forward. Testing shows that while you can pull some sales forward from customers wanting to use a promotion, you’ll also lose sales that you would have normally gotten. The net effect of an early expiration of a promotional offer will be fewer sales than if the promotion ended at the close of a catalog’s normal order curve.
So how do you gain the experience to spot the statistical flaws that drag down mailing results? Marketers usually learn the blind spots in their data by making mistakes and learning from them. Better yet, learn from the leaders in the catalog industry who have already made these mistakes.
Jim Coogan is president of Catalog Marketing Economics, a Santa Fe, N.M.-based consulting firm focused on catalog circulation planning. Reach Jim at (505) 986-9902 or jcoogan@earthlink.net.