Are Matchbacks Accurate?
Matchbacks are the established best practice in direct marketing, and for good reason. The concept is straightforward: you mail a catalog, or send an email marketing message, to a customer on Monday. On Friday, the customer visits your website and purchases merchandise.
Twelve years ago, we would credit the website with generating the order. Six years ago, we used matchback routines to credit a catalog or email campaign with the order.
What should we be doing today?
Increasingly, direct marketers are employing three-month, six-month and 12-month holdout groups.
Here’s how the strategy works. A catalog brand selects 40,000 customers who have purchased in the past year and have valid email addresses. Customers are randomly assigned to one of four test segments. The first segment may receive both catalog and email marketing whenever the customer qualifies for a campaign. The second segment may receive only catalog marketing campaigns. The third segment may receive only email marketing campaigns. The fourth segment, the holdout, receives no catalog or email marketing at all.
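The random assignment described above can be sketched in a few lines of Python. The segment names and the fixed seed are illustrative, not part of any particular brand's setup:

```python
import random

def assign_segments(customer_ids, seed=42):
    """Randomly split customers into four equal test segments.

    Segment names mirror the design described above: both channels,
    catalog only, email only, and a full holdout.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(customer_ids)
    rng.shuffle(ids)
    segments = ["catalog_and_email", "catalog_only", "email_only", "holdout"]
    # Deal customers round-robin into the four segments
    return {seg: ids[i::4] for i, seg in enumerate(segments)}

groups = assign_segments(range(40_000))
for name, members in groups.items():
    print(name, len(members))  # each segment receives 10,000 customers
```

The essential point is that assignment is random, not based on behavior, so any later difference in demand between segments can be attributed to the marketing itself.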
At any point in the test, your analytics staff can measure the performance of the four test segments. At the end of the test, results are formally written up.
Here’s the interesting thing, folks. Companies that employ this style of testing discover that matchback analytics clearly overstate catalog and email marketing performance. Many customers planned to order merchandise because they loved your brand; it just so happened that you mailed a catalog to the customer at the time when the customer was about to place an order.
The matchback program allocates this order, an order that was going to happen no matter what, to the catalog you just mailed. In doing so, the matchback program artificially overstates the performance of the catalog. Next year, when you make circulation decisions, you are led to believe the catalog performed better than it actually did. As a result, you overcirculate in next year’s catalogs.
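The gap between matchback credit and true impact is simple arithmetic: subtract the demand the holdout group generated organically from the demand in the mailed group. The dollar figures below are purely illustrative:

```python
def incremental_demand(mailed_demand, holdout_demand):
    """True catalog impact: demand in the mailed group minus demand the
    comparable holdout group generated with no mailing at all."""
    return mailed_demand - holdout_demand

# Illustrative numbers: matchback credits the catalog with all $100,000
# of matched demand, but if a comparable holdout group organically
# produced $60,000, the catalog's true incremental impact is $40,000.
matchback_credit = 100_000
holdout_organic = 60_000
print(incremental_demand(matchback_credit, holdout_organic))  # 40000
```

In this sketch, matchback analytics would report the catalog at two and a half times its true incremental value, which is exactly the kind of distortion that drives overcirculation.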
I have observed instances where circulation is twice as deep as it should be, simply because the matchback algorithm is overstating actual results as measured by mail/holdout tests.
The impact on profitability can be significant. I’ve seen instances where a $50 million catalog brand loses $1.5 million of profit due to overcirculation caused by matchback analytics. For some catalog brands, that’s as much profit as the brand generates in an entire year!
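How overcirculation destroys profit can be shown with standard circulation math: profit per piece is incremental demand per piece times contribution margin, minus the cost to print and mail the piece. All numbers below are hypothetical, chosen only to show how a mailing that looks profitable under matchback credit can lose money under holdout-measured demand:

```python
def circulation_profit(pieces, cost_per_piece, demand_per_piece, margin):
    """Mailing profit: (demand per piece * contribution margin) minus
    printing/postage cost, times pieces mailed. Inputs are illustrative."""
    return pieces * (demand_per_piece * margin - cost_per_piece)

# Matchback-inflated view: the catalog appears to drive $3.00 per piece.
inflated = circulation_profit(500_000, 0.70, 3.00, 0.40)
# Holdout-measured view: true incremental demand is only $1.50 per piece.
measured = circulation_profit(500_000, 0.70, 1.50, 0.40)
print(inflated, measured)  # the same mailing flips from profit to loss
```

Under the inflated numbers the mailing clears roughly $250,000; under the holdout-measured numbers the same mailing loses about $50,000. A brand trusting the first view keeps mailing deeper, compounding the loss.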
Please consider a three-month, six-month or annual holdout test segment, so you may accurately measure the true impact of your catalog and email marketing activities. Considering that mail/holdout tests almost always indicate more conservative results than matchback analytics, your profit-and-loss statement is likely to improve if you employ this style of testing!