For example, your outbound sales team may be organized geographically or by industry, and sales reps often intuitively manage their own contact strategies for their customers. Your catalog manager probably uses a more traditional segmentation, either a response model or an RFM variation. Your email team may have access to the same segmentation as your catalog manager but limit its criteria to promoting by product purchase, segmenting buyers vs. nonbuyers, or simply blasting every name that hasn't opted out.
Allowing various teams to use different segmentation strategies can work. But for the test, separate your file into statistically similar segments for your control and test panels, and use those segments companywide.
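As a rough illustration of that split, the sketch below randomly assigns a customer file to control and test panels; the function and field names (customer_ids, test_fraction) are hypothetical, not part of any particular marketing platform. Random assignment is what keeps the two panels statistically similar on attributes such as recency, frequency, and monetary value.

```python
import random

def split_panels(customer_ids, test_fraction=0.5, seed=42):
    """Randomly assign customers to control and test panels.

    A random split keeps the panels statistically similar, so any
    difference in response can be attributed to the tested channel.
    """
    rng = random.Random(seed)           # fixed seed so the split is reproducible
    ids = list(customer_ids)
    rng.shuffle(ids)
    cutoff = int(len(ids) * test_fraction)
    test_panel = set(ids[:cutoff])      # receives the added promotional activity
    control_panel = set(ids[cutoff:])   # receives baseline activity only
    return control_panel, test_panel

# Hypothetical usage: split a 20,000-name housefile 50/50
control, test = split_panels(range(1, 20_001))
```

Before mailing, it is worth confirming that the two panels do in fact look alike on the metrics you care about, such as average order value and purchase recency.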
In addition, every sales channel must use the same offers for the testing period. A rogue free-shipping offer in one channel will siphon orders and nullify the purpose of the test, which is to establish a baseline lift for a given promotional channel.
If you have a large housefile, you can simultaneously test multiple marketing channels during the same season. But if your file is smaller — say 20,000 buyers — you'll likely have to limit your test to evaluating the effect of only one channel.
Methodology
If you experience a significant spike in sales with every catalog drop, you can use that promotional activity as the basis for your test and determine the incremental lift for another activity, such as phone efforts. Your control panel receives only the catalog, and your test panel receives both the catalog and phone calls. Regardless of which channel the orders come through, you can measure the lift in response from the phone effort, determine its return on investment, and budget strategically for the program.
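To make the lift and ROI arithmetic concrete, here is a minimal sketch with hypothetical demand figures; the numbers and names are illustrative, not drawn from any actual test.

```python
def incremental_results(control_demand, test_demand,
                        control_size, test_size, program_cost):
    """Compute incremental lift and ROI for the tested channel.

    Demand per name normalizes for unequal panel sizes; the difference
    between panels is the lift attributable to the added effort.
    """
    control_per_name = control_demand / control_size
    test_per_name = test_demand / test_size
    lift_per_name = test_per_name - control_per_name           # incremental demand per name
    incremental_demand = lift_per_name * test_size              # total lift for the test panel
    roi = (incremental_demand - program_cost) / program_cost    # return on the phone program
    return lift_per_name, incremental_demand, roi

# Hypothetical figures: catalog-only panel vs. catalog + phone panel
lift, incremental, roi = incremental_results(
    control_demand=50_000, test_demand=65_000,
    control_size=10_000, test_size=10_000,
    program_cost=8_000)
# lift per name = $1.50, incremental demand = $15,000, ROI = 0.875 (87.5%)
```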
Conversely, if you have an aggressive outbound phone program, you can test the relative effectiveness of your catalog by not mailing it to one panel. Instead of referring customers to the catalog, your sales team drives those customers online, and you measure the effect of withholding the catalog.