A/B testing is one of the most powerful tools in digital marketing—when done correctly.
But here’s the reality: Most A/B tests are completely useless.
Testing subject lines that barely impact bottom-line revenue.
Changing button colors and celebrating a 0.5% increase in click-through rate.
Running random experiments with no clear strategy.

At Dispatch, we take a different approach: every single A/B test must be directly tied to revenue.
We don’t run tests just to check a box. We run them to make more money.
Let’s break down why most A/B testing strategies fail, how to structure high-impact tests, and the only A/B tests that actually move the needle for your business.
Why Most A/B Testing Strategies Are Flawed
Many email marketers follow this logic:
🧠“More opens = more clicks = more revenue.”
🎨 “This button color might increase CTR by 0.5%.”
But here’s the problem…
🚫 That subject line test? The revenue impact is probably a rounding error.
🚫 That button color test? It’s not tied to any meaningful purchase behavior.
🚫 That minor layout change? It won’t move the needle on conversion rates.
Yes, these email marketing A/B tests might lead to small increases in engagement. But engagement alone doesn’t pay the bills; revenue does.
If a test isn’t driving more purchases, increasing AOV, or improving retention, it’s just busy work.
How We Approach Email Marketing A/B Testing at Dispatch
At Dispatch, we don’t test for the sake of testing.
Before we run any experiment, our team must answer three critical questions:
Why are we running this test? (What’s the hypothesis?)
If this test wins, how does it translate to more revenue?
Is this the highest-leverage test we could be running right now?
If we can’t justify a test’s impact on revenue, we don’t run it.
Instead, we focus on high-impact experiments that directly influence:
Conversions (More people buying)
Average Order Value (AOV) (More revenue per purchase)
Here’s how we do it.
The A/B Tests That Actually Matter
If your goal is to increase revenue, these are the only tests worth prioritizing:
1️⃣ Offer Optimization: Finding the Most Profitable Discount & Incentive Structure
What we test:
Percentage-off vs. dollar-off discounts
Free shipping vs. a percentage discount
Gift-with-purchase vs. price reductions
Tiered discounts (spend more, save more)
Why it matters: The right offer can significantly increase conversion rates and AOV—without unnecessarily cutting into margins.
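To make the margin trade-off concrete, here’s a minimal sketch comparing a 10%-off discount to a free gift-with-purchase on the same order. All of the numbers (the $80 order, 40% COGS, $6 gift cost) are hypothetical placeholders, not Dispatch benchmarks; swap in your own.

```python
# Illustrative comparison of two offer structures on a single $80 order.
# All numbers are hypothetical; plug in your own AOV, COGS, and gift cost.

aov = 80.00          # average order value before the offer
cogs_rate = 0.40     # cost of goods sold as a fraction of revenue

# Offer A: 10% off the order
discount_revenue = aov * 0.90
discount_margin = discount_revenue - (aov * cogs_rate)

# Offer B: free gift with purchase (gift costs $6 at wholesale)
gift_cost = 6.00
gift_revenue = aov                       # customer still pays full price
gift_margin = gift_revenue - (aov * cogs_rate) - gift_cost

print(f"10% off:   revenue ${discount_revenue:.2f}, margin ${discount_margin:.2f}")
print(f"Free gift: revenue ${gift_revenue:.2f}, margin ${gift_margin:.2f}")
```

With these placeholder numbers, the free gift protects roughly $2 more margin per order while the customer still perceives added value, which is exactly the kind of difference an offer test is designed to surface.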
2️⃣ Marketing Messaging & Emotional Triggers
What we test:
Different storytelling angles in email & SMS
Emotional vs. logical appeals
Social proof-heavy messaging vs. authority-driven messaging
Problem-focused vs. benefit-focused copy
Why it matters: The way you frame your product can be the difference between a sale and a lost customer. Messaging tests help uncover what resonates most with your audience.
3️⃣ CTA Destination: Where Are You Sending Traffic?
What we test:
Product page vs. landing page vs. homepage
Quizzes vs. direct-to-checkout links
Collection pages vs. single-product pages
Why it matters: The page a customer lands on directly impacts conversion rates. Small changes in where traffic is sent can lead to big revenue lifts.
4️⃣ Short vs. Long-Form Emails: Balancing Engagement & Conversions
What we test:
Short, punchy email vs. long, detailed email
Promotional emails vs. educational emails
Story-driven copy vs. straight-to-the-point messaging
Why it matters: Some audiences need more context before they buy, while others convert better with fewer distractions. Testing email length helps determine what drives the most purchases.
The A/B Tests We Don’t Waste Time On
Some tests might sound valuable, but in reality, they don’t move the needle on revenue.
Subject line tests that don’t translate to sales – A slight open rate bump means nothing if conversions don’t increase.
Button color tests – The impact is so minimal it’s not worth the effort.
Micro-optimizations that don’t affect purchasing behavior – Layout tweaks, minor font changes, or small UX tweaks rarely impact revenue significantly.
We test what matters. We optimize for revenue. And we don’t play the same game as everyone else.
How to Run A/B Tests That Actually Increase Revenue
If you’re serious about testing the right way, follow this framework:
Step 1: Identify the Business Impact
Ask: Will this test lead to more purchases, higher AOV, or better retention?
If the answer is “no,” don’t run it.
Step 2: Set Up a Clear Hypothesis
Example: "If we switch from a 10% discount to a free gift, we predict a higher AOV because customers will feel they’re getting more value."
Step 3: Run the Test with a Large Enough Sample Size
Ensure you have enough data to make an informed decision—not just a gut reaction.
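As a rough guide, here’s a minimal sketch using the standard two-proportion sample-size formula (95% confidence, 80% power). The baseline conversion rate and expected lift below are illustrative assumptions, not real campaign data.

```python
import math

# Minimal sample-size sketch for an A/B test on conversion rate.
# Assumes a two-sided test at 95% confidence and 80% power;
# the baseline and expected rates are placeholder values.

baseline = 0.030     # current conversion rate (3%)
expected = 0.036     # rate the variant would need to hit (a 20% relative lift)

z_alpha = 1.96       # z-score for 95% confidence (two-sided)
z_beta = 0.84        # z-score for 80% power

variance = baseline * (1 - baseline) + expected * (1 - expected)
n_per_variant = math.ceil(((z_alpha + z_beta) ** 2) * variance / (expected - baseline) ** 2)

print(f"Recipients needed per variant: {n_per_variant:,}")
```

With these numbers, that works out to roughly 14,000 recipients per variant. Smaller expected lifts require dramatically larger lists, which is another reason to prioritize tests big enough to matter.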
Step 4: Measure Revenue, Not Just Clicks
Track:
Conversion rate
Average order value
Revenue per email sent
Ignore:
Vanity metrics like open rates and clicks if they don’t result in sales.
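For concreteness, here’s a minimal sketch of the revenue-side metrics to compute for each variant, using made-up campaign numbers. The denominators (emails sent) and figures are assumptions for illustration only.

```python
# Hypothetical per-variant campaign results; replace with your own numbers.
variant = {
    "emails_sent": 50_000,
    "orders": 420,
    "revenue": 31_500.00,
}

# One common definition uses emails sent as the denominator;
# some teams prefer clicks or site sessions instead.
conversion_rate = variant["orders"] / variant["emails_sent"]
average_order_value = variant["revenue"] / variant["orders"]       # revenue per purchase
revenue_per_email = variant["revenue"] / variant["emails_sent"]    # the metric that settles the test

print(f"Conversion rate:     {conversion_rate:.2%}")
print(f"Average order value: ${average_order_value:.2f}")
print(f"Revenue per email:   ${revenue_per_email:.3f}")
```

If the variant with the higher open rate loses on revenue per email, the open rate doesn’t matter; the revenue metric decides the winner.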
Step 5: Implement & Scale What Works
Once a test proves successful, roll out the winning variation across all relevant campaigns and automations.
Is Your Agency or Marketing Team Running the Right A/B Tests?
Many agencies love showing off test results that don’t actually matter.
“Look! This button color increased CTR by 1%!”
“This subject line got 5% more opens!”
Sounds great—until you realize it made zero difference in revenue.
If your agency is running tests with no clear revenue impact, they’re just keeping themselves busy—not making you more money.
Want to optimize for real revenue?
At Dispatch, we don’t test for fun—we test to increase conversions, AOV, and long-term customer value.
🔹 We prioritize high-impact A/B tests that actually move the needle.
🔹 We ignore the noise and focus on what drives purchases.
🔹 We help brands scale their retention marketing into a true revenue engine.
Get in touch with Dispatch and let’s build an A/B testing strategy that actually makes you more money.