Most small businesses set up their Google Ads, write the ad copy once, and never touch it again. That's a missed opportunity. The difference between a good ad and a great ad can be a 30-50% improvement in click-through rate, which means more leads from the same budget without spending an extra dollar.
A/B testing your ads isn't complicated. It just requires a bit of structure and patience. Let me walk you through how we do it.
What should you test first?
Don't try to test everything at once. Start with your headlines, specifically Headline 1. That's the first thing people see, and it has the biggest impact on whether someone clicks your ad or scrolls past it.
Here's an example from a real campaign. We were running ads for a pressure washing business and tested two approaches for the primary headline. Version A was "Professional Pressure Washing." Version B was "Pressure Washing From $149." Same ad, same description, same landing page. The only difference was that first headline.
Version B won by a mile. Click-through rate jumped from 4.2% to 6.8%, and cost per lead dropped by about 25%. People responded to the specific pricing because it set expectations and qualified the click. They knew what they were getting into before they even visited the site.
How do you set up a proper A/B test?
With responsive search ads, Google is already mixing and matching your headlines and descriptions. That's useful, but it's not a controlled A/B test. For a true test, you want to pin specific elements so you can isolate what's making the difference.
Create two responsive search ads in the same ad group. In Ad A, pin your control headline to Position 1. In Ad B, pin your test headline to Position 1. Keep everything else the same: same descriptions, same display URL, same extensions. Change one variable at a time; otherwise, you won't know what caused the difference.
Set your ad rotation to "Do not optimise" in the campaign settings. If you leave it on the default, Google will quickly start favouring whichever ad gets early clicks, which might not be the actual winner. You want even distribution so both ads get a fair shot.
How long should you run a test?
This is where most people get impatient. You need statistical significance before declaring a winner, not just a gut feeling after three days.
As a rule of thumb, each ad variation needs at least 100 clicks before you can draw meaningful conclusions. For most small business accounts, that means running a test for two to four weeks. If your daily budget is smaller, it might take longer. That's fine. A reliable result after four weeks is infinitely more valuable than a hasty decision after four days.
Don't peek at the results every hour and panic when one ad is ahead. Small sample sizes produce wild swings. An ad that looks like a loser after 20 clicks might turn out to be the winner after 200. Give it time.
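If you want a more rigorous check than the 100-click rule of thumb, a standard two-proportion z-test tells you whether the gap in click-through rate is likely real or just noise. Here's a minimal sketch in Python; the function name and the click and impression figures are illustrative, not from a real account:

```python
import math

def ctr_significance(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test on click-through rates.

    Returns the z-score. A |z| of 1.96 or more corresponds to
    roughly 95% confidence that the two ads genuinely differ.
    """
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled CTR under the assumption of no real difference
    p = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p * (1 - p) * (1 / impressions_a + 1 / impressions_b))
    return (p_b - p_a) / se

# Illustrative numbers: 4.2% vs 6.8% CTR, 2,500 impressions each
z = ctr_significance(105, 2500, 170, 2500)
print(round(z, 2))  # well above 1.96, so this gap is unlikely to be noise
```

The same arithmetic explains why peeking early is dangerous: with only a few dozen impressions per ad, the standard error term dwarfs any realistic CTR difference, so the z-score stays near zero no matter which ad is "ahead".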
What else is worth testing beyond headlines?
Once you've found your winning headline approach, move on to call-to-action testing. The CTA in your ad description can significantly impact conversion rates. "Call Now for a Free Quote" versus "Get Your Free Quote Online" versus "Book Your Free Assessment Today" are all subtly different and will appeal to different people.
We've found that CTAs with a specific next step tend to outperform vague ones. "Get Your Free Quote in 60 Seconds" beats "Contact Us Today" almost every time. People want to know exactly what happens when they click.
After CTAs, test your description copy. Try different angles. One version might focus on speed and convenience. Another might focus on trust signals like years of experience or number of five-star reviews. A third might focus on the specific outcome the customer gets.
When do you declare a winner?
Look at three metrics: click-through rate, conversion rate, and cost per conversion. A higher CTR is great, but not if those extra clicks don't convert. The winning ad is the one that delivers the lowest cost per conversion, full stop.
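The comparison itself is simple division, spend over conversions, and it's worth seeing how a higher-CTR ad can still lose. A quick sketch, with made-up spend and conversion figures:

```python
def cost_per_conversion(spend, conversions):
    """Spend divided by conversions; an ad with none can't win."""
    return spend / conversions if conversions else float("inf")

# Hypothetical results after a few weeks of even rotation
ads = {
    "A": {"spend": 240.0, "clicks": 120, "conversions": 8},   # $30/conversion
    "B": {"spend": 260.0, "clicks": 150, "conversions": 13},  # $20/conversion
}

winner = min(ads, key=lambda name: cost_per_conversion(
    ads[name]["spend"], ads[name]["conversions"]))
print(winner)  # B: more clicks AND cheaper conversions, a clear winner
```

Notice that if Ad B had pulled in its extra clicks without extra conversions, its cost per conversion would have climbed above Ad A's, and the "higher CTR" ad would rightly lose.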
Once you have a winner with at least 100 clicks per variation, pause the loser and create a new challenger to test against the winner. This creates a continuous improvement cycle. Your best ad today becomes the control, and you try to beat it with something new next month.
Can I share something with you? This process never really ends, and that's a good thing. We're constantly testing ad copy across all the accounts we manage. The improvements compound over time. A 10% improvement this month, another 15% next month, and suddenly you're getting twice the results you started with from the same budget.
The businesses that treat their Google Ads as a set-and-forget system always plateau. The ones that commit to regular testing keep getting better results month after month. It doesn't take much time, maybe an hour or two per month, but the impact on your bottom line can be significant.