Max Clicks for Budget-Limited PPC Campaigns Case Study


What bidding strategy will perform best for a given campaign? This can be a tricky question. While aggregate data and common sense can provide some answers, absent a crystal ball no one can say how a given campaign will perform under a particular automation strategy until it is actually tried. As such, I designed an experiment to run across 8 lead generation campaigns for a client in the apartment rental industry.

The Hypothesis And Set-Up

The 8 campaigns I selected for this experiment were each limited by budget and had fairly limited conversion data, having produced a total of 36 conversions against 2,561 clicks in the 30 days prior to the experiment. Each campaign was set to Enhanced CPC (ECPC) as its bid strategy. My hypothesis was this: because the campaigns were limited by budget, we were likely missing out on clicks due to high bids; and because the conversion data was relatively limited, we would be best served by a Maximize Clicks strategy that would produce as much traffic as possible within our constrained budget. I used the AdWords experiment functionality to create A/B tests for each of those campaigns, with the experiment arm set to Maximize Clicks and the control arm left on ECPC.
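If you want a rough sense of how much data a test like this needs, the sketch below uses the baseline figures above (36 conversions on 2,561 clicks, a conversion rate of roughly 1.4%) to estimate how many clicks per arm a standard two-proportion test would need to detect a lift in conversion rate. The 25% relative lift is a hypothetical example, not a figure from the account, and the statsmodels-based calculation is just one reasonable way to do this gut-check.

```python
# Rough power calculation: how many clicks per arm would be needed to detect
# a lift in conversion rate, given the baseline quoted in the article?
# Baseline: 36 conversions / 2,561 clicks. The 25% relative lift is an
# assumed example value for illustration only.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cr = 36 / 2561           # ~1.4% conversion rate before the test
assumed_lift = 0.25               # hypothetical relative improvement to detect
target_cr = baseline_cr * (1 + assumed_lift)

effect = proportion_effectsize(target_cr, baseline_cr)
clicks_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
)
print(f"Baseline conversion rate: {baseline_cr:.2%}")
print(f"Clicks needed per arm for 80% power: {clicks_per_arm:,.0f}")
```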

The Aggregate Results

The table below shows the aggregate results from the experiments, after 30 days of data had been collected.

I’ll post the results from the individual campaigns’ experiments a bit later, but let’s begin by digging into the aggregate data. First, the good news: Maximize Clicks did produce more clicks, and at a slightly lower cost-per-click as well. That said, neither improvement was statistically significant. Additionally, we’ll see when we look at the campaign-level data that not every experiment produced lower CPCs, or even more clicks.
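If you’d like to run the same sort of significance check on click volume yourself, one simple approach, assuming the experiment split traffic roughly 50/50 between the arms, is a binomial test on the click totals. The click counts in the sketch below are placeholders for illustration, not the values from the table above.

```python
# Quick significance check on click totals: under an assumed 50/50 traffic
# split, did the Maximize Clicks arm win significantly more clicks?
# The click counts below are placeholders, not the real table values.
from scipy.stats import binomtest

experiment_clicks = 1400   # hypothetical: Maximize Clicks arm
control_clicks = 1300      # hypothetical: ECPC arm

result = binomtest(
    k=experiment_clicks,
    n=experiment_clicks + control_clicks,
    p=0.5,                 # expected share if both strategies performed equally
)
print(f"p-value for the click difference: {result.pvalue:.3f}")
```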

What’s more, the Maximize Clicks experiments produced fewer conversions at a higher cost-per-lead. The sample size was small, and even a few conversions more or fewer for a control or experimental campaign would have swung the data significantly. However, the results were unfavorable enough to disprove the original hypothesis.
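Because the conversion counts here are so small, Fisher’s exact test is a reasonable way to check whether a difference in conversions between the two arms is meaningful. The counts in the sketch below are hypothetical stand-ins for the actual table values, used only to show the mechanics of the check.

```python
# With only a handful of conversions per arm, Fisher's exact test is a safer
# check than a normal-approximation z-test. All counts are placeholders.
from scipy.stats import fisher_exact

# Rows: control (ECPC), experiment (Maximize Clicks)
# Columns: converted clicks, non-converted clicks
control = (20, 1280 - 20)       # hypothetical: 20 conversions on 1,280 clicks
experiment = (14, 1400 - 14)    # hypothetical: 14 conversions on 1,400 clicks

odds_ratio, p_value = fisher_exact([control, experiment])
print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")
```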

Things get even grimmer for the experiment when we view a couple of lagging indicators for conversions, Click-Assisted Conversions and Impression-Assisted Conversions:

Small-sample caveats aside, this data suggests to me that even in cases where there is limited conversion data, ECPC did a better job of putting the ads in front of users who had a higher propensity to convert. In other words, the algorithm behind ECPC, which bids up and down according to its predictions of user behavior, appears to have been more effective than a strategy that simply chased raw traffic.

The Campaign-By-Campaign Results

Unfortunately, the campaign-by-campaign results don’t do much to clarify the picture. We see mixed results across the board except for raw clicks; by that metric, Maximize Clicks was universally successful. Other than that, the experiments tended to (but did not always) produce lower CPCs, and tended to (but did not always) produce fewer conversions at a higher cost.

There is no clear pattern that speaks to when the experimental bid strategy was more or less successful. Experiments won and lost both in campaigns with relatively more conversion data and overall traffic, and in campaigns with less of both.

Conclusions

Experiments can’t always be successful. This is true whether you’re setting up A/B tests in Google Ads, launching a pilot program in the policy world, or combining beakers of… chemicals in a chemistry lab (clearly, I have no idea what chemists actually do). In fact, the scientific community is paying more attention to what has been termed “publication bias”: the tendency for studies with big, bold conclusions to get published, while studies that “fail” to produce statistically significant results get little attention at all. Many scientists consider this a shame, because when failed studies and experiments get little attention, other scientists may unnecessarily pursue the same hypothesis again and again. Furthermore, even when an experiment fails to produce significant results, there may still be insights that can contribute to future research.

It is in that spirit of publishing “failures” that I’m sharing this case study. At this point, you may have guessed that I won’t be concluding this post with a statement like “YES! You should always use Maximize Clicks for campaigns that are limited by budget” or “NO! Never set a campaign to Maximize Clicks!” As in the hard sciences, though, I think there is some value in sharing the results.

For one thing, I’m more convinced than I was before running these experiments that Google can successfully predict conversion behavior even when there’s very limited conversion data. I’ve heard digital marketers express skepticism that Google’s algorithms can choose winners and losers (whether that’s ad copy or a higher or lower bid for a particular user) based on what seems like a small amount of data. In these experiments, however, the algorithm that sought to improve conversion outcomes was able to do so relatively well, even though it seemed to me that there wasn’t a significant amount of conversion data.

Another, broader conclusion that I think is worth carrying forward: when you have an opportunity to test an automated bid strategy, do so. I thought I had a solid hypothesis, and that I would see clear improvements in performance were I to adopt Maximize Clicks for my budget-constrained campaigns. That turned out not to be the case. Most of my experiments failed to produce more efficient conversions, and the benefits I got in terms of raw traffic did not offset those losses. When digital marketers have the opportunity to test their strategies before full implementation, they are wise to take those opportunities.

Questions? Feedback? Tweet @ppchero!




