Any experienced digital marketer knows there’s a constant flow of new opportunities, ideas and tactics emerging as ad platforms change and best practice develops. They also know that, with all this new information, they’re bound to make mistakes and focus energy on things that just don’t work as expected.
Ultimately this is a good thing. Without calculated risk-taking, accounts start to stagnate and competitors pull ahead. Even if a test doesn’t work, there’s still an opportunity to learn from it and gather valuable information for the next roll of the dice.
In the spirit of learning from our mistakes, here’s a list of some of the ideas I’ve had that didn’t quite pan out as expected. More importantly, I’ll also share what I learned from each of them and what I’d do differently next time.
1) Smart Campaigns

Partial credit for this one - the general idea was OK but it came at the wrong time. Smart Campaigns were a precursor to automated campaign types like Performance Max and Demand Gen, using broad targeting to serve a mix of creative formats to a wider audience than pure Search campaigns.
The problem? The technology wasn’t quite there yet.
Ad platform machine learning was still in its early stages, which resulted in less relevant traffic, overlap with existing campaigns and poor results. Now that these campaigns have been replaced with PMax / Demand Gen and there have been years of development in AI-powered campaigns, it’s a much safer time to hand over control to Google.
With the benefit of hindsight, I would have run Display & Video campaigns rather than trying to make Smart Campaigns work.
2) Calling Results Too Early

Ironic, given I have an article specifically about this, but, like all marketers, I’ve been guilty of moving too quickly and calling results prematurely rather than allocating enough time for a definitive answer. There are a couple of reasons this can happen.
The first is to do with the context we operate in. Everyone in this industry is under time pressure, from the marketers running campaigns through to our clients, who are facing internal questions about performance. When everything needs to be done yesterday and results are viewed at a daily, weekly or monthly level, sticking to a long-term roadmap and emphasising patience can be a hard sell.
As marketers, we need to clearly communicate what our strategy is and the timeframe we need to see results in. To make life easier, we can identify milestones throughout this period where we can provide updates on performance and answer any questions that come in. Hopefully, this will give us the time we need and ease the pressure on everyone involved with the account.
The second reason is slightly more positive. As marketers, we should be excited about our accounts. There are so many things we want to try as we learn more and spot new opportunities. The problem comes when those things overlap with existing tests and strategies.
Imagine this - you’ve built a new test and everything seems to be going well. Results are starting to pick up and you’re learning about a new way that you can drive performance for your client. All of a sudden, Google releases a new campaign type that you think would be absolutely perfect for your account goals.
In this situation, the temptation is to call your test early and go all in on the exciting new thing. After all, you’ve seen some positive early results so surely that’s enough to call the test a success, right? Not quite.
Take the time to learn more about new developments and build them into your roadmap. There’s nothing wrong with changing a roadmap to accommodate a new development, but we (hopefully) have a good reason for every test we run, and it’s important to see things through to their conclusion. Who knows - sticking with your current plan could give you some extra insights that help to push that next test even further.
3) Broad Match
Another controversial one here and evidence of the sometimes strained relationship between agencies and their counterparts at the ad platforms. From 2021 onwards there was a major push to reintroduce Broad Match keyword targeting into accounts and move away from the classic Exact Match / Phrase Match split.
Marketers were rightly sceptical of this. It’s rare to find someone in Paid Search who hasn’t been mystified by the search term report and some of the phrases their Broad Match keywords have matched to. This led to significant pushback and frustration as marketers stuck to what they knew and the platforms pushed even harder.
Like automated campaigns, this was a case of technology catching up to the sales pitch. In the early days of Paid Search, Broad Match was exactly that - a very broad interpretation of keywords with wildly varying relevance depending on the account and targeting options at play.
However, targeting signals have improved dramatically since then, and the volume of available data means platforms can better understand the intent and motivation behind a search. This has led to Broad Match becoming an integral part of the modern search approach.
Like anything in digital, this comes with some caveats. It’s still on the marketer to make sure their account is well set up and targeting is correct. Without audiences and negative keywords, Broad Match will still throw up some unusual matches, so getting the basics right is crucial for success as we move into an audience-first period of Paid Search.
As marketers, we’re never going to get everything right. The goal isn’t to be perfect - it’s to make new mistakes as we learn from the old ones and to try approaches based on where digital is at that moment. These are just a few of the things that I’ve learned and I know there’s plenty more to come in the future.
For forward-facing Paid Search, get in touch.