
Dean Kadi on Clients Who Ignore Performance Data

Summary

– A client insisted on pausing high-performing UGC Meta ads in favor of heavily branded creatives, despite the existing strategy achieving a 3x–4x ROAS.
– The client’s decision was based on a survey assumption about color preferences, but agency testing had already proven that the one-coat benefit drove purchases.
– Dean Kadi advised agencies to stay calm, communicate risks, and document recommendations, letting performance data guide decisions rather than emotion.
– The new branded ads tanked performance within eight weeks, and returning to UGC quickly restored campaign efficiency.
– The article also highlights common PPC mistakes like poor tracking setups and warns that AI cannot fix a flawed strategy.

In a recent episode of the PPC Live podcast, Dean Kadi, Head of Paid Growth at One Link Media, opened up about a frustrating yet instructive experience where a client insisted on swapping high-performing Meta ads for heavily branded creative, even though the existing strategy was clearly driving results. The story underscores a familiar tension between agency expertise and client preference, while offering practical takeaways on communication, testing, tracking, and why data must lead decision-making in PPC.

The campaign was thriving. Kadi and his team had built a robust Meta advertising strategy for Rubio Monocoat, a premium woodworking brand, leveraging user-generated content (UGC). Through rigorous testing of creators, hooks, formats, and messaging angles, they boosted the account’s ROAS from roughly 2.1x to a consistent 3x to 4x range. Their tests revealed a surprising insight: the primary purchase driver wasn’t the product’s array of colors, but the fact that customers needed only one coat, saving considerable time and effort.
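For readers unfamiliar with the metric, ROAS is simply revenue attributed to ads divided by ad spend. A minimal sketch, using illustrative figures rather than the account's real numbers:

```python
def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: attributed revenue divided by ad spend."""
    if spend <= 0:
        raise ValueError("spend must be positive")
    return revenue / spend

# Hypothetical figures chosen only to illustrate the multiples in the article:
print(roas(10_500, 5_000))  # 2.1, roughly the starting point
print(roas(17_500, 5_000))  # 3.5, within the 3x-4x range after creative testing
```

A move from 2.1x to 3.5x on the same spend means roughly two-thirds more attributed revenue, which is why pausing the creatives driving it was such a high-stakes request.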

Then the client asked to pause all winning ads. Despite the strong performance, they wanted to replace the UGC with polished but heavily branded static and video creatives. These new ads looked professional but failed to feel native to Meta’s platform, a critical factor for engagement and conversion. The decision wasn’t driven by performance issues, but by the client’s preference for more traditional branding.

The client’s new direction rested on a dangerous assumption. They had conducted a survey suggesting customers appreciated the brand’s color range, and they assumed this was the main reason people bought. But the agency’s testing data already contradicted that. This is a classic marketing pitfall: letting internal assumptions or isolated feedback override broader performance data and real-world customer behavior.

One moment from the conversation stands out. The client admitted, “We’d prefer this to be a winner.” Kadi pointed out that paid media doesn’t operate on preference or hope. Audiences decide what resonates. No matter how strongly stakeholders feel about a direction, performance data ultimately determines success.

So what should agencies do in such situations? Kadi advised staying calm, professional, and evidence-led. Instead of arguing emotionally, marketers should clearly communicate risks, explain their reasoning, and document recommendations in writing. By maintaining professionalism and letting the data speak, agencies can protect relationships while standing behind their expertise.

The results tanked, exactly as expected. The new branded creatives quickly underperformed, with rising acquisition costs and declining efficiency across Meta campaigns. The agency continued testing audiences and optimization strategies, but the core issue was the creative itself. After about eight weeks of poor results, it became obvious the client’s direction wasn’t working.

Once the client agreed to reintroduce the original UGC ads, performance rebounded within just a couple of weeks. The return of native-looking content and proven messaging angles restored the account’s efficiency and validated the agency’s original strategy. Interestingly, Google Ads remained relatively stable because those campaigns relied more on branded search activity.

The bigger lesson? Let data tell the story. Kadi’s key takeaway is that agencies should rely on data rather than emotion when navigating difficult client situations. Sometimes clients need to see underperformance firsthand before accepting recommendations. By consistently presenting clear reporting and measurable outcomes, marketers can use evidence to guide conversations and rebuild trust.

Beyond this specific story, Kadi highlighted poor tracking setup as one of the most common PPC mistakes agencies still encounter. Missing server-side tracking, incorrect event configurations, and weak conversion tracking setups can severely impact optimization and reporting. Even the strongest campaigns struggle if the underlying data infrastructure is flawed.
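To make the tracking point concrete, here is a minimal sketch of a correctly formed server-side event for Meta's Conversions API, the kind of setup Kadi says agencies often get wrong. The hashing and deduplication rules follow Meta's documented requirements, but the helper names and figures are our own, and the payload would still need to be POSTed to the Conversions API endpoint with a valid access token:

```python
import hashlib
import time

def hash_identifier(value: str) -> str:
    # Meta requires customer identifiers to be trimmed, lowercased,
    # then SHA-256 hashed before they are sent server-side.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_purchase_event(email: str, value: float, currency: str, event_id: str) -> dict:
    """Build one server-side Purchase event for a Conversions API payload."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        # event_id must match the browser pixel's eventID so Meta can
        # deduplicate the browser and server copies of the same purchase.
        "event_id": event_id,
        "user_data": {"em": [hash_identifier(email)]},
        "custom_data": {"value": value, "currency": currency},
    }

payload = {"data": [build_purchase_event("Customer@example.com", 49.99, "USD", "order-1234")]}
```

Missing the event_id match or sending unhashed identifiers are exactly the kinds of "incorrect event configurations" that quietly degrade both optimization and reporting.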

He also warned against overreliance on AI tools. While AI can improve efficiency and speed up workflows, it cannot compensate for weak strategy or poor thinking. Marketers still need to critically evaluate outputs, refine prompts, and apply human judgment, because clients ultimately hold people accountable, not AI systems.

This story serves as a reminder that successful PPC campaigns rely on testing, data, and strategic discipline rather than internal opinions or branding preferences alone. Agencies must balance professionalism with confidence in their expertise, document their recommendations carefully, and trust performance metrics over assumptions. In the end, audiences decide what works, and the data almost always reveals the truth.

(Source: Search Engine Land)
