The Rise of AI Washing in Media Agencies and Its Dangerous Fallout
NOTE: This article summarizes the findings of an extensive investigation by LA PIPA into the media sector. It reflects the voices and opinions of dozens of credible sources, which may or may not align with our own vision and opinions. For this reason, we have quoted and referenced all sources.
What is Artificial Intelligence (AI)?
Artificial Intelligence (AI) refers to computer systems that perform tasks normally requiring human intelligence – such as learning from data, making decisions, or understanding language. In advertising and media, AI can power tools that analyze vast datasets, predict consumer behavior, optimize ad placements, and even generate marketing content. For example, AI-driven ad-buying algorithms can instantly decide the best ad placement for a given target audience, adjusting bids and budgets in real time based on performance data (loungelizard.com). Unlike traditional manual processes, these AI systems aim to automate and enhance marketing tasks, from hyper-targeting audiences to dynamically tweaking creative elements (loungelizard.com).
However, true AI is more than just any software or algorithm – it involves advanced techniques like machine learning (systems that improve with experience) and generative models (which can create new content). Properly implemented, AI can indeed bring efficiency and insight to media planning and buying: freeing humans from drudgery, reducing errors, and uncovering patterns in consumer data that would be impossible to see otherwise (bionic-ads.com). That potential has set off a frenzy of interest across the marketing world. Yet it has also led to a problematic trend: companies exaggerating their use of AI to ride the hype.
What is “AI Washing”?
“AI washing” describes the deceptive practice of overstating or fabricating AI capabilities in products or services – much like “greenwashing” refers to false environmental claims. In essence, it’s calling something “AI-powered” when it really isn’t, or when only a trivial automation is involved (fierce-network.com). This often means taking old or basic software tools and rebranding them as “AI” to make them sound cutting-edge (fierce-network.com). As Gartner analyst Sid Nag explains, AI washing turns artificial intelligence into “just another marketing term rather than a technology with concrete parameters and measurable – dare we say transformative – results.” (fierce-network.com) In other words, the label “AI” gets slapped onto anything to capitalize on the buzz, even if the solution doesn’t truly employ intelligent algorithms.
An “AI-powered” pitch with little substance – a satirical look at how the buzzword gets tossed around. In reality, AI washing means making vague or inflated AI claims that often don’t hold up under scrutiny.
This phenomenon has become pervasive in recent years. Tech industry observers note that “anything that uses an algorithm is being rebranded as ‘AI’” amid the hype – at CES 2024, dubbed “the year AI ate Vegas,” products ranged from “AI-powered pillows to vacuum cleaners to toothbrushes.” As one journalist wryly observed, “no product is too boring or humdrum, it seems, to escape an AI makeover.” (marketoonist.com). The advertising world is no exception: from analytics dashboards to simple chatbots, suddenly every agency tool is touted as “AI-driven”. This liberal use of AI buzzwords allows companies to ride the trend without necessarily having developed true AI solutions (monetate.com). In fact, marketers sometimes sprinkle terms like “machine learning,” “deep learning,” or “generative AI” into sales pitches for tech that may not use them at all (monetate.com).
Why AI Washing Is So Dangerous
Eroding Trust and Undermining Real Innovation: AI washing is not just a harmless marketing tactic – it carries serious risks for businesses, their clients, and society. One major danger is the breach of trust that occurs when reality doesn’t live up to the hype. Gartner’s Nag warns that overselling “fake AI” can sour customers on a technology that actually does have real potential (fierce-network.com). When buyers are promised “AI magic” but get a “warmed-over version of the same old thing,” disappointment sets in (marketoonist.com). Over time, this can lead to widespread disillusionment, cooling the interest in AI solutions overall (fierce-network.com). In fact, industry analysts note that exaggerated claims have pushed generative AI into the “Trough of Disillusionment” on Gartner’s hype cycle as of 2024 (emergingtechbrew.com). Legitimate AI innovators may then struggle to gain adoption if customers become jaded by previous bad experiences with overhyped “AI” products.
Wasted Investments and Business Harm: Another consequence is direct business harm. Companies that fall for AI washing may waste substantial investments on tools that don’t deliver. As Bret Greenstein of PwC points out, buying a “fake AI” solution often means it “fall[s] short” of promised outcomes, forcing the company to spend even more later to get a functional solution (fierce-network.com). This diversion of resources can hurt a client’s performance and delay genuine digital transformation. For advertisers and brands, choosing an agency or platform based on inflated AI claims could mean missed opportunities, poor campaign results, or even strategic missteps if the touted “AI” doesn’t actually optimize anything. In media buying, for example, trusting a black-box “AI platform” that isn’t truly intelligent might lead to misallocated ad budgets or missed audiences – ultimately impacting sales and ROI.
Ethical, Legal, and Societal Risks: AI washing also masks the real ethical and legal issues that accompany AI technologies. By pretending to have AI capabilities, organizations may not put proper safeguards in place for things like bias, privacy, or intellectual property – yet they give stakeholders a false sense of security. Gartner’s Nag cautions that AI washing brings “issues related to copyright, ethics and legal” to the forefront (fierce-network.com). For instance, a company might claim its targeting platform uses AI to avoid biased ad placements, when in truth it does nothing of the sort – potentially leading to discriminatory outcomes or regulatory violations. On a societal level, overhyping AI can fuel public misconceptions and fears. It might encourage over-reliance on automated systems that aren’t actually proven safe or effective, affecting people’s lives (consider an “AI” hiring tool that’s really just a random filter – it could unfairly cost someone a job). In short, AI washing dilutes accountability: organizations get credit for “innovation” without undergoing the rigorous testing and responsibility that true AI deployment requires.
Three Risky Outcomes of Negligent AI Washing
In light of those dangers, what are the possible outcomes if companies – particularly in media and advertising – continue this negligent behavior of AI washing? Here are three scenarios already unfolding:
1. Regulatory Crackdown and Legal Fallout: Perhaps the most immediate consequence is attracting the ire of regulators. Deceptive AI claims are now squarely on the radar of enforcement agencies. In the U.S., the Federal Trade Commission (FTC) has explicitly warned companies about overstating AI in marketing, making clear that “there is no AI exemption from the laws on the books.” (ftc.gov). In late 2024 the FTC launched “Operation AI Comply,” a sweep targeting firms that used AI hype to defraud or mislead consumers (ftc.gov). Among the cases were a company peddling an “AI lawyer” that didn’t work (resulting in fines and a ban on such claims) and a scheme touting “cutting-edge AI” for e-commerce profits that swindled people out of millions (ftc.gov). The U.S. Securities and Exchange Commission (SEC) is similarly cracking down on public companies that overhype AI to investors. SEC Chair Gary Gensler has likened “AI washing” to other securities fraud and signaled the agency will “police the markets” against it (emergingtechbrew.com). In mid-2024, the SEC charged the CEO of an AI-themed startup for essentially using “buzzwords like ‘AI’ and ‘automation’” to dupe investors in an old-fashioned fraud (emergingtechbrew.com). Even a mainstream brand like Oddity Tech (a cosmetics company) was hit with a shareholder lawsuit alleging it overstated the AI capabilities driving its sales (emergingtechbrew.com). The message is clear: companies (and their agency partners) face fines, lawsuits, and reputational damage if they make false AI promises (emergingtechbrew.com). In Europe, regulators are also sharpening their focus – the upcoming EU AI Act will impose transparency and accountability requirements on AI systems, and consumer protection authorities can pursue misleading AI-based marketing under existing laws. In sum, AI washing can quickly move from a marketing ploy to a legal liability.
2. Loss of Client Trust and “Techlash”: A more gradual but equally damaging outcome is the erosion of trust among clients, consumers, and business partners. Brands hire media agencies and ad-tech vendors based on trust in their expertise – if an agency claims to have an AI-powered optimization engine and it fails to deliver results, the client’s confidence takes a hit. Executives from Gartner and PwC have warned that disappointment in “fake AI solutions” leads to “a loss of trust” and a chilling of future investments in true AI (fierce-network.com). Over time, a pattern of overpromising and underdelivering creates a “techlash” – a backlash against technology hype. We’re already seeing early signs: surveys show rising skepticism as marketers realize some AI tools don’t live up to sales pitches. If enough people feel burned, the whole industry’s progress with AI could slow, as decision-makers become cynical about new tech initiatives. Additionally, ethical breaches caused by careless AI claims (for example, misuse of personal data under the guise of “AI-driven personalization”) can destroy public trust. Once lost, trust is hard to regain – and agencies that abuse it may find themselves losing clients to competitors who take a more honest, measured approach. In a worst-case scenario, AI washing by a few players could taint the reputation of AI in marketing overall, making businesses and the public hesitant to embrace even well-founded AI innovations.
3. Stunted Innovation and Competitive Disadvantage: Ironically, those who engage in AI washing may be sabotaging their own long-term success. When a company devotes more effort to talking about AI than to actually building capability, it falls behind technologically. Many advertising agencies today face a skills and infrastructure gap – while they tout “AI solutions,” behind the scenes they often lack the data architecture and talent needed to truly leverage AI (bionic-ads.com). This gap can lead to what one industry observer calls a “fantasy disconnect” between what marketing pitches claim and what engineering teams can actually deliver (monetate.com). In the short term, the company might win some business with bold claims, but soon clients will notice the emperor has no clothes. Meanwhile, emerging competitors or consultancies that actually invest in real AI and data science will outpace them. For example, new tech-driven entrants could offer marketers genuine algorithmic buying platforms that outperform the legacy agencies’ rebranded tools (bionic-ads.com). The incumbents then risk losing market share. We’re already seeing defensive consolidation in the agency sector: in December 2024, Omnicom and rival Interpublic announced a $13 billion merger to create the world’s largest ad firm, explicitly aiming to “better compete… amid the accelerating use of AI” and to develop the in-house tech needed to survive. John Wren, Omnicom’s CEO, noted that “soaring use of AI tools… has squeezed traditional agencies, forcing them to scramble to develop similar in-house tools to retain clients.” (cio.economictimes.indiatimes.com) This scramble suggests that agencies which fail to build real AI competency will face a stark choice: merge, radically transform, or become obsolete. In broader economic terms, AI washing wastes resources on hype instead of innovation, potentially slowing the overall advancement of the industry.
If everyone is busy faking it, who’s actually making it? The net result could be a stagnation in genuinely useful AI development in marketing – a lose-lose for the economy and society, which won’t reap the benefits of what AI could have properly delivered.
How AI Washing is Playing Out in Media Agencies
The media agency sector – including major advertising holding companies and independent firms – has become a hotbed of AI hype over the last couple of years. Under intense pressure to appear tech-savvy, many agencies have rolled out AI-branded initiatives and tools. However, there’s a fine line between earnest innovation and AI washing. Let’s examine how some of the biggest players are navigating (or succumbing to) this trend:
WPP (GroupM / WPP Media): The world’s largest advertising company, WPP, has made bold moves to brand itself as AI-forward. In 2023 it restructured its media division, rebranding GroupM as “WPP Media” to emphasize a commitment to AI and integration (marketingdive.com). In mid-2025, WPP Media launched “Open Intelligence,” which it touted as the industry’s first “Large Marketing Model” (LMM) – a play on the term Large Language Model, except aimed at marketing data (marketingdive.com). According to WPP, Open Intelligence uses AI to aggregate consumer data across 75 markets and predict audience behavior for targeting, moving beyond cookie-based advertising (marketingdive.com). It even allows custom AI models for clients’ specific goals. On paper, this sounds cutting-edge. But skeptics note the hyperbolic claims – calling it the “first LMM” – and question how much is proprietary innovation versus assembling existing partner technologies (indeed, the tool launched with data partnerships with Google, Meta, Amazon, TikTok and others) (marketingdive.com). WPP’s CEO Mark Read has openly championed AI in earnings calls, and the company is investing heavily. Yet an analysis by one ad tech firm pointed out that WPP’s AI investment (around $317 million in 2024 on a new system dubbed “IQ”) amounts to only about $3,000 per employee – small relative to what true AI development costs (bionic-ads.com). By contrast, pure tech players like OpenAI are spending billions on R&D. This suggests that despite the splashy announcements, WPP (and peers) may still be dipping a toe in AI rather than fully transforming. The risk is that marketing rhetoric races ahead of actual capability. If Open Intelligence doesn’t meet clients’ high expectations (set by calling it AI-powered and “first-of-its-kind”), WPP could be accused of AI washing.
Publicis Groupe: The French advertising giant has been loudly beating the AI drum – arguably both as pioneer and showman. Publicis actually started an AI initiative back in 2017 with an internal professional assistant platform called “Marcel,” for which it was initially mocked (digiday.com). Fast forward to 2023–24, and Publicis is claiming the last laugh: they announced a sweeping “AI strategy” and even credit it for record stock prices (digiday.com). In early 2024, Publicis unveiled a platform named “CoreAI” and committed €300 million (about $325M) over three years to AI development (digiday.com). CoreAI is described as an “intelligent system” spanning the entire company – integrating “trillions of data points” from consumer profiles, media touchpoints, and creative assets into various AI models (digiday.com). Publicis boasts that CoreAI will power everything from insights and strategy to media planning, creative production, and software development. They have lined up partnerships with OpenAI, Adobe, Microsoft, and more to incorporate the latest AI tools. On the surface, this sounds like an organization truly embracing AI (indeed, 50% of their 2024 AI budget is for hiring and training people, which is a good sign of building real capability). However, critics might note that Publicis’ messaging is extremely grandiose – phrases like “super powering [employees] across 5 key disciplines” and leveraging a dataset of 2.3 billion consumer profiles abound. There’s a risk that hype runs ahead of outcomes. For instance, claiming that AI is now at Publicis’ “core” invites scrutiny: will clients actually see noticeably improved results from these AI systems, or is it largely an internal PR exercise? The company will need to demonstrate tangible successes to avoid the stain of AI washing.
Publicis’ leadership, for their part, insist they’ve been in the AI game “for a long time” and that now “everyone’s getting on their AI game”– implying they have substance behind the talk. Time will tell if CoreAI lives up to its billing, but Publicis has certainly marketed its AI prowess as aggressively as anyone.
Omnicom (Omnicom Media Group): U.S.-based Omnicom has taken a slightly different tack, focusing on partnerships and incremental tools. In 2023, Omnicom launched “Omni Assist,” a generative AI assistant integrated into its Omni marketing platform, developed via a partnership with OpenAI’s GPT models (phdmedia.com, prnewswire.com). Omni Assist is supposed to help agency teams automate tasks like compiling audience insights and drafting media plans (phdmedia.com). Omnicom also inked deals with Adobe, AWS, and Google to get first access to new generative AI and machine learning services for advertising (investor.omnicomgroup.com, marketingbrew.com). These moves position Omnicom as a facilitator: rather than building entire AI systems from scratch, it’s leveraging big tech’s AI within its workflows. There’s less bombast in their announcements compared to WPP or Publicis, but the risk of AI washing still exists if Omnicom overstates the uniqueness of its offerings. For example, claiming “first-mover” status or implying exclusive capabilities that are essentially the same GPT or Adobe Firefly tools others can use might be seen as puffery. Omnicom’s CEO John Wren has publicly stated that he doesn’t see AI as “ruining” ad agencies but rather changing them – suggesting a more measured outlook. Interestingly, in late 2024 Omnicom moved to acquire Interpublic Group (IPG), another top ad holding company, in a bid to scale up against competition (including competition from AI-enabled Big Tech platforms). The merger announcement explicitly cited the need to “better position [the combined company] against the rise of AI” and indicated that efficiency and technology investment were driving the deal. This underscores how seriously the big firms take the AI trend – but also how far behind they perhaps feel. Omnicom and IPG presumably concluded that together they have a better shot at marshaling the data and talent needed to implement AI at scale.
If they simply merge and say AI is a priority but don’t execute, the combined giant could still fall victim to the same skepticism.
Dentsu: The Japanese-headquartered network (which also spans global operations, including the former Dentsu Aegis in Europe/US) has been vocal about its focus on technology. Dentsu has publicized an “AI for Growth” vision in its home market, assembling specialized AI teams and showcasing AI-enabled creative work (for example, demonstrations at Cannes Lions festival) (dentsu.co.jp). They’ve also integrated AI features into their products; for instance, in Latin America, Dentsu developed a secure AI content creation platform called “Playground” for clients, highlighting real impact on processes rather than hype (lbbonline.com). While Dentsu’s communications often emphasize innovation with purpose and “new ways of working” with tech (brands.dentsu.com), it’s less clear how much of this is deep capability versus marketing gloss. One concrete sign of commitment: Dentsu has invested heavily in data through acquisitions (e.g. the purchase of Merkle, a data-driven marketing agency) and could embed AI in those offerings. However, like others, Dentsu faces the challenge of upskilling thousands of traditional ad staff into AI-savvy operators. They have recognized internally that AI is “rapidly transforming workflows… from data collection and analysis to ideation” (group.dentsu.com). If Dentsu’s training and internal adoption lag behind their PR, there’s potential for AI washing accusations. That said, Dentsu’s approach appears to put a lot of emphasis on practical use-cases (e.g., automating certain client reporting, using AI in consumer research) – possibly indicating a focus on real implementation.
Havas: A slightly smaller holding company (part of Vivendi) with a strong European presence, Havas has also jumped on the AI bandwagon recently. In mid-2024, Havas announced a new global strategy called “Converged” which includes a “cutting-edge Operating System powered by data, technology and AI, with creativity at its core.” (havas.com). They earmarked €400 million (2024–2027) for investments in data and AI under this plan (havas.com). Converged is meant to “unlock the full potential” of Havas’ capabilities across media, creative, and health units, delivering tailor-made solutions to clients (havas.com). In practical terms, Havas is integrating its various services via this data/tech backbone, and by late 2024 they started talking up “Havas AI” – a dedicated AI offering for clients with proprietary tools and consulting services to help them “make the most of this transformative technology.” Clearly, Havas does not want to be seen as lagging in the AI race. But given its smaller scale, one might question whether Havas is truly developing proprietary AI or mostly packaging partnerships. Their press releases highlight new appointments (like a Global Chief Data & Technology Officer) and acquisitions of data analytics firms (havas.com), which indicate they are shoring up talent. Still, phrases like “powered by AI” in a broad corporate strategy can be vague to the point of sounding like buzzword compliance. Havas will need to prove that Converged isn’t just a tagline. Thus far, they have shown some results (client wins attributed to the new approach), but the true test will be whether Converged’s AI-driven OS materially improves campaign outcomes for brands. If not, it could be written off as marketing rhetoric – a case of AI washing aimed to assure stakeholders that Havas is keeping up with the big boys.
Independent Agencies (e.g. Local Planet and others): It’s not only the holding companies; independent media agencies are also invoking AI in their pitches. Local Planet, a global network of indie agencies, markets itself as “the genuine alternative” and prominently features AI in its services. On Local Planet’s website, they tout building “proprietary AI-fueled technology” – tools named Gauss and AdMachina – to enhance keyword creation and bidding strategies in digital campaigns. They also claim an “AI powered, integrated capability” in retail media, leveraging retailer data for clients. These are strong claims for a private network; if Local Planet really does have home-grown AI tech that can beat the major firms’ offerings, that’s significant. It’s just as possible that these tools are relatively basic automation given an AI spin – and without a detailed audit, it’s hard to know whether they are genuinely part of Local Planet’s core capabilities or are subcontracted to Making Science, a publicly listed Spanish company – with a Google-only offering – that now controls a significant part of the “Independent Network”. Many other independents and ad-tech startups similarly advertise AI-based optimization, personalized targeting algorithms, and the like – because they know it attracts clients. The danger, again, is that small agencies may oversell their data science chops. Unlike holding companies, independents might not have the resources to hire large AI teams, so their “proprietary AI” might be a thin layer over third-party software.
On the other hand, some independents position themselves against the AI washing trend by emphasizing transparency and expertise. One example is Remotive Media, an independent European agency built on the pillars of Bedrock, a data science pioneer in the media industry launched in 2018. A growing number of AI-expert independents, part of the so-called Post-Digital era, are advocating for privacy-first, humanized, honest data practices and against AI washing. These new yet experienced players are evidence of a rising segment of challenger agencies taking a skeptical stance. They argue that clients’ trust is better earned through honest capability than through grandiose AI claims. They call out larger competitors for confusing jargon and opaque practices, and instead focus on demystifying AI for clients and building real value hand in hand with them – learning together, and working alongside academia and university researchers in Physics, Maths and other STEM disciplines, as well as the humanities: philosophers, sociologists, psychologists and the arts. In the European market especially, where regulatory scrutiny and client demand for transparency are high, this approach can be a real differentiator.
In summary, virtually every player in the media agency industry now talks about AI – but the substance behind the talk varies widely. Some are making genuine strides in data science, while others may be repackaging old tools with a new AI label. The ones engaging in AI washing tend to share a telltale trait: vagueness and lack of detail in how their AI actually works. As a savvy client or observer, one should listen for concrete examples (e.g. an AI system that “increased conversion by X% by doing Y”) versus just buzzwords (e.g. “uses AI to supercharge your marketing” with no further explanation). The more an agency leans on the word “AI” without providing transparency, the more likely it’s indulging in AI washing.
Lack of Data Science Understanding: The Heart of the Problem
Why do so many agencies slip into AI washing? A core reason is the industry’s skill and knowledge gap in data science. Historically, advertising agencies – even media-focused ones – were not built like tech companies. Their strengths were in client service, negotiations, creativity, and strategy, not in software development or statistics. As a result, many agencies entered the digital era with fragmented data systems and a dearth of technical talent. Even today, one of the primary reasons agencies aren’t ready for AI is that their data infrastructure is not fit for purpose. Data is often siloed across departments and tools; different markets or teams use different software, and there may not be a clean, unified database to feed an AI model. Remarkably, a lot of media planning and buying data in agencies still lives in basic Excel spreadsheets rather than robust databases. As one industry veteran quipped, when data is “incarcerated in spreadsheets,” it’s effectively unusable for modern AI analyses. An AI can’t easily learn from a mess of spreadsheets with inconsistent formats and missing context. Trying to force AI into that scenario “is a recipe for disaster” – it increases the risk of errors or nonsensical outputs.
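To make the spreadsheet problem concrete, here is a minimal hypothetical sketch in Python using pandas. The market names, column headings and figures are invented for illustration; the point is that two markets exporting the same campaign facts with different column names, date formats and units cannot simply be stacked together – the result is a half-empty table that no model can learn from without costly schema cleanup first:

```python
import pandas as pd

# Hypothetical exports from two markets' planning spreadsheets:
# the same facts, but inconsistent column names, date formats and units.
uk = pd.DataFrame({
    "Campaign": ["Spring Launch"],
    "Start Date": ["01/03/2024"],   # day/month/year
    "Spend (GBP k)": [120],         # thousands of pounds
})
us = pd.DataFrame({
    "campaign_name": ["Spring Launch"],
    "start": ["2024-03-01"],        # ISO date format
    "budget_usd": [150000],         # absolute dollars
})

# Naive consolidation: none of the columns line up, so every
# field is missing for one market or the other.
combined = pd.concat([uk, us], ignore_index=True)
print(combined.shape)                    # (2, 6) - six columns instead of three
print(int(combined.isna().sum().sum()))  # 6 - half of all cells are empty
```

Imposing a unified schema (one campaign identifier, one date format, one currency) is exactly the unglamorous data-engineering work that has to happen before any model can be trained – and exactly the work that AI washing tends to skip over.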
In addition to infrastructure issues, there’s a human capital issue: agencies have excellent media professionals and creatives, but most lack in-house data scientists, machine learning engineers, and AI-savvy analysts. The advertising workforce hasn’t historically required coding in Python or building AI models. Now suddenly those skills are in demand, and agencies are scrambling to upskill or hire – but they’re competing with tech giants for a limited pool of talent. Thus, a significant AI skills gap exists in many agencies. This can lead to internal misalignment: marketing teams are eager to pitch AI capabilities externally (because it sounds impressive), while the technical teams know how far they still have to go to build those capabilities (emergingtechbrew.com). Monetate’s AI experts noted that often “marketing departments may rush to promote capabilities that engineering teams haven’t fully developed”, leading to overstated or even false claims (monetate.com). It’s usually not born of malice – rather, it’s over-enthusiasm colliding with ignorance. Executives read headlines about AI revolutionizing everything and feel pressure not to be left behind, so they tell their teams “we need AI in our offerings now”. The result can be press releases about “AI-powered platforms” that, under the hood, are still beta-stage or only automate a small part of the process.
This lack of deep understanding of what real AI entails also means agencies might not appreciate the limitations and risks of AI. For instance, a management team that doesn’t include experienced data scientists might assume an “AI” tool is infallible and push it directly to clients without proper validation. They might not test for bias, errors, or edge cases. They also may not set up the necessary human oversight for AI outputs. In media, this could manifest as an AI-driven buying platform that no one fully understands – not even the agency’s staff – making decisions about where to spend millions in client budget. That black-box nature exacerbates opacity in media planning. One of the longstanding concerns in programmatic advertising is transparency: clients often struggle to know where their ads run, what fees are taken, and why certain buying decisions are made. AI, if used properly, could improve transparency by analyzing supply chains. But in practice, many AI-based systems are complex and proprietary, making it even harder for clients to trace ad placements or understand decision logic. As a digital consultancy noted, “transparency remains one of the biggest challenges in AI-driven ad buying”, with common issues including unclear decision-making processes, hidden costs, and difficulty tracing where ads actually appeared. In the worst cases, agencies might hide behind “the algorithm” to deflect questions – effectively saying, “Our AI knows best, trust us,” while not actually disclosing details. This opaque media buying is dangerous: it can mask conflicts of interest (e.g. an agency’s AI favoring media that gives the agency kickbacks), it can allow mistakes to go undetected, and it leaves clients in the dark about their own campaigns.
All of these factors – poor data foundations, skills gaps, and opacity – create an environment where AI washing thrives. Instead of confronting the tough work of improving data quality or hiring expensive technical talent, some agencies find it easier to market an illusion of AI capability. They bank on the fact that clients might not be able to tell the difference, at least not immediately. However, this short-term approach is brittle. Once results come in or deeper questions are asked, the truth comes to light. In contrast, agencies that invest genuinely in understanding data science (even if that means admitting certain AI features are “coming soon” rather than available today) will be better positioned to deliver real value and sustain trust.
The Opaque Impact on Clients and Society
AI washing in media agencies doesn’t only affect abstract notions of trust or innovation – it has tangible impacts on clients’ businesses and potentially on society at large. For clients (the brands and marketers), the rise of opaque, hyped-up “AI” in media planning can lead to suboptimal or even harmful outcomes. Consider a brand entrusting their multi-million dollar ad budget to an agency’s proprietary AI platform under the promise that it will “maximize ROI with sophisticated machine learning.” If that platform isn’t truly up to par, the brand could see poor campaign results – maybe the AI bids on the wrong audiences or spreads money too thin across channels based on a flawed model. Because the process is opaque, the client might not realize why performance is lagging, and the agency might be reluctant to admit the tool’s shortcomings (since they sold it as AI-advanced). This knowledge asymmetry leaves the client unable to course-correct effectively. In the best case, it’s simply wasted opportunity and budget; in the worst case, it could damage the brand (imagine an AI that inadvertently serves ads on unsavory or brand-unsafe content – the client wouldn’t know until a PR crisis hits).
Furthermore, misused AI can amplify biases or privacy infringements, causing societal harm. If a media agency’s AI targeting algorithm is not carefully audited, it might start optimizing in ways that, say, exclude certain demographic groups from seeing housing or employment ads (a form of illegal discrimination), or it might exploit consumer data without proper consent. Agencies claiming “our AI will find you the perfect audience” might not disclose that they’re blending all sorts of personal data signals in ways regulators would frown upon. Society depends on advertising being done within ethical guardrails – when AI washing leads to “black box” systems, accountability for those societal impacts erodes. That’s why watchdog organizations are stepping up (as discussed below): to ensure that the pursuit of AI-enabled efficiency doesn’t trample consumer rights or social values.
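The kind of audit described above need not be elaborate to catch the worst failures. As a purely illustrative sketch (not drawn from any of the sources), a first-pass fairness check on ad delivery logs might simply compare exposure rates across demographic groups and flag large gaps for human review:

```python
from collections import Counter

def exposure_rates(impressions):
    """Share of each group's eligible audience that was actually shown the ad.

    `impressions` is a list of (group, was_shown) tuples, e.g. from delivery
    logs joined with (properly consented) audience data.
    """
    shown, total = Counter(), Counter()
    for group, was_shown in impressions:
        total[group] += 1
        if was_shown:
            shown[group] += 1
    return {g: shown[g] / total[g] for g in total}

def parity_gap(rates):
    """Ratio of lowest to highest exposure rate; values far below 1.0 flag
    groups the targeting model may be systematically excluding."""
    return min(rates.values()) / max(rates.values())

# Hypothetical log: group A saw the housing ad far more often than group B.
log = [("A", True)] * 80 + [("A", False)] * 20 + \
      [("B", True)] * 30 + [("B", False)] * 70
rates = exposure_rates(log)
print(rates)              # {'A': 0.8, 'B': 0.3}
print(parity_gap(rates))  # 0.375 – a large gap worth investigating
```

A simple demographic-parity check like this won’t prove an algorithm is fair, but run routinely against real delivery data it would surface exactly the kind of silent exclusion (housing or employment ads never reaching certain groups) that regulators treat as discrimination.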
On a macro level, there’s also an economic implication. Advertising is a huge driver of economic activity, and allocating ad spend efficiently helps match products with interested consumers, fueling growth. If AI can genuinely improve that matching, everyone stands to benefit. But if we go through a cycle in which many companies deploy half-baked “AI” solutions that don’t really work, billions could be misallocated. It’s reminiscent of past bubbles – resources flowing into hype instead of productive use. A prolonged period of hype followed by disillusionment (the “AI bubble bursts” scenario) could make companies overly cautious about investing in legitimate AI solutions down the line, slowing the overall pace of innovation. In other words, AI washing might set the industry back by creating a pendulum swing – over-enthusiasm today, backlash tomorrow – rather than steady, honest progress.
The Critical Role of Watchdogs and Regulation
As AI washing proliferates, regulatory and industry watchdogs have become crucial in upholding transparency and integrity in marketing. These bodies are working to set standards and, when necessary, punish egregious behavior, in order to protect both brands and consumers from the negative effects we’ve outlined.
In the United States, the Association of National Advertisers (ANA) – which represents the interests of client-side marketers (the brands) – has taken a proactive stance. In mid-2024, the ANA released a comprehensive “Ethics Code of Marketing Best Practices” that, among other things, directly addresses AI usage. A key guideline is AI Transparency: the ANA insists that consumers must be informed when generative AI is used in ads, via clear disclosures like disclaimers or watermarks (adtonos.com). For example, if an agency creates a realistic-sounding radio ad using an AI voice clone of a celebrity, the code would require labeling it as such, so consumers aren’t misled into thinking the celebrity actually endorsed the product. In political advertising, if any AI-generated content is used (say a fake image or an altered quote), it must be clearly disclosed to avoid deception. The ANA’s code also calls for human oversight of AI – meaning companies shouldn’t just let algorithms run wild without human checks, especially given AI’s propensity to err or produce biased content. By establishing these principles, the ANA is effectively pushing back on AI washing by saying: it’s not acceptable to misrepresent AI’s role, and you can’t abdicate responsibility to “the AI”. They are encouraging brands (and by extension their agencies) to be candid about what AI is doing and to ensure it’s being used ethically. This protects brands from the reputational risk of being caught in a lie about AI and helps maintain consumer trust in advertising content.
Government regulators have even sharper teeth. We’ve discussed the FTC and SEC already in terms of enforcement actions. To elaborate, the FTC’s Operation AI Comply in September 2024 was a clear signal that using AI “hype” as a smokescreen for fraud will be punished. The FTC Chair, Lina Khan, stated succinctly: “Using AI tools to trick, mislead, or defraud people is illegal” (ftc.gov). In those cases, companies were selling supposed AI tools that either didn’t work or were used for scams (such as fake review generators). Media agencies typically aren’t running scams of that nature, but the principle applies: if an agency advertises AI capabilities that it doesn’t actually have and thereby misleads clients into buying its services, that could be considered a deceptive business practice under FTC law. The SEC, focusing on investor protection, has warned publicly traded companies that throwing around buzzwords in earnings calls or investor materials can amount to securities fraud if it misrepresents the business. Gurbir Grewal, the SEC’s Enforcement Director, called it “old-school fraud using new-school buzzwords” and explicitly said the SEC will “police the markets against AI washing” (emergingtechbrew.com). This is highly relevant to the large ad holding companies, which are public entities – if WPP or Publicis, for instance, made claims about AI driving huge efficiencies or new revenue and those claims were knowingly overstated, they could face shareholder class actions or regulatory action when the truth comes out.
Across the Atlantic, the European Union has adopted the AI Act, a sweeping regulation to govern AI deployment, with obligations phasing in over the coming years. While it primarily focuses on high-risk AI systems (like those in healthcare or finance) and ensuring they meet requirements for transparency, safety, and non-discrimination, its existence signals that AI will not remain a regulatory wild west. The EU is also looking specifically at the labeling of AI-generated content and at disinformation. For example, the EU’s updated Code of Practice on Disinformation encourages platforms and advertisers to clearly label deepfakes and synthetic media. Media agencies operating in Europe will be expected to follow any such rules – meaning they can’t, for instance, use AI to create a fake influencer in an ad without telling the audience it’s synthetic, as that would breach EU standards. Additionally, Europe has strong consumer protection laws; an EU enforcement body could prosecute a company for unfair commercial practices if it finds AI washing that materially misleads business customers or consumers.
Even industry self-regulatory groups are in on the act. In the UK, for example, the Advertising Standards Authority (ASA) has warned advertisers about misleading claims around AI. Though no high-profile case of the ASA banning an “AI-powered” claim has emerged yet, it’s within its remit: if an ad agency says something like “our AI guarantees 50% better results” without evidence, that could be ruled unsubstantiated and thus not allowed in marketing materials. Meanwhile, BBB National Programs in the U.S. (a self-regulation body) has published guidance on AI claims, noting that “AI washing refers to overstating AI’s capabilities or ethical rigor... without sufficient evidence”, and cautioning businesses to be ready to back up any AI assertions they make.
All these watchdog efforts serve a crucial purpose: they pressure agencies and companies to be honest and responsible. The ANA gives a playbook for ethical behavior (essentially, “don’t do AI washing; if you use AI, be transparent and careful”). The FTC, SEC, and EU wield the stick of legal penalty if companies cross the line. This external oversight is critical because it helps correct the current imbalance of information. Clients might not know when they’re being fed inflated AI claims – but regulators can investigate and penalize, which in turn deters the most blatant AI washing. It also encourages a cultural shift: agencies are now more likely to consult their legal teams before making AI-related statements, and savvy clients are invoking contract clauses about transparency, asking agencies to demonstrate their AI capabilities rather than just claim them.
Ultimately, the goal of these watchdogs is to ensure the market for AI in advertising evolves in a healthy way. They want brands to get genuine value and consumers to be protected, rather than having the AI narrative be dominated by hype and deception. By calling out AI washing, regulators and industry bodies actually help legitimate AI developers – because when the fakes are punished, the real innovators can shine through. Honest players have nothing to fear from transparency requirements or truth-in-advertising rules. In fact, they benefit as trust in the ecosystem grows.
Conclusion: Toward a Future of Responsible AI in Media
The rise of AI washing in media agencies is a classic case of irrational exuberance meeting an industry ill-prepared to live up to it. On one hand, we have incredible advances in AI technology – tools that truly can transform how we target, create, and optimize advertising. On the other, we have a marketing industry that has at times leapt to claim transformation before doing the homework to achieve it. The result has been a spate of buzzword-laden promises, “AI-powered” branding, and magical thinking, much of which cannot withstand scrutiny. This is dangerous for businesses (which might waste money or lose trust), for the industry (whose credibility could be damaged), and for society (which might suffer from unchecked or misused AI).
However, the reckoning is underway. Enlightened voices within the industry are calling for a change – emphasizing knowledge, ethics, and transparency over hollow hype. Media agencies must invest in real data science capabilities, clean up their data foundations, and foster collaboration between their marketing visionaries and their technical builders. It’s time to candidly admit what AI can and cannot do. For instance, agencies should not shy away from telling a client, “This planning tool uses a statistical model (not true AI yet), but here’s how we are improving it.” In the long run, such honesty will be far more valuable to client relationships than flashy claims that fall flat.
Using AI the right way means also recognizing its limits and risks. Human oversight isn’t a nice-to-have; it’s essential. No algorithm should be left unchecked in making significant media spending decisions without expert review. “Black box” excuses (“the AI did it, we don’t know why”) won’t fly in a future where accountability is demanded at every level. Agencies should therefore make interpretability and transparency a feature of their AI deployments – sharing with clients what factors the AI considers, how it’s tested for bias, and what guardrails are in place. This demystification can actually be a selling point: an agency that educates its clients on AI, rather than baffling them with jargon, will stand out as a trustworthy partner.
Moreover, a cultural shift is needed from the top. The industry’s leaders – holding company CEOs, agency heads – must temper the hype and set realistic expectations. It’s encouraging to see some candid discussions emerging. For example, some agency CEOs have acknowledged that while AI will change workflows, it doesn’t mean overnight replacement of creative or strategic jobs; rather, it’s a journey of augmentation. By refocusing the narrative on solving client problems (with AI as one tool in the toolbox, not a silver bullet), agencies can realign with the fundamental purpose of advertising. As the Marketoonist cartoon humorously reminded us: “People don’t want to buy a quarter-inch drill, they want a quarter-inch hole.” In the context of AI, clients don’t ultimately care if it’s AI or not – they want better results, more efficiency, and more insight. If those can be delivered by simpler tech or human genius, that’s fine. If AI can truly deliver it, even better. But calling something AI for its own sake is missing the point.
Finally, the combined pressure from clients insisting on proof, regulators insisting on truth, and perhaps a bit of healthy skepticism from the public, will likely move the industry past this frothy “Peak of Inflated Expectations” and into a more mature phase. AI in media has a bright future if handled responsibly. Imagine transparent algorithms that help eliminate ad waste, AI tools that free up human creatives to focus on big ideas, and predictive models that help brands anticipate consumer needs without crossing privacy lines. All of that is within reach, but only if we navigate through the current hype carefully. The trough of disillusionment need not be too deep if lessons are learned now.
In summary, AI washing is a cautionary tale of what happens when innovation loses its tether to reality. The media and advertising community – global, but especially in the US and Europe where these issues are front and center – has the opportunity to course-correct. By championing education over exaggeration, integrity over illusions, and outcomes over optics, agencies can ensure that AI becomes a genuine force for good in the industry. The alternative is a “stain on the industry” that could be impossible to scrub out. The choice, and challenge, lies with all of us in the field to keep ourselves honest and aim for substance in this transformative AI era.
Sources:
Goovaerts, D. (2024). “AI washing could leave a big stain on the industry.” Fierce Telecom – Fierce Network (fierce-network.com)
Kulp, P. (2024). “Hype responsibly: Why ‘AI washing’ can get companies in trouble.” Morning Brew – Emerging Tech Brew (emergingtechbrew.com)
FTC Press Release. (Sept 25, 2024). “FTC Announces Crackdown on Deceptive AI Claims and Schemes” – Operation AI Comply cases (ftc.gov)
Bionic Advertising Systems Blog. (Mar 2024). “Most Ad Agencies Are Not Ready for AI.” (Highlights agency data infrastructure and skills gaps) (bionic-ads.com)
Marketing Dive. (June 5, 2025). “WPP Media launches AI-driven tool to push beyond ID-based targeting.” (Open Intelligence launch) (marketingdive.com)
Digiday. (Jan 25, 2024). “Publicis Groupe debuts new CoreAI platform and €300 million AI investment.” (digiday.com)
Reuters (via ET CIO). (Dec 10, 2024). “Omnicom takes aim at Big Tech, AI era with IPG deal.” (Merger rationale and AI tools pressure) (cio.economictimes.indiatimes.com)
Havas Press Release. (Mar 5, 2025). “2024 marks a historic year… for Havas” – Converged strategy and Havas AI details (havas.com)
Lounge Lizard Blog. (June 13, 2025). “AI in Ad Buying: Navigating Efficiency and Transparency Challenges.” (loungelizard.com)
AdTonos. (Aug 1, 2024). “ANA Releases New Ethics Code for Marketing – Introducing AI Guidelines.” (adtonos.com)