Artificial intelligence is transforming the advertising landscape, enabling the rapid production, review, and delivery of promotional content. Marketers can now generate copy and visuals and build audience targeting in seconds, significantly enhancing efficiency and creativity. However, this technological advance also introduces new compliance and reputational risks that brands must navigate carefully.
According to internal projections reported by Reuters, Meta estimated that roughly ten percent of its 2024 advertising revenue would come from promotions linked to scams or prohibited goods, with billions of high-risk ads shown to users each day. The figure underscores the dual nature of the problem: some actors misuse AI deliberately, while well-meaning advertisers can inadvertently breach regulatory standards through AI-generated content.
The consequences of AI misuse can be severe for brands. The 2024 incident involving the “Glasgow Willy Wonka Experience” illustrates the fallout when AI-generated visuals set expectations an event cannot meet. The gulf between the promotional imagery and the actual event sparked public outrage, police involvement, and a swift shutdown. Such examples show that inaccuracies in AI-generated content can have far-reaching implications.
Moreover, the use of AI in personalized advertising raises legal concerns, particularly when content reaches unintended audiences. Ads for alcohol or gambling may inadvertently reach minors, for example, while sensitive material could be delivered to vulnerable individuals who have opted out. Intellectual property and data protection issues also arise when AI tools rely on external models or datasets containing protected works or personal data.
To mitigate these risks, advertisers must take a proactive approach to compliance in AI-driven advertising. First, embedding AI responsibilities in contractual arrangements is crucial. Agreements with agencies, freelancers, and technology partners should set out how AI will be used, who is responsible for reviewing outputs, and who bears liability for errors. That clarity reduces uncertainty if a dispute arises.
Firms must also review content thoroughly to ensure that AI-generated material does not create a false impression. If AI alters the appearance, scale, or functionality of a product, it may be prudent to add a brief disclosure explaining how the material was produced. Such transparency helps maintain consumer trust.
When using digital characters in advertising, brands should exercise caution and make clear whether these figures are synthetic. If a virtual character is shown testing a product, advertisers must consider whether that test could genuinely have taken place; if not, a different format may be more appropriate to avoid misleading consumers.
Campaigns involving age-restricted or sensitive products should undergo rigorous legal review. Targeting tools can sometimes produce unintended audience segments, so close oversight is essential to prevent inappropriate placements that could lead to reputational damage or regulatory action. This extra scrutiny is vital in maintaining compliance with advertising laws.
Disclosure is another critical area for advertisers. The Competition and Markets Authority (CMA) stresses that brands must not mislead consumers and must provide information that could influence their decisions. Where there is a realistic chance of confusion, brands should tell consumers when they are interacting with AI rather than a human. Prominent disclaimers are not always necessary, but advertisers should never present AI-generated figures as real people.
A similar approach applies to AI-generated imagery. If the artificial nature of the content is not readily apparent and could affect a viewer’s understanding of the product, disclosure may be advisable. Such practices promote ethical advertising and ensure consumers are fully informed.
The Advertising Standards Authority (ASA) is actively using AI monitoring tools to identify potentially rule-breaching adverts. Content related to high-priority issues is reviewed by specialists, with problematic cases leading to investigations or rulings. The regulatory landscape surrounding AI disclosure in the UK is still evolving; while the CMA prioritizes consumer clarity, the ASA focuses on preventing misleading content. Over time, increased enforcement and guidance are expected to create a more consistent regulatory environment.
As AI continues to streamline advertising processes, brands must establish clear internal protocols to mitigate risks. By embedding AI responsibilities in contracts, rigorously reviewing AI-generated content, treating digital characters with caution, applying enhanced checks to regulated categories, and staying informed about developments from the CMA and ASA, advertisers can navigate this complex landscape responsibly. These measures will not only protect brands but also foster a more ethical and transparent advertising ecosystem.