The ASA aims for high ethical standards, delivering regulation that is transparent, proportionate, targeted, consistent and accountable.
Increasingly we are making use of machine learning and AI-based automated systems to help deal with the challenges of regulating digital advertising. The scale of the modern advertising ecosystem can make it a struggle to search through vast numbers of ads, or to manage large volumes of complaints. We are already seeing that automated systems can assist in these cases, enabling our experts to deliver more effective regulation.
For example, the ASA is currently working to ensure claims about climate impacts in ads are clear and don’t mislead consumers. The data science team has been able to search large volumes of ads from carbon-intensive industries, identify those that make climate claims, and put them in front of experts for further assessment, allowing the ASA to act rapidly where appropriate.
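To make the shape of that workflow concrete, here is a minimal sketch in Python. The keyword list, the `flag_for_review` function and the sample ads are illustrative assumptions, not the ASA’s actual tooling; a production system would likely use a trained classifier rather than simple keyword matching:

```python
# Hypothetical sketch: surface ads that may contain climate claims
# so human experts can assess them. Keywords are illustrative only.

CLIMATE_TERMS = {
    "carbon neutral", "net zero", "zero emissions",
    "eco-friendly", "sustainable", "green energy",
}

def flag_for_review(ad_text: str) -> bool:
    """Return True if the ad text mentions a possible climate claim."""
    text = ad_text.lower()
    return any(term in text for term in CLIMATE_TERMS)

ads = [
    "Our new SUV range is carbon neutral from day one.",
    "Half price on all sofas this weekend only.",
]

# Only flagged ads are put in front of human experts for assessment.
review_queue = [ad for ad in ads if flag_for_review(ad)]
print(review_queue)  # only the SUV ad is surfaced
```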
The use of AI and machine learning tools has the potential to significantly improve our regulation, but we also recognise that this comes with risks. AI-based systems can get things wrong, for example making decisions a human expert wouldn’t agree with, or decisions that are unfair towards individuals or groups. The efficiency improvements we might gain from these systems aren’t worth it if they undermine our consistency and transparency, or lead to unfair outcomes. We are committed to ensuring these potential downsides are considered carefully when using these innovative technologies.
The risks associated with AI-based systems vary depending on the application. There is no silver bullet for mitigating them, and it’s important we explore the full range of ways we can ensure our work is best aligned with our principles. There are technical measures that can be implemented in the construction of machine learning models, and we will pursue these. But we believe it is as important to think about how the overall systems we build influence outcomes for consumers and advertisers.
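As one example of the kind of technical measure available, a team can check whether a model’s error rates differ across groups, such as advertiser sectors. The sketch below is a hypothetical illustration under assumed data, not a description of the ASA’s systems:

```python
# Hypothetical fairness check: how often are compliant ads wrongly
# flagged, broken down by group? Data and group labels are illustrative.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_flag, truly_problematic).
    Returns the rate at which compliant ads were wrongly flagged,
    per group."""
    flagged = defaultdict(int)
    compliant = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:              # the ad was actually compliant
            compliant[group] += 1
            if predicted:           # ...but the model flagged it anyway
                flagged[group] += 1
    return {g: flagged[g] / n for g, n in compliant.items() if n}

records = [
    ("energy", True, False), ("energy", False, False),
    ("fashion", False, False), ("fashion", False, False),
]
print(false_positive_rate_by_group(records))  # {'energy': 0.5, 'fashion': 0.0}
```

A large gap between groups would prompt action, such as retraining, adjusted thresholds or extra human oversight.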
The core of our approach is to ensure that every production AI-based system intended to influence the ASA’s actions has in place a risk assessment that considers the full range of potential harms that could arise as a result. Those assessments will contain clear actions designed to mitigate any identified risks. We will review those assessments regularly to ensure they remain accurate, that the identified actions are being carried out, and that those actions are having the desired impact.
It’s also important to bear in mind that the ASA has many experts with years of experience regulating ads. Wherever possible our AI systems will focus on guiding those experts to make decisions more effectively and efficiently themselves, rather than acting in a fully automated way. Well-designed systems combining automated tools with human insight have the potential to deal with large volumes of content more efficiently, while making the most of the common sense and experience our experts bring.
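One simple way to realise that design is for the model to order the expert review queue rather than decide outcomes. The sketch below is hypothetical; the function, scores and threshold are illustrative assumptions:

```python
# Hypothetical human-in-the-loop triage: the model never decides an
# outcome, it only prioritises what human experts look at first.

def triage(scored_ads, threshold=0.5):
    """scored_ads: list of (ad_id, model_score in [0, 1]).
    Every ad still reaches a human; the model only sets the order,
    sending likely problem cases to the front of the queue."""
    ordered = sorted(scored_ads, key=lambda item: item[1], reverse=True)
    urgent = [ad_id for ad_id, score in ordered if score >= threshold]
    routine = [ad_id for ad_id, score in ordered if score < threshold]
    return urgent, routine

urgent, routine = triage([("ad-17", 0.91), ("ad-08", 0.35), ("ad-23", 0.64)])
print(urgent)   # ['ad-17', 'ad-23'] reviewed first by experts
print(routine)  # ['ad-08'] reviewed in the normal course of work
```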
Finally, we also recognise that some things should not be automated, and we should only use AI where it’s appropriate. For example, it is unlikely that AI systems will be able, any time soon, to judge how the ASA should respond to a complaint, which invariably depends on the complex interplay between the content of the ad, the nature of the targeting or medium, the wider context, and the way the audience is likely to interpret the ad.
We believe that machine learning and AI-based systems, when built and used correctly, have the potential to greatly enhance our effectiveness. Ignoring these technologies would mean missing an opportunity to deliver better outcomes. But it’s equally crucial to stay alert to the risks involved, so that we continue to deliver better regulation.