AI in political advertising: game-changer or double-edged sword?
Elections in the era of AI offer opportunities for voter targeting and personalisation, but also pose threats such as disinformation and deepfakes.

It’s the year of the AI election: political advertisers are using machine learning to target specific demographics and generative AI to create content, and there is even a literal AI candidate – AI Steve – running to become a member of parliament in the UK general election. The chatbot, which is the digital counterpart of a real candidate with the same name, Steve Endacott, aims to change how politicians interact with voters and ‘reinvent democracy’.
The technology is inescapable, and while it presents an opportunity for advertisers, it also carries risks that can deepen mistrust and hesitancy among voters. Regulation is now more important than ever to ensure that voters and online users are protected from the risks of AI in political advertising.

Credit: AI Steve
Harnessing the power of AI for voter targeting
There are numerous examples of AI being used within political campaigns, and one of its strengths is the ability to target the specific voters and demographics that political parties are after. Machine learning and data analysis can be used to build and refine audience profiles based on demographics, behaviours, and more.
In the UK, the Labour Party’s investment in ads across Meta and Google targets audiences aged 35 and above, while its TikTok activity targets younger generations. The Conservative Party, on the other hand, mostly targets men over the age of 45, with very few ads aimed at young voters at all.
According to Moseley, AI can be used in numerous ways, including identifying audiences, automating media buying and targeting, and speeding up production by building ad creatives at scale. It can also aid reporting by making sense of large datasets quickly and efficiently.
Moseley says: “AI can simplify the process of targeting demographics in vast datasets, but the critical choices still remain with people. Yes, the machines can suggest where to find efficiency and effectiveness with data, but still can't know what messages in the context of an election run will turn emotion into action.”
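To make the audience-profiling idea concrete, here is a minimal, purely illustrative sketch of segmenting a synthetic audience with k-means clustering via scikit-learn. The feature names, values, and cluster count are assumptions for the example, not any party’s or platform’s actual setup.

```python
# Hypothetical sketch: grouping a synthetic audience into segments for targeting.
# All fields and values are illustrative assumptions, not real campaign data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)

# Synthetic audience features: age, daily minutes on social media,
# and engagement rate with political content (0-1).
n_people = 500
features = np.column_stack([
    rng.integers(18, 80, n_people),   # age
    rng.integers(0, 240, n_people),   # daily social media minutes
    rng.random(n_people),             # political content engagement rate
])

# Scale features so age and minutes don't dominate the distance metric.
scaled = StandardScaler().fit_transform(features)

# Cluster the audience into a handful of segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

# Summarise each segment's average profile.
for label in range(kmeans.n_clusters):
    segment = features[kmeans.labels_ == label]
    print(f"Segment {label}: {len(segment)} people, "
          f"mean age {segment[:, 0].mean():.0f}, "
          f"mean daily minutes {segment[:, 1].mean():.0f}")
```

In practice, as Moseley notes, tools like this can suggest where the segments are, but deciding what to say to each of them remains a human judgement.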
Navigating the threat of misinformation and deepfakes
While AI can be used positively in areas such as micro-targeting and personalisation, the increased use of generative AI has led to numerous cases of deepfakes and the spread of misinformation.
In the US, numerous residents received fake robocalls that claimed to come from President Joe Biden, urging people not to vote, while in the UK fake audio clips of Labour’s Keir Starmer swearing at staffers went viral on social media. Although these fakes have been debunked by fact-checkers, they can still leave a lasting impression on voters and could even sway their decision-making.
Sergii Denysenko, CEO of performance and programmatic advertising platform MGID, tells PMW that the spread of deepfake images and videos not only threatens the integrity of political campaigns, but can also undermine democratic systems.
According to research by Thinks Insight & Strategy, 30% of British adults believe that UK elections are more likely to be ‘manipulated’ or ‘rigged’ than ‘free and fair’, and 38% agree that it would be more acceptable to question the validity of UK election results than to respect them.
Denysenko adds: “If misleading ads are allowed to gain traction, there is a real chance that voters will find it increasingly hard to tell fact from fiction. This could result in those voters losing faith in the electoral process altogether.”
To combat this, safeguards need to be in place to minimise these risks, and Denysenko emphasises that this would take a “multi-faceted approach” focusing on four main action areas:
- Transparency: applying clear labelling that distinguishes AI content from human content (see the sketch after this list);
- Committing to ethical guidelines around the use of AI;
- Pushing for clear legislation at both state and federal level; and
- Educating the public about AI applications and challenges.
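As a toy illustration of the transparency point, an ad payload could carry a machine-readable AI disclosure alongside the visible creative. This is a hypothetical sketch; the schema and field names are assumptions for the example, not any platform’s actual API.

```python
# Hypothetical sketch of a machine-readable AI-content disclosure on an ad.
# The PoliticalAd schema and its fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PoliticalAd:
    advertiser: str
    creative_text: str
    ai_generated: bool = False
    disclosures: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Return the creative with a disclosure label appended when AI was used."""
        label = " [Contains AI-generated content]" if self.ai_generated else ""
        return self.creative_text + label

ad = PoliticalAd(
    advertiser="Example Party",
    creative_text="Vote for a brighter tomorrow.",
    ai_generated=True,
    disclosures=["Generated with third-party AI tools"],
)
print(ad.render())  # -> Vote for a brighter tomorrow. [Contains AI-generated content]
```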
In response to some of the deepfakes seen in the UK, Moseley says: “We're already seeing the White House openly respond and correct heavily edited videos of Joe Biden looking lost and disjointed – which appear to be taken out of context – at public events, and we're seeing journalists immediately refute misinformation and lies at the speed they're shared.
“I’d expect this to continue with rebuttals, corrections and counterclaims shared immediately at scale as attacks spring up.”
Regulating AI: what’s being done?
With machine learning tools becoming more widely available to advertisers during sensitive periods like elections, a number of tech companies have implemented safeguards and rules around the use of AI in order to protect users from misinformation and misleading content.
In November 2023, Meta introduced a new policy requiring political advertisers, from 2024, to disclose their use of third-party AI software, to ensure the company has the right safeguards in place around generative AI.
Additionally, the tech company stated that it would bar political advertisers from using its own generative AI advertising tools. In the same month, YouTube announced that it would start requiring creators to flag synthetic content uploaded to its platform in order to avoid misleading viewers.
In January this year, OpenAI declared that politicians and political advertisers were no longer allowed to use the company’s AI tools to create content for the purpose of impersonation, including chatbots posing as political candidates, government agencies or officials.
Other forms of regulation include immediately labelling content that shares misinformation, such as X’s community feature, which Moseley highlights as an “unlikely hero” despite having been very sceptical of it at launch. However, he adds that the big question is whether voters will trust the sources of the counterclaims, or believe the facts in the first place.
On a wider level, governments and industry leaders must establish “robust regulations” and industry standards in order to mitigate the risks associated with AI, according to Zefr’s SVP EMEA, Emma Lacey. These include the EU AI Act, which establishes a regulatory and legal framework for AI within the European Union.
Lacey says: “In addition, brands and political entities should prioritise ethical AI practices to protect the integrity of their campaigns and the broader digital landscape. By fostering a safer online environment, we can ensure that AI contributes positively to political discourse rather than undermining it.”

Premium content editor Jyoti Rambhai
Reporter Reem Makari
Designer Jide Eguakan
Data projects manager Carolyn Avery
The information in this report is correct as of 21 June 2024