Elections watchdog warned AI presents 'high' risk in current campaign: internal documents
The note flags the potential misuse of AI tools, including deepfakes, to spread misinformation and manipulate public opinion, raising concerns about transparency, accountability and the protection of democratic processes. The watchdog says it is closely monitoring the situation and working to address any threats AI may pose as regulators try to keep pace with emerging technologies.

Introduction
An internal briefing note prepared for Canada's elections watchdog classifies the use of artificial intelligence as a "high" risk for the ongoing election campaign. The note was prepared roughly a month before the campaign kicked off for Commissioner of Canada Elections Caroline Simard, the independent officer responsible for enforcing the Canada Elections Act, whose powers include fining people for violations and laying charges for serious offences.
Concerns about AI Use in Elections
The document indicates that while AI can serve legitimate purposes, the same tools could be used to break election rules. It raises specific concerns about AI tools and deepfakes, hyperrealistic faked video or audio.
The document also flags that "an increase in advertising for customized deepfake service offerings on the dark web has been observed," noting that a deepfake's impact can depend on how widely it circulates.
Expert Opinions on AI in Elections
Michael Litchfield, director of the AI risk and regulation lab at the University of Victoria, said it can be difficult to identify and pursue people who use AI to break election rules, adding that AI amplifies existing threats by making it easy to create content that could violate the Elections Act. Fenwick McKelvey, an assistant professor of information and communication technology policy at Concordia University, said AI tools can generate disinformation faster than it can be debunked, creating challenges for the media environment.
Regulations and Challenges
The briefing note says Canada has generally relied on a "self-regulation" approach to AI, leaving oversight largely in the hands of the tech industry, and questions how effective that approach has been. Bill C-27, which would have regulated some uses of AI, was introduced but did not become law before the election was called.
There are also concerns that even with a regulatory framework in place, malicious actors would not necessarily follow the rules. The Communications Security Establishment (CSE) has highlighted the use of AI by known hostile actors to fuel disinformation campaigns.
Impact on Elections
AI has already been used to spread misinformation during the current campaign. The note also raises concerns that AI could drive an increase in "news avoidance" and a decline in trust in online content.
The briefing note warned that the use of AI is likely to generate numerous complaints during the campaign, which could complicate the office's assessments. Even when it breaks no rules, benign use of AI can change how campaigns are run, as seen with AI-generated content posted by political figures.