
The Role of Biometric Data in the Spread of AI Misinformation

Artificial intelligence (AI) is everywhere in modern life. From home devices like Amazon’s Alexa to search engines predicting what’s next on our shopping lists, AI has infiltrated not just our internet usage but our homes, schools, and governments. Though this integration has improved many aspects of human life, there are concerns that, without monitoring, AI will be put to harmful uses.

Many cases of AI spreading misinformation have come to light in recent years, affecting everyday people, politicians, and even court officials. In particular, AI that mimics people’s identifying features, such as their appearance, mannerisms, and even their voices, poses a safety concern. These characteristics form part of a person’s biometrics, defined by the Office of the Privacy Commissioner of Canada as “a range of techniques, devices and systems that enable machines to recognize individuals, or confirm or authenticate their identities.” A biometric passport, for example, stores identifying data about its holder, such as a facial image.

Computer science researcher Stuart Russell, who co-authored a leading AI textbook in the 1990s, told New York Magazine in 2023 that AI’s inability to distinguish truth from lies will have “significant negative effects.” Russell called on teams to pause the development of large AI projects, stating that these technologies should be developed only with confidence that their effects will be “positive” and any risks will be “manageable.”

(Image: person holding a British passport. Source: Pexels)

The Use of AI to Target Politicians and Their Communities 

One of the most critical sectors impacted by AI misinformation is politics. Political figures have been subject to “deepfake” controversies that make it seem like they said or did something that never happened. Deepfakes are false but realistic-looking images or videos of a person, created to deceive viewers.

This type of AI usage falls under generative AI, defined by the Canadian government as a type of AI that produces content like “text, audio, code, videos, and images” based on short prompts. In the context of politics, deepfakes can target a public figure to influence the public’s perception of that person. Prime Minister Justin Trudeau has been the subject of deepfake images in the past, including one instance of him seemingly reading an anti-Trudeau book in 2022, and one deepfake “endorsement” of an investment platform in 2023.

Building on how deepfakes can harm a politician’s image, AI use in political campaigns can also erode public trust. In June 2023, an image of a three-armed woman appeared in then-Toronto mayoral candidate Anthony Furey’s campaign. Furey’s team confirmed to CTV News Toronto that it had used AI to generate the image, saying “it wasn’t intentional.” Undisclosed AI-generated content in political campaigns risks misleading viewers and calls the ethics of political teams into question: if campaigns slowly become riddled with AI-generated content, how can communities tell which of a politician’s promises are genuine?

The Occupation of AI in the Voice Acting Industry

(Image: microphone on a tripod attached to a laptop in a studio. Source: Pexels)

AI certainly has a role in the creative field. In the voice acting industry, apprehension is rising about actors’ trademark voices and likenesses being used without consent. This technology doesn’t just create more competition for jobs: AI has made it increasingly easy to copy and mimic the voices of celebrities, actors, and public figures in projects that can harm their reputations or affect their future job prospects.

For tech companies and producers, AI voice generators such as ElevenLabs and VocaliD can create synthetic voices from a few hours’ worth of voice demos — an exciting, cost-saving invention for producers. For actors, however, this technology threatens their livelihood.

In 2021, Bev Standing sued TikTok for using her voice in its text-to-speech feature without her consent or additional compensation. Similarly, in 2011, with the introduction of the iPhone 4S, Susan Bennett discovered that she had become the voice of Apple’s virtual personal assistant, Siri, without her knowledge or consent. Bennett had been paid for the initial recordings, made with a different company, but did not know her voice would be bought by Apple and become one of the most recognizable voices in North America. To this day, Bennett has not been compensated by Apple for the use of her voice.

Though the recording of a voice can be copyrighted, laws against the AI synthesis of voices are still catching up. A 2023 Forbes article noted that “even though voices are inherently distinguishable, they aren’t protected as intellectual property.”

Though AI cannot yet perfectly replicate voices, given the uniqueness of human speech, its synthetic voices are edging closer to realism with each advance. These generated voices can easily be mishandled, used to spread misinformation, and muddy the reputation of the person whose voice was copied.

(Image: people inside a voting centre. Source: Pexels)

With more AI scams popping up, some governments are trying to mitigate the harmful uses of this technology. In October 2023, the Canadian government, alongside many other nations, signed a “global declaration on information integrity online.” The “information integrity” it seeks to create and preserve is “accurate, trustworthy, and reliable information.” Signatories commit to taking steps to monitor misinformation and protect users’ human rights.

Canada, however, lacks more specific goals for supporting Canadians in this evolving and concerning media space. On Feb. 4, Yoshua Bengio, a renowned Canadian AI researcher, told Parliament that it needs to enact laws regulating AI as soon as possible. Bengio sees AI exceeding human intelligence in the not-so-distant future, and asked lawmakers to consider the nefarious ways this technology could impact society.

South of the border, the United States recently launched a more specific approach to addressing one aspect of misinformation. In January, the U.S. Federal Trade Commission (FTC) launched the “FTC Voice Cloning Challenge,” which asks for Americans’ input on tackling “AI-enabled voice cloning harms.” A person’s voice is another identifier that technology can manipulate, with applications ranging from financial fraud to making a voice actor say lines they never spoke.

(Image: Parliament Hill. Source: Pexels)


Artificial intelligence has become a constant in our everyday lives, and with these developments, our dependency on technology has grown more pronounced than ever. AI has helped advance the digitalization of our world by analyzing, organizing, and processing information faster than humanly possible.

Simultaneously, AI has become a disruptive force in many industries, creating opportunities for misuse while copyright laws catch up. From film to politics to academia, industries are adopting this new technology to improve their productivity. Unfortunately, many people within these fields are finding that AI threatens their line of work.

AI is currently moving in a concerning direction, masterfully churning out terrifyingly plausible imitations of people’s voices, likenesses, and characteristics — from voice actors having their voices stolen and replicated to politicians and celebrities featured in videos doing and saying things they never did. As more reports of biometric data misuse surface, the ethics and regulation of AI are immensely important topics that must be navigated within the next few years.

(Image: laptop computer on a wooden desk. Source: Pexels)


Want to learn more about INKspire? Check out our organization's website.