Candidates, campaign staff and election administrators have always adapted to new technologies, whether the rise of political ads in the mid-20th century or, more recently, the option of anonymous cryptocurrency contributions.
With the recent emergence of multiple AI programs that can produce realistic images, videos and voices in a matter of seconds, both campaigns and state policymakers are adjusting.
People have always attempted to alter or misrepresent media to influence an election. Some states have criminal impersonation laws enacted before the advent of the internet that may nonetheless apply to AI, such as Wisconsin’s §12.05, enacted in 1976.
Legislation in 2023 may use different terms, such as “deepfake,” “synthetic media” or “deceptive media,” when referring to AI. These terms all describe what people commonly think of as AI-generated content, but they can carry different implications depending on which term is used and how the statute defines it. Two bills have been enacted this year, both related to the use of AI-generated content.
The United States and other countries have seen a surge in foreign and domestic actors attempting to influence electoral outcomes in recent years. That’s old news. The new question on policymakers’ minds is whether the recent explosion in generative artificial intelligence will impact campaigning in 2024.
There are many ways AI may negatively affect the electoral process, including voter misinformation by chatbots and phishing scams on election officials through AI-generated voices. But it’s the effects of deepfakes—manipulated videos and images created by AI—that have been the center of concern for most. Deepfakes have given bad actors another tool to deceive voters and damage political rivals.
“AI’s ability to spread misinformation has been inherited from social media.”
—Nate Persily, Freeman Spogli senior fellow, Stanford University
Although misinformation is as old as time, generative AI poses new challenges and threats to campaigns. NCSL spoke with experts and legislators to better understand the problems AI poses for campaigns and the challenges involved in legislating it: Nate Persily, professor of law and Freeman Spogli senior fellow at Stanford; Ethan Bueno de Mesquita, Sydney Stein professor and interim dean at the University of Chicago; and Kansas Rep. Pat Proctor (R).
The trouble deepfakes cause is not new; people have been doing the same with “shallow fakes” for a long time, Persily says. Shallow fakes involve the use of traditional editing software, such as Photoshop, to deceive people into thinking a piece of altered media is authentic.
One example of a shallow fake is a 2020 video of Rep. Nancy Pelosi seemingly slurring her words. The video, created by slowing the original footage with traditional editing software, led some viewers to question her fitness for office.
There are many unknowns when it comes to generative AI, but some things are already becoming clear. Several countries, including Turkey and Argentina, have held elections in which AI played a role in campaigns.
Bueno de Mesquita sorts current uses of AI into two camps: those meant to deceive people and those meant to generate publicity through novelty and shock value. He says the latter will fade as the novelty wears off but believes deception is here to stay.
As for the first camp, Bueno de Mesquita says AI-generated deception may result in at least two problems, the main one being the spread of misinformation. This was seen in Turkey when a deepfake video showed presidential candidate Kemal Kilicdaroglu clapping alongside a member of the Kurdistan Workers’ Party, a Turkish political militant group that the European Union and several countries have designated as a terrorist organization.
The second problem, he says, is that over time, AI may also erode trust in authentic information. “Widespread circulation of manufactured content may undermine voters’ trust in the broader information environment. If voters come to believe that they cannot trust any digital evidence, it becomes difficult to seriously evaluate those who seek to represent them,” Bueno de Mesquita writes in a white paper he co-authored.
Persily adds that generative AI will make up only a small percentage of what people see online but will lead to skepticism of all other content.
This manifested itself in Turkey with the release of a sex tape allegedly involving presidential candidate Muharrem Ince. In response, he claimed the video was a deepfake, sowing public doubt about its authenticity.
The U.S. is also starting to see how generative AI will affect campaigning. The issue was brought to Proctor’s attention by his wife, who saw a video on Facebook falsely purporting to be from CNN.
Despite these tangible examples, it’s hard to say how much harm generative AI will cause. Bueno de Mesquita says there is little research on misinformation’s effect on voting as it is hard to quantify. “There is good reason to believe that AI will make misinformation worse—although we don’t have a good handle on how much worse,” he says.
Every AI campaign bill enacted in 2023 received some amount of cross-party support. Proctor says that political will exists to take bipartisan action in Kansas and that Republicans and Democrats in the Sunflower State are actively working on legislation.
To date, six states have enacted policies regulating generative AI’s use in campaigns. One approach prohibits deceptive uses of the technology outright: These laws restrict the publication of generative AI content within a set window, often 30 to 90 days, before polls open. Another approach requires disclaimers on content generated by AI, similar to the disclosures identifying who pays for political ads.
Digital signatures could be another policy option, Bueno de Mesquita says. These involve either embedding information in a file’s metadata (descriptive data stored inside the file itself) or placing a visible watermark on an image or video. The signature identifies where a piece of content originated, whether from a publication site or an AI image generator. Bueno de Mesquita points out that digital signatures would require a cultural shift in how online users verify the authenticity of the content they see, something that is currently uncommon.
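As a concrete illustration of the metadata half of that idea, here is a minimal Python sketch using the Pillow imaging library. The file names and provenance fields are hypothetical, and the example assumes a PNG image; a real provenance standard such as C2PA would also cryptographically sign the embedded data so it could not be silently altered.

```python
# Minimal sketch: embedding provenance information in image metadata.
# File names and field names are hypothetical, for illustration only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach provenance fields as PNG text chunks when saving the file.
image = Image.open("campaign_ad.png")  # hypothetical input image (PNG)
provenance = PngInfo()
provenance.add_text("Producer", "Example Campaign Committee")  # who made it
provenance.add_text("Generator", "none")  # or the name of the AI tool used
image.save("campaign_ad_tagged.png", pnginfo=provenance)

# Anyone inspecting the file can read the embedded fields back out.
tagged = Image.open("campaign_ad_tagged.png")
print(tagged.text)  # {'Producer': 'Example Campaign Committee', 'Generator': 'none'}
```

The limits are as instructive as the mechanics: metadata like this is trivial to strip or rewrite, which is why the approach only pays off if platforms and viewers develop the habit of checking for it.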
When the COVID pandemic occurred, states needed to quickly enact legislation on health, emergency powers and election administration, with few prior analogs to guide them. In a similar vein, generative AI emerged quickly and with few precedents for policymakers to refer to.
Despite the challenges, many legislators are breaking new ground on policy. Proctor says policy must be implemented in a way that safeguards the First Amendment’s right to free speech. He notes there could be legitimate reasons for using AI—for example, as a tool for creating political satire.
One challenge legislators might face is defining AI, Persily says. He compares this to difficulties in campaign finance law when defining a term like “express advocacy.” Lawmakers want a definition broad enough to capture the conduct they are targeting while leaving out legitimate uses.
Another challenge is determining the threshold at which AI use triggers a legal restriction. Everything from cellphone camera software to text autocorrection has used AI technologies for some time. “Clearly these laws aren’t meant to prohibit changing the lighting of a photo; it’s about the nefarious uses, like to say an event never happened,” Persily says.
It’s important to keep in mind that the problems associated with generative AI are not solely caused by it, Persily says. “AI’s ability to spread misinformation has been inherited from social media.” Social media’s purpose is to disseminate information, including AI-generated content.
Persily says that social media creates additional challenges as it is easier for governments to regulate ads than organic content. When a political campaign does something, governments can quickly identify who is at fault and intervene. On the internet, the origins of a post quickly get lost as people unknowingly repost false information.
Artificial intelligence, the development of computer systems to perform tasks that normally require human intelligence, such as learning and decision-making, has the potential to spur innovation and transform industry and government. As the science and technology of AI continues to develop, more products and services are coming onto the market. For example, companies are developing AI to help consumers run their homes and allow the elderly to stay in their homes longer. AI is used in health care technologies, self-driving cars, digital assistants and many other areas of daily life.
Concerns about potential misuse or unintended consequences of AI, however, have prompted efforts to examine and develop standards. The U.S. National Institute of Standards and Technology, for example, is holding workshops and discussions with the public and private sectors to develop federal standards for the creation of reliable, robust and trustworthy AI systems.
In the 2023 legislative session, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation.
State lawmakers also are considering AI’s benefits and challenges—a growing number of measures are being introduced to study the impact of AI or algorithms and the potential roles for policymakers.