LANSING, Mich. — Michigan is joining an effort to curb misleading uses of artificial intelligence and manipulated media through state-level policies as Congress and the Federal Election Commission continue to debate more sweeping regulations ahead of the 2024 elections.
State and federal campaigns will have to clearly state which political ads airing in Michigan were created using artificial intelligence under legislation expected to be signed into law in the coming days by Gov. Gretchen Whitmer, a Democrat. The measure would also prohibit the use of AI-generated deepfakes within 90 days of an election unless they carry a separate disclosure identifying the media as manipulated.
Deepfakes are manipulated media that falsely depict someone doing or saying something they never did. They are created using generative artificial intelligence, a type of AI that can produce convincing images, videos or audio clips in seconds.
There are growing concerns that generative artificial intelligence will be used in the 2024 presidential race to mislead voters, impersonate candidates and undermine elections at a scale and speed never seen before.
Candidates and political committees are already experimenting with the rapidly evolving technology, which in recent years has become cheaper, faster and easier for the public to use.
The Republican National Committee released an ad created entirely with artificial intelligence in April to depict the future of the United States if President Joe Biden is re-elected. Disclosing in fine print that it was made with AI, it featured fake but realistic images of boarded-up storefronts, armored military patrols in the streets and waves of immigration sowing panic.
In July, Never Back Down, a super PAC supporting Republican Florida Gov. Ron DeSantis, used an AI voice-cloning tool to imitate former President Donald Trump's voice, making it sound as if he were narrating a social media post he had written, even though he never said the statement aloud.
Experts say these are just early indications of what could happen if campaigns or outside actors decide to use AI deepfakes in more nefarious ways.
So far, states like California, Minnesota, Texas and Washington have passed laws regulating deepfakes in political advertising. Similar legislation has been introduced in Illinois, Kentucky, New Jersey and New York, according to the nonprofit advocacy group Public Citizen.
Under the Michigan legislation, any person, committee or other entity that distributes an advertisement for a candidate must clearly state whether it uses generative artificial intelligence. The disclosure must be in the same font size as the majority of the text in print ads and must appear "for at least four seconds in letters as large as the majority of any text" in television ads, according to a legislative analysis from the state fiscal agency.
Deepfakes used within 90 days of an election would require a separate disclaimer informing the viewer that the content is manipulated to depict speech or conduct that did not occur. If the medium is video, the disclaimer must be clearly visible and appear throughout the video.
Campaigns could face a misdemeanor punishable by up to 93 days in jail, a fine of up to $1,000, or both for a first violation of the proposed laws. The attorney general, or a candidate harmed by the deceptive media, could seek relief in the appropriate district court.
Federal lawmakers on both sides of the aisle have stressed the importance of legislation on deepfakes in political advertising and have held meetings to discuss it, but Congress has yet to pass anything.
A recent bipartisan Senate bill, co-sponsored by Democratic Sen. Amy Klobuchar of Minnesota, Republican Sen. Josh Hawley of Missouri and others, would ban "materially deceptive" deepfakes relating to federal candidates, with exceptions for parody and satire.
Michigan Secretary of State Jocelyn Benson flew to Washington, D.C., in early November to participate in a bipartisan discussion on AI and elections and called on senators to pass Klobuchar and Hawley's federal Protect Elections from Deceptive AI Act. Benson said she also encouraged senators to return home and lobby their state legislators to pass similar laws that make sense for their states.
Federal law is limited in its ability to regulate AI at the state and local level, Benson said in an interview, adding that states also need federal funds to address the challenges posed by AI.
“All of this becomes a reality if the federal government gave us money to hire someone to just handle AI in our states, and similarly educate voters on how to spot deepfakes and what to do when they find them,” Benson said. “That solves a lot of the problems. We can’t do it alone.”
In August, the Federal Election Commission took a procedural step toward potentially regulating AI-generated deepfakes in political ads under its existing rules against “fraudulent misrepresentation.” Though the commission held a public comment period on the petition, brought by Public Citizen, it has yet to make a decision.
Social media companies have also announced some guidelines to mitigate the spread of harmful deepfakes. Meta, which owns Facebook and Instagram, announced earlier this month that it would require political ads served on the platforms to disclose whether they were created using artificial intelligence. Google unveiled a similar AI flagging policy in September for political ads served on YouTube or other Google platforms.
___
Swenson reported from New York. Associated Press writer Christina A. Cassidy contributed from Washington.
___
The Associated Press receives support from various private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. AP is solely responsible for all content.