The chairman of the Manhattan Democratic Party appeared to unleash a scathing attack on a sitting elected official.
Keith Wright, a prominent figure in New York politics, was seemingly overheard saying, “I dug her grave and she rolled into it.” Interspersed with curse words, he characterised a competitor as “lazy, incompetent— if it weren’t for her, I’d be in Congress.”
The 10-second clip seemed to be an incredible “hot mic moment” for the powerful leader, and it went viral among Harlem political figures. The catch: it was fabricated.
An unidentified source had distributed AI-generated audio mimicking Wright’s voice in an effort to sow political disarray. Wright immediately condemned it.
The episode, first reported by AWN, is among the most recent and brazen instances of deepfake audio to surface in the 2024 election cycle, and it illustrates the growing use of AI as a malign instrument in American politics.
The incident also alarmed law enforcement officials and artificial intelligence (AI) specialists, who warned that it portends a flood of false material in upcoming national elections — and that the country can do little to stop it.
Tech entrepreneur Ilya Mouzykantskii, whose company uses AI-generated audio for voter phonebanking, told AWN that “the regulatory landscape is wholly insufficient,” adding, “This election year, this tech story will dominate.”
The fabricated Wright audio is the first instance AWN has identified of AI-generated content being utilised against a political opponent in New York, following a manipulated Joe Biden robocall in the lead-up to the New Hampshire primary.
Deceit and manipulation have always been part of the art of politics. Campaigns in the United States have a long history of divisive tactics, from the infamous Watergate scandal of Nixon’s administration to the “Swift Boat” attacks on John Kerry’s 2004 presidential campaign to Russian meddling in the 2016 election. Disinformation has characterised not only presidential races but hyper-local campaigns as well.
There is now AI technology credible enough to mislead or misinform the public with relative ease, superseding all previous types of manipulation, whether mundane or epic. And it arrives at a moment when false information is spreading while faith in established news outlets is declining.
Several states, including larger ones such as California and Texas, have approved legislation to combat the political use of deepfakes; New York and others are only now beginning to do so.
Mike Nellis, a political strategist with Authentic Campaigns who uses generative AI to craft fundraising letters for politicians, expressed fear about the technology’s scalability. While fabricated audio in New York political campaigns has not yet garnered much attention, Nellis said he is certain that “things like this have been happening” in more intimate settings.
OpenAI banned a developer for utilising its technology to construct an audio chatbot mimicking the voice of Dean Phillips, a Democratic rival to Joe Biden. In January, a robocall mimicking Biden warned people not to vote in the New Hampshire primary.
In that case, it was a politician using AI to help his own campaign, not an opponent’s attack. New York City Mayor Eric Adams did something similar in 2023, using artificial intelligence to create public service announcements in languages he does not speak, such as Yiddish and Spanish.
There is a lack of nationwide regulation.
A bill from Rep. Yvette Clarke (D-N.Y.) has failed to gain traction in the House. According to NBC News, three states passed legislation on political deepfakes in 2023, and more than a dozen others have introduced related bills.
Under the Political Artificial Intelligence Disclosure (PAID) Act, campaigns in New York would be required to disclose the use of artificial intelligence in certain communications, such as mailers or radio advertisements.
Democratic state Assemblymember Alex Bores, the measure’s main sponsor, wrote on X that the Wright audio is “yet another example of why we need to regulate deepfakes in campaigns,” adding, “This threat needs to be taken seriously immediately.”
Voters care about the problem, and lawmakers from both parties are on board (Republican state Sen. Jake Ashby is sponsoring nearly identical legislation in the other chamber), but the measure addresses only a limited subset of AI’s possible applications.
Because Wright’s voice was cloned anonymously and the audio was not associated with any particular campaign, the PAID Act would not apply to it.
“This is a first step,” Bores said in an interview. “I don’t believe this is the final step, but we must begin with disclosure and the entities that are currently subject to the most regulation, which are campaigns.”
Twelve other proposals pending in New York’s state legislature aim to regulate artificial intelligence, though most concern business applications of AI rather than political ones. One, for example, would deny a film production a tax credit if it utilised AI to replace human workers.
Artificial intelligence is a priority for Governor Kathy Hochul this year, though she is concentrating on fostering the technology’s economic benefits.
New York law does already address deepfakes in at least one area: in 2023, statutes penalising the nonconsensual distribution of sexually explicit images were revised to cover AI-generated images as well.