Allthewebnews

AI Regulation Under GOP: A New Era of Innovation?

The outlook is uncertain for AI regulations as the US government pivots to full Republican control

With artificial intelligence at a critical juncture in its development, the federal government is likely to shift from a focus on AI protections to one on reducing red tape.

That’s a promising prospect for some investors, but it raises questions about the future of any technological safeguards, particularly around the use of AI deepfakes in elections and political campaigns.

President-elect Donald Trump has promised to repeal President Joe Biden’s sweeping AI executive order, which aimed to protect people’s rights and safety without stifling innovation. He hasn’t announced what he would do in its place, but the Republican National Committee’s platform, which he recently shaped, states that AI development should be “rooted in Free Speech and Human Flourishing.”

It’s unclear whether Congress, which will soon be fully controlled by Republicans, will want to pass any AI-related legislation. Interviews with a dozen lawmakers and industry professionals reveal there is still interest in expanding the technology’s use in national security and cracking down on nonconsensual sexually explicit images.

However, the use of AI in elections and spreading misinformation is likely to take a backseat as Republican politicians turn away from anything they perceive as potentially restricting innovation or free expression.

“AI has incredible potential to enhance human productivity and positively benefit our economy,” said Rep. Jay Obernolte, a California Republican known for his leadership in the growing technology. “We need to strike an appropriate balance between putting in place the framework to prevent the harmful things from happening while at the same time enabling innovation.”

For years, many interested in artificial intelligence have hoped for broad federal legislation. However, with Congress gridlocked on practically every issue, no artificial intelligence bill was passed, leaving only a succession of proposals and reports.

Some lawmakers feel there is enough bipartisan support for certain AI-related concerns to pass legislation.

“I find there are Republicans that are very interested in this topic,” said Democratic Sen. Gary Peters, citing national security as one possible area of agreement. “I am confident I will be able to work with them as I have in the past.”

It’s still unclear how deeply Republicans want the federal government to become involved in AI development. Before this year’s election, few showed interest in having the Federal Election Commission or the Federal Communications Commission regulate AI-generated content, fearing such rules would raise First Amendment concerns, even as Trump’s campaign and other Republicans used the technology to create political memes.

When Trump was elected president, the FCC was in the midst of a lengthy process to draft AI-related regulations. That work has since been halted in accordance with long-standing norms governing presidential transitions.

Trump has displayed both excitement and skepticism about artificial intelligence.

During an interview with Fox Business earlier this year, he described the technology as “very dangerous” and “so scary” because “there’s no real solution.” However, his campaign and followers embraced AI-generated visuals more than their Democratic counterparts. They frequently utilized them in social media messages that were not intended to deceive, but rather to reinforce Republican political beliefs.

Elon Musk, Trump’s close adviser and the founder of numerous companies that rely on AI, has expressed both fear and excitement about the technology, depending on how it is used.

Musk used his social media platform, X, to promote AI-generated images and videos during the election. Americans for Responsible Innovation, a nonprofit focused on artificial intelligence, has actively lobbied Trump to choose Musk as his top technology adviser.

“We believe Elon has a fairly sophisticated understanding of both the opportunities and risks of advanced AI systems,” said Doug Calidas, a leading operative with the group.

Others, however, are concerned about Musk advising Trump on AI. Peters said it could undermine the president.

“It is a concern,” the Michigan Democrat stated. “Whenever you have anybody that has a strong financial interest in a particular technology, you should take their advice and counsel with a grain of salt.”

Many AI specialists had expressed alarm about the prospect of an eleventh-hour deepfake — a lifelike AI image, video, or audio clip — that might sway or confuse voters as they headed to the polls. While those fears never materialized, AI nonetheless had an impact on the election, according to Vivian Schiller, executive director of Aspen Digital, a nonpartisan think tank affiliated with the Aspen Institute.

“I would not use the term that I hear a lot of people using, which is it was the dog that didn’t bark,” she said of artificial intelligence in the 2024 election. “It was there, just not in the way that we expected.”

Campaigns used AI systems to tailor messages to voters. AI-generated memes, while not lifelike enough to be mistaken for real, felt authentic enough to exacerbate party divides.

A political consultant used AI to imitate Joe Biden’s voice in robocalls that could have discouraged voters from participating in New Hampshire’s primary had they not been quickly detected. Foreign actors also used AI tools to create and automate fake online accounts and websites that spread misinformation to a US audience.

Even if artificial intelligence did not ultimately sway the election outcome, the technology made political inroads and contributed to a climate in which Americans lack confidence in what they are seeing. That dynamic is one reason some in the AI industry want to see rules that set clear guidelines.

“President Trump and people on his team have said they don’t want to stifle the technology and want to support its development, so that is welcome news,” said Craig Albright, the top lobbyist and senior vice president at The Software Alliance, a trade group whose members include OpenAI, Oracle, and IBM. “It is our view that passing national laws to set the rules of the road will be good for developing markets for the technology.”

Suresh Venkatasubramanian, director of Brown University’s Center for Tech Responsibility, said that AI safety proponents presented similar concerns at a recent meeting in San Francisco.

“By putting literal guardrails, lanes, and road rules, we were able to get cars that could roll a lot faster,” said Venkatasubramanian, a former Biden administration official who helped craft White House guidelines for handling AI.

Rob Weissman, co-president of the advocacy group Public Citizen, is skeptical about the prospects for federal legislation and concerned about Trump’s promise to repeal Biden’s executive order, which established an initial set of national standards for the industry. His group has called for federal regulation of generative AI in elections.

“The safeguards are themselves ways to promote innovation, so that we have AI that’s useful and safe and doesn’t exclude people and promotes the technology in ways that serve the public interest,” he said.
