
If Donald Trump wins the US presidential election in November, the guardrails could come off artificial intelligence development, even as the dangers of defective AI models grow increasingly serious.
Trump’s election to a second term would dramatically reshape—and possibly cripple—efforts to protect Americans from the many dangers of poorly designed artificial intelligence, including misinformation, discrimination, and the poisoning of algorithms used in technology like autonomous vehicles.
The federal government has begun overseeing and advising AI companies under an executive order that President Joe Biden issued in October 2023. But Trump has vowed to repeal that order, with the Republican Party platform saying it “hinders AI innovation” and “imposes Radical Leftwing ideas” on AI development.
Trump’s promise has thrilled critics of the executive order who see it as illegal, dangerous, and an impediment to America’s digital arms race with China. Those critics include many of Trump’s closest allies, from X CEO Elon Musk and venture capitalist Marc Andreessen to Republican members of Congress and nearly two dozen GOP state attorneys general. Trump’s running mate, Ohio senator JD Vance, is staunchly opposed to AI regulation.
“Republicans don’t want to rush to overregulate this industry,” says Jacob Helberg, a tech executive and AI enthusiast who has been dubbed “Silicon Valley’s Trump whisperer.”
But tech and cyber experts warn that eliminating the EO’s safety and security provisions would undermine the trustworthiness of AI models that are increasingly creeping into all aspects of American life, from transportation and medicine to employment and surveillance.
The upcoming presidential election, in other words, could help determine whether AI becomes an unparalleled tool of productivity or an uncontrollable agent of chaos.
Oversight and Advice, Hand in Hand
Biden’s order addresses everything from using AI to improve veterans’ health care to setting safeguards for AI’s use in drug discovery. But most of the political controversy over the EO stems from two provisions in the section dealing with digital security risks and real-world safety impacts.
One provision requires owners of powerful AI models to report to the government about how they’re training the models and protecting them from tampering and theft, including by providing the results of “red-team tests” designed to find vulnerabilities in AI systems by simulating attacks. The other provision directs the Commerce Department’s National Institute of Standards and Technology (NIST) to produce guidance that helps companies develop AI models that are safe from cyberattacks and free of biases.
Source: https://www.wired.com/story/donald-trump-ai-safety-regulation/