In 2020, when Joe Biden won the White House, generative artificial intelligence still seemed like a toy, not a world-changing new technology. The first major AI image generator, DALL-E, wouldn’t come out until January 2021 – and it certainly wasn’t about to put any artists out of business, as it still struggled to generate basic images. The release of ChatGPT, which made AI take off overnight, was still more than two years away. Google’s AI-powered search results, which – like it or not – are now inescapable, would have seemed unimaginable.
In the world of AI, four years is a lifetime. This is one of the things that makes AI policy and regulation so difficult. The gears of politics tend to grind slowly. And every four to eight years, they grind into reverse, when a new administration comes to power with different priorities.
This works tolerably well for, say, our food and drug regulation, or other areas where change is slow and there is more or less bipartisan consensus on policy. But when regulating a technology that is essentially too new to go to kindergarten, policymakers face a difficult challenge. And that’s even more the case when we experience a sharp shift in who those policymakers are, as the US will after Donald Trump’s victory in Tuesday’s presidential election.
This week, I reached out to people to ask: What will AI policy look like under a Trump administration? Their guesses were all over the place, but the big picture is this: Unlike many other issues, Washington is still not completely polarized on the issue of AI.
Trump’s supporters include members of the tech-accelerationist right, led by venture capitalist Marc Andreessen, who are fiercely opposed to regulating an exciting new industry.
But also on Trump’s side is Elon Musk, who supported California’s SB 1047 to regulate AI and has long worried that AI could bring about the end of the human race (a position that’s easy to dismiss as classic Musk arrogance, but one that’s actually quite common).
Trump’s first administration was chaotic and featured the rise and fall of various chiefs of staff and senior advisers. Very few of the people who were close to him at the start of his time in office were still around at the bitter end. Where AI policy goes in his second term may depend on who has his ear at crucial moments.
Where does the new administration stand on AI?
In 2023, the Biden administration issued an executive order on AI, which, while generally modest, marked an early government effort to take the risk of AI seriously. The Trump campaign platform says the executive order “stifles AI innovation and imposes radical left-wing ideas on the development of this technology” and has promised to rescind it.
“There will likely be a day-one repeal of Biden’s executive order on AI,” Samuel Hammond, a senior economist at the Foundation for American Innovation, told me, though he added, “what replaces it is uncertain.” The AI Safety Institute created under Biden, Hammond noted, has “broad, bipartisan support” — though it will be up to Congress to properly authorize and fund it, something they can and should do this winter.
There are reportedly drafts in Trump’s orbit for a proposed replacement executive order that would create a “Manhattan Project” for military AI and build industry-led agencies to evaluate and secure the models.
After that, though, it’s challenging to guess what will happen because the coalition that swept Trump into office is, in fact, sharply divided on AI.
“How Trump approaches AI policy will provide a window into the tensions on the right,” Hammond said. “You have people like Marc Andreessen who want to slam the gas pedal and people like Tucker Carlson who worry that technology is already moving too fast. JD Vance is a pragmatist on these issues, seeing AI and crypto as an opportunity to break Big Tech’s monopoly. Elon Musk wants to accelerate technology in general while taking the existential risks from AI seriously. They are all united against ‘woke’ AI, but their positive agenda for how to deal with the real-world dangers of AI is less clear.”
Trump himself hasn’t commented much on AI, but when he has — as in an interview with Logan Paul earlier this year — he seemed familiar with both the case for accelerating AI to stay ahead of China and experts’ fears. “We have to be at the forefront,” he said. “It’s going to happen. And if it’s going to happen, we have to take the lead over China.”
As for whether AI might develop the ability to act independently and take over, he said, “You know, there are those people who say it takes over the human race. It’s really powerful stuff, AI. So let’s see how it all works out.”
In one sense, this is a wildly unserious position on the possible end of the human race (you can’t just wait and “see how it all works out” with an existential threat), but in another, Trump is actually voicing a fairly mainstream view here.
Many AI experts think an AI takeover is a realistic possibility that could happen in the coming decades, and also think that we do not yet know enough about the nature of this risk to make effective policy around it. So, implicitly, many people hold the position “it might kill us all, who knows? I guess we’ll see what happens,” and Trump is mostly unusual, as he often proves to be, in coming right out and saying it.
We cannot afford polarization. Can we avoid it?
There’s been a lot of partisan commentary about AI, with Republicans dismissing concerns about equity and bias as nonsense, but as Hammond noted, there’s also a good deal of bipartisan consensus. No one in Congress wants to see the US fall behind militarily, or to smother a promising new technology in its cradle. And no one wants extremely dangerous weapons developed by random tech companies without oversight.
Meta’s chief scientist Yann LeCun, who is an outspoken critic of Trump, is also an outspoken critic of AI safety concerns. Musk supported California’s AI regulation bill — which was bipartisan and vetoed by a Democratic governor — and of course Musk also enthusiastically endorsed Trump for the presidency. Right now, it’s hard to place concerns about extremely powerful AI on the political spectrum.
But this is actually a good thing, and it would be disastrous if it changed. With a rapidly evolving technology, Congress must be able to make policy flexibly and empower an agency to implement it. Partisanship makes this almost impossible.
More than any specific agenda item, the best sign for a Trump administration’s AI policy will be whether it stays bipartisan and focused on the things all Americans, Democrat or Republican, agree on — like the fact that we don’t want everyone to die at the hands of a superintelligent AI. And the worst sign would be if the complex policy questions AI poses got rounded off into a blanket “regulation is bad” or “military is good” stance that misses the specifics.
Hammond, for his part, was optimistic that the administration is taking artificial intelligence seriously. “They’re thinking about the right object-level issues, such as the national security implications of AGI being just years away,” he said. Whether that leads them to the right policies remains to be seen — but it would have been uncertain under a Harris administration, too.