“The Race for AI Supremacy”: A Conversation with Parmy Olson

In the race to develop superintelligence – a technology that will transform the world more fundamentally than electricity or the Internet – two figures stand out: Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of DeepMind. Both are brilliant, ambitious and determined to create machines that can think, reason and learn like humans do. In her new book, Supremacy: AI, ChatGPT, and the Race That Will Change the World, journalist Parmy Olson explores their rivalry and the forces shaping the future of AI. She discussed her work on the London Futurists podcast.

“This is really about a battle for control of technology,” Olson explains, “and governance over the future of AI.” The competition between Altman and Hassabis is taking place within the context of an industry dominated by US tech giants, with Google and Microsoft in pole position, and Meta, Amazon, Apple and Nvidia eager to catch up.

Olson’s book reveals a world of powerful ambitions, ethical dilemmas and fierce competition, with tensions between the US and China making it a geopolitical as well as a corporate battle.

The Expensive Journey to Superintelligence

Artificial General Intelligence (AGI) is a term used to mean a number of different things, but it is best understood as a machine whose arrival signals the approach of superintelligence. An AGI has all the cognitive abilities of an adult human, with some of them at a superhuman level. Very soon after AGI arrives, we will have superintelligence, and quite possibly an intelligence explosion as machines improve themselves rapidly.

Developing superintelligence is not only a tremendous technological challenge, but an extremely expensive one. As Olson notes, the capital needed to develop advanced models, manage massive computing power and attract top talent means founders and companies often find themselves caught between idealism and commercial reality.

For example, OpenAI began life as a non-profit organization, supported by substantial donations, including a large gift from Elon Musk. However, when Musk’s request to be put in charge was rejected, he resigned, taking his money with him. OpenAI needed billions to support its research, so it restructured around a “capped profit” model, in which investor returns are limited, to attract commercial funding while seeking to preserve its non-profit ideals. Similarly, DeepMind started out with big ambitions to build AGI for the benefit of humanity, but once Google bought the company, its founders inevitably had to answer to corporate managers and shareholders.

“AI development is so expensive,” Olson explains. “It’s almost impossible to do that without being pulled by the gravitational pull of companies like Microsoft or Google.” The need for funding, combined with the ambition to develop ever more advanced systems, often leads to compromises that severely test the original ideals of the founders.

Strategic minds and corporate power

As the heads of OpenAI and DeepMind, Altman and Hassabis have become icons in the AI world, not only for their intelligence but also for their commitment to confronting the ethical and existential risks posed by AGI. Hassabis, a onetime chess champion who later trained as a neuroscientist, has a reputation for thinking several moves ahead. Olson notes that those who have worked with him describe him as a master strategist, someone who excels at managing both upward and downward, which may explain his ability to rise through Google’s ranks.

After a power struggle between DeepMind and Google Brain, Hassabis now heads Google’s entire AI division, responsible not only for DeepMind but also for Google’s overall AI strategy. His leadership has put him in a powerful position, with some speculating that he may one day take over Alphabet as a whole. Altman, meanwhile, has earned a reputation for being blunt and pragmatic, warning of the existential dangers AI could pose if misused, but he has been criticized by some for changing his approach to suit the funding requirements of his company.

“Both Altman and Hassabis are sincere in their intentions to make a positive impact,” says Olson. “But they are also facing intense pressures and conflicts of interest. They are trying to maintain their ideals while balancing the pull of big commercial interests, which often pull them in opposite directions.”

Navigating the Existential Perils of Superintelligence

A unique aspect of this race is that both Altman and Hassabis acknowledge that superintelligence may pose an existential threat to humanity. While they are optimistic about its potential, they also know that once superintelligence exists, it will almost certainly be beyond our control.

This paradox – racing to develop superintelligence while trying to ensure its safety – reflects the complex motivations of these two leaders. Both believe that if superintelligence is inevitable, it is better developed by responsible actors than left to “bad actors”, including some foreign governments. However, that reasoning raises difficult questions about how much care they are really exercising.

“They’re caught in this incredible balancing act,” Olson says. “They are committed to advancing AI, but are constantly aware of the risks it poses. It’s like they have to do mental gymnastics to reconcile their ambitions with their concerns.”

The growing influence of tech giants – and the specter of China

While the rivalry between Altman and Hassabis may be the most visible competition, the broader power struggle between Microsoft and Google has intensified. Both companies have invested heavily in AI and both are deeply involved with their respective AGI labs – Microsoft with OpenAI and Google with Google DeepMind.

Further complicating matters is the global race between US tech giants and China, where the government provides extensive support for AI initiatives. Although Chinese models lag behind in sophistication, Chinese tech giants such as Baidu and Alibaba benefit from subsidies that make their large language models accessible to businesses at a fraction of the cost. Meanwhile, the role the US government will play is just one of many open questions now that Trump has secured his return to the White House.

Will governments step in?

Given the immense power and influence that superintelligence will confer, it is likely that governments and intelligence agencies will step in once they believe its arrival is imminent, both to avoid losing control of the technology and to make sure they don’t fall behind foreign competitors.

Olson suggests this could take the form of covert collaboration between tech companies and intelligence agencies, similar to the surveillance programs exposed by Edward Snowden. Recently, OpenAI appointed a former NSA director to its board, possibly signaling an openness to closer cooperation with government security agencies.

The idea of nationalizing companies developing AI would be highly controversial. “The technology lobby in Washington is incredibly powerful, perhaps even more influential than the government itself,” Olson points out, “but some form of cooperation or oversight seems increasingly likely.”
