“Existential risk” posed by artificial intelligence refers to the possibility that the development and use of AI could, intentionally or unintentionally, lead to catastrophic consequences for humanity, potentially resulting in the irreversible collapse of human civilisation or even the extinction of the human species.
The main scenarios hypothesised include a hostile superintelligence acting in ways incompatible with human survival, military use of AI through autonomous weapons leading to uncontrollable conflicts or catastrophic accidents, and AI-amplified inequalities causing social and political tensions that escalate into large-scale conflicts.
Supporters of this view include philosophers, entrepreneurs, and scientists.
Swedish philosopher Nick Bostrom, an influential and controversial figure in Silicon Valley techno-billionaire circles, is the former director of the Future of Humanity Institute (FHI) at the University of Oxford; founded in 2005, the institute was closed by the British university on April 26, 2024, which is why Bostrom left Oxford. He is a member of the Future of Life Institute think tank[1]. He is the author of “Superintelligence: Paths, Dangers, Strategies” (2014), which explored the potential risks of superintelligence. A technophile and transhumanist, Bostrom has studied “existential risk”, i.e. the ways in which humanity could fail and become extinct.
Elon Musk, CEO of Tesla, SpaceX and xAI, considers artificial general intelligence (AGI) to be the biggest “existential risk” to humanity. Sam Altman, CEO of OpenAI, has also expressed his concerns about this matter.
Renowned scientists have perceived the same risk, including Stephen Hawking, Stuart Russell, and two of the “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio. The third godfather, Yann LeCun, Chief AI Scientist at Meta, is more optimistic.
Writer and scientist Eliezer Yudkowsky has suggested that if an out-of-control superintelligent system were on the verge of being activated, extremely drastic preventive action could be considered, such as destroying another country’s data centre where the AI system resides, even at the risk of nuclear escalation.
The “AI Pause Letter” was an open letter published in March 2023 by the Future of Life Institute. It was signed by more than 2,000 experts, entrepreneurs and researchers in the field of artificial intelligence, including some of the people mentioned above. The letter called for a six-month pause in the development of AI systems more powerful than GPT-4, with the aim of allowing a more thorough assessment of the potential risks and of the necessary security measures. The letter anticipated existential risks: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Those who focus on existential risks argue that they do so for a number of reasons. They want to raise citizens’ awareness and encourage public debate and education, helping people better understand the implications of AI for their daily lives. They want to spur legislatures to enact AI regulation and to promote the creation of international institutions to monitor AI risks. They want to highlight the need to create artificial intelligence systems that respect ethical constraints.
At the same time, the topic of existential risks can be used as a rhetorical tool: focusing attention on a distant and highly hypothetical risk diverts public discourse from the consequences of the AI deployment that is already underway.
In their reply to the AI Pause Letter, the authors of the famous Stochastic Parrots article highlighted this instrumental use of existential risks. They also pointed out that the Future of Life Institute adheres to the philosophy of longtermism. For them, it really is time to act, but not by focusing on imaginary “powerful digital minds”. We should focus instead on the exploitative practices of the companies that claim to be building AI. These companies are rapidly concentrating power in the hands of a few people, exacerbating social inequalities. They are exploiting the workers who label the training sets. They are seizing data in order to create products that generate profits for a handful of entities. Their systems are bringing about an explosion of synthetic media all over the world, which reproduces systems of oppression and endangers our information ecosystem.
Big Tech could use the rhetoric of existential risk to influence the regulatory process in its favour. By presenting AI as a technology that poses extreme risks, these companies may seek to propose specific regulations that are more likely to be accepted if perceived as responses to existential threats. Such regulations could be shaped in a way that favours large companies over smaller competitors, because only the former have the resources needed to comply with the law. Moreover, future regulations could be useful for slowing down the competition: complex and burdensome rules addressing serious risks such as existential risks can be managed relatively easily by large companies with significant resources, but they can represent an insurmountable obstacle for startups and smaller competitors. A sense of urgency would lead public opinion to accept regulations tilted in favour of the industry. Finally, arguing that only large and powerful companies have the capacity to develop and control AI safely can be a way to justify their dominance in the sector and, at the same time, to establish themselves as responsible leaders and guardians of global security.
In conclusion, for Big Tech it is important to be able to control the narrative, even if this means appearing inconsistent. After all, Big Tech is developing, at an accelerated pace, technologies that it considers to be, and presents to the public as being, fatally dangerous.
[1] The institute’s mission is “[s]teering transformative technology towards benefiting life and away from extreme large-scale risks.”