
Magazine Intelligenza Artificiale: l'IA è più di quello che appare


The absence of artificial intelligence in the fantasy universe of the Dune cycle by Frank Herbert — Part I: The Butlerian Jihad

Science fiction isn’t afraid of AI

Let’s imagine a hypothetical world where developing and using artificial intelligence is banned. What could have been the reasons that led the inhabitants of that world to introduce such a ban? Could it be fear rooted in the potential threats that AI poses to humanity’s very existence? Or could there be other reasons?

Science fiction literature has investigated in depth the idea of a world that has decided to do without artificial intelligence. Often, these fictional worlds are fuelled by the fear that AI could inflict immeasurable damage on humanity, eventually dominating the planet and the galaxy, or even exterminating the human species. Numerous narratives depict rebellions by AI, embodied in autonomous and humanoid robots, as events that have either taken place or are merely feared. This apocalyptic imagery has been amplified by iconic films such as The Terminator, The Matrix, and I, Robot.

However, in the worlds imagined by science fiction writers, fear is not always the main driver behind banning AI technologies. With such scenarios, science fiction becomes a valuable tool for exploring possible interaction modes between humans and AI as well as the very meaning of humanity.

Different reasons can lead the inhabitants of those worlds to ban AI. In The City and the Stars (1956) by the British scientist and writer Arthur C. Clarke, the inhabitants of the city of Diaspar live in a technologically static utopia where innovation is practically non-existent. The lack of advanced AI stems not from fear of a machine uprising but from the desire to maintain control and stability in society and to avoid the uncertainties and changes such technologies might introduce.

Similarly, the Foundation series by Isaac Asimov (1951-1993) explores a future in which AI use is regulated to maintain the existing balance of power, to prevent possible social instability, and to avert the loss of autonomy and identity.

In Do Androids Dream of Electric Sheep? (1968) by the American writer Philip K. Dick — the story that inspired Ridley Scott's film Blade Runner (1982) — androids (replicants) are banned on Earth, but not really out of fear that they might rebel. They are banned for ethical and social reasons. Replicants are indistinguishable from human beings in all respects except their lack of empathy. Their inability to feel real emotions and to fully understand the experiences of others makes them dangerous and unacceptable to human society. This raises ethical questions about the "legitimacy" of their existence, and their presence poses important questions about identity, the nature of the soul, and the rights that should be granted to replicants.

The planet Dune

Pre-release flyer for Alejandro Jodorowsky's planned film adaptation of Dune, which was never made

The American writer Frank Herbert's cycle of six novels, centred on the imaginary desert planet Dune, offers another particularly interesting example of the reasons that can lead to a total ban on "thinking machines" (Herbert rarely used the term AI) as well as on non-intelligent computers. The saga has regained popularity thanks to the recent film adaptations by the Canadian director Denis Villeneuve, although his two films have so far brought only the first volume of the cycle to the screen. The 1984 film version directed by David Lynch likewise did not delve deeply into the AI-related themes of the Dune world.

The danger is not existential risk

The first novel, Dune (1965), is famous not only as a masterpiece of science fiction but also for being ahead of its time in making ecological issues a central theme. It illustrated in detail the challenges of maintaining a sustainable balance in an extreme desert environment, introducing ante litteram the idea of a circular economy of the water cycle.

However, less attention has been paid so far to Frank Herbert's message about artificial intelligence, a term that had been around for barely ten years when the book came out in 1965. Reread today, that message reveals surprising hints of modernity and an affinity with contemporary debates.

Banning AI in the Dune universe

Artificial intelligence is completely absent from the world of Dune. For this and other reasons, that universe has an archaic feel to it despite the possibility of interstellar travel. The absence of AI is not a narrative device to justify the archaic nature of that world, nor does it denote the author's lack of interest in the subject. Paradoxically, it indicates the author's attention to the relationship between humans and machines. Indeed, from the first pages of the novel, an explicit ban on AI emerges as something deeply rooted and shared in the society imagined by Frank Herbert.

After Gaius Helen Mohiam, Reverend Mother of the Bene Gesserit Order, administers the mysterious pain induction box initiation rite on the young protagonist of the novel, Paul Atreides, a dialogue takes place clarifying the purpose of the test:

“Why do you test for humans?” he asked.

“To set you free.”

“Free?”

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

“‘Thou shalt not make a machine in the likeness of a man’s mind,’” Paul quoted.

“Right out of the Butlerian Jihad and the Orange Catholic Bible,” she said. “But what the O.C. Bible should’ve said is: ‘Thou shalt not make a machine to counterfeit a human mind.’”

This dialogue highlights two key points about the ban. The first concerns the phrase "[thinking machines] permitted other men with machines to enslave them". The revolt against AI was not triggered by machines rebelling but by the technological oppression exercised by an elite who possessed such machines and used them to enslave other human beings. The author, however, tells us nothing more about that period of oppression of men by other men.

To fully understand Frank Herbert's views on the risks of technology, we must focus only on the original six volumes of the Dune cycle, without looking for clues in the numerous sequels and prequels written by the author's son, Brian Herbert, with Kevin J. Anderson. These recount a revolt against an omnipresent artificial intelligence called Omnius, which has taken control of the universe, assisted by a creepy robot called Erasmus tasked with studying humans and by cymeks, robots with human brains who betrayed their own kind only to be overpowered by Omnius. This interpretation of Dune's history before the machines were banned is not only entirely absent from Frank Herbert's novels but, as we have seen, is also incompatible with the few facts about the ban's history referred to in the six volumes of the cycle. Brian Herbert's claim that he followed notes left by his father after his death is unconvincing.

Revolt against the machines

We do know, however, that the revolt that led to banning thinking machines is called the “Butlerian Jihad”, that it lasted two generations, and that it took place 10,000 years before the events narrated in the novel. In the world of Dune these events are beginning to be forgotten, and they are only sporadically referred to in the novel. The term "jihad" was less familiar in 1965 than it is today (starting with the war in Afghanistan in the 1980s, it became more associated with the Wahhabi movement's interpretation). Nevertheless, it gives a religious dimension to the rejection of machines, which we will discuss later. The adjective "Butlerian", on the other hand, as Paolo Riberi explains in his book “The secrets of Dune”, is a literary reference to the English novelist and critic Samuel Butler.

Samuel Butler is an author best known for his satirical novel Erewhon [1] (1872) where he tackled the subject of machines and artificial intelligence in a way that could be considered pioneering for its time. Butler envisioned a society in which all complex machines are banned and destroyed for fear of the ethical and philosophical implications of their continued development. The ban is not motivated by a fear that machines will turn against humanity. The fear is that machines could evolve (the reference to Charles Darwin’s then recently published theory is clear) in a similar way to living beings, and that if this were allowed to happen unchecked, these machines could reach a point where their intelligence and capabilities surpass those of humans. And that could begin the slow and insidious replacement of humans by machines. Moreover, he feared that excessive automation could lead to decay in human abilities and the loss of our ingenuity and creativity. That fear reflects the view that machines could reduce humanity to playing a passive role, depriving humans of their autonomy and their ability to innovate. Finally, machines are seen as a source of inequality, as their presence, and control over them, are tied to the privileges of a few, to the detriment of the many.

Existential risks?

Frank Herbert seems to have anticipated the contemporary debate about the "existential risks" of AI. Today, many entrepreneurs and AI scholars warn that AI could, in the future, dominate the world and even exterminate the human species. In the famous "AI pause letter" of March 2023, many of them called for a pause in the development of overly powerful generative AI systems. However, the alarm about existential risks is often used rhetorically by Big Tech to divert public attention from genuine risks that are already present: an imbalance of power in favour of those who own such technologies, a power asymmetry in society like the one Herbert wrote about, which led to the enslavement of the rest of humanity. While we can no longer speak of slavery today, there is talk, for example, of compensating potentially widespread unemployment with a "universal basic income" to be provided by multinationals. Elon Musk, owner of Tesla, SpaceX, and X, recently rebranded the idea as "universal high income" to make it more appealing, though without explaining much about its economic foundations.


[1] Note that the title is an almost perfect anagram of “nowhere” and, like ‘utopia’, it refers to a ‘non-place’.
