
Magazine Intelligenza Artificiale: AI is more than it appears

Artificial Intelligence is everywhere, but it doesn’t come from nowhere

Artificial intelligence is in everything: from autonomous vehicles to digital assistants and chatbots; from facial, voice and emotion recognition technologies to social media; from banking, healthcare and education to public services (and even love).

However, AI was not born out of nothing. We are living in a period of polycrisis. New wars, conflicts and military coups are emerging on almost every continent, and the United Nations High Commissioner for Human Rights, Volker Türk, recently stated that a quarter of humanity is involved in 55 conflicts worldwide. Climate change made 2023 the hottest year on record and is escalating natural disasters in both frequency and intensity. The Covid-19 pandemic led to a severe global recession whose effects are still being felt today, particularly by the poorer social classes. Finally, we are witnessing the rise of far-right politics, which is gradually taking over governments in Europe and beyond.

The spread of AI, in public discourse as well as in practical implementation, coincides with the aftermath of the 2008 global financial crisis and the rise of a ‘new spirit of capitalism’ characterised by austerity measures. AI now embodies a new knowledge regime that amplifies the effects of austerity policies while being presented as ‘neutral science’. The private sector praises AI for increasing efficiency and objectivity and reducing bias, while public institutions, especially bureaucracies, increasingly embrace AI for their own optimisation purposes, aiming to achieve more with fewer resources.

However, as we shall see, what AI promises is an oversimplified vision of society reduced to statistical numbers and pattern recognition. AI is thus the ultimate symptom of ‘achievement society’.

AI as a new regime of the achievement society

In his book ‘The Burnout Society’, philosopher Byung-Chul Han developed the concept of ‘achievement society’. He wrote: ‘Today’s society is no longer Foucault’s disciplinary world of hospitals, madhouses, prisons, barracks, and factories. It has long been replaced by another regime, namely a society of fitness studios, office towers, banks, airports, shopping malls, and genetic laboratories. Twenty-first-century society is no longer a disciplinary society, but rather an achievement society.’ According to Han, the ideological imperative that drives the achievement society is Unlimited Can.

The ideological imperative of Unlimited Can is at the heart of the AI regime. How? First, it is related to AI’s insatiable need for data. AI technologies require huge amounts of data to train their models. However, raw data is not usable as it stands: it must be collected, categorised, labelled, ranked and, in some cases, scored.

Second, human labour plays a significant role in the development and training of AI technologies. For example, creating ImageNet, one of the first large-scale deep-learning datasets, comprising more than 14 million tagged images, required the efforts of thousands of anonymous workers recruited via platforms such as Amazon’s Mechanical Turk. This practice, known as ‘crowdworking’, divides tasks into small parts that are completed by people worldwide, often for minimal pay of a few cents per task.

The new regime of AI is ‘smart power’. Its power relies on new forms of pervasive surveillance, data and labour extraction, and neocolonialism. As Han wrote: ‘The greater power is, the more quietly it works. It just happens: it has no need to draw attention to itself.’ The ‘smart power’ of AI is violent. Its violence is rooted in the ‘predatory nature’ of AI technologies, which is camouflaged by mathematical operations, statistical reasoning, and a pervasive unexplainability known as ‘opacity’.

Photo by ThisisEngineering on Unsplash

Han’s concept of ‘smart power’ highlights the invisible but potentially violent nature of AI’s influence, often sold with marketing labels such as ‘smart cities’, ‘smart homes’, ‘smartphones’ and ‘smart classrooms’. These terms imply increased surveillance and data collection. For example, the introduction of a ‘smart’ Barbie doll in 2015 led to a data privacy scandal. Smartness therefore operates as obfuscation for further control. AI, as the new regime of the achievement society, should come with a warning tag: AI is the Other of human.

AI is the Other of human

The proposition ‘AI is the Other of human’ is a serious denunciation of the new AI regime. Its intensity is comparable to Mladen Dolar’s accusation of fascism (I have repurposed his proposition) in his essay ‘Who Is the Victim?’. He wrote: ‘Fascism is the Other of the political; even more, it is the Other of the human.’

Under fascism and Nazism, just as under slavery, the struggle for recognition between the Other and the Same created a dissymmetry that implied the superiority of one over the other. This organised process of othering, observable in various mechanisms such as eugenics-driven policies and classification with badges, culminated in tragedies such as the Holocaust, facilitated by technologies such as IBM’s punch card system supplied by its German subsidiary.

Such connections between eugenics and technology extend into the current field of AI. A 2016 paper by Xiaolin Wu and Xi Zhang entitled ‘Automated Inference on Criminality Using Face Images’ claims that machine learning techniques can predict, with nearly 90 per cent accuracy, the probability that a person is a convicted criminal using nothing but a driving-licence-style facial photo. The study suggests that certain facial features indicate criminal tendencies and that AI can therefore be trained to distinguish between the faces of criminals and non-criminals. This is an example of physiognomy and eugenics being validated and entangled with the field of AI. Research papers like this do not exist in a vacuum: they have practical implications and tangible effects on people’s lives. That same year, ProPublica published an investigation revealing machine bias in predicting the likelihood that two defendants (a black woman and a white man) would commit future crimes. The software assigned a higher risk score to the black defendant, despite her having a lesser criminal record than the white defendant, who had previously served five years in prison.

Axis of Exclusion

Unlike fascism and Nazism, AI today does not create visible dissymmetries between the Same and the Other. The ‘smart power’ of AI creates new invisible (a)symmetries: a mechanism of separation, segregation, inferiority and violence. This mechanism operates as the Axis of Exclusion, drawing three (a)symmetrical lines that deepen the asymmetries of society. That is, it reproduces relations of domination, exploitation, discrimination and slavery.

First of all, AI optimisations such as backpropagation, used to train neural networks, can indeed categorise and label people or neighbourhoods, but they ignore the nuances of lived experience and complex social relationships. This leads to essentialisation, segregation, and power dynamics of ‘superiority’ and ‘inferiority’.
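Backpropagation is, at its core, gradient descent on a loss function. The toy sketch below (the data, feature names and thresholds are purely illustrative, not drawn from any system discussed in this article) trains a single logistic unit: before any learning can happen, each ‘case’ must be flattened into a numeric feature vector and a 0/1 label, which is precisely the reduction described above.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, labels, lr=0.5, epochs=2000):
    """Train one logistic unit by backpropagating the log-loss gradient."""
    w = [0.0] * len(examples[0])  # one weight per feature
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy data: each "person" is reduced to two numbers and a binary label.
# Everything not captured by these two features is invisible to the model.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train(X, y)
```

Whatever the features stand for, the model can only ever reproduce the patterns encoded in them; any social context left out of the vector is, by construction, invisible to the optimisation.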

For example, recent research evaluated large language models such as Bard, ChatGPT, Claude, and GPT-4 that had been trained using backpropagation, and found inconsistencies and the perpetuation of race-based medicine in all the models. GPT-4 incorrectly claimed that the lung function of black people is 10-15% lower than that of white people, which shows the dangers of medical assumptions based on race.

Photo by Charles Fair on Unsplash

The second line of (a)symmetry intensifies the relationship between ‘the Centre’ and ‘the Periphery’. Drawing on Badiou’s interpretation of Hegel’s master–slave dialectic, I propose that the Centre leans towards the side of enjoyment and the Periphery towards the side of labour. These relationships are further intensified by the inherent fragility of AI systems: when they operate outside the narrow domains on which they were trained, they often break down, and keeping them running requires constant, largely unseen human labour. This has given rise to an invisible new section of the working class known as ‘ghost workers’.

Finally, the rapid data processing capabilities of AI enable social asymmetries to be amplified globally. Social media algorithms can create echo chambers that reinforce divisive narratives, exacerbating ‘us’ versus ‘them’ dynamics. This expansion fuels widespread polarisation and can perpetuate narratives that justify domination and discrimination by dehumanising the out-groups. For example, in 2018, HireVue stated that their AI-driven video interviewing systems could select successful employees based on facial movements and voice, but this discriminates against people who have disabilities that affect these characteristics.

The perpetuation of ‘in-groups’ (‘us’) as the ‘norm’ and ‘out-groups’ (‘them’) as the ‘non-norm’ yet again demonstrates the empirical harms and large-scale segregation caused by AI technologies, which primarily affect minority groups: black and brown people; poor individuals and communities; women; LGBTQI+ people; people with disabilities; migrants, refugees and asylum seekers; young people, adolescents and children; and other minorities.

If the final outcome of using AI in the real world is greater injustice, inequality and marginalisation on the one hand, and ‘greater efficiency with fewer resources’ on the other, then the difference between the two will be paid for in deficits of democracy and human rights, and the most vulnerable people will pay the highest price.

Image: Photo by Matteo Catanese on Unsplash
