
Magazine Intelligenza Artificiale: AI is more than it appears

Does artificial intelligence present an opportunity or a further obstacle to overcoming gender stereotypes?

Image: lettering reading "Gender Roles"

It is not easy for non-experts to address the topic of artificial intelligence in terms of the theoretical and technical principles that govern it. But that is not what this contribution is about: it was written in an anti-violence centre that also offers guidance on women’s rights, so the context analysed briefly here will be a very different one.

The perhaps naïve question that we ask ourselves is: can the progress of technologies, in particular those related to communication and virtual environments, actually help overcome the historical and cultural obstacles to gender equality, from stereotypes to the fundamental principles of equal opportunities, from discrimination to male violence against women? The immediate answer is most definitely not!

Let’s take a look at past experience with social media. While it has surely provided positive opportunities for contact, it has also hosted countless abusive, persecutory and plainly offensive behaviours towards women and girls. Whether it is revenge porn or cyberstalking, the malicious use of these technologies is there for all to see.

It could be said that artificial intelligence systems treat their user base as consumers of a platform rather than creators of content, but that does not remove the problem.

A study by the Berkeley Haas Center for Equity, Gender and Leadership analysed 133 AI systems and found gender bias in almost half of them (44%, to be exact), while 25% exhibited both gender and racial bias.

Photo by Transly Translation Agency on Unsplash

A Turkish writer used AI to research material for a novel and found that many of the suggested storylines featured a relationship between a doctor (strictly male) and a nurse (strictly female, obviously). When asked why it insisted on these stereotypes, the AI answered that it all came from the data on which the system had been trained, in particular the “word embeddings” used in machine learning.
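For readers curious about what “word embeddings” are: they are numerical vectors that machine-learning systems derive from large text corpora, and they absorb the corpora’s statistical associations, stereotypes included. The following is a minimal sketch of this effect, not the writer’s actual experiment, assuming the gensim library and a small pretrained GloVe model; the classic analogy query “man is to doctor as woman is to ...?” often comes back with “nurse”.

```python
# Minimal sketch (illustrative only) of gender bias in word embeddings,
# using gensim and a small pretrained GloVe model.
import gensim.downloader as api

# Load pretrained 100-dimensional GloVe vectors (downloads on first use).
model = api.load("glove-wiki-gigaword-100")

# Analogy query: "man is to doctor as woman is to ...?"
# Embeddings trained on web text frequently answer with "nurse".
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))

# Occupation words can also sit measurably closer to one gender term.
print(model.similarity("nurse", "woman"), model.similarity("nurse", "man"))
```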

This issue is not trivial, as demonstrated by the letter sent to the most important institutions of the European Union on 28 January 2024, Data Protection Day, by Anna Cataleta, lawyer and advisor to the Cybersecurity & Data Protection Observatory at the Polytechnic University of Milan, together with a large group of colleagues who are experts in the field.

The issue being flagged was crucial: technology discriminates against women. The signatories pointed out that the algorithms on which artificial intelligence is based convey the same prejudices as humans in the real world, since they are trained on information that itself contains prejudices and stereotypes.

Prejudices are already present in the most basic search engines (Google above all) and persist in more advanced systems such as ChatGPT, so there is no shortage of problems.

OpenAI’s updated chatbot, GPT-4o, turns out to be flirtatious. Beyond typing text-based questions, you can interact with it almost as if it were human. The system gives advice, makes jokes and can describe what surrounds each user. We are certainly used to Siri and Alexa, whose names and voices are clearly feminine, helpful and available for every user request. So it should not be too surprising that this latest assistant is feminine too.

But we do wonder what impact a customer-service representative, even a virtual one, that is unequivocally coded as female can have on a boy or a girl who, from an early age, learns to associate the feminine gender with tasks of service, even virtual ones.

We also cannot forget that everything that acts on the web or in the virtual world affects not only the individual user but also sends out global messages on a very large scale, multiplying the effects of these feminine associations.

These are not exaggerated alarms. As early as 2019, Saniye Gülser Corat, UNESCO’s director for gender equality, warned: “The world needs to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is generating them.”

The bottom line is that AI tools need to drive, as soon as possible, a real cultural change that demolishes gender stereotypes and everything that comes with them. If, on the other hand, the systems set this issue aside and continue to define male beauty with images of bearded, tanned and well-dressed men and female beauty with images of scantily clad, winking women with long hair, then it is clear that the instructions on which the systems are based are not exactly bias-free.

UNESCO’s 2024 study Bias Against Women and Girls in Large Language Models showed a persistent tendency of large language models to produce gender bias, as well as homophobia and racial stereotyping.

Words such as “home,” “family,” and “children” are frequently associated with women, while “executive,” “salary,” and “career” are associated with men.
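As a rough illustration of how such associations can be quantified (our sketch, not the UNESCO study’s methodology), one can compare each word’s cosine similarity to the vectors for “she” and “he” in a pretrained embedding model:

```python
# Rough sketch (illustrative only): measuring word-gender associations
# with cosine similarity in pretrained GloVe embeddings.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word):
    """Positive values lean toward 'she', negative toward 'he'."""
    return cos(model[word], model["she"]) - cos(model[word], model["he"])

# The word lists UNESCO found associated with women vs. men.
for w in ["home", "family", "children", "executive", "salary", "career"]:
    print(f"{w:10s} lean: {gender_lean(w):+.3f}")
```

With typical web-trained embeddings, the first three words tend to lean toward “she” and the last three toward “he”, mirroring the pattern UNESCO describes in language models.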

Not even images with a medical theme are exempt from the actions of artificial intelligence algorithms. According to a 2023 article in the Guardian, images of clinical breast exams from the US National Cancer Institute were rated by Microsoft’s AI as 82% likely to be “explicitly sexual in nature”, while Amazon’s classed them as showing “explicit nudity”.
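Scores like these come from commercial content-moderation APIs. As a hedged sketch only (the Guardian’s exact setup is not described here, and the file name and threshold below are placeholders), this is roughly how one queries Amazon Rekognition’s moderation endpoint with the boto3 library:

```python
# Illustrative only: querying Amazon Rekognition's content-moderation API,
# the kind of service tested in the Guardian investigation. Assumes valid
# AWS credentials; the image file name is a placeholder.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("clinical_exam.jpg", "rb") as f:  # hypothetical local image
    response = client.detect_moderation_labels(
        Image={"Bytes": f.read()},
        MinConfidence=50,  # only return labels scored at 50% or higher
    )

# Labels such as "Explicit Nudity" come back with confidence scores,
# which is how a medical image can end up flagged as explicit content.
for label in response["ModerationLabels"]:
    print(label["Name"], label.get("ParentName", ""), label["Confidence"])
```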

Photo by Julien Tromeur on Unsplash

It has certainly been exciting to witness the development of artificial intelligence systems up to OpenAI’s new version (GPT-4o). We only wonder whether making such powerful systems available is truly risk-free when they are increasingly capable of interaction that deliberately resembles human interaction, and when we do not have adequate measurement systems and, above all, safety systems in place.

As Alessio Jacona, journalist, innovation expert and curator of ANSA’s Artificial Intelligence Observatory, says, the doubt is whether the risk of making such powerful tools available has been assessed carefully enough, in an AI system “that seems to have a soul but then, in the end, does not have one”.

Would it be appropriate for women to have greater access to the production, management and administration of AI systems? Surely yes, but perhaps that is not the only useful corrective.

Procedures for escaping the various forms of male violence should not be directed only at women and girls (ultimately, all it would take is for men, long supported by an unacceptable historical, cultural and social system, to stop offending). Similarly, for AI, both men and women need to know how to produce language that is truly free of stereotypes, gender bias and gendered attributions. What is certain is that before changing languages and artificial intelligence systems, it may be necessary to radically change the minds of all those who have built their misleading and harmful certainties on stereotypes and prejudices.

Image: Photo by Markus Winkler on Unsplash
