There is a common fear of generative artificial intelligence in the world of journalism. These fears were catalogued, among other occasions, during the Artificially Informed training event I organised in collaboration with the editorial staff of Slow News, the Ordine dei giornalisti Lombardia (Lombardy Association of Journalists), and the US Consulate in Milan. There is anxiety about humans being replaced, about loss of control over content, about the production of deepfakes, and about the large-scale dissemination of misleading content.
In reality, the problems of modern journalism began well before the advent of generative artificial intelligence. They come down to human choices, starting with the overproduction of low-value content in pursuit of a business model that is no longer sustainable.
With the digital age, journalism lost its oligopoly over the production and, above all, the distribution of content, and it faced a series of crises: not only economic ones, but also crises of identity and trust. The ability to publish online easily and quickly has led to a saturation of articles, often at the expense of quality and in-depth analysis. The pursuit of quantity, combined with declining content quality, has only made the other problems worse.
All this has led to a situation where industry professionals are often underpaid and work in precarious conditions. Newsrooms, under pressure to produce more content in ever-shorter timescales, are cutting the resources dedicated to training and supporting journalists. The introduction of artificial intelligence is further accelerating content production, amplifying existing problems. Many will see these technologies chiefly as a way to cut costs, but that view risks deepening the downward spiral described above.

AI as an ally
In reality, if we learn to use AI as an ally, it can become a valuable tool. For example, it can automate repetitive tasks such as transcribing interviews, managing emails, disseminating content on social media channels, optimising content for search engines, and analysing vast amounts of data or documents. This would free up time and resources that journalists could dedicate to the human aspects of their work — personal relationships, in-depth investigations, field research, and fact-checking. Moreover, if AI is used to support fact-checking, it can increase the accuracy of published information and reduce the risk of spreading fake or inaccurate news.
AI can, therefore, be liberating. Efforts can be focused on what really matters — great journalism at the service of the people.
But that’s not enough
In addition to using AI as a tool, journalists must also dedicate some of their attention and resources to studying the impact of these technologies. In fact, AI is influencing the economy, society, work and politics in increasingly profound ways. It is important that journalists and newsrooms specialise in understanding these dynamics so that they can inform the public about the changes taking place and their future implications.
That means understanding how to analyse the companies that develop and produce artificial intelligence, the influence they have on politics, and the capital they raise. It also means knowing what to really ask for, above all transparency — in processes, in how the machines are designed, and in guidelines for their use.
Newsrooms and journalists must also avoid simplistic narratives, often used for marketing purposes, about opportunities and risks. There are no technological magic wands that will solve everything, but it also makes no sense to fixate on the hypothetical tragedy of machines that will kill us all. Instead, we need to understand the real risks and real applications: in weapons, in structures of social and territorial control, and in decisions made about individuals, such as whether they may enter a place or qualify for a bank loan.
The adoption of AI in the world of journalism should not be seen as a threat. It should be seen as both a tool to be used and an area to investigate.
Unfortunately, I know that many publishers will see AI as an opportunity to produce more low-cost content, chasing the myth of infinite growth and clicks. They will also see the large oligopolistic companies that develop AI as potential clients from whom to secure money in the short term. For example, they can sell access to their archives so the content can be used to train and fine-tune models, or they can receive financial support for their publishing operations, as has already happened with Meta and Google.
Be that as it may, we can at least propose an ethical and useful way to exploit these tools and produce quality, accurate and human journalism. Only in this way can we hope to restore journalism’s dignity and value and appreciate the positive impact that AI can have on journalism and on society.