Published On: 20 November 2025
  • The Sustainable Conversations, organised by the IDAEA-CSIC Sustainability Committee, brought together specialists in computing, computer science and mathematics to discuss energy consumption, software optimisation and sustainable management models for Artificial Intelligence.

Roundtable “The (Un)sustainable Footprint of Artificial Intelligence” (13/11/2025, IDAEA-CSIC). From left to right: Iria Sambruno, Laura Barrios, Ana Conesa, Vicente Palomero, and Antonio Ortiz.

According to the EU Artificial Intelligence Act, Artificial Intelligence (AI) refers to a machine-based system designed to operate autonomously and demonstrate adaptive behaviour, whose outputs (predictions, content, recommendations or decisions) may influence diverse areas of our lives, in both real and virtual environments. Although AI technologies began to develop in the mid-twentieth century, their unprecedented progress in recent years has been driven by the enormous computational capacity now available to process large volumes of data.

The daily use of AI tools for research, work or leisure generates an environmental, social and sometimes ethical impact, often unknown due to the “magical” and immaterial appearance of this technology. As journalist Marta Peirano reminds us, “there is no world of pixels and a world of atoms. Every time we look up a street, click a link or play a song on Spotify, a chip heats up somewhere else in the world and at least one fan switches on”.

With the aim of sharing knowledge, experiences and strategies to move towards a more sustainable application of Artificial Intelligence in science and in everyday life, the Institute of Environmental Assessment and Water Research (IDAEA-CSIC) held a seminar on 13 November to reflect on the role of research in this emerging challenge.

The event, organised by the IDAEA Sustainability Committee as part of the "Sustainable Conversations" series (an initiative promoted by the CSIC Sustainability Plan 2024–2026), brought together four specialists: Ana Conesa, Laura Barrios, Antonio Ortiz and Vicente Palomero. Their talks provided more than eighty attendees with complementary perspectives on AI, from sustainable scientific computing and transparency to the cultural implications of this technology.

Moving towards sustainable computing

Laura Barrios, Head of the General Secretariat for Computing at CSIC and member of both the CSIC Ethics Committee and the CSIC Sustainability Committee, opened the session with a focus on the energy, water, material and social costs behind AI infrastructures. Data centres are currently the only sector in which energy demand is growing in Western Europe and the United States. A 2024 report from the Lawrence Berkeley National Laboratory (LBNL) even estimates that their energy consumption could triple by 2028.

Barrios advocates for advancing towards efficient and sustainable computing in scientific research, which requires:

“Developing management tools and software to reach a sustainable space in all its dimensions, improving management and architecture, ensuring highly trained technical staff and, above all, designing a long-term institutional strategy that does not depend on governance changes.”

Ana Conesa, Research Professor at CSIC and the Institute for Integrative Systems Biology (I²SysBio), and member of the CSIC Sustainability Committee, shared practical recommendations to reduce the impact of computational research through software choices and good practices.

The computational biologist and bioinformatics specialist highlighted several good practices:
  • optimising algorithms, for example with the Green Algorithms software;
  • reducing unnecessary fine-tuning;
  • choosing Central Processing Units (CPUs) or Graphics Processing Units (GPUs) according to the real objectives of each task;
  • scheduling tasks during periods of lower energy demand;
  • properly managing, or deleting, unused data.

“The scientific community is working to define the principles of sustainable computing: good governance, responsibility, assessing the carbon footprint and the real environmental impact of this technology. And of course, education—because without education, nothing can be applied.”
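The kind of footprint accounting Conesa describes can be sketched with a simple back-of-the-envelope model in the spirit of the Green Algorithms calculator: energy is runtime times hardware power times utilisation times the data centre's overhead factor (PUE), and carbon is energy times the grid's carbon intensity. This is an illustrative sketch, not the Green Algorithms tool itself; the function name, default values and example figures are assumptions chosen for demonstration.

```python
# Hypothetical estimate of a compute job's carbon footprint, following the
# general approach of tools such as the Green Algorithms calculator:
#   energy (kWh)  = runtime x power x usage x PUE / 1000
#   carbon (kg)   = energy x grid carbon intensity
# All default values below are illustrative, not authoritative.

def job_footprint_kg_co2e(runtime_hours, power_draw_watts,
                          usage_factor=1.0, pue=1.5,
                          carbon_intensity_kg_per_kwh=0.25):
    """Return (energy_kwh, carbon_kg_co2e) for a single compute job.

    runtime_hours: wall-clock duration of the job
    power_draw_watts: rated power of the CPUs/GPUs used
    usage_factor: fraction of rated power actually drawn (0-1)
    pue: data-centre Power Usage Effectiveness (cooling/overhead multiplier)
    carbon_intensity_kg_per_kwh: grid emissions per kWh; this varies by
        region and by time of day, which is why scheduling jobs during
        periods of lower demand can reduce the footprint
    """
    energy_kwh = runtime_hours * power_draw_watts * usage_factor * pue / 1000
    carbon_kg = energy_kwh * carbon_intensity_kg_per_kwh
    return energy_kwh, carbon_kg

# Example: a 48-hour job on a 300 W GPU at 80% utilisation.
energy, carbon = job_footprint_kg_co2e(48, 300, usage_factor=0.8)
print(f"{energy:.2f} kWh, {carbon:.2f} kg CO2e")  # 17.28 kWh, 4.32 kg CO2e
```

Even this crude model makes the trade-offs in the list above concrete: halving the runtime through algorithmic optimisation, or running the job on a grid with lower carbon intensity, scales the footprint down proportionally.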

The need for explainable, ethical and human-centred AI

Antonio Ortiz, researcher in the CSIC Momentum programme and member of the Porous Media Multiscale Modeling Laboratory (PM3Lab) at IDAEA-CSIC, began by reminding the audience of a crucial fact: AI increasingly participates in decisions that deeply affect our lives, such as medical diagnoses, organ-transplant waiting lists, candidate selection in the workplace or even legal decisions.

In this context, the mathematician highlighted the three pillars proposed by Virginia Dignum (2019) for establishing responsible AI: explainability, transparency and final human responsibility, never delegated to machines.

“To make responsible decisions, we need truly interdisciplinary teams, transparency to trace and audit AI-generated decisions, and continuous verification to prevent systems from stagnating and to ensure they keep learning over time.”

The final talk was given by Vicente Palomero, full-stack developer and language engineer, who offered a personal account of his decision to leave the tech industry, driven by ethical and environmental concerns about the uncontrolled expansion of generative AI, and to devote himself to creative writing and activism.

Palomero discussed language models and how the industry has dramatically increased carbon emissions in pursuit of performance and reduced training times, often by adding more energy, GPUs and data. Drawing from his experience as a writer, he explained how AI affects written communication through massive automation and the erosion of responsibility for content, making clear that the debate around the AI footprint is not only technological but also cultural and social.

“When we oversimplify a text, we lose technical precision and gain ambiguity, the message gets distorted, and the author becomes hidden behind layers of adaptation. If a book ends up being just a bridge to a chatbot that imitates its creator, what place is left for the creative work itself?”


The event concluded with a roundtable featuring all four speakers and moderated by Iria Sambruno, member of the IDAEA-CSIC Communication Department. The discussion delved deeper into these essential issues for achieving more responsible AI, showing that AI is not intrinsically good or bad: its impact depends on design, context, ethics and proper governance.

If you missed the session, you can watch the recording here:
