Generative artificial intelligence is rapidly transforming how we write, communicate, and consume information. But while journalists are among the professionals most affected by this shift, journalism students appear unprepared for the AI-driven newsroom.
A recent study by Spanish researchers sheds light on this paradox. The authors examined how communication and journalism students interact with AI tools in their studies and early professional experiences, revealing a mix of enthusiasm and unease.
Students recognise AI’s potential to improve efficiency, support creativity, and assist with content generation, but they express concern about reliability, ethics, authorship, and the risk of over-reliance on machine output.
Many reported difficulties in effectively using these systems, struggling with prompt design or interpreting the nuances of AI-generated text.
The study highlights that usability goes beyond technical access – it also involves understanding how AI integrates into workflows, how predictable and controllable it is, and how much users trust its results.
Some students even fear that routine dependence on generative tools could erode originality or professional identity, especially as content creation becomes increasingly automated.
The authors argue that journalism education must move beyond teaching digital tools and cultivate “AI literacy” – the capacity to use generative technologies critically and responsibly.
AI literacy means understanding when and how to use AI, how to evaluate its output, and how to maintain human oversight and ethical judgment in the creative process. Without this, future journalists risk being skilled users but poor editors of AI-driven information.
Everyday AI
If students are struggling to adapt, the general public has already moved far ahead. A recent study by OpenAI and Harvard offers the most comprehensive picture yet of how consumers use generative AI.
Drawing on 1.5 million anonymised ChatGPT conversations, the researchers find that AI has quietly become embedded in both professional and personal life.
About 30 per cent of ChatGPT’s use is work-related, while 70 per cent serves non-work purposes – from writing and planning to seeking advice or information.
The technology’s reach has also widened dramatically: adoption rates in low- and middle-income countries are now more than four times higher than in wealthier nations.
Demographic divides are shrinking too, with gender gaps in AI adoption having nearly disappeared since 2024.
Literacy for the AI age
Yet what makes generative AI so powerful – its human-like interface and persuasive fluency – also makes it potentially misleading. When users treat chatbots as authoritative or “human” sources, the boundary between information and simulation blurs.
That is where media and information literacy (MIL) becomes crucial.
According to UNESCO’s recent briefing, societies are still far from prepared to meet the challenges of an AI-driven information ecosystem.
Although 88 per cent of UNESCO member states recognise the importance of MIL by including it within their national policy frameworks, only 17 per cent have adopted a stand-alone policy.
Even where media literacy appears in school curricula, one-third of countries limit it to basic digital skills, neglecting the broader competencies that foster critical thinking.
UNESCO defines media and information literacy as the ability to critically access, analyse, and evaluate both traditional and digital information – a skill set that has become indispensable in the era of algorithmic content.
The report stresses that MIL must now address the realities of AI: understanding how generative systems work, how data is collected and processed, and how bias, privacy and accountability intersect in the production of digital content.
This means going beyond teaching students to recognise misinformation. It requires equipping them to understand the mechanisms behind automated media – to question why certain outputs appear, how they may reinforce bias, and what ethical standards should guide their use.
In short, MIL is no longer a matter of spotting “fake news”; it is about cultivating citizens and professionals who can navigate an environment where truth, creativity, and computation constantly overlap.
Building critical competence
The convergence of these trends – journalism students struggling with AI tools, ordinary users embedding them in daily life, and policymakers lagging in digital education – highlights a central truth of the AI era: technology alone does not guarantee understanding.
Generative models may draft articles, summarise data, or even mimic human reasoning, but they cannot replace the critical judgement that defines responsible journalism.
For the next generation of media professionals, mastering AI is not about keeping up with a trend – it is about preserving the credibility and ethical integrity of information itself.
As UNESCO reminds us, media and information literacy has become as essential as literacy itself. In an era where every click, query, and chatbot exchange can shape public opinion, that may be journalism’s most important lesson yet.
(BM)