Vatican City – At the heart of the Holy See, amidst the air-conditioned offices of the Dicastery for Communication, something moves silently, without a badge or contract. It has no face, nor a signature at the bottom, yet it produces texts, translations, multilingual content. It is artificial intelligence—and not merely as a supportive tool but increasingly as a structural substitute for journalistic, editorial, and linguistic work.
For months, content circulating within the Dicastery has shown undeniable traces of automatic generation: artificial syntax, stereotyped constructions, translation errors made by language models, and a perfect, disturbing neutrality. Meanwhile, human employees spend their time "fiddling" on Twitter, Instagram, and Facebook, checking what Silere non possum writes about them. This seems to be someone's solution to the blunders of Andrea Tornielli, Salvatore Cernuzio, and company. Previously it was copy-and-paste; now, not even that.
The issue is not the use of AI. It’s the abuse.
That AI can be a valuable ally in journalistic and editorial work is unquestionable. This is not merely common sense: it is also the position of the ethical guidelines published by international organizations and news agencies. The Associated Press (AP), for example, has established that AI may be used only for support tasks (such as headlines, summaries, or partial translations) and must never produce publishable content without competent human oversight.
The USA Today Network, in its updated 2023 code, prohibits using AI to generate images and mandates direct oversight by a senior editor for any content involving AI. Additionally, the Radio Television Digital News Association (RTDNA) has clarified that any use of AI in journalism requires transparency, accuracy, context, and human supervision. In Austria, the national agency APA introduced specific "AI Guidelines" binding newsrooms to a principle of non-substitution, demanding that AI remain one tool among others, never an invisible editor. Yet the Vatican is moving in the opposite direction: skills are replaced by algorithms, entire bodies of content are outsourced to machines, and the result is left to (digital) providence.
The competency short-circuit
Behind this practice is not merely an organizational drift, but a dangerous epistemic illusion: hoping that AI will compensate for the shortcomings of those who lack skills, instead of leveraging those who have them. It’s not uncommon for managers with limited linguistic knowledge to input texts into AI platforms for translation into English, French, or Portuguese, trusting blindly in the automatic outcome, unable to recognize serious mistakes, incorrect titles, or repetitive content structures. AI thus becomes not a support for professionalism but a means to conceal its absence.
The security issue: sensitive data in the wrong hands
The most severe problem, however, is security. In Vatican editorial processes, texts produced (even confidential ones) are often inserted into commercial AI tools that process and store data for their training. In other words, sensitive documents pass through external systems, which collect, archive, and potentially reuse that information—a risk strongly denounced by international guidelines. The Austrian APA, for instance, requires AI use exclusively on internal and secure platforms precisely to avoid data leaks, privacy breaches, and improper appropriation of content. The Poynter Institute, in its ethical toolkit for 2025, insists on the principle of "internal controllability and traceability of use." All principles systematically ignored at Piazza Pia.
A step back is necessary—but a human one.
While Leo XIV clearly calls for an ethical and fair use of artificial intelligence—"Despite being an exceptional product of human genius, artificial intelligence is first and foremost a tool. Tools reflect the human intelligence that created them and acquire moral value from the intentions of their users"—the Dicastery for Communication heads in an entirely different direction. The Pontiff warns against distorted uses of AI, which, while potentially serving the common good and promoting equity, can also be manipulated for selfish interests, fueling inequalities, tensions, and even conflicts. Yet, precisely where ecclesial communication integrity should be witnessed, AI is often employed indiscriminately, with logics more attuned to speed and consensus than truth.
Requesting serious reflection on the use of artificial intelligence within the Dicastery for Communication is not an act of technophobia. It is a call to responsibility. AI can assist but not replace. It can simplify but not discern. It can translate but not interpret. And, above all, it cannot safeguard truth unless guided by human judgment. Not to mention that the Holy See employs hundreds of paid staff who should carry out their tasks competently and responsibly, not systematically delegate them to tools like ChatGPT. In essence, even using AI requires skill, and that skill cannot be improvised.
d.R.A.
Silere non possum