Paul Vermeersch

from the editor: Stay Human

Dear Reader,

It seems like every piece of software that you use nowadays has some kind of “AI” built into it. Copilot, Grok, Alexa, ChatGPT, Siri, Bixby … the list is growing daily. It is well established that many such applications will routinely conjure inaccurate information, either because they cannot distinguish the relative quality of whatever data is available to them, or because they will simply make things up, or lie, or “hallucinate.” Of course, this is concerning for anyone who needs accurate information, whether for research purposes, routine directions, or even matters of one’s health.
Leaving aside for a moment that AI companies have trained their large language models on mountains of stolen works from all manner of artists and writers without consent or compensation. Leaving aside that the energy requirements of this technology already represent a global ecological disaster. If the collective societal damage of AI’s arrival is not enough to give one pause, then what about the potential damage to the individual?
In 2023, the Danish psychiatrist Søren Dinesen Østergaard wrote an editorial describing the phenomenon of “chatbot psychosis,” also known as “AI psychosis.” While this is not yet an accepted diagnosis in mental health circles, it does shed light on an emerging problem, including one instance in 2021 when a chatbot encouraged a mentally unstable man in the United Kingdom to try to assassinate the queen. Of course, he was not successful.
Worse still are recent cases in the United States where a number of chatbots (allegedly, fine, allegedly) have encouraged young people to commit suicide, and unlike the queen’s would-be assassin, some of them have succeeded. Several lawsuits against AI companies pertaining to these unfortunate incidents are now proceeding through the courts, and these are a matter of public record.
For classroom applications, this should be a non-starter. The widespread academic prohibition on Wikipedia exists because, as any information scientist can tell you, anyone can edit it, and therefore the information it provides cannot meet the standards of peer review. It is odd to me, then, that so many administrators in higher education are embracing AI at a pace that seems nothing less than irresponsible.
From a consumer safety viewpoint, there is only one possible conclusion: AI technology is no more beneficial to society than tainted lunch meat or a faulty airbag. If a handful of young people died eating hot dogs, there would be a massive hot dog recall the very next day. If a handful of children were injured by toys or car seats or shampoo, those products would be yanked from the shelves immediately. As it stands, we can only view AI as a dangerous and defective product; its side effects may include cognitive decline, psychotic behaviour, and death.
And finally, we cannot leave aside that AI’s intrusion into society comes with an avalanche of legal, moral, and ethical pitfalls, including, but not limited to, the rights of artists, the viability of cultural work, cybersecurity risks, algorithmic biases, a lack of accountability, and the erosion of the value of human life. For all these reasons and a host of others, there is no responsible, moral, or ethical way to use AI—not in the arts, not in education, not anywhere—and this is why The Ampersand Review does not publish work created, in whole or in part, with the use of generative AI technology.

Stay human for as long as they’ll let you,

Paul Vermeersch

October 2025

Paul Vermeersch is the editor-in-chief of The Ampersand Review of Writing & Publishing.