Published by the Students of Johns Hopkins since 1896

Playing devil’s advocate: On whether The News-Letter should ban AI

By LEO LIN | October 30, 2025


IZZYMUNCHTED / CC BY-SA 2.0

Lin reflects on The News-Letter’s editorial article regarding its stance on generative AI.

On Sept. 16, The Johns Hopkins News-Letter Editorial Board released an article titled “On generative AI: The News-Letter commitment to journalistic integrity.” It set out the paper’s stance on the use of artificial intelligence in student journalism: The News-Letter would not sacrifice ethical reporting for the sake of efficiency, however tempting that may be. The editors provided three primary justifications for this stance: (1) generative AI cannot take responsibility for its actions, (2) AI can infringe on copyrights and (3) AI can weaken the process of critical thinking.

In this article, I will discuss each of these arguments and offer my view of what responsible use of AI might look like in a university newsroom. This article is largely an exercise in playing devil’s advocate, not to dismiss The News-Letter’s caution, but to encourage reflection on what truly justifies an AI ban.

Argument No. 1: Generative AI cannot take responsibility for its actions.

The editorial board is right to emphasize accountability. Nevertheless, AI is not an agent but a tool. This argument seems to conflate moral responsibility with tool reliability. No journalistic tool, whether it’s a camera, transcription software or even Microsoft Word, can take “responsibility” for its actions. It is always the human using the tool who is responsible. In my opinion, the problem is not that AI cannot bear blame, but that people should not be relieved of responsibility for how they use it.

A journalist using a search engine like Google might get different results depending on how other people have filtered, ranked and summarized the information. If a search result contains an error, one does not blame Google; it is the journalist’s responsibility to verify it. So what makes AI any different? If an editor fact-checks AI-generated information and approves it, that editor is still accountable.

One might argue that with search engines, we can choose trustworthy sources that are more reliable and carry a low rate of error. However, there are also ways to use AI that decrease its chances of hallucination. For example, requiring the model to fact-check, support and cite all of its claims against reliable sources greatly decreases the possibility of it giving out wrong information.

Argument No. 2: Generative AI can infringe on copyrights.

The editorial board argues that since “generative AI can source its results from original and protected works, in using AI, The News-Letter would be publishing content that is not fully its own.” Here, the concern seems twofold:

Ethical: that AI writing is “not fully one’s own work”

Legal: that it risks copyright infringement

Responding to the ethical concern, I think it should be kept in mind that authorship has always been collaborative and mediated by tools. No writer ever writes in a vacuum. In this digital era especially, full originality appears unattainable. All writers consciously or unconsciously absorb words, ideas and structures from myriad invisible sources — books we have read, headlines we have scrolled past and phrases we still remember from social media. Given all this, is it really “taking” from someone to use AI to help with writing?

It is true that AI synthesizes information without directly citing its sources, but this difference is not unbridgeable. We can make AI provide citations or supporting evidence for its claims, and we can remain transparent about how it was used. In most journalistic contexts, the best use of AI is not to invent ideas or arguments, but to assist human writers in refining them — helping us think more clearly, not think for us. When used under these circumstances, there is little concern that AI is “stealing” ideas, since the creative and argumentative substance still originates from the writer.

Large language models do not work like a database. They function by following language patterns (such as the likelihood of words occurring after one another) and building new sentences through those patterns. If AI generates text, it does not copy paragraphs from existing sources but instead creates new combinations of language based on patterns it has learned. Some might argue that paraphrasing without citation is still plagiarism, but in a broader sense, all writing builds on shared ideas and expressions passed down through culture. The key distinction lies in intention and use: when AI is used to clarify our own reasoning rather than to reproduce another’s work, it becomes part of the same ongoing process of reinterpretation that defines human creativity. The difference between use and misuse lies entirely in human control. 
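To make the point concrete, below is a minimal, hypothetical sketch in Python of the kind of pattern-based generation described above. It uses a toy bigram model over an invented three-sentence corpus (all wording is made up for illustration); real large language models use neural networks trained on vastly more data, but the underlying idea is the same: text is assembled token by token from learned statistical patterns rather than retrieved as stored passages.

```python
import random
from collections import defaultdict, Counter

# Invented toy "training" corpus, for illustration only.
corpus = (
    "the reporter checks the facts . "
    "the editor checks the sources . "
    "the reporter writes the story ."
).split()

# Count which words follow which, e.g. next_counts["the"]["reporter"] == 2.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_counts.get(word)
        if not candidates:  # no known continuation for this word
            break
        words, weights = zip(*candidates.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# Produces recombinations such as "the editor checks the facts . the ...",
# a sentence that appears nowhere verbatim in the corpus.
print(generate("the"))
```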

With the launch of the Hopkins AI Lab, we now have access to secure models along with tools that are designed to help us use AI more responsibly. Having this kind of structure in place gives the Hopkins community a safe space to experiment with these models and learn how to integrate AI ethically into their work, whether that’s research, writing or something else.

Legally, too, using AI does not necessarily violate copyright. There is always a possibility of infringement, but that risk will likely diminish as AI systems improve. As models become more transparent and better at tracking sources, the boundaries between fair use and violation will become clearer. What matters now is weighing the current risks against the benefits — and when used correctly and responsibly, the advantages of AI in writing and research can far outweigh those concerns. The current litigation against firms such as OpenAI concerns the use of data to train models, not the ability of students or writers to use the models. Legally speaking, using AI in The News-Letter’s newsroom would not create copyright issues, since writers are generating original language outputs through systems already vetted for public or educational use.

Argument No. 3: Generative AI has the potential to weaken the process of critical thinking.

The News-Letter is concerned that generative AI undermines the process of critical thinking. However, when applied deliberately, it may work the other way. Brainstorming with AI may help us think more effectively. The model responds logically, which is helpful when we want objective feedback. In conversation, its questions or counterpoints often reveal where our reasoning is thin or our assumptions are shaky. That back-and-forth helps refine arguments in the same way a good editor or debate partner would. By articulating ideas to something that always asks “why,” we learn to understand our own thoughts better.

AI can also enhance the way we express our ideas. Some authors might know what they want to say but not how they want to say it. To find the right words, AI could be a helpful tool: it could offer different versions of a sentence, giving the writer inspiration and help in shaping the piece.

It can even be used as a constantly available editor. When AI flags ambiguous transitions, inconsistent assertions or unverifiable claims, the writer still has to choose which feedback to follow. Weighing one’s own judgment against a suggestion is itself an act of judgment, which is the nature of critical thought.

I think the advantages that AI brings should at least make us pause and reflect on how we should adapt to this new age of writing.

My Take on Generative AI

The question is why we oppose the use of generative AI. Is it because using it is wrong in itself? Or because we fear what might happen if we do? These concerns are valid, yet neither, on its own, is enough to warrant an outright prohibition.

If we believe using AI is wrong in itself, we should ask what makes it wrong. Does using a tool to write make the act less honest? I do not think so, as long as the purpose stays the same — to tell the truth and serve the reader. What gives writing its moral weight is the intention behind it. If a journalist uses AI to express that duty more clearly, not to deceive or cut corners, I think the integrity of the work remains. The value of a story lies in its honesty, not in whether it was typed by hand or refined with help from a tool.

If the concern is about consequences, that AI will make writing worse or shallower, or that incorporating AI will halt the development of writing skills, I think the vast advantages that come with AI are good reason to stop and weigh the pros and cons. Yes, this tool comes with many concerns, but does that really mean we should ban it? Is there a way to utilize it so the pros outweigh the cons?

David Cohn of the Poynter Institute put it well: “While we embrace the efficiencies and insights AI offers, we also preserve the irreplaceable value of human perspective and creativity.” That balance, using AI without losing our judgment, captures the goal.

The New York Times also offers three instructive principles for using generative AI in the newsroom. As an instrument of the mission, AI must help find the truth, reach more readers and strengthen journalistic abilities rather than eliminate them. Under human supervision and control, any work supported by AI should be fact-checked and remain the responsibility of editors and journalists. Readers have the right to know how stories are created and to trust that machines did not override human judgment. Such principles could show how The News-Letter can incorporate AI appropriately.

I realize how hard this would be in practice. Controlling the use of AI is tricky and untidy, as it requires making sure the technology augments human labor rather than substitutes for it. People will misuse it, and no rules can prevent every abuse. The answer is not to back away but to educate ourselves and take responsibility.

I do not think The News-Letter should ban AI altogether. Instead, we should think about how to use it well. I understand why a complete ban feels easier. The risks are real, and creating guidelines would be complicated. But ignoring AI does not make those challenges disappear. It only means we delay learning how to handle them responsibly. Journalism will need to integrate AI into the newsroom eventually, so I argue that it is wiser to start now, while we still have the power to shape the rules.

The News-Letter could even take a leading role here. As the editorial staff, we could begin testing what the responsible use of AI in student journalism looks like and organize workshops and discussions with experts. This way, we would not only act according to our ethical standards but also prepare for a future where AI is part of the newsroom, and learn how tradition and technology can work together.
