Published by the Students of Johns Hopkins since 1896

On generative AI: The News-Letter’s commitment to journalistic integrity

By THE EDITORIAL BOARD | September 16, 2025


SYDNOR DUFFY / DESIGN & LAYOUT EDITOR

Marking a new academic year, the Editorial Board reaffirms The News-Letter’s commitment to journalism: publishing original, reliable work without the use of generative AI.

Generative AI is everywhere. ChatGPT and similar tools are used across disciplines — in the office, in classrooms, for assignments and even for participation points in class discussions. At Hopkins, many students use AI not just for quick and easy answers on homework assignments but also in their writing — in essays, short stories, poems and, in some cases, articles. The University recently announced the Hopkins AI Lab, which “offers members of the Johns Hopkins community secure and easy access to Large Language Models (LLMs) from industry leaders including OpenAI, Anthropic and Meta.”

The News-Letter is aware of the benefits of AI in everyday life. It saves time and energy, and it’s an easy way to get feedback and ideas for projects and assignments. It may seem natural to integrate this powerful technology into journalism to streamline writing and editing.

AI is efficient, but journalism, as time-sensitive as it may be, requires more than efficiency. Tackling sensitive topics demands nuanced interviews and care for the subject matter to produce a sincere publication. The News-Letter will not sacrifice ethical reporting for efficiency, no matter how alluring the prospect may be. For the following reasons, we will not use generative AI in our writing and editing process.

1. Generative AI cannot take responsibility for its actions.

One of the merits of using AI is its ability to provide extensive detail on a variety of topics. However, AI can still make mistakes and produce inaccurate results, a caveat many of these tools disclose themselves. When broadcasting sensitive and nuanced information to a public audience, it is important to pay special attention to inaccuracies.

Humans are also bound to make mistakes, but the issue is not inevitability — it is responsibility. No matter how “human” AI may sound, it can neither account for its mistakes the way a human writer can, nor pursue the right answers with journalistic integrity. Especially in an industry where accountability and emotion form a large part of public trust, AI should not take the place of journalists. News concerning our community must be written by reporters who not only produce reliable information but also address their shortcomings with compassion.

2. Generative AI can infringe on copyrights.

The New York Times is currently in a legal dispute with OpenAI, claiming that the latter unlawfully infringed on the Times’ copyright by using millions of its articles to train LLMs. More recently, Anthropic agreed to pay a $1.5 billion settlement to a group of authors who claimed that the company obtained their books from piracy websites. AI’s tendency to reproduce content without proper credit is highly problematic.

Copyright is a two-way street. Because generative AI can source its results from original, protected works, The News-Letter would, in using AI, be publishing content that is not fully its own. In everything we release, we put special emphasis on respecting our sources’ copyrights and crediting creators properly. This also allows us to provide content created entirely by Hopkins students, for the Hopkins community.

3. Generative AI can undermine the critical thinking process, a pillar of journalism.

The act of intentional writing is inherently tied to the practice of critical thinking, as it brings together multiple skills: reading from a collection of sources, recognizing main ideas and conveying them cohesively. Each step of the process deepens our understanding of the subject and the world around us. Even after a piece is published, the writing process continues to promote knowledgeable discussion and meaningful engagement, keeping critical thinking alive.

Collaboration has also been shown to contribute to the development of critical thinking. As we edit all articles submitted to us, we engage meaningfully with our contributors’ ideas, evaluate which suggestions to put into effect and, throughout it all, communicate clearly and effectively.

Ultimately, The News-Letter focuses on the voices of Hopkins and Baltimore. Generative AI’s training data is not limited to this community, nor does it form a cohesive, singular voice with an intentional directive in mind. Even setting aside whether readers want our content to be organic, we cannot ensure accuracy when using LLMs.

As AI continues to evolve, the line between human and machine-generated work becomes increasingly blurry. While it has many valuable applications and promises to change society radically, AI currently carries too many risks. Uncertainty surrounds its use and its ethics, especially for a publication that prioritizes journalistic integrity. The News-Letter will not publish articles written or edited by generative AI.

