The mission of the University, inspired by Daniel Coit Gilman’s inaugural address, is “to educate its students and cultivate their capacity for lifelong learning, to foster independent and original research and to bring the benefits of discovery to the world.” This mission has helped shape Hopkins into the academic powerhouse it is today, having produced some of the world’s most notable scholars, innovators and thinkers, such as Michael Bloomberg, Woodrow Wilson, John Dewey, Madeleine Albright and 29 Nobel Prize winners. But what happens when these scholars are handed a secondary, artificial mind to complement their studies? Do they remain independent and original? Do they bring their own benefits of discovery to the world, or the benefits of a computer system that can reason and problem-solve the way humans do? The rise of AI has significantly disrupted the pursuit of higher learning, overshadowing intellectual struggle and catalyzing a generational cognitive decline.
The use of generative AI in the writing and production of any academic work is generally prohibited or restricted at the discretion of course instructors. This policy is appropriate for maintaining academic integrity and intellectual honesty, and it should remain in place even as technology advances and intertwines further with day-to-day life. However, artificial intelligence as a supplement to learning, a tool that assists with idea generation, problem-solving and the explanation of difficult topics, is inevitable yet detrimental to the learning mind.
Sam Altman, CEO of OpenAI — the company behind ChatGPT — described the GPT-5 model, launched in August 2025, as “having a team of Ph.D.-level experts in your pocket.” This convenient abundance of knowledge can significantly decrease the intellectual struggle one might otherwise face. For example, a student struggling to understand a practice problem while studying for a midterm can consult ChatGPT, simulating private office hours at their convenience. These applications extend beyond the classroom. AI can also help with interpersonal issues, such as a conflict with a messy roommate, by offering psychological insights to manage the situation. A Nature Human Behaviour study shows that, over time, repeated human-AI interactions lead people to adopt the system’s bias, evidence that AI is bound to reshape our cognition and learning as students and people.
A reflection on the impact of artificial intelligence posted by the NIH states that while “search engines and platforms like Wikipedia provide users with vast amounts of information... [artificial-intelligence chatbots] are not just repositories of information; they simulate human conversation... [a] dynamic interaction [that] could lead to a different kind of cognitive reliance.” Some argue that concerns about the rise of artificial intelligence are nothing new, arising just as they did with every previous wave of technology. Similar worries about cognition accompanied the rise of search engines, as in Nicholas Carr’s article for The Atlantic, “Is Google Making Us Stupid?” However, no prior wave of technology has posed a threat to human intelligence and usefulness the way AI does: a technology designed to mimic human thought and sentience.
Google, since its rise to popularity in the early 2000s, has provided access to pieces of information, but our minds were still required to bring those pieces together. With search engines, you are the compiler, and that act of compiling is the learning pathway; artificial intelligence, by contrast, puts you in the passenger’s seat and hands you the answer directly. You, the user, are responsible only for evaluating whether its output agrees with your existing knowledge. And when its knowledge surpasses your own, the overreliance begins: an uncritical trust that starts to erode our cognition.
In a CNN interview, Dr. Nataliya Kos’myna, a research scientist at MIT, shared findings from a study showing that relying solely on AI for tasks like writing can reduce brain activity and weaken memory. This research highlights that, as technology advances, careless use could actively change how we think. Furthermore, a paper by Professor Michael Gerlich of SBS Swiss Business School notes that AI allows us to outsource cognitive tasks to external systems, freeing individuals to focus on more complex and creative activities. However, he warns that “the reliance on AI for cognitive offloading has significant implications for cognitive capacity and critical thinking.”
The question becomes: How can we leverage AI systems without allowing them to infiltrate the mission of discovery that Hopkins aspires to instill in its students? Classroom guidelines are critical, but they can only govern the learning students do within their coursework. Intellectual learning and growth, especially for a college student, extend beyond the classroom to research opportunities, student organizations and personal learning. Ultimately, our only defense against overreliance on AI is our own cognizance: to remain present in the act of learning and to remember the reward that follows intellectual struggle. As students and faculty, we must honor the school’s mission, using AI for the advancement of knowledge while remaining in control of it in that pursuit.
Arman Momeni is a sophomore from Toronto, Canada, majoring in Neuroscience.