Published by the Students of Johns Hopkins since 1896

From CLSP to HEART: A conversation with Orion Weller on information retrieval systems

By ANNIE HUANG | October 19, 2023



Weller shares aspects of his research into large language models like ChatGPT in an interview with The News-Letter.

Orion Weller is a third-year doctoral student affiliated with the Center for Language and Speech Processing (CLSP) advised by Benjamin Van Durme and Dawn Lawrie. He is currently teaching a Hopkins Engineering Applications and Research Tutorials (HEART) course titled Reasoning with ChatGPT, in which he discusses the context and relevance of his research.

In an interview with The News-Letter, Weller shared some of his work, as well as his opinions on large language models (LLMs) and information retrieval (IR) systems.

IR is the process of obtaining relevant information from digital resources. A web search engine is the most visible application of an IR system: after a user submits a query, the algorithm ranks the available documents or passages by relevance and displays the best matches.
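The rank-then-display loop described above can be sketched in a few lines of Python. This toy scorer simply counts word overlap between the query and each document; real search engines use far richer signals (term weighting schemes such as BM25, link structure, learned models), but the basic shape is the same.

```python
# Toy IR sketch: score each document by how many query words it shares,
# then return the matching documents in descending order of score.
def rank(query, docs):
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    return [d for s, d in sorted(scored, key=lambda x: -x[0]) if s > 0]

docs = [
    "Large language models generate text",
    "Information retrieval ranks relevant documents",
    "The weather today is sunny",
]
print(rank("retrieval of relevant documents", docs))
# The second document shares three query words, so it is returned first;
# the others share none and are filtered out.
```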

Weller explained the relevance of his work using ChatGPT as an example.

“My work focuses on making sure [LLMs] don't make stuff up, that their output can be attributed to some sources and [that they] give better answers in general,” he said. “The thing that I think most people don't realize is that [ChatGPT] doesn't necessarily say true things. For example, there is a case in which a lawyer submitted a case that was not real and just made up by ChatGPT. That's a huge problem if you're using it for anything important.”

One of Weller’s recent projects is related to ChatGPT prompt engineering. In a recently published paper titled “‘According to...’ Prompting Language Models Improves Quoting from Pre-Training Data,” Weller found that ChatGPT can give better quotations when questions are phrased differently. He briefly explained the workflow of the project.

“First, we developed a way to measure if the quote is from a large data collection. Then, we tried different ways of asking the language model to quote more or less,” he said.

The team discovered that prompting the LLM in a particular way gives the user some control over the output. For example, because LLMs have learned that journalists quote sources accurately, asking the model to respond in a journalistic fashion does indeed improve its performance.
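The prompting idea can be illustrated with two versions of the same question. The question and the grounding phrase below are illustrative examples, not taken from the paper; the point is only that appending a phrase like "According to Wikipedia" steers the model toward quoting its pre-training data.

```python
# Two prompts for the same question: one plain, one with a grounding
# phrase appended. (Example question and source are hypothetical.)
question = "What causes the seasons on Earth?"

# Baseline: ask the question directly.
baseline_prompt = question

# Grounded: nudge the model to quote a trusted source from its training data.
grounded_prompt = question + ' Respond with "According to Wikipedia, ..."'

print(baseline_prompt)
print(grounded_prompt)
```

In the paper, prompts of the grounded form measurably increased how often the model's output could be traced back to its pre-training data.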

Weller shared relevant work that has been conducted by his recently graduated labmate, Sheng Zhang, on information extraction of biomedical entities. Their work focuses on using LLMs for biomedical applications, such as automatically gathering and extracting adverse drug interactions or answering medical questions.

Though LLMs have been proven to be extremely powerful, Weller contends that there remains room for improvement.

"We have a [doctoral] candidate — who is also a professor of law at the [University of Maryland Francis King Carey School of Law] — who cares a lot about tax law and works with the [Internal Revenue Service] to help people use large language models to prevent tax loopholes," he explained. "The model has been trained on the tax code, but it's not reasoning over it correctly. So if you were to ask ChatGPT to file your tax right now, it would miss some of those conditions and not get you the correct answer."

In closing, Weller shared his own attitude toward LLMs. According to him, LLMs like ChatGPT should be regarded as personal interns at best.

“You don't want to trust everything [ChatGPT] does, but it can be helpful sometimes. For me, I love using it for drafting emails or summarizing. It's great to know the right words to use in a situation,” he said.

As a HEART course instructor, Weller is excited for undergraduate students to get involved with NLP. He recommended playing around with Hugging Face, whose straightforward Python libraries provide basic tools for building machine-learning applications. OpenAI’s public Application Programming Interface is another good choice.

Weller’s HEART course, in which he goes into detail on the inner workings of NLP and LLMs, has been well received by his students.

Freshman Beck Chen described his excitement about taking this class.

“I have learned so much about NLP through the HEART course, and the inner workings of these complex models are truly impressive. I'm super excited about the future of technology and language,” Chen said.
