Published by the Students of Johns Hopkins since 1896

Brain surgery visualization with navigational technology found in self-driving cars

By JASON HAN | September 14, 2023


U.S. ARMY / CC0 1.0

A team of Hopkins researchers examined the neurosurgical application of a navigation system used in self-driving cars.

Prasad Vagdargi and his team from the Department of Computer Science and the Department of Biomedical Engineering at Hopkins developed a real-time endoscopic guidance method for neurosurgery that resembles the navigational technology in self-driving cars. Their findings were recently published in IEEE Transactions on Medical Robotics and Bionics.

Traditional brain surgeries utilize stereotactic systems to hold the patient's head in a stable position. Like a balloon suspended in fluid, however, the brain shifts dynamically over time, which makes it difficult to pinpoint areas of interest during surgery. Even minute displacements may render scans and images taken before the surgery useless.

Vision-based navigation systems have been developed to address the issue of accurately targeting brain regions during procedures such as transventricular neuroendoscopy or cauterization – a medical technique to close wounds by burning tissue with electricity or chemicals. With just a single endoscopic video feed from a camera inserted into the brain, neurosurgeons may be able to navigate within the brain and identify regions of interest in real time during operations.

Vagdargi, a doctoral student in the Department of Computer Science at Hopkins, works on realizing this prospect through real-time orientation. His focus on endoscopic navigation systems for neurosurgery encompasses many fields, namely biomedical engineering, computer science and medical imaging. 

Vagdargi and his team tackled the difficulty of reorienting the camera within the brain during operations, when surgeons shift the camera to view a different region. They envisioned a camera, called an endoscope, that would help determine the X, Y and Z coordinates of target structures after displacement. 
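Recovering X, Y and Z coordinates from camera images rests on triangulation: if the same point is seen from two known camera positions, its 3D location can be computed from the two 2D projections. The sketch below is a minimal, illustrative numpy example of this general principle using made-up camera matrices, not the team's actual system or code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    projections x1 and x2 in two views with 3x4 camera matrices P1, P2."""
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to (X, Y, Z)

def project(P, X):
    """Project a 3D point into a camera's 2D image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A known 3D target; triangulation recovers it from the two projections.
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # True
```

In a real endoscopic setting the camera positions are themselves unknown and must be estimated from the video, which is exactly the problem SfM and SLAM address.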

To realize and apply this advanced algorithm of navigation in surgical settings, Vagdargi not only resorted to his expertise in computer science but also cooperated with researchers from different fields. This project involves collaboration between students and professors from the Department of Computer Science and the Department of Biomedical Engineering.

“It is important for us as engineers to broaden our horizons and work with other fields as well, a case in point being neurosurgery and biomedical engineering,” Vagdargi said in an interview with The News-Letter.

Interdisciplinary research matters for computer scientists because many developments in computer science — vision-based navigation, 3D reconstruction or tracking systems — cannot realize their value in medicine without the help of biomedical engineers and medical practitioners.

“It is an important direction to work in, where scientists are translating technological developments from what happened in the lab directly over to the middle ground — which is clinical testing, clinical studies and clinical trials — all the way down to the patients,” Vagdargi said.

Vagdargi described bench-to-bedside testing — "bench" referring to lab benches and "bedside" to the patient's bedside. He explained that bench-to-bedside development and clinical translation are essential areas of work and motivate his own research initiatives.

Vagdargi evaluated how well the navigation system visualizes the brain using a self-designed ventricle phantom, a white, rubbery model of the human brain. A CT scan provided high-quality 3D image reconstructions of the ventricles. The team compared the effectiveness of two computer vision algorithms for generating 3D reconstructions: Structure from Motion (SfM), which Vagdargi had used in his previous research, and Simultaneous Localization and Mapping (SLAM), a state-of-the-art navigation method that was new to the team at the time but is widely utilized in self-driving cars. 

The evaluation demonstrated that SfM and SLAM had similar accuracy, achieving sub-2-mm error when targeting erratic points that require close monitoring during surgery. SLAM, however, operates at a much greater speed and thus generates 3D image reconstructions in a more timely manner.

Using this navigational technology, researchers and neurosurgeons can reconstruct a 3D view of the brain to better navigate its dynamic environment during operations. The team predicts patients will also benefit from more accurate neurosurgeries with lower risks of complications and shortened operation times.
