Best of both worlds: A hybrid method for tracking laparoscopic ultrasound transducers
Laparoscopic surgery, a less invasive alternative to conventional open surgery, involves inserting thin tubes bearing a tiny camera and surgical instruments into the abdomen. To visualize specific surgical targets, ultrasound imaging is often used during the procedure. However, the ultrasound images are displayed on a separate screen, requiring the surgeon to mentally combine the camera and ultrasound data.
Modern augmented reality (AR)-based methods overcome this issue by embedding the ultrasound images directly into the video captured by the laparoscopic camera. These AR methods precisely map the ultrasound data coordinates to the coordinates of the camera images. Although the process is mathematically straightforward, it is only possible if the pose (position and orientation) of the ultrasound probe (transducer) is known in the camera's coordinate system. Determining this pose has proven challenging, despite many strategies for tracking the laparoscopic transducer. Hardware-based tracking, in which electromagnetic (EM) sensors are attached to the probe, is a feasible approach, but it is prone to errors arising from calibration and hardware limitations. Computer vision (CV) systems can instead process the images acquired by the camera to determine the probe's pose. However, because they rely entirely on camera data, such methods fail if the probe is out of focus or the camera's view is occluded. Thus, CV-only systems are not yet ready for clinical use.
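To see why the mapping is straightforward once the pose is known, note that it amounts to chaining coordinate transforms: a pixel in the ultrasound image is lifted into the transducer's frame via a fixed calibration, moved into the camera's frame using the tracked pose, and then projected into the video image. The Python sketch below illustrates this chain; all transforms, intrinsics, and pixel spacings are made-up illustrative values, not numbers from the study.

```python
import numpy as np

# Hypothetical 4x4 homogeneous transforms (illustrative values only)
T_probe_us = np.eye(4)      # fixed ultrasound-image-to-probe calibration
T_cam_probe = np.eye(4)     # probe pose in the camera frame, from tracking
T_cam_probe[2, 3] = 100.0   # e.g., probe 100 mm in front of the camera

# Assumed laparoscope intrinsics (focal lengths and principal point, pixels)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def us_pixel_to_video_pixel(u, v, sx=0.2, sy=0.2):
    """Map an ultrasound pixel (u, v) into laparoscopic video coordinates.

    sx, sy: assumed ultrasound pixel spacing in mm.
    """
    p_us = np.array([u * sx, v * sy, 0.0, 1.0])  # point in the image plane
    p_cam = T_cam_probe @ T_probe_us @ p_us      # chain the transforms
    x, y, z = p_cam[:3]
    px, py, _ = K @ np.array([x / z, y / z, 1.0])  # pinhole projection
    return px, py
```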
To this end, in a recent study published in the Journal of Medical Imaging, a team of scientists from the US has come up with a creative solution. Instead of relying entirely on either hardware- or CV-based tracking, they propose a hybrid approach that combines both methods. Michael Miga, Associate Editor of the journal, explains, “In the context of interventional imaging with laparoscopic ultrasound, tracking the flexible ultrasound probe for correlation with preoperative images is a challenging task. The team led by Dr. Shekhar has demonstrated an impressive tracking ability with the proposed hybrid approach; these types of capabilities will be needed to advance the field of image-guided surgery.”
To begin with, the team designed and 3D-printed a custom tracking mount to be placed on the tip of the transducer. This mount contained a sensor for EM-based tracking as well as several flat surfaces to which black-and-white markers could be attached for CV-based tracking. These markers, which resemble QR codes, were detected in the images recorded by the camera using ArUco, an open-source AR library. Whenever two or more markers were detected in a frame, the system could immediately calculate the pose of the transducer.
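A minimal sketch of this detection step is shown below, using OpenCV's ArUco bindings (the classic opencv-contrib-python API; the module layout differs in newer OpenCV releases). The marker dictionary, the mount's corner geometry, and all names here are assumptions for illustration, not the authors' code.

```python
import cv2
import numpy as np

# Assumed ArUco dictionary; the study does not specify which one was used
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_probe_pose(frame, camera_matrix, dist_coeffs, marker_corners_3d):
    """Estimate the transducer pose from ArUco markers in one video frame.

    marker_corners_3d: hypothetical dict mapping marker id -> (4, 3) array
    of that marker's corner positions in the mount's coordinate frame
    (known from the 3D-printed mount geometry).
    Returns (rvec, tvec) or None if too few markers are visible.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None or len(ids) < 2:  # require two or more visible markers
        return None
    # Pair each 2-D detection with its 3-D coordinates on the mount
    obj_pts, img_pts = [], []
    for marker_id, c in zip(ids.flatten(), corners):
        if marker_id in marker_corners_3d:
            obj_pts.append(marker_corners_3d[marker_id])
            img_pts.append(c.reshape(4, 2))
    if len(obj_pts) < 2:
        return None
    obj_pts = np.concatenate(obj_pts).astype(np.float32)
    img_pts = np.concatenate(img_pts).astype(np.float32)
    # Solve the perspective-n-point problem: mount frame -> camera frame
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```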
Because CV-based tracking is more accurate than EM-based tracking, the system uses it by default. Whenever markers cannot be detected in a frame, the system adaptively switches to EM-based tracking. Going beyond a simple combination of the two techniques, the scientists also developed an algorithm that corrects the EM-based tracking results using information from previous camera frames. This greatly reduces the errors associated with the EM sensor, especially those caused by rotations of the laparoscopic probe.
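The switching logic might look like the following sketch, assuming 4x4 homogeneous pose matrices. The correction model shown here (refreshing an EM-to-CV alignment transform whenever the markers are visible) is a plausible simplification for illustration, not the authors' published algorithm.

```python
import numpy as np

class HybridTracker:
    """Illustrative CV-first, EM-fallback tracker; names and the
    correction model are assumptions, not the study's implementation."""

    def __init__(self):
        # Running correction that maps raw EM poses toward the CV result,
        # refreshed on every frame in which CV tracking succeeds
        self.correction = np.eye(4)

    def update(self, cv_pose, em_pose):
        """cv_pose: 4x4 probe pose from marker tracking, or None if the
        markers were not detected in this frame.
        em_pose: 4x4 probe pose from the EM sensor (always available)."""
        if cv_pose is not None:
            # CV tracking is the more accurate source: use it directly
            # and update the EM correction from this frame
            self.correction = cv_pose @ np.linalg.inv(em_pose)
            return cv_pose
        # Markers occluded or defocused: fall back to corrected EM tracking
        return self.correction @ em_pose
```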
The team demonstrated the effectiveness of their strategy through experiments on both a realistic tissue phantom and live animals. Excited about the results, Raj Shekhar, who led the study, concludes, “Our hybrid method is more reliable than using CV-based tracking alone and more accurate and practical than using EM-based tracking alone. It has the potential to significantly improve tracking performance for AR applications based on laparoscopic ultrasound.”
As this hybrid strategy undergoes further improvements, it could pave the way for safer and more effective laparoscopic surgery, leading to faster recoveries and better patient outcomes overall.