Abstract:
Robot-assisted surgery is an emerging technology that
has undergone rapid growth with the development of robotics and
imaging systems. Innovations in vision, haptics, and accurate movements
of robot arms have enabled surgeons to perform precise minimally
invasive surgeries. Real-time semantic segmentation of the
robotic instruments and tissues is a crucial step in robot-assisted
surgery. Accurate and efficient segmentation of the surgical scene
not only aids in the identification and tracking of instruments but
also provides contextual information about the different tissues
and instruments being operated with. For this purpose, we have
developed a light-weight cascaded convolutional neural network
to segment the surgical instruments from high-resolution videos
obtained from a commercial robotic system. We propose a multi-resolution
feature fusion module to fuse feature maps of different
dimensions and channel counts from the auxiliary and main branches.
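As a rough illustration of such a fusion step, the following is a minimal PyTorch sketch under our own assumptions, not the exact module from the paper; the class name, the 1x1 projection, and the sum-based fusion operator are all hypothetical:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiResolutionFusion(nn.Module):
        # Hypothetical fusion block: project the auxiliary-branch features
        # to the main branch's channel count, upsample them to the main
        # branch's spatial size, and merge the two maps by element-wise sum.
        def __init__(self, aux_channels, main_channels):
            super().__init__()
            self.project = nn.Conv2d(aux_channels, main_channels, kernel_size=1)
            self.norm = nn.BatchNorm2d(main_channels)

        def forward(self, aux_feat, main_feat):
            aux_feat = self.norm(self.project(aux_feat))
            # Bilinear upsampling matches the coarse auxiliary map to the
            # higher-resolution main map before fusion.
            aux_feat = F.interpolate(aux_feat, size=main_feat.shape[2:],
                                     mode="bilinear", align_corners=False)
            return F.relu(main_feat + aux_feat)

    # Example: fuse a 1/8-scale auxiliary map with a 1/4-scale main map.
    fusion = MultiResolutionFusion(aux_channels=256, main_channels=128)
    out = fusion(torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64))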
We also introduce a novel way of combining an auxiliary loss and
an adversarial loss to regularize the segmentation model. The auxiliary
loss helps the model learn low-resolution features, and the adversarial
loss improves the segmentation prediction by learning higher-order
structural information.
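One way such a combined objective could be written is sketched below; this is illustrative PyTorch, and the weighting scheme and coefficient values are our assumptions, not the paper's:

    import torch
    import torch.nn.functional as F

    def combined_loss(main_logits, aux_logits, disc_logits, target,
                      aux_target, lambda_aux=0.4, lambda_adv=0.01):
        # Primary cross-entropy on the full-resolution segmentation output.
        seg_loss = F.cross_entropy(main_logits, target)
        # Auxiliary cross-entropy on the low-resolution side output,
        # computed against a downsampled ground-truth map.
        aux_loss = F.cross_entropy(aux_logits, aux_target)
        # Adversarial term: push the discriminator's score on the predicted
        # segmentation toward the "real" label.
        adv_loss = F.binary_cross_entropy_with_logits(
            disc_logits, torch.ones_like(disc_logits))
        return seg_loss + lambda_aux * aux_loss + lambda_adv * adv_loss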
The model also includes a lightweight spatial pyramid pooling unit
to aggregate rich contextual information at an intermediate stage
(sketched below, after the abstract). We show that our model surpasses
existing algorithms for pixel-wise segmentation of surgical instruments
in both prediction accuracy and segmentation time on high-resolution videos.
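The lightweight spatial pyramid pooling unit mentioned above could look roughly like the following; this is our own minimal PyTorch approximation, and the bin sizes, channel reduction, and concatenation layout are assumptions rather than the paper's design:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LightweightSPP(nn.Module):
        # Hypothetical lightweight SPP: average-pool the input at a few
        # grid sizes, reduce channels with 1x1 convolutions, upsample each
        # pooled map back, and concatenate all maps with the original
        # features.
        def __init__(self, channels, bins=(1, 2, 4)):
            super().__init__()
            self.stages = nn.ModuleList([
                nn.Sequential(nn.AdaptiveAvgPool2d(b),
                              nn.Conv2d(channels, channels // len(bins),
                                        kernel_size=1))
                for b in bins])

        def forward(self, x):
            h, w = x.shape[2:]
            feats = [x]
            for stage in self.stages:
                feats.append(F.interpolate(stage(x), size=(h, w),
                                           mode="bilinear",
                                           align_corners=False))
            return torch.cat(feats, dim=1)

    # Example: a 128-channel input yields 128 + 3 * 42 = 254 output channels.
    spp = LightweightSPP(channels=128)
    out = spp(torch.randn(1, 128, 32, 32))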
Citation:
Islam, M., Atputharuban, D. A., Ramesh, R., & Ren, H. (2019). Real-Time Instrument Segmentation in Robotic Surgery Using Auxiliary Supervised Deep Adversarial Learning. IEEE Robotics and Automation Letters, 4(2), 2188–2195. https://doi.org/10.1109/LRA.2019.2900854