Recognizing surgeon's actions during suture operations from video sequences

Ye Li, Jun Ohya, Toshio Chiba, Rong Xu, Hiromasa Yamashita

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Because of the worldwide shortage of nurses, realizing a robotic nurse that can support surgeries autonomously is very important. More specifically, the robotic nurse should be able to recognize different surgical situations autonomously so that it can pass the necessary surgical tools to the doctors in a timely manner. This paper proposes and explores methods that classify suture and tying actions during suture operations from a video sequence that observes the surgery scene, including the surgeon's hands. First, the proposed method detects the hand area using skin-pixel detection and foreground extraction. Then, interest points are chosen randomly from the hand area, and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and a word-frequency histogram, which constitutes the feature space, is computed. Finally, to classify the actions, either a Support Vector Machine (SVM), the Nearest Neighbor rule (NN) in the feature space, or a method that combines a sliding window with NN is applied. We collected 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. It turns out that NN achieves accuracies higher than 90%, which is better recognition than SVM. Negative actions, which differ from both the suture and the tying action, are recognized with quite good accuracy, while the sliding window did not show significant improvement for suture and tying and cannot recognize negative actions.
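The classification stage described in the abstract (hierarchical K-means vocabulary over descriptors, a per-clip word-frequency histogram, and the Nearest Neighbor rule) can be sketched in a few dozen lines. The sketch below is illustrative only: it uses random vectors as stand-ins for the 3D SIFT descriptors of a clip's interest points, a two-level tree, and a branch factor of 3, all of which are assumptions rather than the paper's actual settings.

```python
import numpy as np

def kmeans(X, k, iters=20, rng=None):
    """Plain Lloyd's k-means; returns (centers, labels)."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

def hierarchical_vocabulary(X, branch=3, rng=None):
    """Two-level hierarchical K-means: split the descriptors into
    `branch` clusters, re-cluster each; the leaf centers are the words."""
    rng = rng or np.random.default_rng(0)
    _, labels = kmeans(X, branch, rng=rng)
    words = []
    for j in range(branch):
        pts = X[labels == j]
        if len(pts) == 0:
            continue
        sub, _ = kmeans(pts, min(branch, len(pts)), rng=rng)
        words.append(sub)
    return np.vstack(words)

def word_histogram(X, vocab):
    """Normalized word-frequency histogram: the clip's feature vector."""
    d = np.linalg.norm(X[:, None, :] - vocab[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(vocab))
    return counts / counts.sum()

def nn_classify(query_hist, train_hists, train_labels):
    """Nearest Neighbor rule in the histogram feature space."""
    dists = np.linalg.norm(train_hists - query_hist, axis=1)
    return train_labels[int(dists.argmin())]
```

In use, every training clip is reduced to one histogram over the shared vocabulary, and an unseen clip is labeled with the class of the closest training histogram, which is the NN variant the abstract reports as outperforming SVM.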

Original language: English
Title of host publication: Medical Imaging 2014
Subtitle of host publication: Image Processing
Publisher: SPIE
ISBN (Print): 9780819498274
DOI: 10.1117/12.2043464
Publication status: Published - 2014 Jan 1
Event: Medical Imaging 2014: Image Processing - San Diego, CA, United States
Duration: 2014 Feb 16 → 2014 Feb 18

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 9034
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2014: Image Processing
Country: United States
City: San Diego, CA
Period: 14/2/16 → 14/2/18

Keywords

  • 3D SIFT
  • Action recognition
  • Hierarchical K-means
  • Nearest neighbor rule
  • SVM
  • Sliding window
  • Suture surgery

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
  • Biomaterials
  • Radiology, Nuclear Medicine and Imaging

Cite this

Li, Y., Ohya, J., Chiba, T., Xu, R., & Yamashita, H. (2014). Recognizing surgeon's actions during suture operations from video sequences. In Medical Imaging 2014: Image Processing [903417] (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 9034). SPIE. https://doi.org/10.1117/12.2043464