Recognizing surgeon's actions during suture operations from video sequences

Ye Li, Jun Ohya, Toshio Chiba, Rong Xu, Hiromasa Yamashita

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Because of the worldwide shortage of nurses, realizing a robotic nurse that can support surgeries autonomously is very important. In particular, the robotic nurse should autonomously recognize different situations during surgery so that it can pass the necessary surgical tools to the doctors in a timely manner. This paper proposes and explores methods that classify suture and tying actions during suture operations from video sequences of the surgical scene that include the surgeon's hands. First, the proposed method uses skin-pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area, and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and a word-frequency histogram, which serves as the feature vector, is computed. Finally, the actions are classified in this feature space by either a Support Vector Machine (SVM), the Nearest Neighbor rule (NN), or a method that combines a sliding window with NN. We collected 53 suture videos and 53 tying videos to build the training set and to evaluate the proposed method experimentally. NN achieves accuracies above 90%, outperforming SVM. Negative actions, which differ from both suture and tying actions, are recognized with good accuracy, whereas the sliding-window method shows no significant improvement for suture and tying and cannot recognize negative actions.
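
    The pipeline in the abstract is a classic bag-of-visual-words scheme: detect the hand region, sample local space-time descriptors, quantize them against a learned vocabulary, and classify the resulting word-frequency histogram. The sketch below is not the authors' code; it illustrates that flow in Python with OpenCV, NumPy, and scikit-learn under stated assumptions: the HSV skin thresholds are made up, a flattened space-time gradient patch stands in for the 3D SIFT descriptor, plain K-means replaces hierarchical K-means, and all function names are hypothetical.

    import numpy as np
    import cv2
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def skin_mask(frame_bgr):
        # Skin-pixel detection in HSV space; these thresholds are an
        # illustrative assumption, not values taken from the paper.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, (0, 40, 60), (25, 180, 255)) > 0

    def local_descriptors(frames, n_points=200, half=4, rng=np.random):
        # Randomly sample interest points on the detected hand area and
        # describe each with flattened space-time gradients; a crude
        # stand-in for the 3D SIFT descriptor used in the paper.
        gray = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
                         for f in frames]).astype(np.float32)
        gt, gy, gx = np.gradient(gray)  # temporal, vertical, horizontal
        T, H, W = gray.shape
        descs = []
        for _ in range(n_points):
            t = rng.randint(half, T - half)
            ys, xs = np.nonzero(skin_mask(frames[t]))
            ok = (ys >= half) & (ys < H - half) & (xs >= half) & (xs < W - half)
            ys, xs = ys[ok], xs[ok]
            if len(xs) == 0:
                continue
            i = rng.randint(len(xs))
            y, x = ys[i], xs[i]
            cube = (slice(t - half, t + half),
                    slice(y - half, y + half),
                    slice(x - half, x + half))
            descs.append(np.concatenate([gx[cube].ravel(),
                                         gy[cube].ravel(),
                                         gt[cube].ravel()]))
        return np.array(descs)

    def bow_histogram(descs, vocab):
        # Quantize descriptors against the vocabulary and return the
        # normalized word-frequency histogram (the feature vector).
        words = vocab.predict(descs)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
        return hist / max(hist.sum(), 1.0)

    def train(videos, labels, vocab_size=64):
        # videos: list of frame lists; labels: e.g. "suture" / "tying".
        # Assumes hands are visible, so every video yields descriptors.
        per_video = [local_descriptors(v) for v in videos]
        # Plain K-means for brevity; the paper uses hierarchical K-means.
        vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(np.vstack(per_video))
        X = np.array([bow_histogram(d, vocab) for d in per_video])
        clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
        return vocab, clf

    def classify(frames, vocab, clf):
        return clf.predict([bow_histogram(local_descriptors(frames), vocab)])[0]

    The 1-nearest-neighbor classifier over the histograms mirrors the NN rule that the abstract reports outperforming SVM; swapping in sklearn.svm.SVC at the same point would give the SVM variant.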

    Original language: English
    Title of host publication: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
    Publisher: SPIE
    Volume: 9034
    ISBN (Print): 9780819498274
    DOI: 10.1117/12.2043464
    Publication status: Published - 2014
    Event: Medical Imaging 2014: Image Processing - San Diego, CA
    Duration: 2014 Feb 16 - 2014 Feb 18

    Keywords

    • 3D SIFT
    • Action recognition
    • Hierarchical K-means
    • Nearest neighbor rule
    • Sliding window
    • Suture surgery
    • SVM

    ASJC Scopus subject areas

    • Atomic and Molecular Physics, and Optics
    • Electronic, Optical and Magnetic Materials
    • Biomaterials
    • Radiology, Nuclear Medicine and Imaging

    Cite this

    Li, Y., Ohya, J., Chiba, T., Xu, R., & Yamashita, H. (2014). Recognizing surgeon's actions during suture operations from video sequences. In Progress in Biomedical Optics and Imaging - Proceedings of SPIE (Vol. 9034). [903417] SPIE. https://doi.org/10.1117/12.2043464

