Local temporal coherence for object-aware keypoint selection in video sequences

Songlin Du, Takeshi Ikenaga

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Local feature extraction is an important solution for video analysis. The common framework of local feature extraction consists of a local keypoint detector and a keypoint descriptor. Existing keypoint detectors focus mainly on the spatial relationships among pixels, which yields a large number of redundant keypoints on the background, a region that is often temporally stationary. This paper proposes an object-aware local keypoint selection approach that keeps active keypoints on objects and reduces redundant keypoints on the background by exploiting the temporal coherence among successive video frames. The proposed approach is built on three local temporal coherence criteria: (1) local temporal intensity coherence; (2) local temporal motion coherence; and (3) local temporal orientation coherence. Experimental results on two publicly available datasets show that the proposed approach removes more than 60% of keypoints as redundant and doubles keypoint precision.
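
The abstract only names the three criteria; as a rough illustration of the overall idea (not the authors' actual formulation), the Python sketch below keeps a keypoint only when its local neighbourhood changes between consecutive frames, using patch intensity change and sparse optical-flow magnitude as stand-ins for the intensity and motion coherence criteria. The orientation criterion is omitted, and the function name, patch size, and thresholds are hypothetical.

# Illustrative sketch only (assumed names and thresholds, not the authors' exact method):
# keep keypoints whose local neighbourhood changes across consecutive frames and drop
# keypoints on temporally stationary background.
import cv2
import numpy as np

def select_object_keypoints(prev_gray, curr_gray, keypoints,
                            patch=9, intensity_thr=8.0, motion_thr=1.0):
    """Filter keypoints detected in curr_gray using simple temporal-coherence cues."""
    if not keypoints:
        return []
    half = patch // 2
    h, w = curr_gray.shape[:2]

    # Track current keypoint locations back into the previous frame; the displacement
    # magnitude serves as a local temporal motion cue.
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    back_pts, status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, pts, None)
    flow = (pts - back_pts).reshape(-1, 2)

    selected = []
    for kp, (dx, dy), ok in zip(keypoints, flow, status.ravel()):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if x < half or y < half or x >= w - half or y >= h - half:
            continue  # skip keypoints too close to the image border

        # (1) Intensity cue: a patch that barely changes between frames is treated
        #     as stationary background.
        p_prev = prev_gray[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
        p_curr = curr_gray[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
        intensity_change = float(np.mean(np.abs(p_curr - p_prev)))

        # (2) Motion cue: require non-negligible, successfully tracked local motion.
        motion_mag = float(np.hypot(dx, dy)) if ok else 0.0

        # (3) The paper's orientation-coherence criterion is omitted in this sketch.
        if intensity_change > intensity_thr and motion_mag > motion_thr:
            selected.append(kp)
    return selected

In use, the keypoints would come from any standard detector applied per frame, e.g. kps = cv2.ORB_create().detect(curr_gray, None), and the selected subset would then be passed to the descriptor stage.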

Original language: English
Title of host publication: Advances in Multimedia Information Processing – PCM 2017 - 18th Pacific-Rim Conference on Multimedia, Revised Selected Papers
Publisher: Springer-Verlag
Pages: 539-549
Number of pages: 11
ISBN (Print): 9783319773827
DOI: 10.1007/978-3-319-77383-4_53
Publication status: Published - 2018 Jan 1
Event: 18th Pacific-Rim Conference on Multimedia, PCM 2017 - Harbin, China
Duration: 2017 Sep 28 - 2017 Sep 29

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10736 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 18th Pacific-Rim Conference on Multimedia, PCM 2017
Country: China
City: Harbin
Period: 17/9/28 - 17/9/29

Keywords

  • Local feature extraction
  • Object-aware keypoint selection
  • Spatio-temporal keypoint
  • Video analysis

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Du, S., & Ikenaga, T. (2018). Local temporal coherence for object-aware keypoint selection in video sequences. In Advances in Multimedia Information Processing – PCM 2017 - 18th Pacific-Rim Conference on Multimedia, Revised Selected Papers (pp. 539-549). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10736 LNCS). Springer-Verlag. https://doi.org/10.1007/978-3-319-77383-4_53
