Rejecting mismatches of visual words by contextual descriptors

Jinliang Yao, Bing Yang, Qiuming Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citations

Abstract

The Bag-of-Visual-Words model has become popular in image retrieval and computer vision. However, when the local features of Interest Points (IPs) are transformed into visual words in this model, the discriminative power of the local features is reduced or compromised. To address this issue, we propose a novel contextual descriptor for local features that improves their discriminative power. The proposed contextual descriptors encode the dominant orientation of the reference interest point (IP) and the directional relationships between it and its context. A compact Boolean array is used to represent these contextual descriptors. Our experimental results show that the proposed contextual descriptors are more robust and compact than existing contextual descriptors and improve the matching accuracy of visual words, thus making the Bag-of-Visual-Words model more suitable for image retrieval and computer vision tasks.
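The abstract's idea can be illustrated with a minimal sketch (not the authors' actual implementation; the function names, the sector count, and the overlap threshold are assumptions for illustration): each neighboring interest point sets one bit of a Boolean array according to its direction relative to the reference point's dominant orientation, and a visual-word match is rejected when the two Boolean arrays share too few bits.

```python
import numpy as np

def contextual_descriptor(ref_xy, ref_orientation, neighbor_xys, n_sectors=16):
    """Hypothetical sketch of a contextual descriptor: a Boolean array
    where each neighbor sets the bit of the angular sector it falls in,
    measured relative to the reference IP's dominant orientation (so the
    descriptor is invariant to image rotation)."""
    desc = np.zeros(n_sectors, dtype=bool)
    for x, y in neighbor_xys:
        # Direction from the reference IP to the neighbor, rotated by the
        # reference IP's dominant orientation.
        angle = np.arctan2(y - ref_xy[1], x - ref_xy[0]) - ref_orientation
        sector = int((angle % (2 * np.pi)) / (2 * np.pi) * n_sectors)
        desc[sector] = True
    return desc

def is_consistent_match(desc_a, desc_b, min_overlap=0.5):
    """Reject a visual-word match whose contexts disagree: keep it only if
    the two Boolean arrays share at least `min_overlap` of their set bits."""
    shared = np.logical_and(desc_a, desc_b).sum()
    total = max(np.logical_or(desc_a, desc_b).sum(), 1)
    return shared / total >= min_overlap
```

Because the descriptor is a small Boolean array, comparing two of them costs only a handful of bitwise operations, which is what makes this kind of spatial verification cheap enough to run on every candidate match.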

Original language: English (US)
Title of host publication: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1508-1513
Number of pages: 6
ISBN (Electronic): 9781479951994
DOI: 10.1109/ICARCV.2014.7064539
State: Published - 2014
Event: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014 - Singapore, Singapore
Duration: Dec 10 2014 - Dec 12 2014

Publication series

Name: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014

Other

Other: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014
Country: Singapore
City: Singapore
Period: 12/10/14 - 12/12/14

Keywords

  • contextual descriptor
  • image retrieval
  • semi-local spatial similarity
  • visual word

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Artificial Intelligence
  • Control and Systems Engineering


  • Cite this

Yao, J., Yang, B., & Zhu, Q. (2014). Rejecting mismatches of visual words by contextual descriptors. In 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014 (pp. 1508-1513). [7064539] (2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICARCV.2014.7064539