Near-duplicate image retrieval based on contextual descriptor

Jinliang Yao, Bing Yang, Qiuming Zhu

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

The state of the art in near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words in order to discard mismatches immediately and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between a referential visual word and its context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
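The letter itself specifies the exact encoding; the following Python sketch is only a rough illustration of the idea described in the abstract. It assumes keypoints carry a position and a dominant orientation, encodes each of a reference word's k nearest neighbors by its quantized relative orientation and relative spatial direction, and filters a visual-word match when the two contexts disagree. The parameter names (`k`, `ori_bins`, `pos_bins`, `threshold`) and the Jaccard-overlap similarity are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def contextual_descriptor(keypoints, ref_idx, k=8, ori_bins=8, pos_bins=8):
    """Illustrative sketch (not the paper's exact encoding): describe the
    reference keypoint's context by the quantized relative dominant
    orientation and relative spatial direction of its k nearest neighbors.

    keypoints: list of dicts with keys 'x', 'y', 'orientation' (radians).
    """
    pts = np.array([(kp['x'], kp['y']) for kp in keypoints], dtype=float)
    oris = np.array([kp['orientation'] for kp in keypoints], dtype=float)
    ref_pt, ref_ori = pts[ref_idx], oris[ref_idx]

    # The k nearest keypoints form the context of the referential word.
    dist = np.linalg.norm(pts - ref_pt, axis=1)
    dist[ref_idx] = np.inf
    ctx = np.argsort(dist)[:k]

    # Dominant orientation of each context point relative to the
    # reference, quantized into ori_bins.
    rel_ori = (oris[ctx] - ref_ori) % (2 * np.pi)
    ori_code = np.floor(rel_ori / (2 * np.pi / ori_bins)).astype(int)

    # Spatial direction of each context point, rotated by the reference
    # orientation so the code is rotation-invariant, then quantized.
    offsets = pts[ctx] - ref_pt
    ang = (np.arctan2(offsets[:, 1], offsets[:, 0]) - ref_ori) % (2 * np.pi)
    pos_code = np.floor(ang / (2 * np.pi / pos_bins)).astype(int)

    return set(zip(ori_code.tolist(), pos_code.tolist()))

def contextual_similarity(desc_a, desc_b):
    """Jaccard overlap of the two context code sets (an assumed measure)."""
    if not desc_a and not desc_b:
        return 1.0
    return len(desc_a & desc_b) / len(desc_a | desc_b)

def keep_match(desc_a, desc_b, threshold=0.3):
    """A visual-word match is kept only if its contexts agree enough;
    otherwise it is discarded as a likely quantization mismatch."""
    return contextual_similarity(desc_a, desc_b) >= threshold
```

Under this reading, the context check acts as a cheap per-match filter: mismatched words whose surrounding geometry disagrees are rejected before any candidate image is scored, which is how the abstract's reduction in candidate images would come about.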

Original language: English (US)
Article number: 6975087
Pages (from-to): 1404-1408
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 22
Issue number: 9
DOIs
State: Published - Sep 1 2015

Keywords

  • Contextual descriptor
  • near-duplicate image retrieval
  • spatial constraint
  • visual word

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
