Cross-Modal Coherence for Text-to-Image Retrieval
Paper | Code
Abstract
Common image-text joint understanding techniques presume that images and their associated text can universally be characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and explicitly modeling these relations could improve the performance of current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for the text-to-image retrieval task. Our analysis shows that models trained with image-text coherence relations retrieve the images originally paired with the target text more often than coherence-agnostic models. We also show via human evaluation that images retrieved by the proposed coherence-aware model are preferred over those retrieved by a coherence-agnostic baseline by a huge margin. Our findings provide insights into the ways that different modalities communicate and the role of coherence relations in capturing commonsense inferences in text and imagery.
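To make the idea concrete, below is a minimal sketch, not the authors' released code, of one way a coherence-aware retrieval model could combine a contrastive ranking objective with an auxiliary coherence-relation classifier. The class and function names, feature dimensions, number of relations, margin, and loss weight `alpha` are illustrative assumptions rather than values from the paper.

```python
# Hypothetical sketch of coherence-aware text-to-image retrieval (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoherenceAwareRetrieval(nn.Module):
    """Scores image-text pairs and jointly predicts a coherence relation.

    img_dim, txt_dim, embed_dim, and num_relations are illustrative
    hyperparameters, not values taken from the paper.
    """

    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512, num_relations=5):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)   # project image features
        self.txt_proj = nn.Linear(txt_dim, embed_dim)   # project text features
        # Auxiliary head: classify the coherence relation of a matched pair.
        self.relation_head = nn.Linear(2 * embed_dim, num_relations)

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        sim_matrix = txt @ img.t()                      # text-to-image cosine similarities
        rel_logits = self.relation_head(torch.cat([img, txt], dim=-1))  # matched pairs only
        return sim_matrix, rel_logits

def training_loss(sim_matrix, rel_logits, rel_labels, margin=0.2, alpha=1.0):
    """Contrastive ranking loss plus coherence-relation classification loss."""
    pos = sim_matrix.diag().unsqueeze(1)                # matched pairs sit on the diagonal
    ranking = F.relu(margin + sim_matrix - pos)         # hinge over mismatched pairs
    ranking = ranking.fill_diagonal_(0).mean()          # exclude the positive itself
    relation = F.cross_entropy(rel_logits, rel_labels)  # auxiliary coherence supervision
    return ranking + alpha * relation
```

At retrieval time, only the similarity matrix would be needed to rank candidate images for a query text; under this reading, the coherence-relation head acts purely as auxiliary training supervision that encourages the shared embedding to reflect how the text relates to the image.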
Citation
@misc{alikhani2021crossmodal,
  title={Cross-Modal Coherence for Text-to-Image Retrieval},
  author={Malihe Alikhani and Fangda Han and Hareesh Ravi and Mubbasir Kapadia and Vladimir Pavlovic and Matthew Stone},
  year={2021},
  eprint={2109.11047},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}