Loughborough University

VITR: Augmenting vision transformers with relation-focused learning for cross-modal information retrieval

journal contribution
posted on 2025-03-26, 11:31 authored by Yan Gong, Georgina Cosma, Axel Finke
The relations expressed in user queries are vital for cross-modal information retrieval. Relation-focused cross-modal retrieval aims to retrieve information corresponding to these relations, enabling effective retrieval across different modalities. Pre-trained networks, such as Contrastive Language-Image Pre-training (CLIP) networks, have gained significant attention for their exceptional performance in various cross-modal learning tasks. However, the Vision Transformer (ViT) used in these networks is limited in its ability to focus on relations between image regions. Specifically, ViT is trained to match images with relevant descriptions at the global level, without considering the alignment between image regions and descriptions. This paper introduces VITR, a novel network that enhances ViT by extracting and reasoning about image region relations based on a local encoder. VITR comprises two key components. First, it extends ViT-based cross-modal networks by enabling them to extract and reason with the region relations present in images. Second, it incorporates a fusion module that combines the reasoned results with global knowledge to predict similarity scores between images and descriptions. VITR was evaluated through experiments on relation-focused cross-modal information retrieval tasks. Results on the Flickr30K, MS-COCO, RefCOCOg, and CLEVR datasets demonstrate that VITR consistently outperforms state-of-the-art networks in image-to-text and text-to-image retrieval.
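
The fusion idea described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch approximation, not the paper's actual architecture: a self-attention step stands in for the relation reasoning over local region features, and a linear layer stands in for the fusion module that mixes the region-based score with the global ViT-style similarity. All module names, dimensions, and pooling choices are illustrative assumptions.

# Minimal sketch: fuse a global image-text similarity with a score from
# reasoning over local region features. Hypothetical stand-in for VITR.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationFusionScorer(nn.Module):
    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        # Assumed relation-reasoning step: self-attention lets each image
        # region attend to the others, capturing pairwise region relations.
        self.region_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Assumed fusion module: a linear mix of global and region scores.
        self.fuse = nn.Linear(2, 1)

    def forward(self, global_img, global_txt, region_feats):
        # global_img, global_txt: (B, dim) L2-normalised global embeddings
        # region_feats: (B, R, dim) local region features from a local encoder
        global_score = (global_img * global_txt).sum(-1)            # (B,)
        reasoned, _ = self.region_attn(region_feats, region_feats, region_feats)
        reasoned = F.normalize(reasoned, dim=-1)                    # (B, R, dim)
        # Match each reasoned region to the description; keep the best match.
        region_score = torch.einsum(
            'brd,bd->br', reasoned, F.normalize(global_txt, dim=-1)
        ).max(-1).values                                            # (B,)
        # Fused similarity score for each image-description pair.
        scores = torch.stack([global_score, region_score], dim=-1)  # (B, 2)
        return self.fuse(scores).squeeze(-1)                        # (B,)

# Toy usage: score 2 image-description pairs, 5 regions per image.
B, R, D = 2, 5, 512
scorer = RelationFusionScorer(dim=D)
out = scorer(F.normalize(torch.randn(B, D), dim=-1),
             F.normalize(torch.randn(B, D), dim=-1),
             torch.randn(B, R, D))
print(out.shape)  # torch.Size([2])

In this sketch the region score complements the global score, which is the division of labour the abstract attributes to VITR's local encoder and fusion module.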

History

School

  • Science

Published in

ACM Transactions on Knowledge Discovery from Data

Volume

18

Issue

9

Pages

1 - 21

Publisher

Association for Computing Machinery (ACM)

Version

  • VoR (Version of Record)

Rights holder

© The Owner / The Author(s)

Publisher statement

This work is licensed under a Creative Commons Attribution 4.0 International License.

Acceptance date

2024-07-28

Publication date

2024-10-18

Copyright date

2024

ISSN

1556-4681

eISSN

1556-472X

Language

  • en

Depositor

Prof Georgina Cosma. Deposit date: 24 October 2024

Article number

220