LLM-Optic: Unveiling the Capabilities of Large Language Models for Universal Visual Grounding

1The Hong Kong University of Science and Technology (Guangzhou), 2The Hong Kong University of Science and Technology

Abstract

Visual grounding is an essential tool that links user-provided text queries with query-specific regions within an image. Despite advancements in visual grounding models, their ability to comprehend complex queries remains limited. To overcome this limitation, we introduce LLM-Optic, an innovative method that uses Large Language Models (LLMs) as an optical lens to enhance existing visual grounding models in comprehending complex text queries involving intricate text structures, multiple objects, or object spatial relationships—situations that current models struggle with. LLM-Optic first employs an LLM as a Text Grounder to interpret complex text queries and accurately identify the objects the user intends to locate. Then, a pre-trained visual grounding model generates candidate bounding boxes from the query refined by the Text Grounder. After that, LLM-Optic annotates the candidate bounding boxes with numerical marks to establish a connection between the text and specific image regions, thereby linking the two distinct modalities. Finally, it employs a Large Multimodal Model (LMM) as a Visual Grounder to select the marked candidate objects that best correspond to the original text query. Through LLM-Optic, we achieve universal visual grounding, which allows the detection of arbitrary objects specified by arbitrary human language input. Importantly, our method achieves this enhancement without requiring additional training or fine-tuning. Extensive experiments across various challenging benchmarks demonstrate that LLM-Optic achieves state-of-the-art zero-shot visual grounding capabilities.

Method

We propose using LLMs and LMMs as effective reasoning modules for handling complex user queries to achieve universal visual grounding. Our framework includes three key modules: an LLM-based Text Grounder, a Candidate Positioning and Setting Marks module, and an LMM-based Visual Grounder. It does not require any additional training and features a fully modular design, allowing rapid advances in each component technology to be integrated seamlessly.
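The sketch below illustrates how these three modules could be chained in practice. It is a minimal illustration, not the released implementation: the helper functions (call_llm, detect_boxes, draw_marks, call_lmm) are hypothetical placeholders standing in for an LLM API, a pre-trained grounding model such as Grounding DINO, a mark-drawing utility, and a Large Multimodal Model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Box:
    x0: float
    y0: float
    x1: float
    y1: float


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a text-only LLM API."""
    raise NotImplementedError


def detect_boxes(image_path: str, phrase: str) -> List[Box]:
    """Hypothetical wrapper around a pre-trained visual grounding model."""
    raise NotImplementedError


def draw_marks(image_path: str, boxes: List[Box]) -> str:
    """Hypothetical utility: overlay numeric marks 1..N on the candidate boxes
    and return the path of the annotated image."""
    raise NotImplementedError


def call_lmm(image_path: str, prompt: str) -> str:
    """Hypothetical wrapper around a Large Multimodal Model."""
    raise NotImplementedError


def llm_optic(image_path: str, query: str) -> List[Box]:
    # 1) Text Grounder: reduce the complex query to the object phrase to detect.
    target_phrase = call_llm(
        f"Identify the object the user wants to locate in this query: '{query}'. "
        "Answer with a short noun phrase only."
    )

    # 2) Candidate positioning: run the grounding model on the refined phrase.
    candidates = detect_boxes(image_path, target_phrase)

    # 3) Setting marks: annotate each candidate box with a numeric label.
    marked_image = draw_marks(image_path, candidates)

    # 4) Visual Grounder: ask the LMM which marked candidates match the original query.
    answer = call_lmm(
        marked_image,
        f"Which numbered box(es) best match the query: '{query}'? "
        "Reply with the number(s) only."
    )
    chosen = [int(tok) for tok in answer.replace(",", " ").split() if tok.isdigit()]
    return [candidates[i - 1] for i in chosen if 1 <= i <= len(candidates)]
```

Because each stage is just a function call, any component can be swapped out (a different LLM, grounding model, or LMM) without retraining, which is the modularity the paragraph above refers to.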

Additional Results

Figure: Qualitative comparison of (a) Grounding DINO and (b) LLM-Optic on the query "a giraffe in between two other giraffes."

BibTeX

If you use LLM-Optic in your research or applications, please cite it with the following BibTeX entry:

@misc{zhao2024llmoptic,
  title={LLM-Optic: Unveiling the Capabilities of Large Language Models for Universal Visual Grounding},
  author={Haoyu Zhao and Wenhang Ge and Ying-cong Chen},
  year={2024},
  eprint={2405.17104},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.