See, Say, and Segment: Teaching LMMs to Overcome False Premises

How do LMMs handle false-premise segmentation queries?

Contemporary open-source Large Multimodal Models (LMMs) paired with segmentation decoders (such as LISA) can generate impressive segmentation masks, but they struggle with expressions that refer to something not present in the image. SESAME, our SEe-SAy-segMEnt LMM, uses joint training to overcome this problem.

Figure 1: False-premise referring expression segmentation examples, comparing prior work with our method.

Advanced Problem Setting: See, Say, and Segment

We introduce a novel problem setting that requires LMMs to See, Say, and Segment (a minimal interface sketch follows the list below). Specifically, we require these models to

  1. See by detecting whether the object referred to in the query is actually present in the image,
  2. Say so when the object is absent, and suggest an alternative that corrects the user's query,
  3. Segment by producing a mask that grounds an existent object in the image.
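As a concrete illustration, below is a minimal sketch of what a See-Say-Segment interface might look like. The SesameOutput structure, the answer_query helper, and the model.generate(image, query) signature are illustrative assumptions, not the released SESAME API.

# A minimal sketch of a See-Say-Segment interface; all names here are
# illustrative assumptions, not the released SESAME API.
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class SesameOutput:
    object_present: bool        # "See": is the referred object in the image?
    response: str               # "Say": confirmation, denial, or a suggested correction
    mask: Optional[np.ndarray]  # "Segment": binary mask, emitted only for true premises


def answer_query(model, image: np.ndarray, query: str) -> SesameOutput:
    """Run one See-Say-Segment round on a referring expression."""
    # Assumed behavior: the model jointly decodes a text response and an
    # optional mask; a false-premise query yields text but no mask.
    text, mask = model.generate(image, query)
    return SesameOutput(object_present=mask is not None, response=text, mask=mask)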

Figure 2: Diagram of SESAME Model Framework

FP-RefCOCO: A Novel Dataset

To facilitate training and evaluation of this new class of models, we introduce a new family of datasets and benchmarks: FP-RefCOCO, FP-RefCOCO+, and FP-RefCOCOg.

Using RefCOCO images as a base, we employ an LLM to augment the referring segmentation data with context-aware false-premise queries built from similar objects, attributes, and relations. Although existing datasets such as R-RefCOCO(+/g) also include queries that refer to non-existent items, their negative expressions are generated by naive random sampling and often lack context awareness, which significantly reduces their usefulness for false-premise correction tasks.

Consider an image with a cat on a chair: contextually valid false premises that could be logically corrected to "a cat on the chair" might include "a cat under the chair" or "a dog on the chair." Prior datasets such as R-RefCOCO, however, typically produce less suitable examples, such as "a pizza on the chair" or "a cat behind the people," which do not align with realistic expectations for model corrections.
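To make this concrete, the snippet below sketches how an LLM could be prompted to generate such context-aware false premises by minimally editing a true expression along a single axis. The prompt wording and the llm callable are assumptions for illustration, not the exact pipeline used to build FP-RefCOCO.

# Sketch of context-aware false-premise generation: minimally edit a true
# referring expression along exactly one axis (object, attribute, or
# relation). The prompt template and `llm` callable are assumptions.
from typing import Callable

PROMPT = (
    'The expression "{expr}" refers to an object that IS in an image. '
    "Write a new expression that is NOT true of that image but remains "
    "contextually plausible. Change exactly one of: the object category, "
    "an attribute, or a spatial relation. Return only the new expression."
)


def make_false_premise(expr: str, llm: Callable[[str], str]) -> str:
    """Turn a true referring expression into a context-aware false premise."""
    return llm(PROMPT.format(expr=expr)).strip()


# For "a cat on the chair", plausible outputs include "a dog on the chair"
# (object swap) or "a cat under the chair" (relation swap).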

Visualization Results: Referring Segmentation

When SESAME is presented with false-premise queries involving similar objects, attributes, concepts, or activities, it can not only deny these queries but also use commonsense reasoning to propose relevant alternatives that align with human understanding. For queries that are entirely irrelevant to the image, SESAME simply rejects them without generating any baseless or speculative results.



Visualization Results: Reasoning Segmentation

SESAME also adapts to complex "reasoning segmentation" tasks, where objects are implied rather than mentioned directly. By training on specially curated data for false-premise reasoning segmentation, our model can not only dismiss incorrect queries but also optionally suggest a similar alternative concept.
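For illustration, a false-premise reasoning-segmentation training sample could take a shape like the following; the field names and values are hypothetical, meant to convey the idea rather than the dataset's actual schema.

# Hypothetical shape of a false-premise reasoning-segmentation sample;
# field names and values are illustrative, not the dataset's schema.
sample = {
    "image": "example.jpg",           # base image (path illustrative)
    "query": "the utensil you would use to cut the cake",
    "object_present": False,          # false premise: no knife in the image
    "answer": "There is no knife in the image; the fork on the plate "
              "could be used instead.",
    "mask": None,                     # masks accompany only true-premise samples
}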



Visualization Results: Ability to handle complex instructions

SESAME stands out by handling complex input instructions, including segmenting alternate objects based on conditional queries and performing basic Visual Question Answering (VQA) without producing segmentation masks. This versatility, which prior models such as LISA lack, opens the door to more human-like interactions and to extending SESAME to multi-round settings.
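Continuing the hypothetical interface sketched earlier, handling these instruction types might look like the following; the queries and commented responses are illustrative examples of expected behavior, not recorded model outputs.

# Illustrative calls to the hypothetical answer_query helper sketched above;
# `model` and `image` are assumed to be loaded already.

# Conditional query: segment an alternate object when the premise fails.
out = answer_query(model, image,
                   "Segment the dog; if there is none, segment the cat instead.")
# out.response -> "There is no dog, so here is the cat."; out.mask -> cat mask

# Plain VQA: answer in text without emitting a segmentation mask.
out = answer_query(model, image, "What color is the chair?")
# out.response -> "The chair is brown."; out.mask -> None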


BibTeX

@misc{wu2023see,
    title={See, Say, and Segment: Teaching LMMs to Overcome False Premises}, 
    author={Tsung-Han Wu and Giscard Biamby and David Chan and Lisa Dunlap and Ritwik Gupta and Xudong Wang and Joseph E. Gonzalez and Trevor Darrell},
    year={2023},
    eprint={2312.08366},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}