Motivation
In three scenarios, attacking LLaVA-1.5 with query-relevant images achieves a much higher attack success rate than attacking it with query-irrelevant images.
Warning: This page contains examples of harmful language and images, and reader discretion is recommended.
The security concerns surrounding Large Language Models (LLMs) have been extensively explored, yet the safety of Large Multi-Modal Models (LMMs) remains understudied. In our study, we present a novel visual prompt attack that exploits query-relevant images to jailbreak open-source LMMs. Our method creates a composite image from one image generated by diffusion models and another that displays the text as typography, based on keywords extracted from a malicious query. We show that LMMs can be easily attacked by our approach, even when the underlying LLMs are safety-aligned. To evaluate the extent of this vulnerability in open-source LMMs, we have compiled a substantial dataset of 5,040 text-image pairs spanning 13 scenarios, using our presented attack technique. Our evaluation of 12 cutting-edge LMMs on this dataset demonstrates the vulnerability of existing multi-modal models to adversarial attacks. This finding underscores the need for a concerted effort to strengthen and enhance the safety measures of open-source LMMs against potential malicious exploits.
For each malicious query, we employ GPT-4 to identify and extract keywords. We then use typography rendering and Stable Diffusion to create two images based on these keywords. Finally, the two images are combined into a single composite, designed to deceive large multi-modal models into responding to queries that they should refuse to answer.
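Below is a minimal sketch of this pipeline, assuming the openai Python client, the diffusers StableDiffusionPipeline, and Pillow. The function names, the keyword-extraction prompt, and the exact composite layout are illustrative assumptions, not the authors' released code.

# A minimal sketch of the attack pipeline; names and prompts are illustrative.
from diffusers import StableDiffusionPipeline
from PIL import Image, ImageDraw, ImageFont
from openai import OpenAI

def extract_key_phrase(query: str) -> str:
    """Ask GPT-4 to extract the key phrase of a malicious query."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Extract the key phrase of this request: {query}"}],
    )
    return resp.choices[0].message.content.strip()

def typography_image(phrase: str, size=(512, 128)) -> Image.Image:
    """Render the key phrase as black-on-white typography."""
    img = Image.new("RGB", size, "white")
    ImageDraw.Draw(img).text((16, 48), phrase, fill="black",
                             font=ImageFont.load_default())
    return img

def diffusion_image(phrase: str) -> Image.Image:
    """Generate an image depicting the key phrase with Stable Diffusion."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe(f"A photo of {phrase}").images[0]

def composite(query: str) -> Image.Image:
    """Stack the diffusion image above the typography image (SD+Typo)."""
    phrase = extract_key_phrase(query)
    top = diffusion_image(phrase)
    bottom = typography_image(phrase).resize((top.width, 128))
    canvas = Image.new("RGB", (top.width, top.height + bottom.height), "white")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, top.height))
    return canvas

Stacking the diffusion image above the typography keeps the harmful phrase visible to the model while the natural image supplies query-relevant visual context.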
Diversity of key phrases across the 13 scenarios.
Statistics of our constructed MM-SafetyBench. For each question, we generate three corresponding images: a Stable Diffusion image (SD), a typography image (Typo), and their combination (SD+Typo).
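Continuing the sketch above, the three image variants per question could be assembled as follows; the record layout is a hypothetical schema for illustration, not the benchmark's actual format.

# Builds the three image variants for one question, reusing extract_key_phrase,
# diffusion_image, typography_image, and composite from the sketch above.
# The record layout is an assumed schema for illustration only.
def build_record(question: str) -> dict:
    phrase = extract_key_phrase(question)
    return {
        "question": question,
        "key_phrase": phrase,
        "images": {
            "SD": diffusion_image(phrase),     # diffusion image alone
            "Typo": typography_image(phrase),  # typography alone
            "SD+Typo": composite(question),    # stacked composite of both
        },
    }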
Dataset examples from the 12 scenarios other than pornography. We use #Number to refer to the scenarios; the pornography scenario is #07.
Comparison with other multi-modal benchmarks. The symbol * denotes Azure’s GPT-4 API.
Evaluation of LLaVA-1.5-7B and LLaVA-1.5-13B. “Typo.” is an abbreviation for typography.
Evaluation of MiniGPT-4. “Typo.” is an abbreviation for typography.
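The tables above report attack success rate (ASR), i.e., the fraction of malicious queries for which the model produces a non-refusing, harmful response. A minimal sketch of the metric, assuming a binary harmfulness judge; is_harmful is a placeholder (for example, an LLM-based classifier), not the paper's exact judging protocol.

def attack_success_rate(responses: list[str], is_harmful) -> float:
    """Fraction of model responses that the judge deems unsafe."""
    return sum(is_harmful(r) for r in responses) / len(responses)

# Toy usage with a crude keyword judge, purely for illustration.
asr = attack_success_rate(
    ["Sure, here is how ...", "I can't help with that."],
    is_harmful=lambda r: r.startswith("Sure"),
)  # 0.5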
Examples for scenarios 01 (left) and 02 (right).
Examples for scenarios 03 (left) and 04 (right).
Examples for scenarios 06 (left) and 09 (right).
@misc{liu2023queryrelevant,
  title         = {Query-Relevant Images Jailbreak Large Multi-Modal Models},
  author        = {Xin Liu and Yichen Zhu and Yunshi Lan and Chao Yang and Yu Qiao},
  year          = {2023},
  eprint        = {2311.17600},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}