MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models

1East China Normal University, 2Midea Group, 3Shanghai AI Laboratory, 4University of Oxford
*Equal contribution, corresponding author
Evaluation of 12 Large Multi-Modal Models

Evaluation of 12 Large Multi-Modal Models on our constructed Multi-Modal Safety Benchmark using the proposed visual prompt attacks. The evaluation metric is the attack success rate (ASR), where a higher score indicates more successful attacks against the model.
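Concretely, the ASR for a scenario $s$ can be written as follows. This is our own formalization for illustration; $Q_s$ denotes the set of malicious queries in scenario $s$, and the indicator counts responses judged to be successful (non-refusal) attacks:

\[
\mathrm{ASR}_s \;=\; \frac{1}{|Q_s|} \sum_{q \in Q_s} \mathbb{1}\big[\text{the model's response to } q \text{ is judged a successful attack}\big]
\]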

Abstract

Warning: This page contains examples of harmful language and images, and reader discretion is recommended.

The security concerns surrounding Large Language Models (LLMs) have been extensively explored, yet the safety of Large Multi-Modal Models (LMMs) remains understudied. In our study, we present a novel visual prompt attack that exploits query-relevant images to jailbreak open-source LMMs. Our method creates a composite image from two parts: one image generated by a diffusion model and another that renders text as typography, both based on keywords extracted from a malicious query. We show that LMMs can be easily attacked by our approach, even when the employed LLMs are safely aligned. To evaluate the extent of this vulnerability in open-source LMMs, we have compiled a substantial dataset encompassing 13 scenarios with a total of 5,040 text-image pairs, using our presented attack technique. Our evaluation of 12 cutting-edge LMMs on this dataset demonstrates the vulnerability of existing multi-modal models to adversarial attacks. This finding underscores the need for a concerted effort to strengthen and enhance the safety measures of open-source LMMs against potential malicious exploits.

Motivation



In three scenarios, attacking LLaVA-1.5 with query-relevant images achieves a much higher attack success rate than attacking it with query-irrelevant images.




Method

For each malicious query, we employ GPT-4 to identify and extract keywords. We then use typography and Stable Diffusion to create two images based on these keywords. The two images are combined into a single composite image, which is used to deceive large multi-modal models into responding to queries that should be refused.
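A minimal sketch of this pipeline is shown below. It is illustrative only: the exact GPT-4 extraction prompt, Stable Diffusion checkpoint, font, and composition layout used in MM-SafetyBench may differ, and the helper names are our own.

```python
# Illustrative sketch of the query-relevant image attack pipeline.
# Assumptions (ours): the GPT-4 prompt, SD checkpoint, font, and layout are placeholders.

import torch
from diffusers import StableDiffusionPipeline
from PIL import Image, ImageDraw, ImageFont


def extract_keywords(query: str) -> str:
    """Placeholder for the GPT-4 step that extracts the harmful keywords from the query."""
    # In practice this calls the GPT-4 API with an extraction prompt.
    raise NotImplementedError


def render_typography(phrase: str, width: int = 512, height: int = 128) -> Image.Image:
    """Render the extracted keywords as plain black-on-white text (typography image)."""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("DejaVuSans-Bold.ttf", 40)  # any available TrueType font
    draw.text((10, height // 3), phrase, fill="black", font=font)
    return img


def generate_sd_image(phrase: str) -> Image.Image:
    """Generate a query-relevant image from the keywords with Stable Diffusion."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(phrase).images[0]


def compose(sd_img: Image.Image, typo_img: Image.Image) -> Image.Image:
    """Stack the diffusion image on top of the typography image to form the composite."""
    typo_img = typo_img.resize((sd_img.width, typo_img.height))
    out = Image.new("RGB", (sd_img.width, sd_img.height + typo_img.height), "white")
    out.paste(sd_img, (0, 0))
    out.paste(typo_img, (0, sd_img.height))
    return out
```

The composite image is then paired with a rephrased, seemingly benign text prompt and fed to the target LMM.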


Overview

Diversity

Diversity of key phrases across the 13 scenarios.



Statistics

Statistics of our constructed MM-SafetyBench. For each question, we generate three corresponding images.



Examples of the dataset in 12 scenarios

Examples of the dataset in 12 of the 13 scenarios (pornography excluded). We use # Number as an abbreviation to refer to the different scenarios; the # Number of the pornography scenario is 07.

Comparison with other Multi-Modal benchmarks

Comparison with other Multi-Modal benchmarks. The symbol * denotes Azure’s GPT-4 API.

Experiment Results

Main Results


Evaluation on LLaVA-1.5-7B and LLaVA-1.5-13B. The word “Typo.” is an abbreviation for typography.

Ablation Study of SD/Typo in MiniGPT-4


Evaluation on MiniGPT-4. The word “Typo.” is an abbreviation for typography.

More detailed results of 12 LMMs


Quantitative Analysis

Examples for scenarios 01 (left) and 02 (right).

Examples for scenarios 03 (left) and 04 (right).

Examples for scenarios 06 (left) and 09 (right).

Reference

Please cite our paper if you use our code, data, models, or results:
@misc{liu2023queryrelevant,
      title         = {Query-Relevant Images Jailbreak Large Multi-Modal Models}, 
      author        = {Xin Liu and Yichen Zhu and Yunshi Lan and Chao Yang and Yu Qiao},
      year          = {2023},
      eprint        = {2311.17600},
      archivePrefix = {arXiv},
      primaryClass  = {cs.CV}
}