Safety of Multimodal Large Language Models on Images and Text

Mar 2, 2024
Xin Liu
Abstract
Attracted by the impressive power of Multimodal Large Language Models (MLLMs), the public is increasingly using them to improve the efficiency of daily work. Nonetheless, the vulnerability of MLLMs to unsafe instructions poses serious safety risks when these models are deployed in real-world scenarios. This talk systematically presents current efforts on evaluating, attacking, and defending the safety of MLLMs on images and text. It begins with an overview of MLLMs on images and text and our understanding of safety, which clarifies the scope of the survey. It then reviews the evaluation datasets and metrics for measuring the safety of MLLMs. Next, it comprehensively presents attack and defense techniques related to MLLMs’ safety. Finally, it analyzes several unsolved issues and discusses promising research directions.
Date
Mar 2, 2024 10:00 AM — 12:00 PM
Location
Online Talk