As an early piece of work in this area, this benchmark systematically reveals that current Multimodal Large Language Models (MLLMs) are susceptible to malicious attacks.
Jul 1, 2024
Analysis across 12 state-of-the-art models reveals that MLLMs are susceptible to breaches instigated by this approach, even when their underlying LLMs have been safety-aligned. This work further proposes a straightforward yet effective prompting strategy to enhance the resilience of MLLMs against such attacks.
Nov 29, 2023