AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models Paper • 2310.04451 • Published Oct 3, 2023
JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks Paper • 2404.03027 • Published Apr 3, 2024