[2411.11496] Safe + Safe = Unsafe? Exploring How Safe Images Can Be Exploited to Jailbreak Large Vision-Language Models

[Submitted on 18 Nov 2024 (v1), last revised 28 Nov 2024 (this version, v3)] View a PDF of the paper ...
Read more

[2406.19226] Simulating Classroom Education with LLM-Empowered Agents

[Submitted on 27 Jun 2024 (v1), last revised 27 Nov 2024 (this version, v2)] Authors: Zheyuan Zhang, Daniel Zhang-Li, Jifan Yu, ...
Read more

FLEX-CLIP: Feature-Level GEneration Network Enhanced CLIP for X-shot Cross-modal Retrieval

arXiv:2411.17454v1 Announce Type: cross Summary: Given a query from one modality, few-shot cross-modal retrieval (CMR) retrieves semantically similar instances in ...
Read more
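The FLEX-CLIP entry above describes few-shot cross-modal retrieval: ranking candidates from one modality against a query from another. As a rough, illustrative sketch of that basic setting only (not FLEX-CLIP's feature-level generation network), the snippet below scores candidate images against a text query with a generic CLIP model via Hugging Face transformers; the model name and the retrieve helper are assumptions for illustration.

    # Minimal cross-modal retrieval sketch: rank images by similarity to a text query.
    # Illustrative only; the checkpoint and helper name are assumptions, not FLEX-CLIP's method.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def retrieve(text_query: str, images: list[Image.Image], top_k: int = 5):
        """Return indices of the images most similar to the text query."""
        inputs = processor(text=[text_query], images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # Normalize both embeddings and rank images by cosine similarity to the query.
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        scores = (txt @ img.T).squeeze(0)
        return scores.topk(min(top_k, len(images))).indices.tolist()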

A Survey on Red Teaming for Generative Models

[Submitted on 31 Mar 2024 (v1), last revised 26 Nov 2024 (this version, v2)] Authors: Lizhi Lin, Honglin Mu, Zenan Zhai, ...
Read more

A Comprehensive Dataset and Benchmark of Textual-Edge Graphs

[Submitted on 14 Jun 2024 (v1), last revised 25 Nov 2024 (this version, v3)] View a PDF of the paper ...
Read more

Color-driven Generation of Synthetic Data for Referring Expression Comprehension

Read more

Evaluating and Advancing Multimodal Large Language Models in Ability Lens

arXiv:2411.14725v1 Announce Type: cross Summary: As multimodal large language models (MLLMs) advance rapidly, rigorous evaluation has become essential, providing ...
Read more

[2406.04289] What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages

[Submitted on 6 Jun 2024 (v1), last revised 21 Nov 2024 (this version, v4)] View a PDF of the paper ...
Read more

[2402.17304] Probing Multimodal Large Language Models for Global and Local Semantic Representations

[Submitted on 27 Feb 2024 (v1), last revised 21 Nov 2024 (this version, v3)] View a PDF of the paper ...
Read more

[2411.12103] Does Unlearning Truly Unlearn? A Black Box Evaluation of LLM Unlearning Methods

[Submitted on 18 Nov 2024 (v1), last revised 20 Nov 2024 (this version, v2)] View a PDF of the paper ...
Read more