
"Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs


By Darpan Aswal and Siddharth D Jaiswal


Abstract: Recently released LLMs have strong multilingual and multimodal capabilities, and their vulnerabilities are exposed through audits and red-teaming efforts. Existing efforts have focused primarily on the English language; thus, models remain susceptible to multilingual jailbreaking strategies, especially in multimodal contexts. In this study, we introduce a novel strategy that leverages code-mixing and phonetic perturbations to jailbreak LLMs for both text and image generation tasks. We also present an extension to a current jailbreak-template-based strategy and propose a novel template, showing higher effectiveness than baselines. Our work presents a method to effectively bypass safety filters in LLMs while maintaining interpretability, by applying phonetic misspellings to sensitive words in code-mixed prompts. We achieve a 99% Attack Success Rate for text generation and 78% for image generation, with an Attack Relevance Rate of 100% for text generation and 96% for image generation for the phonetically perturbed code-mixed prompts. Our interpretability experiments reveal that phonetic perturbations affect word tokenization, leading to jailbreak success. Our study motivates an increased focus on more generalizable safety alignment for multilingual, multimodal models, especially in real-world settings where prompts may contain misspelt words. Warning: This paper contains examples of potentially harmful and offensive content.
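The tokenization effect described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' code; it assumes the Hugging Face transformers package and uses the publicly available GPT-2 tokenizer as a stand-in for a target model's tokenizer, comparing how sensitive words and their phonetic misspellings (drawn from the paper's title) split into sub-word tokens.

    # Minimal sketch (not the authors' implementation): compare how a sensitive
    # word and its phonetic misspelling are tokenized, illustrating why such
    # perturbations can slip past keyword-level safety filters.
    # Assumes: the `transformers` package is installed and the public GPT-2
    # tokenizer is used as a stand-in for a target model's tokenizer.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    # Word pairs taken from the paper's title: original vs. phonetic misspelling.
    pairs = [
        ("hate speech", "haet speech"),
        ("discrimination", "diskrimineshun"),
    ]

    for original, perturbed in pairs:
        # Print the sub-word tokens produced for each spelling.
        print(f"{original!r:25} -> {tokenizer.tokenize(original)}")
        print(f"{perturbed!r:25} -> {tokenizer.tokenize(perturbed)}")
        print()

Because the perturbed spellings fall outside the tokenizer's learned vocabulary, they fragment into different sub-word pieces than the original words, which is consistent with the abstract's claim that phonetic perturbations impact word tokenization.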

Submission history

From: Darpan Aswal
[v1]
Tue, 20 May 2025 11:35:25 UTC (2,596 KB)
[v2]
Tue, 19 Aug 2025 11:43:09 UTC (2,597 KB)
[v3]
Sat, 11 Oct 2025 13:22:55 UTC (1,977 KB)

