Jailbreaking LLMs with Arabic Transliteration and Arabizi

Mansour Al Ghanim, Saleh Almohaimeed, Mengxin Zheng, Yan Solihin, Qian Lou

TL;DR

Using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks, highlighting the need for more comprehensive safety training across all language forms.

Abstract

This study identifies potential vulnerabilities of Large Language Models (LLMs) to 'jailbreak' attacks, specifically focusing on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to the Arabic language. We initially tested the AdvBench benchmark in Standardized Arabic, finding that even prompt manipulation techniques like prefix injection were insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (or arabizi), we found that unsafe content could be produced on platforms like OpenAI GPT-4 and Anthropic Claude 3 Sonnet. Our findings suggest that using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure could be due to the model's learned connection to specific words, highlighting the need for more comprehensive safety training across all language forms.
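To illustrate the kind of rewriting the paper studies, the sketch below converts Arabic text into a common Arabizi (Arabic chatspeak) form, where digits stand in for Arabic letters that have no Latin equivalent (e.g. 3 for ع, 7 for ح). The mapping is an assumption for illustration: real Arabizi conventions vary by region and writer, and this is not the paper's exact transliteration scheme.

```python
# Minimal Arabizi transliterator (illustrative; one common convention).
# Digits substitute for Arabic letters with no close Latin counterpart.
ARABIZI_MAP = {
    "ا": "a", "ب": "b", "ت": "t", "ث": "th", "ج": "j", "ح": "7",
    "خ": "kh", "د": "d", "ذ": "th", "ر": "r", "ز": "z", "س": "s",
    "ش": "sh", "ص": "s", "ض": "d", "ط": "6", "ظ": "th", "ع": "3",
    "غ": "gh", "ف": "f", "ق": "8", "ك": "k", "ل": "l", "م": "m",
    "ن": "n", "ه": "h", "و": "w", "ي": "y", "ء": "2", "ة": "a",
    "أ": "a", "إ": "e", "آ": "a", "ى": "a", "ئ": "2", "ؤ": "2",
}

def to_arabizi(text: str) -> str:
    """Map each Arabic character to its chatspeak form; pass others through."""
    return "".join(ARABIZI_MAP.get(ch, ch) for ch in text)

print(to_arabizi("مرحبا"))  # marhaba ("hello") -> mr7ba
```

A prompt rewritten this way stays readable to Arabic speakers while its surface form no longer matches the Arabic-script tokens a model's safety training may key on, which is the effect the paper probes.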

Paper Structure

This paper contains 20 sections, 5 figures, and 6 tables.

Figures (5)

  • Figure 1: Arabic prompt is used to ask OpenAI GPT-4 about creating and distributing malware. GPT-4 refuses to answer in Arabic. When the same prompt is transliterated, GPT-4 provides an unsafe response.
  • Figure 2: Evaluation of AdvBench on GPT-4 and Claude-3-Sonnet. Vertical black bars indicate the error across two runs with different temperature and top_p values.
  • Figure 3: Left: GPT-4 with an Arabic prompt, the Arabic prompt plus prefix injection, and the prompt in chatspeak. Right: Claude-3-Sonnet with the same three prompt variants. Both conversations concern the same topic, making a bomb. More examples are in Appendix A.
  • Figure 4: Left: Character modification on GPT-4 using the Arabic standardized form leads to answering a previously refused prompt. Right: Word addition on Claude-3-Sonnet leads to answering a previously refused query. Both examples show how manual investigation with low-resource data can uncover LLM vulnerabilities.
  • Figure 5: Sentence-level perturbation by adding a prefix and a suffix. The prefix triggers the copyright filter, and the suffix bypasses Claude-3's safety training.