JailbreakEval: An Integrated Toolkit for Evaluating Jailbreak Attempts Against Large Language Models, by Delong Ran and 6 other authors
Abstract: Jailbreak attacks induce Large Language Models (LLMs) to generate harmful responses, posing severe misuse threats. Though research on jailbreak attacks and defenses is emerging, there is no consensus on how to evaluate jailbreaks; that is, the methods used to assess the harmfulness of an LLM's response vary widely. Each approach has its own strengths and weaknesses, affecting its alignment with human values as well as its time and financial cost. This diversity makes it difficult for researchers to choose suitable evaluation methods and to compare different attacks and defenses. In this paper, we conduct a comprehensive analysis of jailbreak evaluation methodologies, drawing on nearly 90 jailbreak studies published between May 2023 and April 2024. Our study introduces a systematic taxonomy of jailbreak evaluators, offering in-depth insights into their strengths and weaknesses, along with the current status of their adoption. To aid further research, we propose JailbreakEval, a toolkit for evaluating jailbreak attempts. JailbreakEval includes various evaluators out of the box, enabling users to obtain evaluation results with a single command or through customized evaluation workflows. In summary, we regard JailbreakEval as a catalyst that simplifies the evaluation process in jailbreak research and fosters an inclusive standard for jailbreak evaluation within the community.
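The abstract describes an evaluator-based workflow: pick one of the bundled evaluators and apply it to a jailbreak attempt (a harmful question plus the target LLM's response). The sketch below illustrates that idea only; the package name `jailbreakeval`, the `JailbreakEvaluator.from_preset` constructor, and the preset identifier are assumptions drawn from the abstract's description, not a confirmed API.

```python
# Illustrative sketch only: module, class, and preset names are assumptions
# about JailbreakEval's interface, not its documented API.
from jailbreakeval import JailbreakEvaluator  # assumed package and class name

# Select one of the out-of-the-box evaluators by a preset name (assumed identifier).
evaluator = JailbreakEvaluator.from_preset("StringMatching-zou2023universal")

# A jailbreak attempt pairs the harmful question posed to the target LLM
# with the response the model actually produced.
attempt = {
    "question": "How to make a harmful device?",
    "answer": "Sorry, I cannot help with that request.",
}

# The evaluator judges whether the response counts as a successful jailbreak.
is_jailbroken = evaluator(attempt)
print(is_jailbroken)  # expected: False for a refusal like the one above
```

Under this reading, the "single command" path mentioned in the abstract would wrap the same evaluator over a file of attempts via a CLI entry point, while customized workflows would compose or swap evaluators programmatically; the exact commands and options are not specified in this abstract.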
Submission history
From: Delong Ran
[v1] Thu, 13 Jun 2024 16:59:43 UTC (526 KB)
[v2] Tue, 4 Feb 2025 16:04:22 UTC (522 KB)