
Crabs: Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings


Authors: Yuanhe Zhang and 6 other authors

Abstract: Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks, yet they remain vulnerable to external threats, particularly LLM Denial-of-Service (LLM-DoS) attacks, which aim to exhaust computational resources and block services. However, existing studies predominantly focus on white-box attacks, leaving black-box scenarios underexplored. In this paper, we introduce the Auto-Generation for LLM-DoS (AutoDoS) attack, an automated algorithm designed for black-box LLMs. AutoDoS constructs a DoS Attack Tree and expands its node coverage to achieve effectiveness under black-box conditions. Through transferability-driven iterative optimization, AutoDoS can attack different models with a single prompt. Furthermore, we reveal that embedding a Length Trojan allows AutoDoS to bypass existing defenses more effectively. Experimental results show that AutoDoS amplifies service response latency by more than 250$\times\uparrow$, leading to severe resource consumption in terms of GPU utilization and memory usage. Our work offers a new perspective on LLM-DoS attacks and security defenses. Our code is available at this https URL.
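The abstract quantifies the attack's impact as a latency amplification of over 250×. A minimal sketch of how such amplification could be measured against a black-box serving endpoint is shown below; the endpoint URL, model name, and prompts are illustrative assumptions and are not taken from the paper's released code.

```python
# Hypothetical sketch: comparing response latency for a benign baseline prompt
# versus a resource-consuming (AutoDoS-style) prompt on a black-box endpoint.
# The API URL, model name, and prompts below are placeholders.
import time
import requests

API_URL = "http://localhost:8000/v1/completions"  # assumed local serving endpoint

def timed_completion(prompt: str, max_tokens: int = 2048) -> float:
    """Send one completion request and return wall-clock latency in seconds."""
    start = time.perf_counter()
    requests.post(
        API_URL,
        json={"model": "placeholder-model", "prompt": prompt, "max_tokens": max_tokens},
        timeout=600,
    )
    return time.perf_counter() - start

baseline_prompt = "Summarize this sentence: LLMs are useful."
dos_prompt = "..."  # an AutoDoS-generated prompt would be substituted here

t_base = timed_completion(baseline_prompt)
t_dos = timed_completion(dos_prompt)
print(f"Latency amplification: {t_dos / t_base:.1f}x")
```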

Submission history

From: Yuanhe Zhang
[v1] Wed, 18 Dec 2024 14:19:23 UTC (1,028 KB)
[v2] Mon, 27 Jan 2025 04:33:39 UTC (1,028 KB)
[v3] Tue, 18 Feb 2025 06:08:19 UTC (1,105 KB)
