Tailoring evaluations for adaptive attacks
Baseline mitigations showed promise against basic, non-adaptive attacks, significantly reducing the attack success rate. However, malicious actors increasingly use adaptive attacks that are specifically designed to evolve and adapt with ART to circumvent the defense being tested.
Successful baseline defenses like Spotlighting or Self-reflection became much less effective against adaptive attacks learning how to deal with and bypass static defense approaches.
This finding illustrates a key point: relying on defenses tested only against static attacks offers a false sense of security. For robust security, it is critical to evaluate adaptive attacks that evolve in response to potential defenses.
Building inherent resilience through model hardening
While external defenses and system-level guardrails are important, enhancing the AI modelโs intrinsic ability to recognize and disregard malicious instructions embedded in data is also crucial. We call this process โmodel hardeningโ.
We fine-tuned Gemini on a large dataset of realistic scenarios, where ART generates effective indirect prompt injections targeting sensitive information. This taught Gemini to ignore the malicious embedded instruction and follow the original user request, thereby only providing the correct, safe response it should give. This allows the model to innately understand how to handle compromised information that evolves over time as part of adaptive attacks.
This model hardening has significantly boosted Geminiโs ability to identify and ignore injected instructions, lowering its attack success rate. And importantly, without significantly impacting the modelโs performance on normal tasks.
Itโs important to note that even with model hardening, no model is completely immune. Determined attackers might still find new vulnerabilities. Therefore, our goal is to make attacks much harder, costlier, and more complex for adversaries.
Taking a holistic approach to model security
Protecting AI models against attacks like indirect prompt injections requires โdefense-in-depthโ โ using multiple layers of protection, including model hardening, input/output checks (like classifiers), and system-level guardrails. Combating indirect prompt injections is a key way weโre implementing our agentic security principles and guidelines to develop agents responsibly.
Securing advanced AI systems against specific, evolving threats like indirect prompt injection is an ongoing process. It demands pursuing continuous and adaptive evaluation, improving existing defenses and exploring new ones, and building inherent resilience into the models themselves. By layering defenses and learning constantly, we can enable AI assistants like Gemini to continue to be both incredibly helpful and trustworthy.
To learn more about the defenses we built into Gemini and our recommendation for using more challenging, adaptive attacks to evaluate model robustness, please refer to the GDM white paper, Lessons from Defending Gemini Against Indirect Prompt Injections.
Source link
#Advancing #Geminis #security #safeguards #Google #DeepMind

























