View a PDF of the paper titled GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models, by Seshu Tirupathi and 3 other authors
Abstract: As Large Language Models (LLMs) continue to be applied across an increasing range of domains, their widespread adoption necessitates rigorous monitoring to prevent unintended negative consequences and ensure robustness. Furthermore, LLMs must be designed to align with human values, for example by preventing harmful content and ensuring responsible usage. Current automated systems and solutions for monitoring LLMs in production focus primarily on LLM-specific concerns such as hallucination, with little consideration given to the requirements of specific use-cases and user preferences. This paper introduces GAF-Guard, a novel agentic framework for LLM governance that places the user, the use-case, and the model itself at the center. The framework is designed to detect and monitor risks associated with the deployment of LLM-based applications. The approach models autonomous agents that identify risks and activate risk-detection tools within specific use-cases, and that facilitate continuous monitoring and reporting, enhancing AI safety and alignment with user expectations. The code is available at this https URL.
Submission history
From: Seshu Tirupathi [view email]
[v1]
Tue, 1 Jul 2025 10:01:21 UTC (1,274 KB)
[v2]
Tue, 8 Jul 2025 15:44:49 UTC (1,274 KB)