On the Role of Feedback in Test-Time Scaling of Agentic AI Workflows, by Souradip Chakraborty and 10 other authors
Abstract: Agentic AI workflows (systems that autonomously plan and act) are becoming widespread, yet their success rate on complex tasks remains low. A promising remedy is inference-time alignment, which uses extra compute at test time to improve performance. Inference-time alignment relies on three components: sampling, evaluation, and feedback. While most prior work studies sampling and automatic evaluation, feedback remains underexplored. To study the role of feedback, we introduce Iterative Agent Decoding (IAD), a procedure that repeatedly inserts feedback extracted from different forms of critiques (reward models or AI-generated textual feedback) between decoding steps. Through IAD, we analyze feedback along four dimensions: (1) its role in accuracy-compute trade-offs under a limited inference budget, (2) its gains over diversity-only baselines such as best-of-N sampling, (3) the effectiveness of feedback from reward models versus textual critiques, and (4) its robustness to noisy or low-quality feedback. Across Sketch2Code, Text2SQL, Intercode, and WebShop, we show that IAD with proper integration of high-fidelity feedback yields consistent gains of up to 10 percent absolute improvement over baselines such as best-of-N. Our findings underscore feedback as a crucial knob for inference-time alignment of agentic AI workflows under a limited inference budget.
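The loop the abstract describes, alternating decoding with critique-derived feedback, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate`, `evaluate`, and `critique` are hypothetical stand-ins for the agent's decoder, a reward model, and a textual critic.

```python
def iad(generate, evaluate, critique, task, rounds=3):
    """Sketch of Iterative Agent Decoding (IAD): alternate decoding with
    feedback between rounds, keeping the highest-scoring candidate.

    generate(task, feedback) -> candidate   # decoder, conditioned on feedback
    evaluate(candidate)      -> score       # e.g. a reward-model score
    critique(candidate)      -> feedback    # e.g. AI-generated textual critique
    """
    feedback = None
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        candidate = generate(task, feedback)   # decode, conditioned on prior feedback
        score = evaluate(candidate)            # score the candidate
        if score > best_score:
            best, best_score = candidate, score
        feedback = critique(candidate)         # extract feedback for the next round
    return best


# Toy demonstration with trivial stubs (strings instead of real model calls):
gen = lambda task, fb: fb + "x" if fb else task
result = iad(gen, evaluate=len, critique=lambda c: c, task="a", rounds=3)
print(result)  # each round extends the prior candidate, so "axx" scores best
```

Unlike best-of-N, which spends the whole budget on independent samples, each round here is conditioned on the previous round's critique, which is the distinction the paper's analysis turns on.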
Submission history
From: Souradip Chakraborty
[v1] Wed, 2 Apr 2025 17:40:47 UTC (3,466 KB)
[v2] Sat, 5 Apr 2025 15:45:13 UTC (3,466 KB)
[v3] Mon, 7 Jul 2025 17:40:28 UTC (4,959 KB)
[v4] Tue, 8 Jul 2025 03:19:40 UTC (4,959 KB)