Praxis-VLM: Vision-Grounded Decision Making via Text-Driven Reinforcement Learning
by Zhe Hu and 4 other authors
Abstract: Vision Language Models (VLMs) have exhibited immense potential for embodied AI, yet they often lack the sophisticated situational reasoning required for complex decision-making. This paper shows that VLMs can achieve surprisingly strong decision-making performance when visual scenes are represented as text-only descriptions, suggesting that foundational reasoning can be effectively learned from language. Motivated by this insight, we propose Praxis-VLM, a reasoning VLM for vision-grounded decision-making. Praxis-VLM applies the GRPO (Group Relative Policy Optimization) algorithm to textual scenarios to instill robust reasoning capabilities, teaching the model to evaluate actions and their consequences. These reasoning skills, acquired purely from text, transfer successfully to multimodal inference with visual inputs, substantially reducing reliance on scarce paired image-text training data. Experiments across diverse decision-making benchmarks demonstrate that Praxis-VLM significantly outperforms standard supervised fine-tuning in both performance and generalizability. Further analysis confirms that our models engage in explicit and effective reasoning, underpinning their enhanced performance and adaptability.
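The page itself contains no code, but the core mechanism the abstract names, GRPO, can be illustrated briefly. GRPO samples a group of responses per prompt and standardizes each response's reward against its own group, replacing a learned value baseline; the advantage then drives a PPO-style clipped policy update. The sketch below is a minimal, illustrative PyTorch rendering of that idea under assumed tensor shapes and hyperparameters; function names and defaults are ours, not the authors' implementation.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (num_prompts, group_size), one scalar reward per sampled response.
    # GRPO's group-relative baseline: standardize each reward within its group,
    # so no separate value network is needed.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_policy_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    # PPO-style clipped surrogate objective, with the group-relative advantage
    # broadcast over each response's log-probabilities.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
adv = grpo_advantages(rewards)
logp_old = torch.randn(2, 4)
logp_new = logp_old + 0.01 * torch.randn(2, 4)
loss = grpo_policy_loss(logp_new, logp_old, adv)
```

In Praxis-VLM's setting, the prompts would be text-only scene descriptions and the rewards would score the quality of the model's decisions, though the exact reward design is not specified on this page.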
Submission history
From: Zhe Hu
[v1] Fri, 21 Mar 2025 09:25:23 UTC (292 KB)
[v2] Thu, 22 May 2025 07:21:02 UTC (5,068 KB)