Chain-of-Conceptual-Thought: Eliciting the Agent to Deeply Think within the Response

Qingqing Gu and 7 other authors

Abstract: Chain-of-Thought (CoT) prompting is widely applied to enhance LLM capabilities on math, coding, and reasoning tasks. However, its performance is limited on open-domain tasks, where there are no clearly defined reasoning steps or logical transitions. To mitigate these challenges, we propose a new prompt-based paradigm called Chain of Conceptual Thoughts (CoCT), which asks the LLM to first produce a concept tag and then complete the detailed content following that concept. To encourage this hierarchical way of thinking, we instantiate the concepts as emotions, strategies, and topics. We experiment with this paradigm on daily and emotional-support conversations, covering tasks with both in-domain and out-of-domain concept settings. Automatic, human, and LLM-based evaluations show that CoCT surpasses several prompt-based baselines such as Self-Refine, ECoT, SoT, and RAG, suggesting a promising LLM prompting paradigm for a wider scope of tasks.
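The abstract describes the tag-then-content mechanism only at a high level. The minimal Python sketch below illustrates what such a prompt and output parser might look like; the tag format, concept lists, and the `call_llm` stub are illustrative assumptions, not the authors' implementation.

```python
import re

# Illustrative CoCT-style instruction: the model is asked to emit a concept
# tag (an emotion, strategy, or topic) before each span of content.
# The exact tag syntax and concept inventory here are assumptions.
COCT_SYSTEM_PROMPT = (
    "Before writing each part of your reply, first output a concept tag in the "
    "form [concept: <name>], then write the content that follows that concept.\n"
    "Available emotions: empathy, encouragement. "
    "Available strategies: question, self-disclosure, suggestion."
)

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real LLM call; returns a canned CoCT-style reply."""
    return (
        "[concept: empathy] That sounds really stressful, and it makes sense "
        "you feel worn out. "
        "[concept: question] What part of the workload has been weighing on "
        "you the most? "
        "[concept: suggestion] It might help to block out one short break tomorrow."
    )

def parse_coct_reply(reply: str) -> list[tuple[str, str]]:
    """Split a reply into (concept, content) pairs based on the tag format above."""
    pattern = re.compile(r"\[concept:\s*([^\]]+)\]\s*([^\[]+)")
    return [(tag.strip(), content.strip()) for tag, content in pattern.findall(reply)]

if __name__ == "__main__":
    reply = call_llm(COCT_SYSTEM_PROMPT, "I'm exhausted; work has been nonstop lately.")
    for concept, content in parse_coct_reply(reply):
        print(f"{concept:>12}: {content}")
```

In this sketch the concept tag acts as a lightweight plan for the text that follows it, which is the hierarchical "think, then write" behavior the paper's paradigm aims to elicit.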

Submission history

From: Qingqing Gu
[v1] Tue, 21 Oct 2025 09:08:21 UTC (407 KB)
[v2] Fri, 24 Oct 2025 09:31:29 UTC (407 KB)
