Agent Bain vs. Agent McKinsey: A New Text-to-SQL Benchmark for the Business Domain
by Yue Li and 6 other authors
Abstract: Text-to-SQL benchmarks have traditionally tested only simple data access, framed as a translation task from natural language to SQL queries. In reality, users ask diverse questions that require more complex responses, including data-driven predictions and recommendations. Using the business domain as a motivating example, we introduce CORGI, a new benchmark that expands text-to-SQL to reflect the practical database queries encountered by end users. CORGI is composed of synthetic databases inspired by enterprises such as DoorDash, Airbnb, and Lululemon, and provides questions across four increasingly complicated categories of business queries: descriptive, explanatory, predictive, and recommendational. These questions call for causal reasoning, temporal forecasting, and strategic recommendation, reflecting multi-level and multi-step agentic intelligence. We find that LLM performance degrades as question complexity increases toward the higher-level categories. CORGI also introduces, and encourages the text-to-SQL community to adopt, new automatic methods for evaluating open-ended, qualitative responses in data access tasks. Our experiments show that LLMs exhibit an average 33.12% lower success execution rate (SER) on CORGI than on existing benchmarks such as BIRD, highlighting the substantially higher complexity of real-world business needs. We release the CORGI dataset, an evaluation framework, and a submission website to support future research.
Submission history
From: Ran Tao
[v1] Wed, 8 Oct 2025 17:57:35 UTC (1,365 KB)
[v2] Thu, 9 Oct 2025 02:27:56 UTC (1,365 KB)
[v3] Sun, 11 Jan 2026 00:42:04 UTC (1,113 KB)
[v4] Tue, 13 Jan 2026 22:44:40 UTC (1,113 KB)