A Little Human Data Goes A Long Way
Dhananjay Ashok and Jonathan May
Abstract: Faced with an expensive human annotation process, creators of NLP systems increasingly turn to synthetic data generation. While this method shows promise, the extent to which synthetic data can replace human annotation is poorly understood. We investigate the use of synthetic data in Fact Verification (FV) and Question Answering (QA) by studying the effects of incrementally replacing human-generated data with synthetic points on eight diverse datasets. Strikingly, replacing up to 90% of the training data only marginally decreases performance, but replacing the final 10% leads to severe declines. We find that models trained on purely synthetic data can be reliably improved by including as few as 125 human-generated data points. We show that matching the performance gain of just a little additional human data (only 200 points) requires an order of magnitude more synthetic data, and we estimate price ratios at which human annotation would be the more cost-effective solution. Our results suggest that even when human annotation at scale is infeasible, there is great value in having a small proportion of the dataset be human generated.
Submission history
From: Dhananjay Ashok
[v1] Thu, 17 Oct 2024 00:04:02 UTC (4,001 KB)
[v2] Sun, 1 Jun 2025 02:02:34 UTC (3,256 KB)
[v3] Wed, 20 Aug 2025 01:59:58 UTC (3,257 KB)