Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG Evaluation Prompts


Authors: Hanhua Hong and 5 other authors


Abstract: Evaluating natural language generation systems is challenging due to the diversity of valid outputs. While human evaluation is the gold standard, it suffers from inconsistencies, lack of standardisation, and demographic biases, limiting reproducibility. LLM-based evaluators offer a scalable alternative but are highly sensitive to prompt design, where small variations can lead to significant discrepancies. In this work, we propose an inversion learning method that learns effective reverse mappings from model outputs back to their input instructions, enabling the automatic generation of highly effective, model-specific evaluation prompts. Our method requires only a single evaluation sample and eliminates the need for time-consuming manual prompt engineering, thereby improving both efficiency and robustness. Our work contributes toward a new direction for more robust and efficient LLM-based evaluation.
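To make the core idea concrete, below is a minimal, hypothetical sketch of how an output-to-instruction inversion step could feed into evaluation-prompt construction. The prompt templates, function names, and the `generate` callable are illustrative assumptions; they do not reproduce the paper's learned inversion model or its training procedure.

```python
# Illustrative sketch of output-to-instruction inversion for building an
# evaluation prompt from a single sample. All templates and names are
# hypothetical placeholders, not the paper's actual method.

from typing import Callable

# Any text-generation callable works here (e.g. a thin wrapper around an LLM API).
Generator = Callable[[str], str]

INVERSION_TEMPLATE = (
    "Below is a response produced by a language model.\n"
    "Reconstruct the instruction that most plausibly produced it.\n\n"
    "Response:\n{output}\n\n"
    "Reconstructed instruction:"
)

EVAL_PROMPT_TEMPLATE = (
    "{inverted_instruction}\n\n"
    "Now act as an evaluator: score the following response to that instruction "
    "on a 1-5 scale for quality, and briefly justify the score.\n\n"
    "Response:\n{candidate}\n\nScore:"
)


def invert_instruction(generate: Generator, model_output: str) -> str:
    """Map a model output back to a plausible input instruction (the inversion step)."""
    return generate(INVERSION_TEMPLATE.format(output=model_output)).strip()


def build_eval_prompt(generate: Generator, sample_output: str, candidate: str) -> str:
    """Derive a model-specific evaluation prompt from one evaluation sample."""
    inverted = invert_instruction(generate, sample_output)
    return EVAL_PROMPT_TEMPLATE.format(inverted_instruction=inverted, candidate=candidate)


if __name__ == "__main__":
    # Stub generator so the sketch runs without an API key; swap in a real LLM call.
    def dummy_generate(prompt: str) -> str:
        return "Summarise the article in two sentences."

    prompt = build_eval_prompt(
        dummy_generate,
        sample_output="The article argues that evaluation prompts should be model-specific.",
        candidate="A two-sentence summary to be scored.",
    )
    print(prompt)
```

The design choice the sketch highlights is that the evaluation prompt is derived from the target model's own output rather than hand-written, which is what makes the resulting prompt model-specific.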

Submission history

From: Hanhua Hong
[v1] Tue, 29 Apr 2025 18:56:12 UTC (1,121 KB)
[v2] Tue, 9 Sep 2025 16:49:59 UTC (372 KB)
[v3] Wed, 10 Sep 2025 10:32:57 UTC (372 KB)
