...

SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation


By Rafiq Kamel and 3 other authors


Abstract: Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as graphs. Abstract Meaning Representations (AMRs), which encode rich semantics as directed graphs, offer a rigorous testbed for evaluating LLMs on text generation from such structures. Yet current methods often linearize AMRs arbitrarily, discarding key structural cues, or rely on architectures incompatible with standard LLMs. We introduce SAFT, a structure-aware fine-tuning approach that injects graph topology into pretrained LLMs without architectural changes. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs and project them into the LLM's embedding space. While in principle applicable to any graph-structured input, we focus on AMR-to-text generation as a representative and challenging benchmark. SAFT sets a new state of the art on AMR 3.0 with a 3.5-BLEU improvement over baselines. Gains scale with graph complexity, highlighting the value of structure-aware representations for enhancing LLM performance. SAFT offers a general and effective pathway for bridging structured data and language models.
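The abstract only sketches the method, so the snippet below is a minimal illustration rather than the authors' implementation: it computes direction-sensitive positional encodings from the magnetic Laplacian of a directed graph (a standard construction with a phase parameter q) and projects them into an LLM's embedding dimension with a learned linear layer. The names magnetic_laplacian_pe and PEProjector, the values of q and the eigenvector count k, and the additive injection into token embeddings are all assumptions made for illustration.

```python
import numpy as np
import torch


def magnetic_laplacian_pe(adj: np.ndarray, k: int = 8, q: float = 0.25) -> np.ndarray:
    """Direction-sensitive positional encodings from the magnetic Laplacian.

    adj : (n, n) adjacency matrix of the (transformed) AMR graph.
    k   : number of eigenvectors to keep (assumed hyperparameter).
    q   : potential controlling how strongly edge direction is encoded.
    """
    a_sym = 0.5 * (adj + adj.T)                  # symmetrized adjacency
    deg = np.diag(a_sym.sum(axis=1))             # degree matrix of the symmetrized graph
    theta = 2.0 * np.pi * q * (adj - adj.T)      # antisymmetric phase matrix
    lap = deg - a_sym * np.exp(1j * theta)       # Hermitian magnetic Laplacian
    _, eigvecs = np.linalg.eigh(lap)             # eigenvectors, eigenvalues in ascending order
    vecs = eigvecs[:, :k]                        # k lowest-frequency modes
    # stack real and imaginary parts -> (n, 2k) real-valued node features
    return np.concatenate([vecs.real, vecs.imag], axis=1)


class PEProjector(torch.nn.Module):
    """Learned projection of graph positional encodings into the LLM embedding space."""

    def __init__(self, pe_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = torch.nn.Linear(pe_dim, hidden_dim)

    def forward(self, pe: torch.Tensor, token_embeds: torch.Tensor) -> torch.Tensor:
        # add the projected structural signal to the token embeddings
        return token_embeds + self.proj(pe)


# toy usage: a 3-node directed graph a -> b -> c
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
pe = torch.tensor(magnetic_laplacian_pe(adj, k=2), dtype=torch.float32)  # (3, 4)
projector = PEProjector(pe_dim=pe.shape[1], hidden_dim=16)
token_embeds = torch.zeros(3, 16)       # stand-in for LLM input embeddings
out = projector(pe, token_embeds)       # (3, 16) structure-aware embeddings
```

In practice the per-node encodings would still have to be aligned with the tokens of the linearized AMR before being added to the model's input embeddings; that alignment and the exact injection point are further design choices not specified in the abstract.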

Submission history

From: Filippo Guerranti
[v1] Tue, 15 Jul 2025 18:12:57 UTC (1,484 KB)
[v2] Wed, 10 Dec 2025 17:26:08 UTC (3,419 KB)

