...

Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation

[Submitted on 16 May 2025 (v1), last revised 3 Dec 2025 (this version, v3)]

Contextual Image Attack: How Visual Context Exposes Multimodal Safety Vulnerabilities

arXiv:2512.02973v1 Announce Type: cross Abstract: While Multimodal Large Language Models (MLLMs) show remarkable capabilities, their safety alignments are susceptible to ...

Efficient Distillation of Multi-task Speech Models via Language-Specific Experts

[Submitted on 2 Nov 2023 (v1), last revised 29 Nov 2025 (this version, v4)]

H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons

arXiv:2512.01797v1 Announce Type: cross Abstract: Large language models (LLMs) frequently generate hallucinations — plausible but factually incorrect outputs — undermining ...

SO-Bench: A Structural Output Evaluation of Multimodal LLMs

arXiv:2511.21750v1 Announce Type: cross Abstract: Multimodal large language models (MLLMs) are increasingly deployed in real-world, agentic settings where outputs must ...

An Explainable Hybrid Deep Learning Framework for Multi-Aspect Sentiment Analysis with Cross-Domain Transfer Learning


[2510.21984] AI-Mediated Communication Reshapes Social Structure in Opinion-Diverse Groups

[Submitted on 24 Oct 2025 (v1), last revised 25 Nov 2025 (this version, v2)]

A Multimodal Multi-Task Dataset for Benchmarking Health Misinformation

[Submitted on 24 May 2025 (v1), last revised 25 Nov 2025 (this version, v2)]

[2410.13334] BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models

[Submitted on 17 Oct 2024 (v1), last revised 25 Nov 2025 (this version, v5)]

An AI Pipeline For Real-time Disease Outbreak Detection

[Submitted on 24 Jun 2025 (v1), last revised 24 Nov 2025 (this version, v2)] Authors: Devesh Pant, Rishi Raj Grandhe, Vipin ...