
Lie to Me: Knowledge Graphs for Robust Hallucination Self-Detection in LLMs

Dec 29, 2025 · 10:07
Computation and Language · Artificial Intelligence

Abstract

Hallucinations, the generation of apparently convincing yet false statements, remain a major barrier to the safe deployment of LLMs. Building on the strong performance of self-detection methods, we examine the use of structured knowledge representations, namely knowledge graphs, to improve hallucination self-detection. Specifically, we propose a simple yet powerful approach that enriches hallucination self-detection by (i) converting LLM responses into knowledge graphs of entities and relations, and (ii) using these graphs to estimate the likelihood that a response contains hallucinations. We evaluate the proposed approach using two widely used LLMs, GPT-4o and Gemini-2.5-Flash, across two hallucination detection datasets. To support more reliable future benchmarking, one of these datasets has been manually curated and enhanced and is released as a secondary outcome of this work. Compared to standard self-detection methods and SelfCheckGPT, a state-of-the-art approach, our method achieves up to 16% relative improvement in accuracy and 20% in F1-score. Our results show that LLMs can better analyse atomic facts when they are structured as knowledge graphs, even when initial outputs contain inaccuracies. This low-cost, model-agnostic approach paves the way toward safer and more trustworthy language models.
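The abstract outlines a two-step pipeline: (i) convert an LLM response into a knowledge graph of entity-relation triples, then (ii) use that graph to estimate how likely the response is to contain hallucinations. The sketch below is a minimal, hypothetical illustration of that idea only; the extraction prompt, the yes/no verification prompt, the `ask_llm` parameter, and the `toy_llm` stand-in are all assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-step self-detection pipeline from the abstract.
# All prompts and the `ask_llm`/`toy_llm` stubs are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """One atomic fact: (subject entity, relation, object entity)."""
    subject: str
    relation: str
    obj: str


def response_to_graph(response: str, ask_llm) -> list[Triple]:
    """Step (i): convert a free-text LLM response into knowledge-graph triples."""
    prompt = (
        "Extract all atomic facts from the text below as "
        "'subject | relation | object' lines.\n\n" + response
    )
    triples = []
    for line in ask_llm(prompt).strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(Triple(*parts))
    return triples


def hallucination_score(triples: list[Triple], ask_llm) -> float:
    """Step (ii): fraction of triples the model itself judges unsupported."""
    if not triples:
        return 0.0
    flagged = 0
    for t in triples:
        verdict = ask_llm(
            f"Is the fact ({t.subject}, {t.relation}, {t.obj}) true? "
            "Answer yes or no."
        )
        if verdict.strip().lower().startswith("no"):
            flagged += 1
    return flagged / len(triples)


# Toy stand-in for an LLM call, so the sketch runs end to end.
def toy_llm(prompt: str) -> str:
    if prompt.startswith("Extract"):
        return "Paris | capital of | France\nParis | capital of | Italy"
    return "no" if "Italy" in prompt else "yes"


score = hallucination_score(response_to_graph("...", toy_llm), toy_llm)
print(score)  # → 0.5: one of the two toy triples is flagged as false
```

Structuring the response as atomic triples before verification is what lets the model check each fact in isolation, which the paper reports improves self-detection even when the original output contains inaccuracies.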


Cite This Paper

Year: 2025
Category: cs.CL
APA

Kale, S., & Alfeo, A. L. (2025). Lie to Me: Knowledge Graphs for Robust Hallucination Self-Detection in LLMs. arXiv preprint arXiv:2512.23547.

MLA

Kale, Sahil, and Antonio Luca Alfeo. "Lie to Me: Knowledge Graphs for Robust Hallucination Self-Detection in LLMs." arXiv preprint arXiv:2512.23547 (2025).