Hallucination Detection for Generative Large Language Models by Bayesian Sequential Estimation

Dec 20, 2023
Xiaohua Wang, Yuliang Yan, Longtao Huang, Xiaoqing Zheng, Xuanjing Huang
Abstract
Large Language Models (LLMs) have made remarkable advancements in the field of natural language generation. However, the propensity of LLMs to generate inaccurate or non-factual content, termed "hallucinations", remains a significant challenge. Current hallucination detection methods often necessitate the retrieval of a large amount of relevant evidence, thereby increasing response times. We introduce a unique framework that leverages statistical decision theory and Bayesian sequential analysis to optimize the trade-off between costs and benefits during the hallucination detection process. This approach does not require a predetermined number of observations. Instead, the analysis proceeds in a sequential manner, enabling an expeditious decision toward "belief" or "disbelief" through a stop-or-continue strategy. Extensive experiments reveal that this novel framework surpasses existing methods in both the efficiency and precision of hallucination detection. Furthermore, it requires fewer retrieval steps on average, thus decreasing response times.
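
To make the stop-or-continue idea concrete, below is a minimal Python sketch of Bayesian sequential updating for a single claim. It is an illustration of the general technique, not the paper's exact procedure: the prior, the binary observation model, the decision thresholds, and the `retrieve_evidence` and `evidence_supports` helpers are all assumptions introduced here for clarity.

```python
"""Sketch: stop-or-continue fact checking via Bayesian sequential updating.

Hypothetical helpers (not from the paper):
  retrieve_evidence(claim, step) -> str   # fetch the next piece of evidence
  evidence_supports(claim, ev)  -> bool   # does the evidence support the claim?
"""
from typing import Callable, Tuple


def sequential_fact_check(
    claim: str,
    retrieve_evidence: Callable[[str, int], str],
    evidence_supports: Callable[[str, str], bool],
    prior: float = 0.5,               # prior probability the claim is factual
    p_support_if_true: float = 0.8,   # P(evidence supports claim | claim true)
    p_support_if_false: float = 0.3,  # P(evidence supports claim | claim false)
    accept_threshold: float = 0.95,   # stop with "belief" above this posterior
    reject_threshold: float = 0.05,   # stop with "disbelief" below this posterior
    max_steps: int = 10,              # retrieval budget
) -> Tuple[str, float, int]:
    """Retrieve evidence one piece at a time, updating the posterior that the
    claim is factual, until a decision threshold is crossed or the budget runs out."""
    posterior = prior
    for step in range(1, max_steps + 1):
        evidence = retrieve_evidence(claim, step)
        supports = evidence_supports(claim, evidence)

        # Bayes update with a simple binary observation model.
        if supports:
            lik_true, lik_false = p_support_if_true, p_support_if_false
        else:
            lik_true, lik_false = 1.0 - p_support_if_true, 1.0 - p_support_if_false
        numerator = lik_true * posterior
        posterior = numerator / (numerator + lik_false * (1.0 - posterior))

        # Stop-or-continue rule: decide as soon as the posterior is extreme enough.
        if posterior >= accept_threshold:
            return "belief", posterior, step
        if posterior <= reject_threshold:
            return "disbelief", posterior, step

    # Budget exhausted: fall back to the more likely hypothesis.
    return ("belief" if posterior >= 0.5 else "disbelief"), posterior, max_steps
```

In a full cost-benefit treatment, the two thresholds would not be fixed constants but would follow from the relative costs of a false "belief", a false "disbelief", and each additional retrieval step, which is the trade-off the abstract refers to.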
Type
Publication
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing