
UQLM: A Python Package for Uncertainty Quantification in Large Language Models

Authors
Dylan Bouchard, Mohit Singh Chauhan, David Skarbrevik, Ho-Kyeong Ra, Viren Bajaj, Zeya Ahmad
Research Topics
Machine Learning
Paper Information
  • Journal: Journal of Machine Learning Research
Abstract

Hallucinations, instances in which Large Language Models (LLMs) generate false or misleading content, pose a significant challenge to the safety and trustworthiness of downstream applications. We introduce UQLM, a Python package for LLM hallucination detection using state-of-the-art uncertainty quantification (UQ) techniques. The toolkit offers a suite of UQ-based scorers that compute response-level confidence scores ranging from 0 to 1, providing an off-the-shelf solution for UQ-based hallucination detection that can be easily integrated to enhance the reliability of LLM outputs.

Citation Information
APA Format
Bouchard, D., Chauhan, M. S., Skarbrevik, D., Ra, H.-K., Bajaj, V., & Ahmad, Z. UQLM: A Python Package for Uncertainty Quantification in Large Language Models. Journal of Machine Learning Research.
BibTeX Format
@article{paper1001,
  title   = {UQLM: A Python Package for Uncertainty Quantification in Large Language Models},
  author  = {Dylan Bouchard and Mohit Singh Chauhan and David Skarbrevik and Ho-Kyeong Ra and Viren Bajaj and Zeya Ahmad},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v27/25-1557.html}
}