Q-GPT

Quantum-Enhanced Confidence Estimation for Language Models

Know when your LLM is confident, and when it's guessing.


🎯 What is Q-GPT?

Q-GPT is a quantum neural network head that attaches to any language model and estimates how confident the model is in its response. It helps you detect when the model might be "hallucinating" or making up information.

The Problem

Large Language Models (LLMs) always produce fluent text, even when they don't know the answer. They sound confident even when they're wrong. This makes it hard to trust their outputs in critical applications.

The Solution

Q-GPT analyzes the model's internal hidden states with a variational quantum circuit, which is designed to capture complex correlations and uncertainty that a comparably small classical network might miss. The result: a confidence score that tells you whether to trust the response.


🧠 How It Works

Q-GPT Architecture

LLM Hidden States              Quantum Circuit
[2880 dimensions]              [4 qubits]
        │
        ▼
┌─────────────┐                ┌─────────────────┐
│  Compress   │ ─────────────► │   RY   RZ       │
│  to 4 dims  │                │   │    │        │  Layer 1
└─────────────┘                │   Rot ─●─ CNOT  │
                               ├─────────────────┤
                               │   Rot ─●─ CNOT  │  Layer 2
                               ├─────────────────┤
                               │   Rot ─●─ CNOT  │  Layer 3
                               └─────────────────┘
                                        │
                                        ▼
                               ┌─────────────────┐
                               │  Measure ⟨Z⟩    │
                               │  on each qubit  │
                               └─────────────────┘
                                        │
                                        ▼
                               ┌─────────────────┐
                               │   Confidence    │
                               │   0.0 – 1.0     │
                               └─────────────────┘

Step by Step:

  1. Extract Hidden States – When the LLM generates a response, we capture its internal representation (hidden states from the last layer).

  2. Compress – The high-dimensional hidden states (2880 dimensions for GPT-OSS) are compressed to 4 values using a small neural network.

  3. Quantum Encoding – These 4 values are encoded into quantum states using rotation gates (RY, RZ). Each value controls the angle of rotation for one qubit.

  4. Variational Layers – The qubits pass through multiple layers of:

     • Rotation gates (trainable parameters that learn patterns)
     • CNOT gates (create entanglement between qubits)

  5. Measurement – We measure the expectation value ⟨Z⟩ of each qubit, giving us 4 numbers between -1 and +1.

  6. Confidence Output – A final layer converts these measurements into a confidence score (0–1) and an uncertainty estimate.
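
To make steps 3-5 concrete, here is a minimal PennyLane sketch of such a circuit. It illustrates the layout described above rather than the exact code in quantum_head.py; the function and variable names are assumptions.

import pennylane as qml
import numpy as np

N_QUBITS, N_LAYERS = 4, 3
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def confidence_circuit(features, weights):
    # Step 3: angle-encode the 4 compressed features, one qubit per feature
    for q in range(N_QUBITS):
        qml.RY(features[q], wires=q)
        qml.RZ(features[q], wires=q)
    # Step 4: variational layers of trainable rotations plus a CNOT entangling ring
    for layer in range(N_LAYERS):
        for q in range(N_QUBITS):
            qml.Rot(*weights[layer, q], wires=q)
        for q in range(N_QUBITS):
            qml.CNOT(wires=[q, (q + 1) % N_QUBITS])
    # Step 5: measure the expectation value of Pauli-Z on each qubit (values in [-1, +1])
    return [qml.expval(qml.PauliZ(q)) for q in range(N_QUBITS)]

# Smoke test with random features and weights (weights: layers x qubits x 3 Euler angles)
features = np.random.uniform(0, np.pi, size=N_QUBITS)
weights = np.random.uniform(0, 2 * np.pi, size=(N_LAYERS, N_QUBITS, 3))
print(confidence_circuit(features, weights))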

Why Quantum?

  • Entanglement captures complex correlations in the data that classical networks struggle with
  • Superposition allows exploring multiple states simultaneously
  • Inherently probabilistic – quantum measurement naturally represents uncertainty
  • Compact representation – 4 qubits span a 2^4 = 16-dimensional state space

📊 What You Get

Output             Description
confidence         Score from 0.0 to 1.0 – how sure the model is
uncertainty        Quantum-derived uncertainty measure
should_refuse      Boolean – True if confidence < 0.3 (model should decline to answer)
confidence_label   Human-readable: "very high", "high", "moderate", "low", "very low"
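
As a rough illustration of how these fields fit together, here is a hypothetical post-processing sketch. Only the 0.3 refusal threshold comes from the table above; the label bucket edges are assumptions.

def summarize(confidence: float) -> dict:
    # Map the scalar confidence to the output fields described above.
    buckets = [(0.9, "very high"), (0.7, "high"), (0.5, "moderate"), (0.3, "low"), (0.0, "very low")]
    label = next(name for edge, name in buckets if confidence >= edge)
    return {
        "confidence": confidence,
        "should_refuse": confidence < 0.3,  # below this, the model should decline to answer
        "confidence_label": label,
    }

print(summarize(0.82))  # {'confidence': 0.82, 'should_refuse': False, 'confidence_label': 'high'}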

💻 Usage

Installation

pip install pennylane torch transformers

Quick Start

from quantum_head import load_qgpt

# Load model with quantum head
model, tokenizer = load_qgpt("squ11z1/gpt-oss-9b-reasoning")

# Prepare input
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate with confidence
outputs = model.generate_with_confidence(
    inputs.input_ids,
    max_new_tokens=50
)

# Check results
print(f"Response: {tokenizer.decode(outputs['sequences'][0])}")
print(f"Confidence: {outputs['confidence_label']}")  # "high"
print(f"Should refuse: {outputs['should_refuse']}")  # False

Using Just the Quantum Head

from quantum_head import QuantumHead
import torch

# Create quantum head for your model's hidden size
head = QuantumHead(hidden_size=2880)

# Get hidden states from your model
# hidden_states shape: [batch_size, hidden_size]
hidden_states = torch.randn(1, 2880)

# Get confidence
output = head(hidden_states)
print(f"Confidence: {output['confidence'].item():.2%}")

🎓 Training the Quantum Head

The quantum head can be trained on examples where you know whether the model was correct:

from train import train_quantum_head

train_quantum_head(
    model_name="squ11z1/gpt-oss-9b-reasoning",
    train_data_path="train_data.jsonl",  # {text, confidence, is_correct}
    epochs=3,
)

Training data format (JSONL):

{"text": "What is 2+2? The answer is 4.", "confidence": 0.95, "is_correct": true}
{"text": "The moon is made of cheese.", "confidence": 0.2, "is_correct": false}

πŸ“ Files

File Description
quantum_head.py Main implementation (QuantumHead, QGPT, load_qgpt)
train.py Training script for the quantum head
__init__.py Package initialization

🔬 Technical Details

Parameter              Value
Qubits                 4
Variational Layers     3
Trainable Parameters   ~2,000 (quantum) + ~200,000 (classical)
Framework              PennyLane + PyTorch
Fallback               Classical approximation if PennyLane is unavailable
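
If you want to verify the parameter count on your own install, a quick check like the following works for any torch.nn.Module head (the exact split between quantum and classical parameters depends on the implementation in quantum_head.py):

from quantum_head import QuantumHead

head = QuantumHead(hidden_size=2880)
total = sum(p.numel() for p in head.parameters() if p.requires_grad)
print(f"Trainable parameters: {total:,}")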

⚠️ Limitations

  • Not perfect – confidence estimation is inherently uncertain
  • Training data dependent – quality depends on the training examples
  • Simulation – currently runs on a quantum simulator, not real hardware
  • Latency – adds ~10-50 ms per inference (quantum circuit execution)

📖 Citation

@misc{qgpt2026,
  title={Q-GPT: Quantum-Enhanced Confidence Estimation for Language Models},
  author={squ11z1},
  year={2026},
  url={https://huggingface.co/squ11z1/Q-GPT}
}

πŸ™ Acknowledgments


Pro Mundi Vita
