Single-forward-pass epistemic uncertainty for LLMs. One API call gives you the answer and a calibrated risk score. No ensemble, no extra calls, no guesswork.
POST to /v1/ask with your question. The model generates an answer — one forward pass, same as normal inference.
Three auxiliary heads (FAH, CWMI, ESR) read the model's internal hidden states. A zero-parameter truth direction probe detects logical contradictions from the geometry alone.
You get the answer plus epsilon (overall risk), LOGIC (contradiction score), and a 4-dimensional uncertainty breakdown. Route confidently.
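The request/response flow above can be sketched as follows. This is a minimal illustration, not the official client: the `/v1/ask` path comes from the docs above, but the exact JSON keys (`question`, `answer`, `epsilon`, `logic`, `uncertainty`) and the sample values are assumptions — check the API reference for the real schema.

```python
import json

# Hypothetical request body for POST /v1/ask (key name assumed):
request_body = json.dumps({"question": "What year did the Berlin Wall fall?"})

# Hypothetical response, shaped after the fields described above:
# answer, epsilon (overall risk), logic (contradiction score), and a
# 4-dimensional uncertainty breakdown. All keys and values illustrative.
response = json.loads("""
{
  "answer": "1989",
  "epsilon": 0.12,
  "logic": 0.03,
  "uncertainty": {"aleatoric": 0.05, "epistemic": 0.04,
                  "lexical": 0.02, "structural": 0.01}
}
""")

print(response["answer"])                 # the generated answer
print(response["epsilon"])                # overall risk, one forward pass
print(len(response["uncertainty"]))       # 4 uncertainty dimensions
```

One POST, one forward pass: the answer and every risk signal arrive in the same response body.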
Route uncertain answers to web search. Deliver confident answers directly. Your agent knows when to verify before responding.
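The routing decision can be a one-line threshold check. A minimal sketch, assuming epsilon is a risk score in [0, 1]; the 0.35 cutoff and field name are illustrative and should be tuned on your own validation data.

```python
EPSILON_THRESHOLD = 0.35  # assumed cutoff; tune on a validation set

def route(response: dict) -> str:
    """Decide whether to verify an answer before delivering it."""
    if response["epsilon"] > EPSILON_THRESHOLD:
        return "web_search"   # uncertain: verify against external sources
    return "deliver"          # confident: return the answer directly

print(route({"epsilon": 0.62}))  # high risk -> web_search
print(route({"epsilon": 0.08}))  # low risk  -> deliver
```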
Score every LLM response before showing it to users. Flag high-risk outputs for human review. Catch 70% of hallucinations automatically.
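A moderation gate over a batch of scored responses might look like the sketch below: deliver low-risk outputs automatically and queue the rest for human review. The field names and the 0.5 cutoff are assumptions for illustration.

```python
def flag_for_review(scored: list[dict], cutoff: float = 0.5):
    """Split scored responses into auto-deliver and human-review queues."""
    deliver = [r for r in scored if r["epsilon"] <= cutoff]
    review = [r for r in scored if r["epsilon"] > cutoff]
    return deliver, review

batch = [
    {"answer": "Paris", "epsilon": 0.10},   # low risk: ship it
    {"answer": "42 BCE", "epsilon": 0.81},  # high risk: hold for review
]
deliver, review = flag_for_review(batch)
print(len(deliver), len(review))  # 1 1
```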
Audit AI-generated documents for epistemic risk. Flag uncertain claims in contracts, reports, and regulatory filings before they go out.
Score the model's confidence in its own bug analysis. A high LOGIC score means the reasoning may be contradictory, so flag it for human review.