Picture a flock of birds in flight.
There’s no leader. No central command. Each bird aligns with its neighbors—matching direction, adjusting speed, maintaining coherence through purely local coordination. The result is global order emerging from local consistency.
Now imagine one bird flying with the same conviction as the others. Its wingbeats are confident. Its speed is correct. But its direction doesn’t match its neighbors. It’s the red bird.
It’s not lost. It’s not hesitating. It simply doesn’t belong to the flock.
Hallucinations in LLMs are red birds.
The problem we’re actually trying to solve
LLMs generate fluent, confident text that may contain fabricated information. They invent legal cases that don’t exist. They cite papers that were never written. They state facts with the same tone whether those facts are true or completely made up.
The standard approach to detecting this is to ask another language model to check the output. LLM-as-judge. You can see the problem immediately: we’re using a system that hallucinates to detect hallucinations. It’s like asking someone who can’t distinguish colors to sort paint samples. They’ll give you an answer. It might even be right sometimes. But they’re not actually seeing what you need them to see.
The question we asked was different: can we detect hallucinations from the geometric structure of the text itself, without needing another language model’s opinion?
What embeddings actually do
Before getting to the detection method, I want to step back and establish what we’re working with.
When you feed text into a sentence encoder, you get back a vector—a point in high-dimensional space. Texts that are semantically similar land near each other. Texts that are unrelated land far apart. This is what contrastive training optimizes for. But there’s a more subtle structure here than just “similar things are close.”
Consider what happens when you embed a question and its answer. The question lands somewhere in embedding space. The answer lands somewhere else. The vector connecting them—what we call the displacement—points in a particular direction. It is a vector: it has both a magnitude and an angle.
We also observed that for grounded responses within a specific domain, these displacement vectors point in consistent directions. What they share is their angle.
If you ask five similar questions and get five grounded answers, the displacements from question to answer will be roughly parallel. Not identical—the magnitudes vary, the exact angles differ slightly—but the overall direction is consistent.
When a model hallucinates, something different happens. The response still lands somewhere in embedding space. It’s still fluent. It still sounds like an answer. But the displacement doesn’t follow the local pattern. It points elsewhere. A vector with a totally different angle.
The red bird flies confidently. But not with the flock. It heads off in its own direction, at an angle totally different from the rest of the birds.
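To make the displacement idea concrete, here is a minimal sketch using one of the encoders tested later (all-mpnet-base-v2). The QA pairs and helper name are illustrative, not taken from the paper’s setup:

```python
# Minimal sketch: displacement vectors from question to answer embeddings.
# Model choice and QA pairs are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

qa_pairs = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("What is the capital of Spain?", "The capital of Spain is Madrid."),
]

def displacement(question: str, answer: str) -> np.ndarray:
    """Vector pointing from the question embedding to the answer embedding."""
    q, a = model.encode([question, answer], normalize_embeddings=True)
    return a - q

d1, d2 = (displacement(q, a) for q, a in qa_pairs)

# Cosine between the two displacements: grounded answers from the same
# domain should come out roughly parallel (value well above zero).
cos = float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
print(f"displacement alignment: {cos:.3f}")
```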
Displacement Consistency (DC)
We formalize this as Displacement Consistency (DC). The idea is simple:
- Build a reference set of grounded question-answer pairs from your domain
- For a new question-answer pair, find the neighboring questions in the reference set
- Compute the mean displacement direction of those neighbors
- Measure how well the new displacement aligns with that mean direction
Grounded responses align well. Hallucinated responses don’t. That’s it. One cosine similarity. No source documents needed at inference time. No multiple generations. No model internals.
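As a rough sketch of the four steps above (names and details are mine, not the paper’s exact formulation), the score might look like this:

```python
# Sketch of the Displacement Consistency (DC) score described above.
# Function and parameter names are illustrative.
import numpy as np

def dc_score(
    q_new: np.ndarray,          # embedding of the new question
    a_new: np.ndarray,          # embedding of the generated answer
    ref_questions: np.ndarray,  # (N, d) embeddings of reference questions
    ref_answers: np.ndarray,    # (N, d) embeddings of their grounded answers
    k: int = 10,
) -> float:
    # 1. Find the k reference questions nearest to the new question (cosine).
    sims = ref_questions @ q_new / (
        np.linalg.norm(ref_questions, axis=1) * np.linalg.norm(q_new) + 1e-12
    )
    neighbors = np.argsort(-sims)[:k]

    # 2. Mean displacement direction of those neighbors.
    ref_disp = ref_answers[neighbors] - ref_questions[neighbors]
    mean_dir = ref_disp.mean(axis=0)

    # 3. Cosine between the new displacement and the mean direction.
    new_disp = a_new - q_new
    return float(
        np.dot(new_disp, mean_dir)
        / (np.linalg.norm(new_disp) * np.linalg.norm(mean_dir) + 1e-12)
    )
```

High scores mean the new displacement follows the local pattern; low scores mean it points elsewhere.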
And it works remarkably well. Across five architecturally distinct embedding models, across multiple hallucination benchmarks including HaluEval and TruthfulQA, DC achieves near-perfect discrimination. The distributions barely overlap.
The catch: domain locality
We tested DC across five embedding models chosen to span architectural diversity: MPNet-based contrastive fine-tuning (all-mpnet-base-v2), weakly-supervised pre-training (E5-large-v2), instruction-tuned training with hard negatives (BGE-large-en-v1.5), encoder-decoder adaptation (GTR-T5-large), and efficient long-context architectures (nomic-embed-text-v1.5). If DC only worked with one architecture, it might be an artifact of that specific model. Consistent results across architecturally distinct models would suggest the structure is fundamental.
The results were consistent. DC achieved AUROC of 1.0 across all five models on our synthetic benchmark. But synthetic benchmarks can be misleading—perhaps domain-shuffled responses are simply too easy to detect.
So we validated on established hallucination datasets: HaluEval-QA, which contains LLM-generated hallucinations specifically designed to be subtle; HaluEval-Dialogue, with responses that deviate from conversation context; and TruthfulQA, which tests common misconceptions that humans frequently believe.
DC maintained perfect discrimination on all of them. Zero degradation from synthetic to realistic benchmarks.
For comparison, ratio-based methods that measure where responses land relative to queries (rather than the direction they move) achieved AUROC around 0.70–0.81. The gap—approximately 0.20 absolute AUROC—is substantial and consistent across all models tested.
The score distributions tell the story visually. Grounded responses cluster tightly at high DC values (around 0.9). Hallucinated responses spread at lower values (around 0.3). The distributions barely overlap.
DC achieves perfect detection within a narrow domain. But if you try to use a reference set from one domain to detect hallucinations in another domain, performance drops to random—AUROC around 0.50. This is telling us something fundamental about how embeddings encode grounding. It is like watching different flocks in the sky: each flock flies in its own direction.
For embeddings, the easiest way to understand this is through the image of what geometry calls a “fiber bundle”.

The surface in Figure 1 is the base manifold representing all possible questions. At each point on this surface, there’s a fiber: a line pointing in the direction that grounded responses move. Within any local region of the surface (one specific domain), all the fibers point roughly the same way. That’s why DC works so well locally.
But globally, across different regions, the fibers point in different directions. The “grounded direction” for legal questions is different from the “grounded direction” for medical questions. There’s no single global pattern. Only local coherence.
Now look at the following video: bird flight paths connecting Europe and Africa. You can see the fiber bundles. Different groups (medium/large birds, small birds, insects) follow different directions.
In differential geometry, this structure is called local triviality without global triviality. Each patch of the manifold looks simple and consistent internally. But the patches can’t be stitched together into one global coordinate system.
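For readers who want the formal version, this is the standard textbook statement of local triviality (notation is the usual one, not taken from the paper): over each small patch of the base, the bundle looks like a simple product, even if no single product structure works globally.

```latex
% Local triviality (standard definition): for a bundle \pi : E \to B with
% fiber F, every base point has a neighborhood over which the bundle is a product.
\[
\forall\, b \in B \ \exists\, U \ni b \text{ open and a homeomorphism }
\varphi : \pi^{-1}(U) \to U \times F
\ \text{ such that } \ \mathrm{pr}_U \circ \varphi = \pi|_{\pi^{-1}(U)} .
\]
```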
This has a notable implication:
grounding is not a universal geometric property
There’s no single “truthfulness direction” in embedding space. Each domain—each type of task, each LLM—develops its own displacement pattern during training. The patterns are real and detectable, but they’re domain-specific. Birds do not migrate in the same direction.
What this means practically
For deployment, the domain-locality finding means you need a small calibration set (around 100 examples) matched to your specific use case. A legal Q&A system needs legal examples. A medical chatbot needs medical examples. This is a one-time upfront cost—the calibration happens offline—but it can’t be skipped.
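As a rough illustration of that workflow (not the paper’s released code), calibration and inference might look like the snippet below. It reuses the dc_score sketch from earlier; the threshold of 0.5 is a placeholder chosen to sit between the grounded (~0.9) and hallucinated (~0.3) score clusters mentioned above and would need tuning for your domain.

```python
# Hypothetical deployment flow: calibrate once per domain, then score at inference.
# Assumes dc_score from the earlier sketch; dataset and threshold are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

# One-time, offline: embed ~100 grounded QA pairs from *your* domain.
calibration = [("...", "...")]  # replace with real domain examples
ref_q = model.encode([q for q, _ in calibration], normalize_embeddings=True)
ref_a = model.encode([a for _, a in calibration], normalize_embeddings=True)

def flag_hallucination(question: str, answer: str, threshold: float = 0.5) -> bool:
    """Flag a response whose displacement does not align with the local pattern."""
    q, a = model.encode([question, answer], normalize_embeddings=True)
    return dc_score(q, a, ref_q, ref_a) < threshold
```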
For understanding embeddings, the finding suggests these models encode richer structure than we typically assume. They’re not just learning “similarity.” They’re learning domain-specific mappings whose disruption reliably signals hallucination.
The red bird doesn’t declare itself
The hallucinated response has no marker that says “I’m fabricated.” It’s fluent. It’s confident. It looks exactly like a grounded response on every surface-level metric.
But it doesn’t move with the flock. And now we can measure that.
The geometry has been there all along, implicit in how contrastive training shapes embedding space. We’re just learning to read it.
Notes:
You can find the complete paper at https://cert-framework.com/docs/research/dc-paper.
If you have any questions about the discussed topics, feel free to contact me at [email protected]