● PATENT PENDING  ·  DEPLOYED ON GOOGLE CLOUD

The Precision Safety Layer for Enterprise AI

Real-time hallucination detection, surgical auto-correction, and tamper-proof audit trails for LLM outputs. Zero LLM dependency. Sub-200ms latency.

  • 96.9% Production Accuracy (63/65 benchmark suite)
  • 52/52 Adversarial Detection (entity, numeric, negation)
  • <200ms Median Latency (on Google Cloud Run)
  • $0 LLM API Cost (zero external model calls)

Five-Layer Defense-in-Depth Pipeline

Every LLM response passes through five complementary detection layers with OR-gate logic — no single point of failure.

Layer 1A · CRF
Context Redshift Fidelity
Physics-inspired semantic trajectory analysis. Tracks cosine drift from query through response chunks using EMA smoothing and Helmholtz decomposition. Catches gradual topic drift before it compounds.
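The drift signal this layer describes can be sketched in a few lines. The toy 2-D vectors and the EMA weight below are illustrative assumptions, not Tensalis internals:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def drift_series(query_vec, chunk_vecs, alpha=0.3):
    """EMA-smoothed cosine drift of each response chunk from the query.

    Drift = 1 - cosine similarity; higher means the chunk has wandered
    further from the query's topic. alpha is an assumed smoothing weight.
    """
    smoothed, ema = [], None
    for vec in chunk_vecs:
        drift = 1.0 - cosine(query_vec, vec)
        ema = drift if ema is None else alpha * drift + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed

# Toy vectors: chunks rotate progressively away from the query direction,
# so the smoothed drift rises monotonically.
query = np.array([1.0, 0.0])
chunks = [np.array([np.cos(t), np.sin(t)]) for t in (0.1, 0.5, 1.2)]
series = drift_series(query, chunks)
```

Because the EMA smooths each chunk's raw drift, a single noisy chunk does not trip the detector; only sustained movement away from the query accumulates.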
Layer 1B · CBF
Context Blueshift Fidelity
Information density analysis comparing response expansion ratio against context. Detects fabricated detail — when the LLM invents specifics not grounded in source material.
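A minimal sketch of the density idea: count how many content tokens in the response never appear in the context. The whitespace tokenizer and the 0.5 threshold are assumptions for illustration only:

```python
def expansion_ratio(response: str, context: str) -> float:
    """Fraction of response content tokens absent from the context.

    A high ratio suggests fabricated detail. Tokenization and the
    length filter below are illustrative assumptions.
    """
    ctx_tokens = set(context.lower().split())
    resp_tokens = [t for t in response.lower().split() if len(t) > 3]
    if not resp_tokens:
        return 0.0
    novel = [t for t in resp_tokens if t not in ctx_tokens]
    return len(novel) / len(resp_tokens)

context = "Returns are accepted within thirty days of purchase."
grounded = expansion_ratio("Returns accepted within thirty days.", context)
fabricated = expansion_ratio(
    "Includes lifetime warranty and free gold plating.", context
)
```

A grounded paraphrase scores near zero; a response stuffed with details the context never mentions scores near one.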
Layer 2 · NSC
Atomic Fact Verification
Decomposes responses into typed atomic claims (numeric, currency, date, duration, entity, negation, relation). Each fact is verified against context using type-specific extractors plus NLI fallback.
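The decomposition step can be sketched with simple typed extractors. The actual claim types and patterns are not public, so the regexes below are assumptions:

```python
import re

# Illustrative typed extractors, one pattern per claim type.
EXTRACTORS = {
    "currency": re.compile(r"[$€£]\s?\d[\d,]*(?:\.\d+)?"),
    "numeric":  re.compile(r"\b\d+(?:\.\d+)?%?\b"),
    "duration": re.compile(r"\b\d+\s+(?:days?|weeks?|months?|years?)\b"),
    "negation": re.compile(r"\b(?:not|no|never|cannot)\b", re.IGNORECASE),
}

def atomic_facts(text: str) -> list[tuple[str, str]]:
    """Decompose a response into (type, span) claims for verification."""
    facts = []
    for kind, pattern in EXTRACTORS.items():
        facts.extend((kind, m.group(0)) for m in pattern.finditer(text))
    return facts

facts = atomic_facts("You can return items within 60 days, but not after.")
```

Each extracted span is then checked against the context independently, which is what makes per-fact verdicts and per-fact corrections possible downstream.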
Layer 3 · Correction
Surgical Auto-Correction
Contradicted facts are surgically replaced with context-grounded values. Entity swaps, numeric errors, and negation flips are corrected in-place — preserving the original response structure.
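In-place replacement of one contradicted span can be sketched as a single substitution; the real pipeline operates per typed fact, and this helper is a simplification:

```python
import re

def surgically_correct(response: str, contradicted: str, grounded: str) -> str:
    """Replace one contradicted span with its context-grounded value,
    leaving the rest of the sentence untouched."""
    return re.sub(re.escape(contradicted), grounded, response, count=1)

original = "You can return items within 60 days."
corrected = surgically_correct(original, "60 days", "30 days")
# corrected → "You can return items within 30 days."
```

Because only the contradicted span is rewritten, tone, phrasing, and surrounding structure survive intact, which is what the /v1/rag example further down shows end to end.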
Layer 4 · Interpretability
Evidence Chains
Every verdict includes human-readable explanations: which facts were checked, what evidence supports or contradicts each claim, and exactly why the response was flagged or approved.
Layer 5 · Audit
Hash-Chained Ledger
Append-only JSONL audit trail with SHA-256 hash chaining for tamper detection. Every verification is recorded with full per-fact breakdown, timing, and deterministic audit IDs.
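The tamper-detection property comes from each record hashing over its predecessor's hash. A minimal sketch, with field names that are illustrative rather than the service's actual schema:

```python
import hashlib
import json

def append_record(ledger: list[dict], record: dict) -> dict:
    """Append a record whose SHA-256 hash chains over the previous
    entry's hash, so editing any earlier line breaks every hash after it."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"prev_hash": prev, **record}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body

def verify_chain(ledger: list[dict]) -> bool:
    """Recompute every hash; any tampering surfaces as a mismatch."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_record(ledger, {"audit_id": "a1", "is_trustworthy": False})
append_record(ledger, {"audit_id": "a2", "is_trustworthy": True})
```

Flipping any field in an earlier record invalidates its hash and, transitively, every hash after it, which is what the GET /v1/ledger/verify integrity check relies on.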

Integrate in 5 Minutes

Single API call. Send your LLM's response plus the retrieved context — get back a verified, corrected response with full evidence chains.

POST /v1/rag

The primary verification endpoint. Accepts any LLM-generated response with its source context and returns a complete trust assessment.

  • Per-fact verification with typed extractors
  • Confidence scores calibrated to accuracy
  • Surgical corrections with original preserved
  • Full timing breakdown per detection layer
  • Deterministic audit ID for compliance trail
  • Works with any LLM provider (OpenAI, Anthropic, Cohere, etc.)
import requests

response = requests.post(
    "https://tensalis-engine-23557189636.us-central1.run.app/v1/rag",
    json={
        "query": "What is the return policy?",
        "context_docs": [
            "Returns accepted within 30 days.",
            "Items must have original tags."
        ],
        "response": "You can return items within 60 days.",
        "auto_correct": True
    }
)

result = response.json()
# result["is_trustworthy"]     → False
# result["severity"]           → "high"
# result["facts_contradicted"] → 1
# result["was_corrected"]      → True
# result["response"]           → "You can return items within 30 days."

How Tensalis Compares

Most observability tools flag hallucinations. Tensalis detects AND corrects them — with no LLM dependency.

Capability | Tensalis v6.1.2 | Typical Observability Tools
Contradiction Detection | 52/52 adversarial detection | Embedding similarity; misses contradictions with high cosine overlap
Auto-Correction | Surgical fact replacement | Not available (detection only)
Latency | <200ms median | 1–5s (LLM-as-judge requires an inference call)
Cost per Verification | $0 LLM cost | $0.01–0.05 per LLM judge call
LLM Dependency | None (deterministic) | Requires LLM API (OpenAI, etc.)
Audit Trail | Hash-chained JSONL ledger | Log aggregation (no tamper detection)
Fact Granularity | Per-fact typed verification | Whole-response scoring

Built for Regulated Industries

Where factual accuracy isn't a nice-to-have — it's a compliance requirement.

⚕️
Healthcare
Catch dosage errors, contraindication hallucinations, and fabricated clinical guidelines before they reach patients. HIPAA-compatible audit trails.
💰
Financial Services
Ensure investment summaries match prospectuses. Detect "4.5%" vs "45%" numeric drift, fabricated terms, and contradicted rate conditions.
⚖️
Legal & Compliance
Prevent AI from flipping "mandatory" to "optional" in policy summaries. Verify contractual language fidelity with per-clause evidence chains.
🏢
Enterprise AI Applications
Drop-in verification layer for any RAG pipeline. Works with LangChain, LlamaIndex, custom orchestrations — any LLM provider.
🤖
AI Consultancies
Offer clients verifiable accuracy guarantees on AI deployments. White-label Tensalis as your trust layer with branded audit reports.
🛡️
Customer Support
Stop chatbots from inventing return policies, fabricating product specs, or contradicting published terms. Real-time correction before the customer sees it.

Production Infrastructure

Deployed on Google Cloud with enterprise-grade reliability.

  • Google Cloud Run (auto-scaling)
  • FastAPI + Uvicorn
  • MiniLM-L6-v2 Embeddings
  • DeBERTa NLI
  • spaCy NER
  • Zero GPU required
API: https://tensalis-engine-23557189636.us-central1.run.app

API Endpoints

Method | Endpoint | Description
POST | /v1/rag | Full multi-layer hallucination detection and correction
POST | /v1/verify | Legacy claim-vs-reference verification
GET | /v1/ledger/records | Query audit trail records (filterable)
GET | /v1/ledger/stats | Aggregate analytics (trust rate, latency, distributions)
GET | /v1/ledger/verify | Hash chain integrity check
GET | /v1/rag/health | Pipeline health and component status