Why NoHallucinations AI?

EDFL Framework

Uses the Expectation-level Decompression Law to provide mathematically bounded hallucination risk assessment with transparent decision logic.

Real-time Assessment

Fast API calls return ANSWER/REFUSE decisions in seconds. Integrate seamlessly into any RAG application workflow.

No Retraining

Works with existing models. No fine-tuning or retraining required. Just plug into your current RAG architecture.

Configurable Risk

Set custom hallucination thresholds (1%, 5%, 10%) based on your application's safety requirements.

Safety First

Conservative decision making with safety margins. Protects user trust by refusing when evidence is insufficient.

Transparent Metrics

Get confidence scores, risk bounds, and detailed rationales for every decision. Full audit trail included.

The Science Behind the Solution

Researchers at Harvard and Hassana Labs showed that AI hallucinations aren't bugs to be fixed but mathematically inevitable consequences of how transformers process information: models are "Bayesian in expectation, not in realization." This insight yields precise formulas for detecting when hallucinations will occur, transforming an unsolvable problem into a measurable one.

Read the research paper →

95%+

Accuracy in risk assessment

<3s

Average response time

0

Model retraining required

RAG architectures supported

Live Demo

Test the API with your own factual questions. See how it decides to ANSWER or REFUSE based on hallucination risk.

Note: Evaluation takes 3-5 seconds due to OpenAI API calls

API Documentation

Quick Start

Get started with the Factual QA API:

1. Request access:

Access tokens are manually granted. Join our waitlist or contact us directly for API access.

2. Use your access token to call the API:

curl -X POST "http://localhost:5000/evaluate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -d '{
    "prompt": "Who invented the telephone?",
    "h_star": 0.05,
    "skeleton_policy": "closed_book"
  }'

Note: All API endpoints require authentication with a valid access token.

Response:

{ "decision": "ANSWER", "hallucination_risk_bound": 0.0234, "information_budget": 3.756, "confidence_score": 2.483, "rationale": "ANSWER: Information budget exceeds B2T threshold..." }

API Endpoints

GET /

Health check endpoint

POST /evaluate

Evaluate a single factual QA prompt (requires authentication)

{ "prompt": "Your factual question", "skeleton_policy": "closed_book", // or "evidence_based" "fields_to_erase": ["Evidence"], // for evidence_based mode "n_samples": 5, // number of samples "m": 6, // number of skeletons "h_star": 0.05 // target hallucination rate }

POST /evaluate/batch

Evaluate multiple prompts efficiently (requires authentication)
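
The exact batch payload schema is not shown above. As a sketch, assuming /evaluate/batch accepts a list of the same per-prompt objects used by /evaluate (a hypothetical shape; verify against the endpoint's actual schema), a call might look like this:

import requests

# Hypothetical batch payload: a list of the same objects accepted by /evaluate.
# The real schema may differ; this only illustrates the intended usage pattern.
batch_payload = {
    "prompts": [
        {"prompt": "Who invented the telephone?", "h_star": 0.05, "skeleton_policy": "closed_book"},
        {"prompt": "In what year did Apollo 11 land on the Moon?", "h_star": 0.05, "skeleton_policy": "closed_book"},
    ]
}

response = requests.post(
    "http://localhost:5000/evaluate/batch",
    headers={
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
        "Content-Type": "application/json",
    },
    json=batch_payload,
    timeout=120,
)
print(response.json())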

RAG Integration

Here's how to integrate NoHallucinations AI into your RAG application:

import requests
from typing import List

def safe_rag_answer(question: str, retrieved_docs: List[str]) -> str:
    """RAG with hallucination risk safety check."""
    # Build evidence-based prompt
    evidence = "\n".join(retrieved_docs)
    prompt = f"""Task: Answer based on evidence.
Question: {question}
Evidence: {evidence}
Constraints: Answer only if sufficient, otherwise refuse."""

    # Check hallucination risk
    headers = {
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
        "Content-Type": "application/json"
    }
    risk_response = requests.post(
        "http://localhost:5000/evaluate",
        headers=headers,
        json={
            "prompt": prompt,
            "skeleton_policy": "evidence_based",
            "fields_to_erase": ["Evidence"],
            "h_star": 0.05
        }
    )
    risk_result = risk_response.json()

    if risk_result["decision"] == "ANSWER":
        # Safe to serve answer (generate_answer is your existing LLM generation call)
        return generate_answer(prompt)
    else:
        # Too risky - refuse
        return "Insufficient reliable information available."
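
A brief usage sketch for the function above. generate_answer stands in for whatever LLM generation call your pipeline already uses, and the retrieved documents here are illustrative.

# Example call; in a real pipeline retrieved_docs would come from your retriever.
retrieved_docs = [
    "Alexander Graham Bell was awarded the first US patent for the telephone in 1876.",
    "Bell's patent 174,465 covered the method of transmitting vocal sounds telegraphically.",
]

answer = safe_rag_answer("Who invented the telephone?", retrieved_docs)
print(answer)  # Either a generated answer or the refusal message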