Preventing LLM Hallucinations
In this session, attendees learned how to make AI truly dependable in critical operations. John Bohannon, VP of Data Science at Primer, led an in-depth exploration of RAG-Verification (RAG-V), a system designed to reduce hallucination rates in Large Language Models (LLMs) by 100x. Attendees learned how RAG-V detects and corrects factual errors, using a fact-checking approach that's tailored for c...
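The listing only summarizes RAG-V at a high level, so the sketch below is a hypothetical illustration of the general retrieve-generate-verify pattern it describes, not Primer's implementation. Every name here (rag_verify, split_claims, is_supported) is an assumption, and the lexical-overlap check stands in for whatever entailment model a real verifier would use.

```python
"""Minimal sketch of a retrieve-generate-verify loop in the spirit of
RAG-Verification. Hypothetical illustration only: the session listing
does not describe Primer's actual RAG-V internals."""

from dataclasses import dataclass
from typing import Callable

# An "LLM" here is just any text-in/text-out callable, so the sketch
# stays runnable without a real model behind it.
LLM = Callable[[str], str]


@dataclass
class VerifiedAnswer:
    answer: str
    supported_claims: list[str]
    flagged_claims: list[str]  # claims the verifier could not ground


def rag_verify(
    question: str,
    retrieve: Callable[[str], list[str]],
    generate: LLM,
    split_claims: Callable[[str], list[str]],
    is_supported: Callable[[str, list[str]], bool],
) -> VerifiedAnswer:
    """Generate an answer from retrieved context, then fact-check each
    atomic claim against the same sources before returning it."""
    sources = retrieve(question)
    context = "\n".join(sources)
    draft = generate(f"Context:\n{context}\n\nQuestion: {question}")

    supported, flagged = [], []
    for claim in split_claims(draft):
        (supported if is_supported(claim, sources) else flagged).append(claim)

    # A stricter system would regenerate instead of merely flagging;
    # this sketch keeps only the grounded claims.
    return VerifiedAnswer(
        answer=" ".join(supported),
        supported_claims=supported,
        flagged_claims=flagged,
    )


if __name__ == "__main__":
    docs = ["Primer is headquartered in San Francisco."]
    result = rag_verify(
        "Where is Primer headquartered?",
        retrieve=lambda q: docs,
        # Stub model that hallucinates a founding date on purpose.
        generate=lambda prompt: (
            "Primer is headquartered in San Francisco. It was founded in 1897."
        ),
        split_claims=lambda text: [s.strip() for s in text.split(".") if s.strip()],
        # Naive substring match stands in for an entailment model.
        is_supported=lambda claim, srcs: any(claim.lower() in s.lower() for s in srcs),
    )
    print(result.answer)          # keeps the grounded claim
    print(result.flagged_claims)  # flags the fabricated founding date
```

The design point the sketch tries to capture is that verification reuses the same retrieved sources as generation, so every surviving claim is traceable to a document rather than to the model's parametric memory.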
November 12, 2024
Organizer: Primer.ai Government Team at Carahsoft
Location: Webcast