Preventing LLM Hallucinations

Attendees joined John Bohannon, VP of Data Science at Primer, for an in-depth exploration of RAG-Verification (RAG-V), a system designed to make AI dependable in critical operations by reducing hallucination rates in large language models (LLMs) by 100x.
In this session, attendees learned:

  • How RAG-V detects and corrects factual errors, using a fact-checking approach that’s tailored for complex, high-stakes environments.
  • How Primer breaks down intricate questions into individual claims, allowing for real-time, automated fact-checking that you can trust.
  • How to apply actionable insights for implementing trustworthy AI practices within your own data operations. 
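To make the claim-decomposition idea concrete, here is a minimal sketch of a verification loop that splits a generated answer into individual claims and checks each one against retrieved source documents. This is not Primer's RAG-V implementation: the sentence-based claim splitting and keyword-overlap support check below are simplified stand-ins for the LLM-based extraction and fact-checking steps the webinar describes.

```python
# Illustrative RAG-verification sketch (NOT Primer's RAG-V). Claim extraction
# is naive sentence splitting, and "support" is keyword overlap with retrieved
# sources; a real system would use model-based extraction and entailment.

def extract_claims(answer: str) -> list[str]:
    """Break a generated answer into individual claims, one per sentence."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def is_supported(claim: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Treat a claim as supported if enough of its key terms appear in a source."""
    terms = {w.strip(".,").lower() for w in claim.split() if len(w) > 3}
    if not terms:
        return False
    for doc in sources:
        doc_words = {w.strip(".,").lower() for w in doc.split()}
        if len(terms & doc_words) / len(terms) >= threshold:
            return True
    return False

def verify_answer(answer: str, sources: list[str]) -> dict:
    """Fact-check each claim and separate supported from unsupported ones."""
    claims = extract_claims(answer)
    return {
        "supported": [c for c in claims if is_supported(c, sources)],
        "unsupported": [c for c in claims if not is_supported(c, sources)],
    }

sources = ["The Treaty of Paris was signed in 1783, ending the war."]
answer = "The Treaty of Paris was signed in 1783. It was signed in London."
report = verify_answer(answer, sources)
```

Here the second claim is flagged as unsupported because its key terms do not match the retrieved source, which is the kind of per-claim error detection the session covers; a production system would then correct or remove the flagged claims before returning the answer.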

If you’re ready to explore the future of trustworthy AI and elevate your data operations, view the on-demand webinar today! 

Speaker Details

John Bohannon, VP Data Science, Primer.ai

Event Topic

Artificial Intelligence, Big Data, Technology

Relevant Audiences

All State and Local Government, All Federal Government

Other Agency

Other Federal Agencies
Event Type
Virtual / Online
Event Subtype
Webinar / Webcast
When
Tue, Nov 12, 2024 | 2:00 pm ET
Registration Cost
Complimentary
Sponsor
Primer.AI