Navigating via Retrieval Evaluation to Demystify LLM Wonderland // Atita Arora // AI in Production

Published on Apr 4, 2024

Brought to you by our Premium Brand Partner @qdrant

// Abstract
This session examines the pivotal role of retrieval evaluation in Large Language Model (LLM)-based applications such as RAG, emphasizing its direct impact on the quality of generated responses. We explore the correlation between retrieval accuracy and answer quality, and highlight the importance of rigorous evaluation methodologies.
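
To make the retrieval-evaluation idea concrete, below is a minimal, self-contained Python sketch (illustrative only, not taken from the talk) that scores a retriever's ranked results against ground-truth relevance judgments using precision@k and mean reciprocal rank (MRR). All query and document IDs are hypothetical placeholders.

# A minimal sketch of offline retrieval evaluation (illustrative, not from the talk).

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved document IDs that are relevant."""
    return sum(1 for doc_id in retrieved[:k] if doc_id in relevant) / k

def reciprocal_rank(retrieved, relevant):
    """1 / rank of the first relevant document, or 0.0 if none was retrieved."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical evaluation set: query -> (retriever's ranked output, ground-truth relevant IDs)
eval_set = {
    "q1": (["d3", "d7", "d1"], {"d1", "d9"}),
    "q2": (["d2", "d4", "d6"], {"d2"}),
}

k = 3
mean_p = sum(precision_at_k(r, rel, k) for r, rel in eval_set.values()) / len(eval_set)
mrr = sum(reciprocal_rank(r, rel) for r, rel in eval_set.values()) / len(eval_set)
print(f"precision@{k}: {mean_p:.2f}  MRR: {mrr:.2f}")

Tracking metrics like these per query makes it possible to attribute poor RAG answers to retrieval failures rather than to the LLM itself.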

// Bio
Atita Arora is a seasoned professional in information retrieval systems. Over her 15-year journey as a Solution Architect, Search Relevance Strategist, and Individual Contributor, she has decoded complex business challenges and pioneered innovative information retrieval solutions.
She has a robust track record of contributions as a committer on various information retrieval projects.
She has a keen interest in making revolutionary tech innovations accessible and implementable to solve real-world problems.
She is currently immersed in researching the evaluation of RAG systems while navigating the world of vectors and LLMs, seeking insights that can enhance their practical application and effectiveness.

// Sign up for our Newsletter to never miss an event:
https://mlops.community/join/

// Watch all the conference videos here:
https://home.mlops.community/home/col...

// Check out the MLOps Community podcast:
https://open.spotify.com/show/7wZygk3...

// Read our blog:
mlops.community/blog

// Join an in-person local meetup near you:
https://mlops.community/meetups/

// MLOps Swag/Merch:
https://mlops-community.myshopify.com/

// Follow us on Twitter:
/mlopscommunity

// Follow us on LinkedIn:
/mlopscommunity
