Informative Example Selection for In-Context Learning

Published on Mar 29, 2024

Date Presented: 3/28/2024
Speaker: Shivanshu Gupta, UCI

Abstract: In-context Learning (ICL) uses large language models (LLMs) for new tasks by conditioning them on prompts comprising a few task examples. With the rise of LLMs that are intractable to train or hidden behind APIs, the importance of such a training-free interface cannot be overstated. However, ICL is known to be critically sensitive to the choice of in-context examples. Despite this, the standard approach for selecting in-context examples remains to use general-purpose retrievers due to the limited effectiveness and training requirements of prior approaches. In this talk, I’ll posit that good in-context examples demonstrate the salient information necessary to solve a given test input. I’ll present efficient approaches for selecting such examples, with a special focus on preserving the training-free ICL pipeline. Through results with a wide range of tasks and LLMs, I’ll demonstrate that selecting informative examples can indeed yield superior ICL performance.
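To make the setup concrete, the sketch below illustrates the standard training-free ICL pipeline the abstract describes: a handful of task examples are retrieved from a pool and prepended to the test input before querying an LLM. Everything here is illustrative, not the speaker's method: the `TRAIN_POOL` data, the `bow_similarity` function (a crude bag-of-words stand-in for the general-purpose retrievers mentioned in the abstract), and `build_icl_prompt` are all hypothetical names. The talk's contribution, selecting *informative* examples, would replace the similarity-based scoring step.

```python
from collections import Counter
import math

# Toy pool of (input, output) task examples; contents are illustrative only.
TRAIN_POOL = [
    ("What is 3 plus 4?", "7"),
    ("What is 10 minus 6?", "4"),
    ("Translate 'bonjour' to English.", "hello"),
    ("What is 8 plus 5?", "13"),
]

def bow_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (stand-in for a dense retriever)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def build_icl_prompt(test_input: str, k: int = 2) -> str:
    """Select the k most similar pool examples and format them as an ICL prompt."""
    ranked = sorted(TRAIN_POOL, key=lambda ex: bow_similarity(test_input, ex[0]), reverse=True)
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in ranked[:k])
    return f"{demos}\nInput: {test_input}\nOutput:"

# The resulting prompt would be sent to an LLM, which conditions on the
# demonstrations to produce an answer for the final input.
print(build_icl_prompt("What is 9 plus 2?"))
```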

Speaker's bio: Shivanshu Gupta is a Computer Science Ph.D. Candidate at the University of California Irvine, advised by Sameer Singh. Prior to this, he was a Research Fellow at LinkedIn and Microsoft Research India, and completed his B.Tech. and M.Tech. in Computer Science at IIT Delhi. His primary research interests are systematic generalization, in-context learning, and multi-step reasoning capabilities of large language models.

