Published On Aug 8, 2020
An introduction to Multi-Armed Bandits, an exciting field of AI research that aims to address the exploration/exploitation dilemma. We discuss the problem formulation, popular strategies for addressing it (epsilon-first, epsilon-greedy, and UCB1), and several variations of the MAB problem currently pursued in research.
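For the curious, here is a minimal sketch of two of the strategies covered (epsilon-greedy and UCB1) on a simulated bandit. The function names and the Bernoulli reward setup are my own illustrative choices, not taken from the video:

```python
import math
import random

def epsilon_greedy(counts, values, epsilon=0.1):
    # With probability epsilon, explore a random arm;
    # otherwise exploit the arm with the highest estimated value.
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda a: values[a])

def ucb1(counts, values):
    # Play each arm once, then pick the arm maximizing
    # estimated mean + sqrt(2 * ln(total pulls) / pulls of that arm).
    for a, n in enumerate(counts):
        if n == 0:
            return a
    t = sum(counts)
    return max(range(len(values)),
               key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))

def update(counts, values, arm, reward):
    # Incremental running-mean update for the pulled arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

if __name__ == "__main__":
    random.seed(0)
    probs = [0.2, 0.5, 0.8]          # hypothetical Bernoulli arms
    counts, values = [0] * 3, [0.0] * 3
    for _ in range(2000):
        arm = ucb1(counts, values)
        reward = 1.0 if random.random() < probs[arm] else 0.0
        update(counts, values, arm, reward)
    print(counts)                    # the best arm should dominate
```

Swapping `ucb1` for `epsilon_greedy` in the loop shows the exploration/exploitation trade-off each strategy makes.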
Are there any topics that you believe would be served by a poorly-drawn cartoon introduction?
Twitter: @academicgamer
Email: [email protected]