Exploring the Frontiers of AI with David Quarel

Premiered Jan 11, 2024

Interview with David Quarel at #EffectiveAltruism - EAGx Australia 2023.
David Quarel, a Ph.D. student at the Australian National University, focuses on AI safety and reinforcement learning. He works under the supervision of Marcus Hutter and studies Hutter's Universal AI model, an ambitious attempt to define intelligence through the lens of simplicity and data compression. The model builds on the idea that the better you can compress data, the more you understand it; this connects to Kolmogorov complexity, which suggests that superior data compression is an indicator of higher intelligence.
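The compression-as-understanding idea can be illustrated with a toy sketch (not from the interview; the `compression_ratio` helper and sample data are illustrative): a general-purpose compressor achieves a much smaller ratio on data whose regularities it can model.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size / original size: a lower ratio means the
    # compressor "understood" (modelled) more of the data's structure.
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"abcabcabc" * 100   # highly regular pattern
random.seed(0)
noise = random.randbytes(900)     # pseudo-random bytes, little structure

# The regular data compresses far better than the noise.
print(compression_ratio(structured) < compression_ratio(noise))
```

This is only a crude stand-in: Kolmogorov complexity is about the shortest *program* that reproduces the data, which no practical compressor computes, but the direction of the comparison holds.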

However, applying the Universal AI model in practice is challenging, especially in complex environments like video games, because the exact model is incomputable and must be approximated. David also traces the evolution from symbolic AI, which relies on explicit rules and axioms, to deep learning. Deep learning models, now dominant, learn from vast datasets and develop versatile but often uninterpretable solutions.

The size of AI models is a crucial factor in their effectiveness. Larger models have more capacity to learn and encode information but face risks of overfitting or underperforming if not balanced with adequate data and computational resources. An interesting phenomenon in deep learning, known as "grokking," occurs when a model suddenly improves its performance on a test set after extensive training. This leap in understanding is not yet fully understood and might be comparable to how humans experience "eureka" moments in learning.

David also discusses the unexpected capabilities emerging in AI models. For instance, models designed to predict sequences have spontaneously developed advanced skills like playing chess, suggesting they can learn planning and strategy. This advancement indicates a significant leap in AI's potential.

Furthermore, the move towards multimodal AI, where models can process various types of data (like images and text), significantly broadens their potential applications and capabilities. The conversation touches upon finding the right balance in AI model size and the unpredictable nature of AI development, as evidenced by the grokking phenomenon and the emergence of new, unforeseen abilities in AI models. These developments raise critical questions about the future of AI and its implications, particularly in understanding and applying these advanced technologies.

00:00 Intro
00:55 Universal AI, intelligence & compression
03:44 The deep learning explosion
04:47 Emergent capabilities in LLMs
06:17 Grokking - can LLMs 'understand' stuff?
08:26 Are there shared patterns of understanding?
09:21 Operational understandings of understanding
11:35 Emergence of unexpected behaviors
12:54 Planning capabilities appearing in LLMs
15:29 Mechanistic interpretability of LLMs
19:14 Can AI do closed-loop mechanistic interpretability?
22:11 Why care about AI progress?
26:37 AGI when?
27:45 LLM tool use
30:18 LLM multi-modality
32:39 AI-fuelled wargames and other dangers
35:34 Cybernetics & AI/human collaborative intelligence
36:55 Post-work society
38:00 Reactions to talk about AI risk
40:14 Warning signs for surges in AI
42:53 AI safety research directions
44:58 An AI safety 'Manhattan Project'?
46:56 Final thoughts


Many thanks for tuning in!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P...
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_cente...

b) Donating
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: patreon.com/scifuture

c) Sharing the media SciFuture creates

Kind regards,
Adam Ford
- Science, Technology & the Future - #SciFuture - http://scifuture.org
