Stanford CS224N NLP with Deep Learning | Winter 2021 | Lecture 9 - Self-Attention and Transformers
Stanford Online

Published on Oct 28, 2021

For more information about Stanford's Artificial Intelligence professional and graduate programs visit: https://stanford.io/3CvTOGY

This lecture covers:
1. Impact of Transformers on NLP (and ML more broadly)
2. From Recurrence (RNNs) to Attention-Based NLP Models
3. Understanding the Transformer Model
4. Drawbacks and Variants of Transformers
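As a companion to topic 3 above, the following is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core operation of the Transformer model. The function name, weight matrices, and toy dimensions are illustrative assumptions for this description, not code from the lecture.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the input token vectors to queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Pairwise attention scores, scaled by sqrt(d_k) to keep them well-behaved.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over the key dimension (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy usage with random weights (illustrative only).
rng = np.random.default_rng(0)
T, d_model, d_k = 5, 16, 8          # sequence length and dimensions, chosen arbitrarily
X = rng.normal(size=(T, d_model))   # one "sentence" of T token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8)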

To learn more about this course visit: https://online.stanford.edu/courses/c...
To follow along with the course schedule and syllabus visit: http://web.stanford.edu/class/cs224n/

John Hewitt
PhD student in Computer Science at Stanford University

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)

#deeplearning #naturallanguageprocessing
