JEPA Architectures - How neural networks learn abstract concepts about images (IJEPA)
Neural Breakdown with AVB

Published on Jun 22, 2023

This video explains the paper "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" (I-JEPA), which proposes a more "human-like" approach to self-supervised machine learning. The video dives into the ideas behind JEPA methods, the network architecture, results, and comparisons with existing generative methods (like Masked Autoencoders) and contrastive learning methods (like SimCLR).
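For viewers who want a concrete picture before watching, below is a minimal PyTorch sketch of the core I-JEPA training idea: a context encoder processes visible patches, an EMA-updated target encoder embeds the full image, and a predictor regresses the embeddings of masked target blocks, so the loss lives in representation space rather than pixel space. All class, function, and parameter names here are illustrative placeholders (not the paper's official code), and details such as multi-block masking and positional embeddings for the mask tokens are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Stand-in for a ViT-style encoder operating on patch embeddings."""
    def __init__(self, dim=256, depth=4, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                      # x: (batch, num_patches, dim)
        return self.encoder(x)

dim = 256
context_encoder = PatchEncoder(dim)            # trained by backprop
target_encoder  = PatchEncoder(dim)            # updated only by EMA, no gradients
predictor       = PatchEncoder(dim, depth=2)   # lightweight predictor network
mask_token      = nn.Parameter(torch.zeros(1, 1, dim))  # illustrative; would normally live inside the predictor

target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

def ijepa_step(patches, context_idx, target_idx, momentum=0.996):
    """patches: (B, N, dim) patch embeddings; context_idx/target_idx: 1-D index tensors."""
    B, num_targets = patches.size(0), target_idx.numel()

    # 1) Encode only the visible context patches (target blocks are excluded).
    ctx = context_encoder(patches[:, context_idx])

    # 2) Embed the full image with the frozen target encoder, keep the target patches.
    with torch.no_grad():
        tgt = target_encoder(patches)[:, target_idx]

    # 3) Predict target-patch embeddings from the context plus learnable mask tokens.
    queries = mask_token.expand(B, num_targets, dim)
    pred = predictor(torch.cat([ctx, queries], dim=1))[:, -num_targets:]

    # 4) The loss is measured in embedding space -- no pixel reconstruction.
    loss = F.mse_loss(pred, tgt)

    # 5) EMA update of the target encoder toward the context encoder.
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(momentum).add_(p_c, alpha=1.0 - momentum)
    return loss

# Toy usage: 196 patches (14x14), with the last 46 treated as one target block.
patches = torch.randn(2, 196, dim)
loss = ijepa_step(patches, torch.arange(0, 150), torch.arange(150, 196))
loss.backward()
```

The detail to watch for in the video is step 4: predicting abstract representations of the target blocks instead of reconstructing pixels is what separates JEPA from generative methods like Masked Autoencoders.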


Follow on Twitter: @neural_avb


To support me, consider JOINING the channel. Members get access to code, project files, scripts, slides, animations, and illustrations for most of the videos on my channel! Learn more about perks below.
Join and support the channel - https://www.youtube.com/@avb_fj/join


Learn more about contrastive learning in my breakdown video about Multimodal ML:
   • Multimodal AI from First Principles -...  

Papers referenced:
I-JEPA: https://arxiv.org/pdf/2301.08243.pdf
Yann LeCun's original human-like AI paper: https://openreview.net/pdf?id=BZ5a1r-...
SimCLR: https://arxiv.org/pdf/2002.05709.pdf
Masked AE: https://arxiv.org/pdf/2111.06377.pdf
RCDM: https://arxiv.org/pdf/2112.09164.pdf

Timestamps:
0:00 - Intro
1:05 - Why IJEPA?
5:22 - Network architecture
7:43 - Results
8:50 - Summary

#deeplearning #computervision #ai #machinelearning
