Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)

 Published On Jun 25, 2020

Neural networks are very good at predicting systems' numerical outputs, but not very good at deriving the discrete symbolic equations that govern many physical systems. This paper combines Graph Networks with symbolic regression and shows that the strong inductive biases of these models can be used to derive accurate symbolic equations from observation data.

OUTLINE:
0:00 - Intro & Outline
1:10 - Problem Statement
4:25 - Symbolic Regression
6:40 - Graph Neural Networks
12:05 - Inductive Biases for Physics
15:15 - How Graph Networks compute outputs
23:10 - Loss Backpropagation
24:30 - Graph Network Recap
26:10 - Analogies of GN to Newtonian Mechanics
28:40 - From Graph Network to Equation
33:50 - L1 Regularization of Edge Messages
40:10 - Newtonian Dynamics Example
43:10 - Cosmology Example
44:45 - Conclusions & Appendix

Paper: https://arxiv.org/abs/2006.11287
Code: https://github.com/MilesCranmer/symbo...

Abstract:
We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example, a detailed dark matter simulation, and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.
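The core training idea in the abstract (an edge model produces per-pair messages, and an L1 penalty pushes those messages to be sparse so that a few components line up with physical quantities like force) can be sketched roughly as follows. This is an illustrative toy, not the paper's actual architecture: the linear edge model, the feature choice (relative displacement plus distance), and all dimensions here are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_messages(positions, W):
    """Compute a message vector for every ordered particle pair.
    The (illustrative, linear) edge model sees the relative
    displacement and the distance between the two particles."""
    n = len(positions)
    msgs = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j] - positions[i]
            r = np.linalg.norm(dx)
            feats = np.concatenate([dx, [r]])   # inputs to the edge model
            msgs.append(W @ feats)              # message for pair (i, j)
    return np.array(msgs)

positions = rng.normal(size=(4, 2))   # 4 particles in 2D (toy data)
W = rng.normal(size=(8, 3))           # edge model: 3 features -> 8-dim message

msgs = edge_messages(positions, W)
# During training, this L1 term is added to the supervised loss so that
# most message components shrink to zero; symbolic regression is then
# run only on the few components that stay active.
l1_penalty = np.abs(msgs).mean()
print(msgs.shape, l1_penalty)
```

In the paper the edge model is a learned MLP inside a full graph network, and the surviving message components are fed to a symbolic regression tool to recover closed-form expressions; the sketch above only shows where the sparsity pressure is applied.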

Authors: Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, Shirley Ho

Links:
YouTube:    / yannickilcher  
Twitter:   / ykilcher  
Discord:   / discord  
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher

