Low Rank Neural Representation of Hyperbolic Conservation Laws
Date:
- Time : 10:30 - 12:00 (GMT+9, South Korea time zone)
- Zoom link : https://snu-ac-kr.zoom.us/my/youngjoonhong
- Speaker : 임동섭 (University of Washington)
A Low Rank Neural Representation (LRNR) is a parametrized family of feedforward neural networks whose weights and biases belong to low-rank linear subspaces. In this talk, we will discuss how LRNRs can serve as efficient low-dimensional representations of solutions to hyperbolic conservation laws.
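To make the definition concrete, here is a minimal sketch of the idea in NumPy: every weight matrix and bias vector of a small feedforward network is constrained to a linear combination of a few fixed basis elements, so the whole network is parametrized by an r-dimensional coefficient vector. All names, dimensions, and the random basis below are illustrative assumptions, not the architecture from the talk (where the basis would be obtained by training).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 2-layer scalar network of width 16 whose
# weights and biases live in an r-dimensional linear subspace.
width, r = 16, 3

# Fixed basis elements spanning the subspaces (assumption: in practice
# these would be learned from data, e.g. via meta-learning).
W1_basis = rng.standard_normal((r, width, 1))
b1_basis = rng.standard_normal((r, width))
W2_basis = rng.standard_normal((r, 1, width))
b2_basis = rng.standard_normal((r, 1))

def lrnr(x, c):
    """Evaluate the network at inputs x, given low-dim coefficients c of shape (r,).

    Each weight/bias is a linear combination of the fixed basis, so the
    entire network is described by only r numbers.
    """
    W1 = np.tensordot(c, W1_basis, axes=1)   # (width, 1)
    b1 = c @ b1_basis                        # (width,)
    W2 = np.tensordot(c, W2_basis, axes=1)   # (1, width)
    b2 = c @ b2_basis                        # (1,)
    h = np.tanh(x @ W1.T + b1)               # hidden layer, (n, width)
    return h @ W2.T + b2                     # scalar output per input, (n, 1)

x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
c = rng.standard_normal(r)   # one point in the low-dimensional parameter space
y = lrnr(x, c)
print(y.shape)
```

Because only `c` varies, differentiating the output with respect to the network parameters costs work proportional to r rather than to the full weight count, which is the source of the efficiency claims discussed below.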
First, we motivate the LRNR architecture by reformulating entropy solutions of scalar conservation laws to reveal their low-dimensional structure.
Next, we will show that LRNRs can be trained from numerical solution data through a meta-learning approach, and demonstrate how the trained LRNRs possess important properties:
(1) low dimensionality
(2) smoothness and stability with respect to the parameters even in the presence of shocks
(3) the ability to backpropagate with complexity that scales only with the low dimension
(4) interpretable learned features.
Applications of LRNRs within the popular Physics-Informed Neural Networks (PINNs) framework will also be discussed.