Caltech Young Investigators Lecture
Integrating Machine Learning With Scientific Spatial and Temporal Modeling
Webinar Link: https://caltech.zoom.us/j/83276652110
Machine learning (ML) has achieved success in numerous areas with wide-ranging applications, and these methods are drawing growing interest as ways to accelerate and improve science and engineering applications. However, many challenges remain: learning meaningful ML models for scientific phenomena is difficult and is often limited by a lack of training data. In this talk, I will discuss my work aimed at overcoming these challenges by incorporating scientific mechanistic modeling into the ML setting.
I will first show how changing the machine learning paradigm to curriculum regularization or sequence-to-sequence learning can achieve better predictive performance on common engineering problems, including those with convection, reaction, and reaction-diffusion differential operators. I will then discuss how a neural network architecture that incorporates differential-equation-constrained optimization can enforce desired physical constraints exactly over a given spatial and/or temporal domain. I will show that this architecture allows us to fit solutions to new problems efficiently and accurately, and demonstrate this on fluid flow and transport phenomena problems.
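To make the curriculum idea concrete, here is a minimal toy sketch (not the speaker's actual method) for the convection equation u_t + beta * u_x = 0 with u(x, 0) = sin(x), whose exact solution is sin(x - beta * t). The two-parameter ansatz, the collocation points, and the plain finite-difference gradient descent are all assumptions made for illustration; the only point is the training schedule, which ramps the convection coefficient beta up in stages (easy to hard) while warm-starting the parameters, instead of training directly at the hard target value.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 200)   # collocation points in space
t = rng.uniform(0.0, 1.0, 200)           # collocation points in time

def loss(theta, beta):
    """PDE residual + initial-condition misfit for u = a * sin(x - phi * t)."""
    a, phi = theta
    u_t = -a * phi * np.cos(x - phi * t)            # du/dt of the ansatz
    u_x = a * np.cos(x - phi * t)                   # du/dx of the ansatz
    pde = np.mean((u_t + beta * u_x) ** 2)          # convection residual
    ic = np.mean((a * np.sin(x) - np.sin(x)) ** 2)  # u(x, 0) = sin(x)
    return pde + ic

def grad(theta, beta, eps=1e-5):
    """Central-difference gradient of the loss (stand-in for autodiff)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d, beta) - loss(theta - d, beta)) / (2 * eps)
    return g

theta = np.array([0.5, 0.0])             # start far from the solution
for beta in [1.0, 2.0, 3.0, 4.0, 5.0]:   # curriculum: ramp beta up in stages
    for _ in range(500):                 # warm-started gradient descent
        theta -= 0.05 * grad(theta, beta)

a, phi = theta
print(f"a = {a:.3f}, phi = {phi:.3f}")   # should approach a = 1, phi = 5
```

Because each stage starts from the previous stage's solution, the optimizer only ever has to close a small gap in beta, which is the essence of the curriculum strategy.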
Finally, I will explore learning from discrete scientific data and discuss a methodology from numerical analysis for verifying whether an ML model has learned a meaningfully continuous approximation to the underlying dynamics of such systems. Using this, we can verify that the learned ML models satisfy robustness and convergence properties from numerical analysis. Learning meaningfully continuous models enables both better interpolation and better extrapolation in a number of scientific applications, including resolving fine-scale features despite training only on low-resolution data, and correctly extrapolating to initial conditions other than those on which the ML model was trained.
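The convergence-property check can be illustrated with a classical refinement study. In the sketch below, the "learned" one-step model is stood in by a forward Euler step for du/dt = -u (an assumption for illustration only): if a model has learned a meaningfully continuous approximation to the dynamics, its rollout error should shrink at a predictable rate as the step size is refined, rather than degrading off the training grid.

```python
import numpy as np

def learned_step(u, dt):
    # Stand-in for a learned one-step model; here, forward Euler for du/dt = -u.
    return u + dt * (-u)

def rollout_error(dt, t_end=1.0, u0=1.0):
    """Roll the one-step model out to t_end and compare with the exact solution."""
    n = int(round(t_end / dt))
    u = u0
    for _ in range(n):
        u = learned_step(u, dt)
    return abs(u - u0 * np.exp(-t_end))   # exact solution is u0 * exp(-t)

dts = [0.1, 0.05, 0.025, 0.0125]                  # successively refined steps
errs = [rollout_error(dt) for dt in dts]
orders = [np.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]
print(errs)    # errors should decrease monotonically under refinement
print(orders)  # observed order of accuracy (about 1 for this first-order step)
```

A model that fails this test (errors that stagnate or grow as dt shrinks) has effectively memorized the training discretization rather than the continuous dynamics, which is the failure mode the verification methodology is designed to detect.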
Aditi Krishnapriyan is the 2020 Alvarez Fellow in Computing Sciences at Lawrence Berkeley National Laboratory and UC Berkeley. Her research interests include combining domain-driven scientific mechanistic modeling with data-driven machine learning methodologies to accelerate and improve spatial and temporal modeling. Previously, she received her PhD from Stanford University, supported by the Department of Energy Computational Science Graduate Fellowship. During her PhD, she also spent time working on machine learning research at Los Alamos National Laboratory, Toyota Research Institute, and Google Research.
This talk is part of the Caltech Young Investigators Lecture Series, sponsored by the Division of Engineering and Applied Science.
Contact: Jennifer Blankenship