# Mathematics Colloquium, Zhongqiang Zhang (Worcester Polytechnic Inst.)

**When:** 2:30 p.m. to 3:30 p.m.

**Where:**

**Event category:** Seminar

Title: On some scaling issues in training physics-informed neural networks

Speaker: Zhongqiang Zhang

Associate Professor

Department of Mathematical Sciences

Worcester Polytechnic Institute

Abstract

Training in physics-informed machine learning is often carried out with low-order methods, such as stochastic gradient descent. Such training methods usually learn solutions with low frequencies and small gradients more readily. For solutions with high frequencies and large gradients, we consider two classes of problems. The first class is high-dimensional Fokker-Planck equations, where the solutions are small in scale yet not negligible in certain regions. We use tensor neural networks and show how to handle solutions that are small in scale but have large gradients. The second class is low-dimensional partial differential equations with small parameters, such as those exhibiting boundary layers. We discuss a two-scale neural network method and introduce a streamlined approach to tackle large-gradient issues induced by small parameters.
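As background for readers new to the setting (and not part of the talk itself), a physics-informed neural network recasts solving a differential equation as minimizing a residual loss over sample points plus a boundary-condition penalty. A minimal NumPy sketch of such a loss for the toy ODE u'(x) = -u(x) with u(0) = 1, using finite differences in place of automatic differentiation, might look like:

```python
import numpy as np

def pinn_loss(u, xs, h=1e-5):
    """Physics-informed loss for the toy ODE u'(x) = -u(x), u(0) = 1.

    `u` is a candidate solution (here an ordinary function standing in
    for a neural network); the derivative is approximated with a
    central finite difference.
    """
    du = (u(xs + h) - u(xs - h)) / (2.0 * h)   # approximate u'(x)
    residual = du + u(xs)                       # ODE residual u' + u
    bc = u(np.array([0.0]))[0] - 1.0            # boundary condition u(0) = 1
    return np.mean(residual**2) + bc**2

xs = np.linspace(0.0, 1.0, 50)
# The exact solution exp(-x) drives the loss to (numerically) zero,
# while a wrong trial function leaves a large residual.
print(pinn_loss(lambda x: np.exp(-x), xs))
print(pinn_loss(np.exp, xs))
```

In practice the trial function is a neural network whose parameters are trained by stochastic gradient descent on this loss, which is exactly the regime whose scaling behavior the talk addresses.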

About the Speaker: Zhongqiang Zhang is an Associate Professor of Mathematics at Worcester Polytechnic Institute. His research interests include numerical methods for stochastic and integral differential equations, computational probability, and mathematics for machine learning. Before joining Worcester Polytechnic Institute in 2014, he received Ph.D. degrees in mathematics from Shanghai University in 2011 and in applied mathematics from Brown University in 2014. He co-authored a book with George Karniadakis on numerical methods for stochastic partial differential equations with white noise.