We are pleased to invite you to the next seminar in our traditional
Seminar Series in Statistics and Data Science
Speaker: Vegard Antun, Department of Mathematics, UiO
Title: Can stable and accurate neural networks be computed?
When? TUESDAY, 01.02.2021, 14:15-15:15
Where? Zoom https://uio.zoom.us/j/69116189064?pwd=SExXdmFOcmErZ3B1VWFrZWFHem5UUT09
Abstract: Deep learning (DL) has had unprecedented success and is now entering scientific
computing with full force. However, current DL methods typically suffer from instability, even when universal approximation properties guarantee the existence of stable neural networks (NNs). In this talk we will show that there are basic, well-conditioned problems in scientific computing where NNs with excellent approximation properties are proven to exist, yet there is no algorithm, even a randomised one, that can train (or compute) such an NN to even one digit of accuracy with probability greater than 1/2. These results provide basic foundations for Smale's 18th problem ("What are the limits of AI?") and imply a potentially vast classification theory describing the conditions under which (stable) NNs with a given accuracy can be computed by an algorithm. We begin this theory by initiating a unified theory for compressed sensing and DL, leading to sufficient conditions for the existence of algorithms that compute stable NNs in inverse problems. We introduce Fast Iterative REstarted NETworks (FIRENETs), which we prove are stable and verify numerically via suitable stability tests.

The reference for this talk is https://arxiv.org/abs/2101.08286 (to appear in Proc. Natl. Acad. Sci. USA).
Welcome!
Best regards,
Sven Ove Samuelsen & Aliaksandr Hubin