Department of Mathematics

Chair of Applied Mathematics Prof. Dr. L. Grüne / Prof. Dr. A. Schiela

DFG project “Curse-of-dimensionality-free nonlinear optimal feedback control with deep neural networks. A compositionality-based approach via Hamilton-Jacobi-Bellman PDEs”

start of the project: 2021, end of the project: 2024

contract number: GR 1569/23-1

funding institution: DFG (Research Grants)

within the DFG Priority Programme 2298 “Theoretical Foundations of Deep Learning”

PROJECT MEMBERS

Principal investigator

Prof. Dr. Lars Grüne

Project member

M.Sc. Mario Sperl

AIMS OF THE PROJECT

Optimal feedback control is one of the areas in which methods from deep learning have an enormous impact. Deep Reinforcement Learning, one of the methods for obtaining optimal feedback laws and arguably one of the most successful algorithms in artificial intelligence, stands behind the spectacular performance of artificial intelligence in games such as Chess or Go, but also has manifold applications in science, technology, and the economy. Mathematically, the core question behind this method is how to best represent optimal value functions, i.e., the functions that assign the optimal performance value to each state (known as the cost-to-go function in reinforcement learning), via deep neural networks (DNNs). The optimal feedback law can then be computed from these functions. In continuous time, these optimal value functions are characterised by Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs), which links the question to the solution of PDEs via DNNs.

As the dimension of the HJB PDE is determined by the dimension of the state of the dynamics governing the optimal control problem, HJB equations naturally form a class of high-dimensional PDEs. They are thus prone to the well-known curse of dimensionality, i.e., to the fact that the numerical effort for their solution grows exponentially in the dimension. It is known that functions with certain beneficial structures, such as compositional or separable functions, can be approximated by DNNs with a suitable architecture without incurring the curse of dimensionality. For HJB PDEs characterising Lyapunov functions, it was recently shown by the proposer of this project that small-gain conditions – i.e., particular conditions on the dynamics of the problem – establish the existence of separable subsolutions, which can be exploited for efficiently approximating them by DNNs via training algorithms with suitable loss functions. These results pave the way for curse-of-dimensionality-free DNN-based approaches for general nonlinear HJB equations, which are the goal of this project.

Besides small-gain theory, there exists a large toolbox of nonlinear feedback control design techniques that lead to compositional (sub)optimal value functions. On the one hand, these methods are mathematically sound and apply to many real-world problems; on the other hand, they come with significant computational challenges when the resulting value functions or feedback laws are to be computed. In this project, we will exploit the structural insight provided by these methods to establish the existence of compositional optimal value functions or approximations thereof, but circumvent their computational complexity by using appropriate training algorithms for DNNs instead. Proceeding this way, we will characterise optimal feedback control problems for which curse-of-dimensionality-free (approximate) solutions via DNNs are possible and provide efficient network architectures and training schemes for computing these solutions.
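To make the central idea concrete, the following is a minimal, hypothetical sketch (not the project's own code or results) of how a separable value-function ansatz can be combined with a residual-based training loss for an HJB equation. It assumes PyTorch, an illustrative weakly coupled control-affine system x' = f(x) + g(x)u with cost q(x) + |u|^2, and arbitrary hyperparameters; for this cost, pointwise minimisation over u turns the HJB equation into the residual DV(x)f(x) - ¼|g(x)ᵀDV(x)ᵀ|² + q(x) = 0, which is enforced at random collocation points.

```python
# Illustrative sketch only: separable DNN ansatz V(x) = sum_i V_i(x^(i)) trained
# on an HJB residual. Dynamics, dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

DIM, BLOCKS = 6, 3            # state dimension, split into BLOCKS low-dimensional groups
SUB_DIM = DIM // BLOCKS

class SeparableValueNet(nn.Module):
    """V(x) = sum_i V_i(x^(i)), each V_i a small MLP on one block of the state."""
    def __init__(self):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(SUB_DIM, 32), nn.Tanh(),
                          nn.Linear(32, 32), nn.Tanh(),
                          nn.Linear(32, 1))
            for _ in range(BLOCKS)
        )

    def forward(self, x):
        blocks = x.split(SUB_DIM, dim=-1)
        return sum(net(b) for net, b in zip(self.subnets, blocks)).squeeze(-1)

# Assumed weakly coupled dynamics x' = f(x) + g(x) u and quadratic state cost q(x).
def f(x):                      # stable drift with weak coupling between components
    return -x + 0.1 * torch.roll(x, 1, dims=-1) ** 2

def g(x):                      # constant input matrix (identity), batched
    return torch.eye(DIM).expand(x.shape[0], DIM, DIM)

def q(x):
    return (x ** 2).sum(dim=-1)

def hjb_residual(model, x):
    """Residual DV f(x) - 1/4 |g(x)^T DV|^2 + q(x) of the stationary HJB equation."""
    x = x.requires_grad_(True)
    V = model(x)
    DV = torch.autograd.grad(V.sum(), x, create_graph=True)[0]    # (batch, DIM)
    gTDV = torch.einsum('bij,bi->bj', g(x), DV)                   # g(x)^T DV
    return (DV * f(x)).sum(dim=-1) - 0.25 * (gTDV ** 2).sum(dim=-1) + q(x)

model = SeparableValueNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x = 2.0 * torch.rand(256, DIM) - 1.0                          # collocation points in [-1,1]^DIM
    loss = hjb_residual(model, x).pow(2).mean() \
         + model(torch.zeros(1, DIM)).pow(2).mean()               # pin V(0) = 0
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point reflected here is that the number of trainable parameters grows with the number and size of the blocks rather than exponentially in DIM; whether such a separable (sub)solution exists for a given problem is exactly the kind of structural question, e.g. via small-gain conditions, that the project investigates.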

See also the GEPRIS information on the project, the website of the DFG Priority Programme 2298 “Theoretical Foundations of Deep Learning”, and the GEPRIS information on SPP 2298.


responsible for the content: Lars Grüne
