
OMPC 2013 - SADCO Summer School and Workshop
on Optimal and Model Predictive Control

Invited talks

Rolf Findeisen: Predictive Control for Stabilization, Tracking and Path Following: Recent achievements and Open Issues

By now, predictive control for the stabilization of nonlinear systems is well understood, and many schemes that guarantee stability and recursive feasibility are known. Many control problems are, however, inherently trajectory tracking or path following problems. Trajectory tracking refers to the task of following a time-varying reference trajectory; path following refers to the task of following a given curve in the state space as precisely as possible. We outline currently existing results on path following and trajectory tracking. Furthermore, we propose an efficient predictive control scheme for path following with stability guarantees. The approach is illustrated on several example systems. The presented results are joint work with Timm Faulwasser.
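The distinction between the two tasks can be made precise schematically as follows (a generic formulation, not necessarily the talk's notation):

```latex
% Trajectory tracking: converge to a time-parametrized reference
\lim_{t\to\infty} \bigl\| x(t) - x_{\mathrm{ref}}(t) \bigr\| = 0
% Path following: converge to a geometric curve, with the timing
% \theta(t) left as an extra degree of freedom for the controller
\lim_{t\to\infty} \operatorname{dist}\bigl( x(t),\, \mathcal{P} \bigr) = 0,
\qquad \mathcal{P} = \{\, p(\theta) : \theta \in [0,1] \,\}
```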

Matthias Gerdts: Optimal Control Problems with Differential-Algebraic Equations

The talk provides an overview on solution approaches for optimal control problems subject to differential-algebraic equations (DAEs) and control and state constraints. Such problems typically occur in the context of mechanical multibody systems, process engineering, simulation of electric circuits, inverse dynamics, or discretizations of certain partial differential equations like Navier-Stokes equations.

The first part of the talk summarizes first order necessary optimality conditions in terms of local and global minimum principles. The local minimum principle is derived by considering the DAE optimal control problem in suitable Banach spaces and exploiting first order necessary optimality conditions of Fritz-John type. Exploitation of a time transformation technique leads to a global minimum principle, which can be applied to mixed-integer optimal control problems.

Alternatively, direct shooting techniques are discussed. Such methods transform the optimal control problem into a nonlinear program, which can be solved by suitable optimization methods like SQP methods. Extensions towards DAEs are discussed.

Finally, numerical results for selected applications from virtual test-driving, pro-active active chassis control, and rendezvous and docking maneuvers will be presented.

Carl D. Laird: Interior-Point Methods for Parallel Solution of Nonlinear Programming Problems

Nonlinear programming has proven to be an efficient tool for important large-scale inverse problems like optimization of dynamic systems, parameter estimation, and decision making under uncertainty. However, engineering and scientific needs continue to push the boundaries of existing mathematical programming tools, and problems can become prohibitively large, often outstripping the capabilities of a single workstation. Furthermore, computer chip manufacturers are focusing on parallel computing architectures, and future performance improvements demand algorithms that are capable of utilizing these modern parallel architectures.

Fortunately, most large-scale mathematical programming problems are inherently structured, and this structure can be exploited to solve the problems more efficiently in parallel. Internal decomposition strategies exploit problem structure by parallelizing the linear algebra operations of existing algorithms. In this presentation, I will briefly describe the basics of nonlinear interior-point methods, and outline several internal decomposition strategies we have been developing for efficient solution of large-scale, structured NLP problems in parallel. In particular, we address problems with complicating variables (like those arising in optimization under uncertainty) and problems with pass-on structure resulting from optimization of dynamic systems. These strategies are based on a Schur-complement decomposition of the KKT system solved at each iteration of the NLP algorithm, with solution of the Schur-complement by both direct and iterative approaches. We will also discuss the importance of having an extensible, high-level modeling capability that ensures appropriate problem structure and allows model evaluation in parallel.
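The Schur-complement strategy can be sketched for a block-bordered (arrowhead) KKT system as follows. This is a generic illustration, with dense direct solves standing in for the sparse factorizations used in practice; the function and variable names are hypothetical, and the independent per-block eliminations are exactly the operations that can run in parallel.

```python
import numpy as np

def schur_solve(K_blocks, B_blocks, K0, r_blocks, r0):
    """Solve an arrowhead KKT system via Schur complement:

        [K_1           B_1] [x_1]   [r_1]
        [     ...      ...] [...] = [...]
        [          K_N B_N] [x_N]   [r_N]
        [B_1^T ... B_N^T K0] [x_0]   [r_0]

    Each (K_i, B_i, r_i) elimination is independent of the others,
    so the loop below parallelizes across blocks/scenarios.
    """
    S = K0.copy().astype(float)        # Schur complement of the coupling block
    rs = r0.copy().astype(float)
    facts = []
    for K, B, r in zip(K_blocks, B_blocks, r_blocks):
        KinvB = np.linalg.solve(K, B)  # K_i^{-1} B_i
        Kinvr = np.linalg.solve(K, r)  # K_i^{-1} r_i
        S -= B.T @ KinvB               # S = K0 - sum_i B_i^T K_i^{-1} B_i
        rs -= B.T @ Kinvr
        facts.append((KinvB, Kinvr))
    x0 = np.linalg.solve(S, rs)        # coupling (complicating) variables
    xs = [Kinvr - KinvB @ x0 for (KinvB, Kinvr) in facts]  # back-substitution
    return xs, x0
```

The Schur-complement solve itself can be done either by a direct factorization of S, as above, or iteratively (e.g. by a preconditioned Krylov method), as mentioned in the abstract.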

Tor Arne Johansen: Embedded MPC - Current technology, applications and challenges

MPC was developed for process control applications that are characterized by relatively slow dynamics and the existence of a low-level distributed control system. This has allowed MPC to be implemented on standard PC/server-type hardware, which has good processing capability but somewhat limited reliability.

MPC is now moving towards a wider spectrum of applications with faster dynamics and requirements for ultra-reliable industrial computer hardware such as microcontrollers and PLCs. In addition to much more limited processing and memory resources, this comes with more structured software architectures that impose limitations on programming languages and other implementation aspects.

This talk will summarize some recent experiences with embedded MPC, ranging from online MPC to explicit MPC. This will be discussed primarily in the context of several applications from the marine and petroleum domains: diving computers, subsea petroleum production and processing, floating platform drilling, diesel-electric marine power plants, and unmanned vehicles.

The talk will conclude with some subjective opinions on future challenges and opportunities.

Anders Rantzer: Scalable Control of Monotone Systems

A system is called monotone (or positive) if all step responses are monotone. Such systems have many applications in science and technology. In particular, they appear naturally in stochastic systems, network flow problems and control of vehicle formations.

In this presentation, we will highlight several fundamental advantages attained for monotone control systems: verification and performance optimization can be done with a complexity that scales linearly with the number of states and interconnections; distributed controllers can be designed by convex optimization; and Lyapunov functions and storage functions for nonlinear monotone systems can be built from scalar functions of the states, with dramatic simplifications as a result.
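For the linear case, a minimal illustration of why such certificates scale well (a standard positive-systems fact, not specific to the talk): for a positive system with Metzler dynamics, stability can be certified by a linear Lyapunov function, and finding it is a linear program whose size grows linearly with the state dimension.

```latex
\dot{x} = Ax, \qquad A \ \text{Metzler (off-diagonal entries } \ge 0\text{)}.
% A is Hurwitz if and only if there is a componentwise positive z with
\exists\, z > 0 : \quad A^{\!\top} z < 0,
\qquad V(x) = z^{\top} x \ \ \text{is then a Lyapunov function on } \mathbb{R}^{n}_{\ge 0}.
```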

James B. Rawlings: On the Inherent Robustness of Suboptimal Model Predictive Control (joint work with Gabriele Pannocchia, University of Pisa, Italy and Stephen Wright, University of Wisconsin, USA)

We address the inherent robustness properties of nonlinear systems controlled by suboptimal model predictive control, i.e., when a suboptimal solution of the (generally nonconvex) optimization problem, rather than an element of the optimal solution set, is used for the control.

We extend existing results by relaxing the continuity conditions of the feasible set, and we establish inherent robustness of an example featuring both a discontinuous feedback control law and a discontinuous optimal MPC cost function.

Contributed Talks

Andrea Boccia and Richard B. Vinter: Optimal Control Problems with Time Delay

Solutions to optimal control problems for retarded systems, on a fixed time interval, satisfy a form of the Maximum Principle in which the co-state equation is an advanced differential equation. An extension of this well-known necessary condition of optimality is presented, covering situations in which the data is non-smooth and the final time is free. The fact that the end-time is a choice variable is accommodated by an extra transversality condition. A traditional approach to deriving this extra condition is to reduce the free end-time problem to a fixed end-time problem by a parameterized change of the time variable. This approach is problematic for time delay problems because it introduces a parameter-dependent time delay that is not readily amenable to analysis; to avoid this difficulty we instead base our analysis on a direct perturbation of the end-time. Formulae are derived for the gradient of the minimum cost as a function of the end-time. It is shown how these formulae can be exploited to construct two-stage algorithms for the computation of solutions to free-time optimal retarded control problems, in which a sequence of fixed-time problems is solved by means of Guinn's transformation, and the end-time is adjusted according to a rule based on the earlier-derived gradient formulae for the minimum cost function. Numerical examples are presented.
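As a schematic illustration of the advanced co-state equation (smooth data, a single constant delay Δ > 0, fixed interval; not the non-smooth, free-time setting of the talk):

```latex
\dot{x}(t) = f\bigl( t,\, x(t),\, x(t-\Delta),\, u(t) \bigr), \qquad t \in [0, T],
% the adjoint is advanced: p at time t depends on p at time t + \Delta,
% where f_y denotes the derivative w.r.t. the delayed state argument
-\dot{p}(t) = f_x(t)^{\!\top} p(t) \;+\; f_y(t+\Delta)^{\!\top} p(t+\Delta),
\qquad p(s) \equiv 0 \ \text{ for } s > T .
```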

Jean-Baptiste Caillau: Geometric and numerical methods in space and quantum control

Many controlled mechanical systems are modelled by dynamics that are affine in the control, where the drift describes the evolution of the free motion. Some results on controllability, singularities and optimality of the extremal flow will be reviewed and applied to problems from space and quantum mechanics.

Fabio Camilli: Mean Field Games on Networks

In many applications in control theory and differential games, the dynamics of an agent is defined on a network, i.e. a set of vertices with connections among them given by edges. The aim of Mean Field Games (MFG) theory is to describe the asymptotic behavior of differential games in which the number of players tends to +∞. In this talk I will present some results concerning a class of MFG problems defined on a network. On each edge I consider an MFG system given by a Hamilton-Jacobi-Bellman equation, expressing the optimality conditions, and a Fokker-Planck equation, describing the corresponding distribution of the agents. At the vertices, appropriate transition conditions are introduced in order to obtain a well-posed global problem. Some applications and numerical examples will also be discussed.
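Schematically, on each edge one has a coupled system of the following form (a generic stationary MFG system; the precise formulation on the network in the talk may differ):

```latex
% u: value function, m: agent density, \nu: diffusion, \rho: ergodic constant
\begin{cases}
-\nu\, u'' + H(x, u') + \rho = F(m), \\[2pt]
\nu\, m'' + \bigl( m\, H_p(x, u') \bigr)' = 0, \\[2pt]
m \ge 0, \qquad \displaystyle\int m \, dx = 1,
\end{cases}
```
together with continuity and Kirchhoff-type transition conditions at the vertices.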

Elisabetta Carlini: Semi-Lagrangian schemes for Mean Field Games models

We present a fully-discrete Semi-Lagrangian scheme for Mean Field Games systems. For the first order case, we prove that the resulting discretization admits at least one solution and, in the scalar case, we prove a convergence result. We propose the natural extension of this scheme to the second order case. Finally, some numerical simulations will be shown. (Joint work with F.J. Silva)
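For the first-order case, the semi-Lagrangian idea can be sketched as a value-iteration update for an HJB equation: follow each control's characteristic for one time step and interpolate the current value there. This is a generic 1D illustration with linear interpolation, not the paper's exact scheme, and all names are hypothetical.

```python
import numpy as np

def sl_step(u, x, dt, controls, running_cost, dynamics):
    """One semi-Lagrangian value-iteration update:
        u_new(x_i) = min_a [ dt * L(x_i, a) + u(x_i + dt * f(x_i, a)) ],
    where u at the off-grid foot point is obtained by linear interpolation
    (np.interp clamps foot points that leave the grid)."""
    u_new = np.empty_like(u)
    for i, xi in enumerate(x):
        vals = [dt * running_cost(xi, a)
                + np.interp(xi + dt * dynamics(xi, a), x, u)
                for a in controls]
        u_new[i] = min(vals)
    return u_new
```

As a usage example, iterating this update for unit-speed dynamics with unit running cost and the value pinned to zero at the origin recovers the distance-to-target function |x| on the grid.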

Moritz Diehl: Modelling Tools for Optimal Control and Embedded Optimization

Embedded optimization concerns the fast and reliable solution of optimization problems on embedded hardware, without human interaction. One important application is the field of estimation and control, where the model-based technologies of moving horizon estimation (MHE) and model predictive control (MPC) are increasingly being used to address difficult problems in sensor fusion and control of constrained nonlinear systems.

We describe the current state of the art in numerical methods for embedded optimization and the requirements they pose for modelling environments. Our focus is on ACADO and CasADi, two open-source tools for numerical optimal control written in C++ and interfaced to high level languages such as MATLAB and Python.

The major aim of ACADO is code generation of embedded solvers for nonlinear MPC and MHE: it allows the user to generate problem-specific C code for fast embedded optimization on specific controller hardware. The tool is designed for small-scale MPC and MHE applications with millisecond and microsecond sampling times.

CasADi is a symbolic environment for simulation based nonlinear programming and automatic differentiation (AD). It offers a level of abstraction that is higher than conventional AD tools and is in particular designed to enable calls to solvers of initial-value problems in differential-algebraic equations (DAE) within nonlinear programming formulations. The tool is designed for the formulation and solution of non-standard optimal control problems such as multi-stage or scenario-tree problems.

We illustrate the possibilities of both tools for nonlinear optimal control and MPC at examples from mechatronics, chemical engineering and flight control.

The talk presents joint work with Joel Andersson, Joris Gillis, Rien Quirynen, Milan Vukov, and Mario Zanon.

Hélène Frankowska: A Second-Order Maximum Principle in Optimal Control under State Constraints

Some new pointwise second order necessary optimality conditions for the Mayer optimal control problem verified by the adjoint state of the maximum principle will be discussed, as well as a second-order necessary optimality condition in the integral form. Extensions to an optimization problem involving a differential inclusion under state constraints and a second order sufficient condition for normality of the maximum principle will be also presented.

Cristopher Hermosilla: Stratified Discontinuous Differential Equations and Necessary Conditions For Robustness

It is well known that optimal controls are, in general, discontinuous and, as a natural consequence, any optimal feedback constructed from them will also be discontinuous. This is inconvenient when considering closed-loop systems, because no general theory exists to deal with discontinuous ODEs. Nevertheless, for the minimum time problem it has been observed and proved that, in some cases, optimal feedback strategies present tame singularities that form a stratification of the state space. In this talk we give a general framework for dealing with discontinuous differential equations arising in this way. For this purpose, we introduce stratified vector fields. We center our analysis on reasonable hypotheses ensuring the existence of solutions. We discuss some notions of robustness and present conditions that make the system robust with respect to external perturbations.

Igor Kornienko and Maria do Rosário de Pinho: Health problems with constraints

Optimal control can help to test and compare different vaccination strategies for infectious diseases. We consider optimal control problems for such strategies based on a well-known compartmental model. For the problems of interest we introduce state and mixed constraints. Numerical solutions are calculated and an analytical study is conducted.

Pablo Maldonado Lopez: A Dijkstra-type algorithm for dynamic games

We study zero-sum dynamic games with deterministic transitions where player 1 knows player 2's move, as well as games where the players move simultaneously and the transitions are stochastic, depending on their actions and the state. Player 1 aims at reaching a given set while minimizing a running and final cost. We propose and analyze an algorithm, based on Dijkstra's algorithm for shortest-path problems in graphs, that computes the value function of these games. (joint work with Martino Bardi)
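As background, the classical single-player special case can be sketched as follows: running Dijkstra's method backwards from the target set computes the value function of a deterministic reach-the-target problem with nonnegative running costs. The two-player and stochastic extensions discussed in the talk build on this backbone; the code below is a generic illustration, not the authors' algorithm.

```python
import heapq

def dijkstra_value(edges, targets, final_cost=None):
    """Compute V(s) = minimal accumulated cost to reach the target set,
    by a Dijkstra sweep on the reversed graph.
    edges: dict {(s, s_next): running cost >= 0}
    targets: iterable of target states
    final_cost: optional dict {target: terminal cost} (default 0)."""
    final_cost = final_cost or {}
    pred = {}                                   # reversed adjacency
    for (s, s_next), c in edges.items():
        pred.setdefault(s_next, []).append((s, c))
    V = {}
    heap = [(final_cost.get(t, 0.0), t) for t in targets]
    heapq.heapify(heap)
    while heap:
        v, s = heapq.heappop(heap)
        if s in V:                              # already settled with a lower value
            continue
        V[s] = v
        for s_prev, c in pred.get(s, []):
            if s_prev not in V:
                heapq.heappush(heap, (v + c, s_prev))
    return V
```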

Claudio Marchi: Continuous dependence estimates for the ergodic problem with an application to homogenization

We consider the ergodic problem for periodic, uniformly elliptic operators of Hamilton-Jacobi-Bellman type. We establish two continuous dependence estimates (namely, estimates of the distance between the solutions of two equations with different coefficients) with an explicit characterization of the constants: the former in the L∞-norm and the latter in the C²-norm.

Afterwards, the former result will be crucial for estimating the rate of convergence in the homogenization problem for some stationary HJB equations.

Yalcin Kaya and Helmut Maurer: Numerical Methods for Multi-objective Optimal Control

We propose numerical methods for solving nonconvex multi-objective optimal control problems with control and state constraints. We employ a scalarization technique which reduces the problem to a single-objective optimal control problem. In contrast to a standard weighted-sum scalarization, we use a weighted Tschebychev scalarization that is particularly suited for nonconvex problems: it is surjective from the space of weights onto the Pareto set (front). Solutions (obtained via discretization) of a sequence of scalarized problems yield an approximation of the Pareto front. The numerical method is illustrated on two numerically challenging problems involving tumor anti-angiogenesis and a fed-batch bioreactor. The control problems exhibit bang-bang and singular controls as well as boundary controls for the state constraints.
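The scalarization in question has the following standard form (a generic statement; the talk's notation may differ):

```latex
% F_1,\dots,F_k: objectives; w in the unit simplex; z^*: utopia (ideal) point
\min_{(x,u)\ \mathrm{admissible}} \ \max_{i=1,\dots,k} \; w_i \bigl( F_i(x,u) - z_i^{*} \bigr)
```

Unlike the weighted sum \(\min \sum_i w_i F_i\), which cannot reach points on nonconvex parts of the Pareto front, every Pareto point minimizes the weighted Tschebychev scalarization for some weight vector w.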

Michele Palladino: Relaxed Optimal Control Problems

Relaxation is a regularization procedure used in optimal control, involving the replacement of velocity sets by their convex hulls, to ensure the existence of a minimizer. It can be an important step in the construction of sub-optimal controls for the original, unrelaxed, optimal control problem (which may not have a minimizer), based on obtaining a minimizer for the relaxed problem and approximating it. In some cases the infimum cost of the unrelaxed problem is strictly greater than the infimum cost over relaxed state trajectories; there is a need to identify such situations, because then the above procedure fails. Following on from earlier work by Warga, we explore the relation between, on the one hand, non-coincidence of the minimum cost of the optimal control problem and its relaxation and, on the other, abnormality of necessary conditions (in the sense that they take a degenerate form in which the cost multiplier is set to zero). For optimal control problems in which the dynamic constraint is formulated as a differential inclusion, we show that a local minimizer which is not also a relaxed local minimizer is an abnormal extremal, in the sense that it satisfies an abnormal form of the Hamiltonian inclusion in which the cost multiplier is zero. We also show that a relaxed local minimizer that is not also a local minimizer is a relaxed abnormal extremal. We discuss the extent to which the existence of an infimum gap is also manifested through the existence of abnormal extremals, also for optimal control problems in which the dynamic constraint is formulated as a differential equation with control.
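Schematically, relaxation and the infimum gap take the following form (standard notation for differential inclusions, not specific to the talk):

```latex
% original and relaxed dynamics; co = convex hull
\dot{x}(t) \in F\bigl( t, x(t) \bigr)
\quad \rightsquigarrow \quad
\dot{x}(t) \in \operatorname{co} F\bigl( t, x(t) \bigr),
% an infimum gap occurs when
\inf J \;>\; \inf J_{\mathrm{relaxed}} .
```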

Vryan G. Palma: Sensitivity based multistep model predictive control

Model predictive control (MPC) is a viable, well-established optimal control strategy for linear and nonlinear systems. In practice, however, several difficulties arise when MPC faces disturbances and uncertainties, in which case only suboptimal robust control strategies can be achieved.

We consider a variant of the MPC algorithm that takes into account the limited time and power, between control update times, available for computation of a control action. This variant uses a multistep feedback law that allows performing optimization less often and incorporates sensitivity analysis to update the entries of the multistep feedback to maintain robustness by counteracting the effect of disturbances.

We also examine sufficient conditions under which the sensitivity-based updates of the tail of the optimal control sequence yield practical asymptotic stability for the system.

Fernando Lobo Pereira: Dynamic Optimization Challenges in Autonomous Vehicle Systems

Humankind is facing huge societal challenges that call for advanced engineering systems, many of which may encompass multiple networked autonomous vehicles interacting with one another, with other devices, and with human operators.

The typical scarcity of resources available for the operation of such systems, together with the variety of state and control constraints - such as those arising in the specification of vehicle formations or induced by communication processes - and of performance criteria that have to be considered in the required control structures, makes the optimal control framework a paradigm of choice.

Moreover, the rich body of theory that has been consolidated over the years provides not only insight into solutions but also mechanisms for their characterization and computation. Maximum principles, value-function-based control, and model predictive control are some of the results and techniques that will be considered in this presentation, which reports on research undertaken at Porto University to address optimal control challenges arising in the control of autonomous vehicles and in the coordinated control of teams of such vehicles. (joint work with João Borges de Sousa)

Franco Rampazzo: (Sup+Bolza)-control problems as dynamic differential games

We consider an (L∞+Bolza)-control problem, namely a problem where the payoff is the sum of a sup functional (actually an ess-sup) and a classical Bolza functional. Owing to the ⟨L¹, L∞⟩ duality, the (L∞+Bolza)-control problem is rephrased in terms of a static differential game, where a new variable k plays the role of a maximizer. In this framework, 1−k is regarded as the fuel available to the maximizer. The relevant (and unusual) fact is that this static game is equivalent to the corresponding dynamic differential game, which allows the (upper) value function to verify a rather simple boundary value problem.
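Schematically, a payoff of this type has the following form (generic data Φ, l, g; not necessarily the talk's notation):

```latex
J(u) \;=\; \operatorname*{ess\,sup}_{t \in [0,T]} \Phi\bigl( t, x(t) \bigr)
\;+\; \int_{0}^{T} l\bigl( t, x(t), u(t) \bigr)\, dt
\;+\; g\bigl( x(T) \bigr)
```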

Mario Zanon: Fast Model Predictive Control and Moving Horizon Estimation for Tethered Planes

The Airborne Wind Energy paradigm proposes to generate energy by flying a tethered airfoil across the wind flow. An essential problem is posed by the control of the airfoil during the launching phase. One proposed strategy, the rotational startup, relies on a rotating platform to give the plane the momentum necessary to rise to the altitude where the wind is strong enough.

Mechanical systems like tethered rigid wings are typically characterized by fast-evolving dynamics. Such processes are often constrained, motivating control and estimation approaches based on Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE).

The control frequency of optimization-based techniques is limited by the symbolic complexity of the equations modeling the system. This complexity can often be dramatically reduced by using representations based on non-minimal coordinates, which result in index-3 differential-algebraic equations (DAEs).

To meet the real-time requirements, code generation of both algorithms is used, based on the Real-Time Iteration (RTI) scheme and direct multiple shooting. The exported plain C code makes use of efficient routines tailored to each specific problem, resulting in computational times in the range of a few milliseconds.

This event is co-funded by the European Union under the 7th Framework Programme «FP7-PEOPLE-2010-ITN», Grant agreement number 264735-SADCO.


contact: ompc2013[at-sign]uni-bayreuth.de
created by Lars Grüne, Robert Baier and Wolfgang Riedl

last change: September 25, 2013



University of Bayreuth