Our Research Field
Control Optimization Through Reinforcement Learning
What Motivates Us
Optimal process control requires expert knowledge of the system, and building this knowledge base demands significant manual effort. Reinforcement learning techniques have the potential to reduce reliance on a human-expert knowledge base. The ability of reinforcement learning to solve complicated problems through experience has been demonstrated, but the field still lacks the maturity required for industrial applications.
Our goal is to automate the initial set-up and enable lifelong tuning of process control. We focus on learning control, where we intend to exploit modelling knowledge and develop new exploration strategies to improve sample efficiency. Further, we explore approaches to ensure the integrity and safety of both the learned controllers and the learning process itself.
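The core idea behind exploiting modelling knowledge for sample efficiency can be illustrated with a minimal sketch (our own simplified example, not code from the publications listed below): transitions collected from the plant are used to fit a cheap dynamics model, and candidate controller gains are then evaluated on that model instead of on the real system. All names here (`plant`, `rollout_cost`, the linear model, the grid of PI gains) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(x, u):
    """Stand-in for the true (unknown) plant: x' = 0.9*x + 0.5*u + noise."""
    return 0.9 * x + 0.5 * u + 0.01 * rng.standard_normal()

# 1) Collect a small batch of transitions with random excitation.
X, U, Y = [], [], []
x = 0.0
for _ in range(50):
    u = rng.uniform(-1.0, 1.0)
    y = plant(x, u)
    X.append(x); U.append(u); Y.append(y)
    x = y

# 2) Fit a linear model x' ~ a*x + b*u by least squares.
a_hat, b_hat = np.linalg.lstsq(np.column_stack([X, U]),
                               np.array(Y), rcond=None)[0]

# 3) Evaluate candidate PI gains on the learned model -- no further
#    interaction with the plant is needed, which is where the sample
#    efficiency of model-based approaches comes from.
def rollout_cost(kp, ki, steps=100, setpoint=1.0):
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err
        u = kp * err + ki * integ
        x = a_hat * x + b_hat * u   # learned model, not the real plant
        cost += err ** 2
    return cost

candidates = [(kp, ki) for kp in np.linspace(0.1, 2.0, 20)
                       for ki in np.linspace(0.0, 0.5, 11)]
best = min(candidates, key=lambda g: rollout_cost(*g))
print("fitted model:", round(float(a_hat), 2), round(float(b_hat), 2))
print("best PI gains on model:", best)
```

The grid search over gains is a deliberately crude placeholder for the gradient-based and Bayesian-optimization policy search methods used in the publications below; the data flow (plant data, then model, then policy evaluation on the model) is the part this sketch is meant to show.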
As a company with a rich tradition in manufacturing, we explore the contribution of artificial intelligence to the vision of a self-calibrating factory. Reinforcement learning is a key enabler for the automatic adaptation of machines. Our research focuses on transferable knowledge and solutions that decrease setup times, as well as flexible control structures that increase performance potential.
Doerr et al. "Optimizing Long-term Predictions for Model-based Policy Search"
- Authors: Andreas Doerr, Christian Daniel, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal, Marc Toussaint, Sebastian Trimpe
- Published in CoRL in 2017
Doerr et al. "Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers"
- Authors: Andreas Doerr, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal, and Sebastian Trimpe
- Published in ICRA in 2017
Schillinger et al. "Safe Active Learning and Bayesian Optimization for Tuning a PI-Controller"
- Authors: Mark Schillinger, Benjamin Hartmann, Patric Skalecki, Mona Meister, Duy Nguyen-Tuong, and Oliver Nelles
- Published in IFAC in 2017
Daniel et al. "Probabilistic Inference for Determining Options in Reinforcement Learning"
- Authors: Christian Daniel, Herke van Hoof, Jan Peters, and Gerhard Neumann
- Published in ECML in 2016
Vinogradska et al. "Stability of Controllers for Gaussian Process Forward Models"
- Authors: Julia Vinogradska, Bastian Bischoff, Duy Nguyen-Tuong, Henner Schmidt, Anne Romer, and Jan Peters
- Published in ICML in 2016
Bischoff et al. "Learning Throttle Valve Control Using Policy Search"
- Authors: Bastian Bischoff, Duy Nguyen-Tuong, Torsten Koller, Heiner Markert, and Alois Knoll
- Published in ECML in 2014
Tietze et al. "Model-based Calibration of Engine Controller Using Automated Transient Design of Experiment"
- Authors: Nils Tietze, Ulrich Konigorski, Christian Fleck, and Duy Nguyen-Tuong
- Published in ISSAM in 2014
Bischoff et al. "Policy Search for Learning Robot Control Using Sparse Data"
- Authors: B. Bischoff, D. Nguyen-Tuong, H. van Hoof, A. McHutchon, C. E. Rasmussen, A. Knoll, J. Peters, and M. P. Deisenroth
- Published in ICRA in 2014
Bischoff et al. "Hierarchical Reinforcement Learning for Robot Navigation"
- Authors: B. Bischoff, D. Nguyen-Tuong, I-H. Lee, F. Streichert, and A. Knoll
- Published in ESANN in 2013
Bischoff et al. "Learning Control Under Uncertainty: A Probabilistic Value-Iteration Approach"
- Authors: B. Bischoff, D. Nguyen-Tuong, H. Markert, and A. Knoll
- Published in ESANN in 2013
Bischoff et al. "Solving the 15-Puzzle Game Using Local Value-Iteration"
- Authors: Bastian Bischoff, Duy Nguyen-Tuong, Heiner Markert, and Alois Knoll
- Published in SCAI in 2013