Machine Learning, Reinforcement Learning, Optimal Control
Learning control has become a viable approach in both the machine learning and control communities. Many successful applications impressively demonstrate its advantages: in contrast to classical control methods, learning control does not presuppose a detailed understanding of the underlying dynamics but instead infers the required information from data. Thus, relatively little expert knowledge about the dynamics is needed, and fewer assumptions, such as a parametric model form and parameter estimates, must be made.
Since it is desirable to minimize system interaction time in real-world applications, model-based approaches are often preferred. One drawback of model-based approaches, however, is that the model is inherently approximate, yet it is implicitly assumed to capture the system dynamics sufficiently well. These conflicting assumptions can derail learning, and solutions to the approximate control problem may fail at the real-world task, especially when predictions are highly uncertain. Gaussian processes (GPs) offer an elegant, fully Bayesian approach to modeling system dynamics that explicitly represents this uncertainty: given observed data, a GP infers a distribution over all plausible dynamics models. This makes GPs a natural choice for model-based reinforcement learning.
Julia's research focuses on closed-loop control systems with GP forward dynamics models. There are several open questions in this field that she hopes to address during her PhD. One major difficulty with GPs as forward dynamics models in closed-loop control is that predictions become intractable when the input to the GP is a distribution. Well-known approximation methods exist that can be computed efficiently but offer only rather rough estimates of the output state distribution. However, several applications demand highly accurate multi-step-ahead predictions for which these estimates are not precise enough.

One such field that requires high-precision approximate inference is stability analysis of closed-loop control systems with a GP as the forward dynamics model. Stability analysis evaluates the system behaviour under a given control policy: for example, one may ask whether a policy succeeds, or from which starting states it succeeds. In particular, the goal is to derive guarantees that the system will exhibit a certain desired behaviour. While stability analysis in classical control dates back to the 19th century, there has been little research in this direction for GP dynamics models so far. Yet such guarantees are crucial for learning control in safety-critical applications. Julia works on several problems in this field: (1) highly accurate approximations for multi-step-ahead predictions that enable stability analysis; (2) stability of the closed-loop control structure (i) for finite time horizons, (ii) in the presence of disturbances, and (iii) asymptotically; and (3) learning control based on GP forward dynamics for finite and infinite time horizons.
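The multi-step-ahead prediction problem described above can be illustrated with a simple particle (Monte Carlo) rollout, which is one baseline way to push a state distribution through a closed loop; everything here is a stand-in assumption, not Julia's method: the linear policy, the damped toy dynamics playing the role of a learned GP's one-step predictive mean and variance, and the Gaussian initial state are all hypothetical.

```python
import numpy as np

def policy(x, gain=-0.5):
    """Hypothetical linear state-feedback policy u = gain * x."""
    return gain * x

def step_mean_var(x, u):
    """Stand-in for a GP one-step prediction: predictive mean and variance
    of x_{t+1}. A damped toy dynamic with constant noise plays that role."""
    return 0.9 * x + u, 0.01 * np.ones_like(x)

def mc_rollout(x0_samples, horizon=20, seed=1):
    """Propagate a particle approximation of the state distribution through
    the closed loop: each step samples x_{t+1} from the model's predictive
    Gaussian at every particle, so uncertainty accumulates over the horizon."""
    rng = np.random.default_rng(seed)
    x = x0_samples.copy()
    for _ in range(horizon):
        u = policy(x)
        mean, var = step_mean_var(x, u)
        x = mean + np.sqrt(var) * rng.standard_normal(x.shape)
    return x

# Start from an uncertain initial state and roll the loop forward.
x0 = np.random.default_rng(0).normal(2.0, 0.1, size=1000)
xT = mc_rollout(x0)
```

In this toy loop the particles contract toward the origin, i.e. the policy stabilizes the system; the accuracy bottleneck the text refers to is that cheap closed-form approximations (e.g. moment matching) summarize the particle cloud at every step by a single Gaussian, which can be too coarse for deriving stability guarantees.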
Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Peters, J. (2017). Stability of Controllers for Gaussian Process Forward Models. Journal of Machine Learning Research (JMLR), 18(100), pp. 1–37.
Vinogradska, J. (2017). Gaussian Processes in Reinforcement Learning: Stability Analysis and Efficient Value Propagation. PhD Thesis.
Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Romer, A.; Schmidt, H.; Peters, J. (2016). Stability of Controllers for Gaussian Process Forward Models. Proceedings of the International Conference on Machine Learning (ICML).