Oleg Arenz

Quick Info

Research Interests

Machine Learning, Robotics, Inverse Reinforcement Learning, Imitation Learning, Grasping and Manipulation, Reinforcement Learning

More Information

Publications · Google Citations · DBLP

Contact Information

Oleg Arenz
TU Darmstadt, FG IAS,
Hochschulstr. 10, 64289 Darmstadt
Office: Room E226, Building S2|02
Phone (work): +49-6151-16-20073

Oleg Arenz joined the Computational Learning for Autonomous Systems Lab on May 1, 2015, as a PhD student. His research includes imitation learning, inverse reinforcement learning, and robot grasping and manipulation. During his PhD, Oleg is working on the RoMaNS project.

Before his PhD, Oleg completed both his Bachelor's degree in Computer Science and his Master's degree in Autonomous Systems at Technische Universität Darmstadt. His master's thesis, entitled “Feature Extraction for Inverse Reinforcement Learning”, was written under the supervision of Gerhard Neumann and Christian Daniel.


Research Interests

Robots can learn to accomplish a given task by imitating previously observed demonstrations. However, in order to adapt to new situations, true imitation must go beyond blindly repeating demonstrated actions. Instead, imitation learning is deeply connected to the problem of learning the intentions behind observed behaviour. Inverse Reinforcement Learning can recover these intentions in the form of a reward function, and Reinforcement Learning can then perform the imitation by learning a policy that maximizes that reward function.
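
To make the IRL/RL connection concrete, the following is a minimal sketch in Python (an illustration, not taken from any of the publications below): rewards are linear in state features, value iteration plays the role of the RL inner loop, and the reward weights are nudged toward the expert's feature expectations, in the spirit of feature-matching IRL. The toy dynamics, features, and expert statistics are all assumptions made for this example.

import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

# Assumed toy dynamics P[a, s, s'] and one-hot state features phi(s).
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
phi = np.eye(n_states)
# Assumed expert statistics: normalized discounted state visitation.
expert_features = np.array([0.1, 0.2, 0.1, 0.6])

def solve_policy(r):
    # RL step: value iteration for the current reward r(s); greedy policy.
    V = np.zeros(n_states)
    for _ in range(200):
        Q = r[None, :] + gamma * (P @ V)   # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)                # best action per state

def feature_expectations(pi):
    # Normalized discounted feature expectations of policy pi.
    P_pi = P[pi, np.arange(n_states), :]   # transition matrix under pi
    d0 = np.full(n_states, 1.0 / n_states)
    occ = np.linalg.solve(np.eye(n_states) - gamma * P_pi.T, d0)
    return phi.T @ (occ * (1.0 - gamma))

w = np.zeros(n_states)                     # reward weights, r(s) = w . phi(s)
for _ in range(50):
    pi = solve_policy(phi @ w)             # RL: best response to current reward
    mu = feature_expectations(pi)
    w += 0.1 * (expert_features - mu)      # IRL: match expert feature statistics
print("learned reward weights:", w)

The alternation is the essential point: each RL solve produces a policy whose feature statistics are compared against the expert's, and the reward is adjusted until the two match.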

Many real-world applications such as autonomous driving have an intractable number of possible states. As even experts are usually not able to identify all relevant features, Inverse Reinforcement Learning depends on feature extraction for learning meaningful reward functions. Furthermore, intentions can be inferred at different levels of abstraction, e.g. steering a car to the right might serve the purpose of taking a corner, while taking that corner might itself serve the purpose of reaching a given destination. Hierarchical reward functions ease both Inverse Reinforcement Learning and Reinforcement Learning by making it possible to reuse previously learned high-level goals and low-level strategies, as sketched below. The problem of building and utilizing such hierarchical decompositions provides an interesting route for future research.
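
The hierarchical decomposition can be illustrated with a small sketch (an assumed structure for illustration, not a specific published method): a high-level reward ranks subgoals with respect to the overall destination, while a reusable low-level reward scores progress toward whichever subgoal is active.

from dataclasses import dataclass
import numpy as np

@dataclass
class Subgoal:
    name: str
    target: np.ndarray          # desired low-level state, assumed known

def low_level_reward(state: np.ndarray, subgoal: Subgoal) -> float:
    # Reusable low-level strategy: negative distance to the subgoal target.
    return -float(np.linalg.norm(state - subgoal.target))

def high_level_reward(subgoal: Subgoal, destination: np.ndarray) -> float:
    # High-level intention: prefer subgoals that approach the destination.
    return -float(np.linalg.norm(subgoal.target - destination))

# Usage: the same low-level reward is reused across subgoals, so only the
# high-level layer must be re-learned when the overall goal changes.
corner = Subgoal("take_corner", np.array([1.0, 0.0]))
state, destination = np.array([0.0, 0.0]), np.array([2.0, 1.0])
print(low_level_reward(state, corner), high_level_reward(corner, destination))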

Keywords

Machine Learning, Robotics, Inverse Reinforcement Learning, Imitation Learning, Manipulation and Grasping, Hierarchical Learning, Feature Extraction, Reinforcement Learning

Key References

  1. Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching. Proceedings of the International Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTex]
  2. Arenz, O. (2014). Feature Extraction for Inverse Reinforcement Learning. Master Thesis. [Details] [PDF] [BibTex]

