I have graduated and moved to TU Delft, the Netherlands, where I am an assistant professor. Check out my new homepage ...

Jens Kober

Jens Kober joined the Max Planck Institute for Biological Cybernetics in 2007 as a master's student in the Robot Learning Lab (part of the Department of Bernhard Schölkopf), working with Jan Peters, and stayed on as a Ph.D. student. From 2011 to 2012, he was a member of the Max Planck Institute for Intelligent Systems, as his lab had moved there. Before that, he studied at the University of Stuttgart and at the École Centrale Paris (ECP).

Quick Info

Research Interests

Robotics, Machine Learning.

More Information

Curriculum Vitae Publications Google Citations DBLP

Contact Information

DCSC, Building 34, Mekelweg 2, 2628 CD Delft, The Netherlands
Work: +31 (0)15 27 85150
Fax: +31 (0)15 27 86679

In 2008, he completed the T.I.M.E. double-degree program, graduating from the University of Stuttgart with a Diplom-Ingenieur in Engineering Cybernetics (a German M.Sc. majoring in automation & control) and from ECP as a Centralien (a French engineering degree with an integrated multidisciplinary approach). He has been a visiting research student at the Advanced Telecommunication Research (ATR) Center in Japan and an intern at Disney Research Pittsburgh, USA. Please see his curriculum vitae for more biographical information.

Jens Kober completed his Ph.D. at the Technische Universität Darmstadt in the area of motor skill learning, with a strong focus on learning motor primitives and on reinforcement learning. His Ph.D. thesis committee included Oskar von Stryk, Stefan Schaal, Johannes Fürnkranz, Stefan Roth, and, of course, his advisor Jan Peters. His thesis won the 2013 Georges Giralt PhD Award as the best robotics Ph.D. thesis in Europe in 2012 and can be found here.

Jens Kober was a teaching assistant for the Projektpraktikum: Lernende Roboter (practical project course "Learning Robots") at TU Darmstadt in the winter semester 2011/12 and the summer semester 2012.

Integrating highly complex robots into daily life requires them to become less dependent on preprogrammed behaviors and exception handling. As biological research has shown, complex movements are often the result of combining simple motor primitives. This concept has also been applied successfully in robotics, but a lot of research remains to be done to make the current formulations more versatile. In nature, as in robotics, tasks are often learned by observation and imitation. In many cases the imitation is imperfect and has to be improved; reinforcement learning is a natural choice for this step.

Jens graduated in spring 2012 with a Doctor of Engineering degree from the Technische Universität Darmstadt. During his postdoc, he was affiliated with the CoR-Lab at Universität Bielefeld, Germany, and worked at the Honda Research Institute Europe in Offenbach, Germany. Since January 2015, Jens has been an assistant professor at the Delft Center for Systems and Control, TU Delft, the Netherlands, and a member of the TU Delft Robotics Institute.

Jens's collaborators include Betty Mohler, Silvia Chiappa, Katharina Muelling, Oliver Kroemer, Christoph Lampert, Bernhard Schölkopf, Erhan Oztop, Jan Peters, Michael Gienger, Jochen J. Steil, and Simon Manschitz.

Software

A basic MATLAB/Octave implementation of the PoWER algorithm [1]: matlab_PoWER.zip
The required motor primitive code can be downloaded from http://www-clmc.usc.edu/Resources/Software.
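For readers without MATLAB, the core idea behind PoWER [1] — reward-weighted averaging of exploratory parameter perturbations — can be sketched in a few lines of Python. This is a simplified, state-independent-exploration variant for illustration only; the function name, rollout counts, and constants are assumptions of this sketch, not taken from the downloadable code:

```python
import numpy as np

def power_update(theta, sigma, reward_fn, n_rollouts=20, n_best=10, rng=None):
    """One PoWER-style update: perturb the policy parameters with
    Gaussian exploration noise, evaluate each rollout, and move the
    parameters by the reward-weighted average of the best perturbations.
    reward_fn maps a parameter vector to a scalar return."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))
    returns = np.array([reward_fn(theta + e) for e in eps])
    # keep only the highest-return rollouts (importance-sampling heuristic)
    best = np.argsort(returns)[-n_best:]
    w = returns[best] - returns.min()          # shift so all weights are >= 0
    w_sum = w.sum() + 1e-12                    # guard against all-equal returns
    return theta + (w[:, None] * eps[best]).sum(axis=0) / w_sum
```

As a toy usage example, maximizing `reward_fn = -||theta - target||^2` by calling `power_update` in a loop drives `theta` toward `target` without ever computing a gradient, which is the property that makes this family of methods attractive for episodic motor-skill learning.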

A basic MATLAB/Octave implementation of the motor primitives for hitting and batting [4]: hittingMP.m
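The hitting and batting templates in [4] build on dynamic movement primitives. As a rough orientation for readers without MATLAB, here is a minimal one-dimensional discrete DMP integrator in Python — the standard Ijspeert-style formulation, not the modified hitting/batting version from the paper; all gains and basis-function settings are illustrative assumptions:

```python
import numpy as np

def dmp_rollout(y0, g, T=1.0, dt=0.002, alpha=25.0, beta=6.25, alpha_x=8.0,
                weights=None):
    """Integrate a basic discrete dynamic movement primitive: a
    critically damped spring toward the goal g, modulated by a forcing
    term of Gaussian basis functions driven by a decaying canonical
    variable x. With zero weights the trajectory converges smoothly
    from y0 to g; learning shapes the path via the weights."""
    if weights is None:
        weights = np.zeros(10)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, len(weights)))
    widths = np.full(len(weights), 50.0)
    y, yd, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)  # forcing term
        ydd = alpha * (beta * (g - y) - yd) + f   # transformation system
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt                    # canonical system (x: 1 -> 0)
        traj.append(y)
    return np.array(traj)
```

The hitting formulation in [4] changes this template so that the movement passes through a hitting point with a desired velocity rather than merely settling at a goal; this sketch only shows the unmodified point-to-point case.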

Key References

  1. Kober, J.; Peters, J. (2009). Policy Search for Motor Primitives in Robotics. Advances in Neural Information Processing Systems 21 (NIPS 2008), Cambridge, MA: MIT Press. [Details] [PDF] [BibTeX] Extended version: Kober, J.; Peters, J. (2011). Policy Search for Motor Primitives in Robotics. Machine Learning (MLJ), 84(1-2), pp. 171-203. [Details] [PDF] [BibTeX]
  2. Kober, J.; Peters, J. (2009). Learning Motor Primitives for Robotics. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX] Extended version: Kober, J.; Peters, J. (2010). Imitation and Reinforcement Learning: Practical Algorithms for Motor Primitive Learning in Robotics. IEEE Robotics and Automation Magazine, 17(2), pp. 55-62. [Details] [PDF] [BibTeX]
  3. Chiappa, S.; Kober, J.; Peters, J. (2009). Using Bayesian Dynamical Systems for Motion Template Libraries. Advances in Neural Information Processing Systems 21 (NIPS 2008), Cambridge, MA: MIT Press. [Details] [PDF] [BibTeX]
  4. Kober, J.; Muelling, K.; Kroemer, O.; Lampert, C.H.; Schoelkopf, B.; Peters, J. (2010). Movement Templates for Learning of Hitting and Batting. IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  5. Kober, J.; Oztop, E.; Peters, J. (2010). Reinforcement Learning to Adjust Robot Movements to New Situations. Proceedings of Robotics: Science and Systems (R:SS). [Details] [PDF] [BibTeX] Extended version: Kober, J.; Wilhelm, A.; Oztop, E.; Peters, J. (2012). Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations. Autonomous Robots (AURO), 33(4), pp. 361-379, Springer US. [Details] [PDF] [BibTeX]
  6. Kober, J.; Bagnell, D.; Peters, J. (2013). Reinforcement Learning in Robotics: A Survey. International Journal of Robotics Research (IJRR), 32(11), pp. 1238-1274. [Details] [PDF] [BibTeX]
