Christian Daniel

Quick Info

Research Interests

Motor Control & Learning, Robotics, Machine Learning, Biomimetic Systems.

More Information

Curriculum Vitae Publications Google Citations DBLP

Contact Information

Mail. TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
Office. Room E327, Robert-Piloty-Gebaeude S2|02
Phone. +49-6151-16 25371
Fax. +49-6151-16 25375

Christian Daniel joined the Institute for Intelligent Autonomous Systems in August 2011 as a Master's student. He was born and raised in Frankfurt am Main, Germany, where he also went to school and completed his civilian service.

Before writing his Master's thesis at the IAS, he received his Bachelor of Science from TU Darmstadt in the field of computational fluid dynamics. He then left Darmstadt for one year to study at EPFL in Lausanne, Switzerland, where he shifted his focus away from computational fluid dynamics toward robotics in general and artificial intelligence in particular. After finishing the academic year at EPFL, he had the opportunity to stay on as a research assistant in EPFL's LASA lab, working with Aude Billard and Dan Grollman. Back in Germany, he continued to specialize in AI and became a Master's student at the IAS lab. His thesis at IAS won the Datenlotsenpreis 2013 for the best Master's thesis in Computer Science. After his Master's thesis, Christian became a Ph.D. student at the IAS lab.

Christian's research focuses on skill and transfer learning, on how robot learning compares to human learning, and on what makes humans 'intelligent'. These are fundamental questions on the way to true artificial intelligence that still need to be answered. While skill learning has been studied for some time, much remains to be done; transfer learning, on the other hand, is still a largely unexplored area of research.


Key References

  1. Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2016). Hierarchical Relative Entropy Policy Search, Journal of Machine Learning Research (JMLR), 17, pp.1-50. [Details] [PDF] [BibTex] (the conference version received the IROS CoTeSys Cognitive Robotics Best Paper Award and was a finalist for both the IROS 2012 Best Paper Award and the IROS 2012 Best Student Paper Award)
  2. Daniel, C.; Kroemer, O.; Viering, M.; Metz, J.; Peters, J. (2015). Active Reward Learning with a Novel Acquisition Function, Autonomous Robots, 39, pp.389-405. [Details] [PDF] [BibTex]
  3. Daniel, C.; Taylor, J.; Nowozin, S. (2016). Learning Step Size Controllers for Robust Neural Network Training, National Conference of the American Association for Artificial Intelligence (AAAI). [Details] [PDF] [BibTex]
  4. Daniel, C.; van Hoof, H.; Peters, J.; Neumann, G. (2016). Probabilistic Inference for Determining Options in Reinforcement Learning, Machine Learning (ML), 104, 2-3, pp.337-357. [Details] [PDF] [BibTex] (received the Best Student Paper Award of ECML-PKDD 2016, sponsored by the Machine Learning journal)

For all publications, please see my Publication Page.

Videos

Active Reward Learning

Manually designing reward functions for real robot tasks is often a lengthy and complicated process. We show how machine learning methods can be used to learn a reward function from human ratings within the reinforcement learning framework, replacing hand-coded reward functions.

Daniel, C.; Kroemer, O.; Viering, M.; Metz, J.; Peters, J. (2015). Active Reward Learning with a Novel Acquisition Function, Autonomous Robots, 39, pp.389-405. [Details] [PDF] [BibTex]

Daniel, C.; Viering, M.; Metz, J.; Kroemer, O.; Peters, J. (2014). Active Reward Learning, Proceedings of Robotics: Science & Systems (R:SS). [Details] [PDF] [BibTex]
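The core loop behind this line of work can be illustrated with a toy sketch: fit a Gaussian process to the few outcomes a human has already rated, and use an acquisition function to decide which outcome to ask about next. This is only a minimal illustration under simplifying assumptions (a plain RBF kernel and posterior standard deviation as the acquisition function); the function names `gp_posterior` and `select_outcome_to_rate` are mine, and the papers above use a more sophisticated acquisition criterion.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_rated, ratings, X_query, noise=1e-3):
    """GP posterior mean/std of the reward at X_query, given human ratings."""
    K = rbf_kernel(X_rated, X_rated) + noise * np.eye(len(X_rated))
    Ks = rbf_kernel(X_query, X_rated)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ ratings
    # Prior variance is 1 for the RBF kernel; subtract the explained part.
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mean, np.sqrt(np.maximum(var, 0.0))

def select_outcome_to_rate(X_rated, ratings, X_candidates):
    """Toy acquisition: query the human about the most uncertain outcome."""
    _, std = gp_posterior(X_rated, ratings, X_candidates)
    return int(np.argmax(std))
```

With two rated outcomes at 0 and 1, a candidate far away (e.g. at 3) has the highest posterior uncertainty and would be the next query to the human; the learned GP mean then serves as the reward signal for the policy search.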

Finite Horizon Relative Entropy Search

Many tasks can only be solved by a combination of subskills. We show how a robot can learn to adapt a sequence of skills to achieve an overarching task goal.

Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2013). Learning Sequential Motor Tasks, Proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTex]
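The sequencing idea can be sketched in a few lines: given a set of skills and a value estimate for applying each skill in the current state, repeatedly execute the most promising skill and re-evaluate in the resulting state. This greedy finite-horizon loop is a simplified stand-in for the method in the paper; the names `sequence_skills`, `skills`, and `values` are illustrative assumptions.

```python
def sequence_skills(state, skills, values, horizon=3):
    """Greedy finite-horizon skill sequencing.

    skills:  list of callables mapping the current state to the next state.
    values:  callable scoring a (state, skill index) pair.
    Returns the executed skill indices and the final state.
    """
    executed = []
    for _ in range(horizon):
        # Pick the skill with the highest value estimate in the current state.
        best = max(range(len(skills)), key=lambda i: values(state, i))
        state = skills[best](state)
        executed.append(best)
    return executed, state
```

The actual approach learns both the skills and their value estimates from data; the sketch only shows how a sequence emerges once those pieces are in place.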

Hierarchical Relative Entropy Search

Real robot applications often allow for more than one solution. Learning multiple solutions for the same task increases the robot's robustness to changes in the environment, as learned backup solutions can be activated when the previously best solution becomes unavailable. Additionally, learning multiple solutions avoids the averaging problem and yields policies that are piecewise linear and can thus approximate non-linear relations.

Daniel, C.; Neumann, G.; Peters, J. (2012). Learning Concurrent Motor Skills in Versatile Solution Spaces, Proceedings of the International Conference on Robot Systems (IROS). [Details] [PDF] [BibTex]
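The benefit of keeping several solutions can be illustrated with a toy mixture policy: a small set of linear sub-policies ("options"), each with a score, where the robot falls back to the next-best option when the best one becomes unavailable. This is only a hand-written sketch of the fallback idea, not the hierarchical policy-search method from the paper; the class name and its fields are my own assumptions.

```python
import numpy as np

class MixtureOfLinearOptions:
    """Toy mixture policy: several linear sub-policies over a context vector,
    each with a score. Keeping all of them lets the robot fall back to a
    backup solution when the best one becomes infeasible."""

    def __init__(self, weights, biases, scores):
        self.W = np.asarray(weights)       # (n_options, ctx_dim)
        self.b = np.asarray(biases)        # (n_options,)
        self.scores = np.asarray(scores)   # one score per option

    def act(self, context, unavailable=()):
        """Pick the best available option; the action is linear in the context."""
        for o in np.argsort(-self.scores):
            if int(o) not in unavailable:
                return int(o), float(self.W[o] @ context + self.b[o])
        raise RuntimeError("no option available")
```

Because each option is linear in the context, the overall policy is piecewise linear, which is how a mixture of simple components can approximate a non-linear context-to-action mapping.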

