Elmar Rueckert

Quick Info

Research Interests

Biologically Inspired Motor Skill Learning, Probabilistic Inference, Meta- or Structure Learning for Robotics, Reinforcement Learning

More Information

Curriculum Vitae | Publications | Google Citations | Frontiers Network | ResearchGate Network | DBLP | Academia.edu | ORCID

Contact Information

Mail. TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
Office. Room E323, Robert-Piloty-Gebaeude S2|02
Phone. +49-6151-16-25376 (work)

Elmar Rueckert joined the IAS group as a Post-Doc in March 2014. He investigates computational models of the motor control system and validates them in robotic applications. A strong emphasis is placed on neurorobotics (N), deep networks and machine learning (D), neuro-prostheses applications with brain-machine interfaces (P), and computational neural models of human motor control and learning (H).

Before coming to Darmstadt, Elmar did his Ph.D. at the Graz University of Technology (TUG) under the supervision of Wolfgang Maass. He started his Ph.D. studies in February 2010 and defended his thesis with distinction in February 2014. During his Ph.D., he worked on the AMARSi project, where he developed novel reinforcement learning algorithms for motor planning using probabilistic inference, designed biologically inspired movement primitive representations based on muscle synergies, and investigated how networks of spiking neurons can solve motor control and motor planning problems. He also collaborated with Marc Toussaint, Andrea d'Avella, and Thomas Schack. His thesis, "On Biologically inspired motor skill learning in robotics through probabilistic inference", concentrated on probabilistic inference for motor skill learning and on learning biologically inspired movement representations.

Elmar was born in Unterpremstätten, Austria. He received his qualification for university entrance at the technical high school for electronic engineering and informatics, HTL Graz Gösting. Before starting his Ph.D., he completed his studies in telematics at TUG in 2010. From 2012 to 2014, Elmar taught the data structures and algorithms lecture, which earned him an excellent reputation among his students. During his Ph.D., he also supervised several student projects on machine learning and robotics.

Research Interests

Neurorobotics

Deep Networks and Machine Learning

Computational Neural Models of Human Motor Control

Neuroprostheses and Brain-Machine Interfaces

  • Novel training schemes for Brain-Computer Interfaces
  • Cybathlon-related signal processing and pattern recognition

Key References

1. Rueckert, E.; Camernik, J.; Peters, J.; Babic, J. (2016). Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control, Nature PG: Scientific Reports, 6, 28455. [Details] [PDF] [BibTex]

  Matlab Code: Probabilistic Trajectory Model

2. Rueckert, E.; Kappel, D.; Tanneberg, D.; Pecevski, D.; Peters, J. (2016). Recurrent Spiking Networks Solve Planning Tasks, Nature PG: Scientific Reports, 6, 21142. [Details] [PDF] [BibTex]

  Matlab Code: Neural Network Framework

3. Rueckert, E.; Mundo, J.; Paraschos, A.; Peters, J.; Neumann, G. (2015). Extracting Low-Dimensional Control Variables for Movement Primitives, Proceedings of the International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTex]

4. Rueckert, E.A.; Neumann, G.; Toussaint, M.; Maass, W. (2013). Learned graphical models for probabilistic planning provide a new class of movement primitives, Frontiers in Computational Neuroscience, 6, 97. [Details] [PDF] [BibTex]

5. Rueckert, E.A.; d'Avella, A. (2013). Learned parametrized dynamic movement primitives with shared synergies for controlling robotic and musculoskeletal systems, Frontiers in Computational Neuroscience, 7, 138. [Details] [PDF] [BibTex]

6. Rueckert, E.A.; Neumann, G. (2012). Stochastic Optimal Control Methods for Investigating the Power of Morphological Computation, Artificial Life. [Details] [PDF] [BibTex]

For all publications, please see his Publication Page.

Software

  • Supplementary Matlab Code to Recurrent Spiking Networks Solve Planning Tasks. The framework contains several demo programs covering different sampling methods as well as discrete and continuous problems, and it illustrates the effect of online and offline learning rules. It also demonstrates how reinforcement learning and imitation learning can be implemented and applied to robotic tasks.
  • Matlab Code MEX-Function implementation of Locally Weighted Regression (LWR) for real-time predictions of learned models (a minimal prediction sketch follows after this list).
  • Matlab Code Extracting Low-Dimensional Control Variables for Movement Primitives
  • Matlab Code Robust Policy Updates for Stochastic Optimal Control
  • Matlab Code Sensor Glove Matlab Mex Interface
  • Matlab Code Probabilistic Model of Trajectories (a sub-part of ProMPs)
  • OpenSim/Simtk Matlab Interface and Biped Walker Matlab Simulator used in Rueckert & d'Avella (2013), Frontiers in Computational Neuroscience, 7, 138 (Key Reference 5 above).
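
A minimal MATLAB sketch of the locally weighted regression (LWR) prediction step mentioned in the list above, assuming a Gaussian weighting kernel and a local linear model; the function name lwr_predict and its interface are hypothetical and do not reflect the released MEX implementation.

    function y_q = lwr_predict(X, y, x_q, h)
    % LWR_PREDICT  Locally weighted linear regression prediction at one query point.
    %   X   : N x d matrix of training inputs
    %   y   : N x 1 vector of training targets
    %   x_q : 1 x d query input
    %   h   : bandwidth of the Gaussian weighting kernel (hypothetical parameter)
        d2 = sum(bsxfun(@minus, X, x_q).^2, 2);   % squared distances to the query
        w  = exp(-d2 / (2 * h^2));                % Gaussian weights of the training points
        A  = [ones(size(X, 1), 1), X];            % inputs augmented with a bias term
        AW = bsxfun(@times, A, w);                % row-wise weighting, AW = diag(w) * A
        theta = (AW' * A) \ (AW' * y);            % weighted least-squares solution
        y_q = [1, x_q] * theta;                   % local linear prediction at x_q
    end

    % Example usage (with some training data X, y and a new input x_new):
    % y_hat = lwr_predict(X, y, x_new, 0.1);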

Workshops and Summer Schools

  • [2016, Organized by Elmar Rueckert and Martin Riedmiller] 1.5-day NIPS workshop. Title: Neurorobotics: A chance for new ideas, algorithms and approaches.
  • [2014, Invited Talk] TEDUSAR Summer School. Title: An introduction to robot learning and probabilistic movement planning.
  • [2011, Organized by Gerhard Neumann and Elmar Rueckert] Two-day workshop. Title: Hands-on Probabilistic Inference for Motor Control.

Current Ph.D. Students

Start | Student | Advisor | Type | Topic | Related Document(s)
11/2016 | Svenja Stark | Elmar Rückert, Jan Peters | Ph.D. | Intrinsic Motivation Strategies for Learning Motor Skills |
10/2015 | Daniel Tanneberg | Elmar Rückert, Jan Peters | Ph.D. | Deep Neural Networks for Open-ended Robot Skill Learning |

Current Bachelor and Master Students

Start | Student | Advisor | Type | Topic | Related Document(s)
10/2016 | Moritz Nakatenus | Elmar Rückert | M.Sc. Project | LSTM Networks for movement planning in humanoids |
10/2016 | Kaushik Gondaliya | Elmar Rückert, Jan Peters, with IBM.com | M.Sc. Thesis | Learning to Categorize Issues in Distributed Bug Tracker Systems |
06/2016 | Sonja Hanek | Elmar Rückert | B.Sc. Thesis | Multi-Scale Latent Modelling of the HuMoD database |
05/2016 | Simon Thiem | Elmar Rückert | B.Sc. Thesis | Deep Neural Networks for Controlling Humanoids |
05/2016 | Denny Dittmar | Elmar Rückert | M.Sc. Thesis | Neural Policy Search |
04/2016 | Jialiang Gao | Elmar Rückert | M.Sc. Thesis | Stochastic Optimal Control of Humanoid Robots in multi-contact environments |
10/2015 | Harun Polat | Elmar Rückert | B.Sc. Thesis | Wachsende Neuronale Netze zur Bewegungskoordination (Growing Neural Networks for Movement Coordination) | Project description
02/2015 | Viktor Pfanschilling | Elmar Rückert | B.Sc. Project | Genetic Reactive Programming |

Supervised Theses at IAS

Year | Student | Advisor | Type | Topic | Document
2017 | David Sharma | Elmar Rückert, Daniel Tanneberg, Moritz Grosse-Wentrup | M.Sc. Thesis | Adaptive Training Strategies for Brain-Computer-Interfaces | abstract pdf, thesis pdf
2016 | Lena Plage | Daniel Tanneberg, Elmar Rückert | B.Sc. Thesis | Learning in-hand manipulation skills through kinesthetic teaching with sensor gloves | pdf
2016 | Mike Smyk | Elmar Rückert | M.Sc. Project | Model-based Control and Planning on Real Robots | journal paper in preparation
2016 | Svenja Stark | Elmar Rückert, Tucker Hermans | M.Sc. | Learning Probabilistic Feedforward and Feedback Policies for Stable Walking | pdf
2016 | Jan Kohlschuetter | Elmar Rückert | M.Sc. | Learning Probabilistic Classifiers from Electromyography Data for Predicting Knee Abnormalities | pdf
2015 | Daniel Tanneberg | Elmar Rückert | M.Sc. | Spiking Neural Networks Solve Robot Planning Problems | pdf
2014 | Max Mindt | Elmar Rückert | M.Sc. | Probabilistic Inference for Movement Planning in Humanoids | pdf
2014 | Jan Mundo | Elmar Rückert, Gerhard Neumann | M.Sc. | Structure Learning for Movement Primitives | pdf

Supervised Theses and Projects at the Graz University of Technology

Year | Student | Advisor | Type | Topic | Document
2013 | Oliver Prevenhueber | Elmar Rückert | M.Sc. (Thesis) | Monte Carlo Sampling Methods for Motor Control of Constraint High-dimensional Systems | available on request
2013 | Othmar Gsenger | Elmar Rückert | M.Sc. (Thesis) | Probabilistic Models for Learning the Dynamics Model of Robots | available on request
2013 | Gerhard Kniewasser | Elmar Rückert | M.Sc. (Project) | Reinforcement Learning with Dynamic Movement Primitives - DMPs | available on request
2012 | Oliver Prevenhueber | Elmar Rückert | M.Sc. (Project) | Gibbs Sampling Methods for Motor Control Problems with Hard Constraints | available on request
2012 | Tim Genewein | Gerhard Neumann, Elmar Rückert | M.Sc. (Thesis) | Structure Learning for Motor Control | available on request
2011 | Thomas Wiesner | Elmar Rückert | B.Sc. (Thesis) | Ein Vergleich von Lernalgorithmen für Parametersuche im hochdimensionalen Raum (A Comparison of Learning Algorithms for Parameter Search in High-Dimensional Space) | available on request

  
