Projects

We are partners in several projects and also manage a thematic programme for PASCAL2. In the past at MPI, we were also part of a PASCAL2 Pump Priming Project with Koby Crammer (Technion). Within TU Darmstadt, we are part of the MoTaSyS project.

SKILLS4ROBOTS (2015-2020; ERC Starting Grant)

The goal of SKILLS4ROBOTS is to develop an autonomous skill learning system that enables humanoid robots to acquire and improve a rich set of motor skills. This robot skill learning system will allow scaling motor abilities up to fully anthropomorphic robots, overcoming the limitation of current skill learning systems to only a few degrees of freedom. To achieve this goal, it will decompose complex motor skills into simpler elemental movements, called movement primitives, that serve as building blocks for the higher-level movement strategy; the resulting architecture will be able to address arbitrary, highly complex tasks, up to robot table tennis for a humanoid robot. Learned primitives will be superimposed, sequenced and blended. For example, a game of robot table tennis can be represented using different stroke movement primitives, such as a forehand stroke, a backhand stroke or a smash, as well as locomotion primitives for foot placement that maintain balance by shifting the robot's center of mass. The resulting decomposition into building blocks is not only inherent to many motor tasks but also highly scalable, and it will be exploited by our learning system. Four recent breakthroughs in our research make this project possible: successes on parametric probabilistic representations of the elementary movements, on probabilistic imitation learning, on reinforcement learning based on relative entropy policy search, and on the modular organization of the representation. These breakthroughs will allow us to create a general, autonomous skill learning system that can learn many different skills in the exact same framework without changing a single line of programmed code.
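
The following is a minimal, self-contained sketch (not the project's actual implementation) of how two learned movement primitives, each represented by weights over Gaussian basis functions as in common probabilistic movement-primitive formulations, could be blended into one motion; all names and numbers are illustrative assumptions.

```python
# Illustrative sketch only: blending two movement primitives that are
# represented as weight vectors over normalized Gaussian basis functions.
import numpy as np

def basis(t, n_basis=10):
    """Normalized Gaussian basis activations at phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * ((t - centers) / 0.1) ** 2)
    return phi / phi.sum()

def primitive(weights, t):
    """Trajectory value of a single primitive at phase t."""
    return basis(t, len(weights)) @ weights

def blend(w_a, w_b, alpha, t):
    """Convex combination of two primitives; alpha gates between them."""
    return alpha * primitive(w_a, t) + (1.0 - alpha) * primitive(w_b, t)

# Example: smoothly hand over from a 'forehand' to a 'backhand' primitive.
w_forehand = np.random.randn(10)  # placeholder for learned weights
w_backhand = np.random.randn(10)
for t in np.linspace(0.0, 1.0, 5):
    print(blend(w_forehand, w_backhand, alpha=1.0 - t, t=t))
```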

Team Leader: Jan Peters
Contacts: Jan Peters, Boris Belousov, Dorothea Koert, Hany Abdulsamad
Publications: Check here!

GOAL-Robots (2017-2020; EU H2020 FET)

This project aims to develop a new paradigm for building open-ended learning robots, called Goal-based Open-ended Autonomous Learning (GOAL). GOAL rests upon two key insights. First, to exhibit an autonomous open-ended learning process, robots should be able to self-generate goals, and hence tasks to practice. Second, new learning algorithms can leverage self-generated goals to dramatically accelerate skill learning. The new paradigm will allow robots to acquire a large repertoire of flexible skills in conditions unforeseeable at design time with little human intervention, and then to exploit these skills to efficiently solve new user-defined tasks with little or no additional learning. This innovation will be essential in the design of future service robots addressing pressing societal needs. The project will develop the GOAL paradigm by pursuing three main objectives: (1) advance our understanding of how goals are formed and underlie skill learning in children; (2) develop innovative computational architectures and algorithms supporting (2a) the self-generation of useful goals based on user/task-independent mechanisms such as intrinsic motivations, and (2b) the use of such goals to efficiently and autonomously build large repertoires of skills; (3) demonstrate the potential of GOAL with a series of increasingly challenging demonstrators in which robots will autonomously develop complex skills and use them to solve difficult challenges in real-life scenarios. The interdisciplinary project consortium is formed by leading international roboticists, computational modelers, and developmental psychologists working with complementary approaches. This will allow the project to greatly advance our understanding of the fundamental principles of open-ended learning and to produce a breakthrough in the field of autonomous robotics by producing, for the first time, robots that can autonomously accumulate complex skills and knowledge in a truly open-ended way.
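
As a toy illustration of the first insight, the sketch below shows one common intrinsic-motivation heuristic: sampling self-generated goals in proportion to recent learning progress. It is an assumption for illustration, not the GOAL architecture itself.

```python
# Illustrative sketch: pick practice goals where learning progress is high.
import random

class GoalGenerator:
    def __init__(self, goals):
        self.progress = {g: 1.0 for g in goals}  # optimistic initialization

    def sample(self):
        """Sample a goal with probability proportional to its progress."""
        total = sum(self.progress.values())
        r, acc = random.uniform(0.0, total), 0.0
        for goal, p in self.progress.items():
            acc += p
            if r <= acc:
                return goal

    def update(self, goal, old_success, new_success):
        """Learning progress = change in success rate on this goal."""
        self.progress[goal] = abs(new_success - old_success) + 1e-3

gen = GoalGenerator(["reach", "push", "stack"])
goal = gen.sample()                         # the robot picks its own task
gen.update(goal, old_success=0.2, new_success=0.5)
```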

Team Leader: Elmar Rueckert
Contacts: Elmar Rueckert, Daniel Tanneberg, Svenja Stark, Jan Peters
Publications: Check here!
Link: GOAL-Robots project

LearnRobotS (2015-2018; DFG Project, SPP Autonomous Learning)

The goal of this project is to develop a hierarchical learning system that decomposes complex motor skills into simpler elemental movements, also called movement primitives, that serve as the building blocks of a movement strategy. For example, in a tennis game, such primitives can represent different strokes, such as a forehand stroke, a backhand stroke or a smash. As this example shows, the decomposition into building blocks is inherent to many motor tasks. In this project, we want to exploit this basic structure in our learning system. To do so, our autonomous learning system has to extract movement primitives from observed trajectories, learn to generalize the primitives to different situations, and select between, sequence or combine the primitives such that complex behavior can be synthesized from these building blocks. Our autonomous learning system will be applicable to learning from demonstrations as well as to subsequent self-improvement by reinforcement learning. Learning will take place on several layers of the hierarchy: on the upper layer, the activation policy over primitives will be learned; the intermediate layer extracts meta-parameters of the primitives and autonomously learns how to adapt these parameters to the current situation; and the lowest layer learns the control policies of the individual primitives. Learning on all layers, as well as the extraction of the structure of the hierarchical policy, should operate with minimal dependence on a human expert. We will evaluate our autonomous learning framework on a robot table tennis platform, which will give us many insights into the hierarchical structure of complex motor tasks.
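
A minimal sketch of the three-layer structure described above, with hypothetical placeholder policies on each layer (none of this is the project's actual code): a gating policy selects a primitive, a meta-parameter layer adapts it to the situation, and the primitive's own control policy produces the command.

```python
# Illustrative three-layer hierarchy: gating -> meta-parameters -> control.
import numpy as np

class Primitive:
    def __init__(self, weights):
        self.weights = weights                # lowest layer: control policy

    def control(self, state, meta):
        return (self.weights @ state) * meta  # placeholder control law

def meta_parameters(state):
    """Intermediate layer: adapt e.g. execution speed to the situation."""
    return 1.0 + 0.1 * np.linalg.norm(state)

def gating_policy(state, primitives):
    """Upper layer: activate the primitive with the highest score."""
    scores = [p.weights @ state for p in primitives]
    return primitives[int(np.argmax(scores))]

primitives = [Primitive(np.random.randn(4)) for _ in range(3)]
state = np.random.randn(4)
active = gating_policy(state, primitives)
command = active.control(state, meta_parameters(state))
```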

Team Leader: Gerhard Neumann
Contacts: Riad Akrour, Sebastian Gomez, Gerhard Neumann, Jan Peters
Publications: Check here!

SCARL (2015-2018; DFG Project, SPP Autonomous Learning)

Over the course of the last decade, the framework of reinforcement learning (RL) has developed into a promising tool for learning a large variety of different tasks in robotics. During this time, substantial progress has been made towards scaling reinforcement learning to high-dimensional systems and solving tasks of increasing complexity. Unfortunately, this scalability has been achieved by using expert knowledge to pre-structure the learning problem along several dimensions. As a consequence, state-of-the-art methods in robot reinforcement learning generally depend on hand-crafted state representations, pre-structured parametrized policies, well-shaped reward functions and demonstrations by a human expert to aid the scaling of the learning algorithm. This large amount of required pre-structuring is arguably in stark contrast to the goal of developing autonomous reinforcement learning systems. In this project, we want to advance the field by starting from a 'classical' reinforcement learning setting for a challenging robotic task (i.e., tetherball). Solving this task with RL methods will already be a valuable contribution. From there, we will identify the components for which the design of the learning task still requires engineering experience, and over the course of the project we aim to drive each of these components towards more autonomy while developing highly scalable approaches.
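
For illustration, here is a minimal episodic policy-search loop of the kind used in such settings, with a toy stand-in for the tetherball reward (this is a generic reward-weighted update, not the project's specific algorithm):

```python
# Illustrative sketch: sample policy parameters, roll out, re-weight by
# exponentiated reward, and update the search distribution.
import numpy as np

def rollout(theta):
    """Placeholder for a tetherball episode returning a scalar reward."""
    return -np.sum((theta - 1.0) ** 2)        # toy reward, optimum at 1.0

mu, sigma = np.zeros(3), np.ones(3)
for _ in range(50):
    thetas = mu + sigma * np.random.randn(20, 3)      # explore
    rewards = np.array([rollout(t) for t in thetas])
    w = np.exp((rewards - rewards.max()) / 10.0)      # soft weighting
    w /= w.sum()
    mu = w @ thetas                                   # weighted mean update
    sigma = np.sqrt(w @ (thetas - mu) ** 2) + 1e-3
print(mu)  # approaches the optimum at [1, 1, 1]
```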

Team Leader: Jan Peters
Contacts: Simone Parisi, Christian Daniel, Jan Peters
Publications: Check here!

ROMANS (2015-2018; EU H2020 RIA)

The RoMaNS (Robotic Manipulation for Nuclear Sort and Segregation) project will advance the state of the art in mixed autonomy for tele-manipulation in order to solve a challenging and safety-critical “sort and segregate” industrial problem, driven by urgent market and societal needs. Cleaning up the past half century of nuclear waste represents the largest environmental remediation project in the whole of Europe. Nuclear waste must be “sorted and segregated”, so that low-level contaminated waste is placed in low-level storage containers rather than occupying extremely expensive and resource-intensive high-level storage containers and facilities. Many older nuclear sites (>60 years old in the UK) contain large numbers of legacy storage containers, some of which hold contents of mixed contamination levels, and sometimes unknown contents. Several million of these legacy waste containers must now be cut open, investigated, and their contents sorted. This can only be done remotely using robots, because of the high levels of radioactive material. Current state-of-the-art practice in the industry consists of simple tele-operation (e.g. by joystick or teach-pendant). Such an approach is not viable in the long term, because it is prohibitively slow for processing the vast quantity of material involved. The project will: 1) develop novel hardware and software solutions for advanced bi-lateral master-slave tele-operation; 2) develop advanced autonomy methods for highly adaptive automatic grasping and manipulation actions; 3) combine autonomy and tele-operation methods using state-of-the-art understanding of mixed-initiative planning, variable autonomy and shared control approaches.
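
The core idea of mixed autonomy can be illustrated with a small sketch (an assumption about the general principle, not the RoMaNS implementation): the executed command blends the operator's tele-operation input with an autonomous controller, with the autonomy level varying with the system's confidence.

```python
# Illustrative shared-control blend of operator and autonomous commands.
import numpy as np

def shared_control(u_operator, u_autonomous, confidence):
    """High confidence shifts control authority towards the robot."""
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return alpha * u_autonomous + (1.0 - alpha) * u_operator

u_op = np.array([0.20, -0.10, 0.00])   # e.g. joystick velocity command
u_auto = np.array([0.25, 0.00, 0.05])  # e.g. autonomous grasp approach
print(shared_control(u_op, u_auto, confidence=0.7))
```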

Team Leader: Gerhard Neumann
Contacts: Takayuki Osa, Joni Pajarinen, Gregor Gebhardt, Oleg Arenz, Gerhard Neumann, Jan Peters
Publications: Check here!
Link: ROMANS project

TACMAN (2014-2017; EU FP7 STREP)

TACMAN addresses the key problem of developing an information processing and control technology that enables robot hands to exploit tactile sensitivity and thus become as dexterous as human hands. The current availability of the required sensor technology now allows us to considerably advance in-hand manipulation. TACMAN’s goal is to develop fundamentally new approaches that can replace manual labor under inhumane conditions by endowing robots with such tactile manipulation abilities, transferring insights from human neuroscientific studies into machine learning algorithms. TACMAN will provide an innovative new technology that is key for bringing industrial manufacturing back to Europe. Consider the case of the iPhone, where most mechanical manipulation of the major components is achieved by manual human labor under terrible work conditions rather than by advanced industrial robots, even though millions of iPhones are industrially assembled per month. The reason for this absence of appropriate automation is the lack of manipulation skills of current robots. Commercially available robotic hand-arm systems move more accurately and faster than humans, and their sensors see more and at a higher precision; even the smallest forces and torques can be detected. Despite these impressive sensorimotor abilities, current robots are poor at manipulation when compared to humans. Neuroscience provides a clear reason for the superiority of human hands: during manipulation, humans make substantial use of the data from tactile sensors, i.e., the information obtained through the feeling in their fingers. Robot hands lack this key ability! Hence, the rationale of TACMAN is that this performance gap in manipulation ability can be closed by (1) making such tactile sensory data comprehensible, and (2) using the information provided by such sensors intelligently for behavior generation. TACMAN aims to integrate the most robust available tactile sensors into the control of existing modern robot hands and, based on this control law, to develop tactile sensor-based manipulation solutions. To make this innovation tractable in a three-year project, we focus only on recognising and handling objects that are already in the hand. The structure of the project is designed to allow quick scaling from straightforward, well-captured scenarios employing a single finger to complex multi-fingered manipulation.
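
One classic use of such tactile information, sketched below purely for illustration (not TACMAN's actual controller), is grip-force regulation: increase the grip force when the sensors report incipient slip and relax it slowly otherwise.

```python
# Illustrative tactile grip-force regulation based on a slip signal.
def grip_force_update(force, slip_signal, gain=0.5, f_min=1.0, f_max=20.0):
    """Raise grip force proportionally to detected slip; relax slowly."""
    if slip_signal > 0.0:
        force += gain * slip_signal   # react to incipient slip
    else:
        force -= 0.01                 # relax towards a minimal grip force
    return min(max(force, f_min), f_max)

force = 2.0
for slip in [0.0, 0.3, 0.8, 0.0, 0.0]:  # simulated tactile slip readings
    force = grip_force_update(force, slip)
    print(round(force, 2))
```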

Team Leader: Elmar Rueckert
Contacts: Elmar Rueckert, Herke van Hoof, Filipe Veiga, Daniel Tanneberg, Jan Peters
Publications: Check here!
Link: TACMAN

Learning Sequential Skills for Robot Manipulation Tasks (2014-2017; Industry Project)

Robot manipulation is commonly conceived as a high-potential future business area due to its numerous potential applications. Among them are factory assembly, medical applications, service robotics, offshore robotics and disaster response. This project will create new concepts and techniques for robot learning of manipulation skills from a human teacher. In recent and current work, we are investigating movement representations and the learning of simple movements, which we represent as so-called Movement Primitives. The particular focus of this joint project with the Honda Research Institute in Offenbach, Germany, is to learn the coordination of such primitives in order to realize complex sequential and parallel movement behaviour. An illustrative example is replacing a light bulb: the robot’s movement skill can be composed of elementary primitives, such as reaching towards the lamp, aligning the fingers with the bulb, grasping the bulb or turning it in the thread. The sequential skill coordinates these primitives with a flexible arbitration scheme: it needs to maintain the causal order of the primitives (e.g. reach – pre-shape – grasp), while coordinating the timing of primitives that are active in parallel (co-articulation of the left and right hand for bi-manual skills). In the case of larger disturbances, the skill needs to adapt the sequential flow to account for the changed situation (e.g. pick the bulb up again if it drops out of the hand). This project is a collaboration with Michael Gienger and his team at the Honda Research Institute in Offenbach, Germany.
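
A minimal sketch of such an arbitration scheme (hypothetical and heavily simplified, not the project's method): a sequencing layer that maintains the causal order reach, pre-shape, grasp, turn, while allowing a recovery transition when the bulb is dropped.

```python
# Illustrative sequencing layer with causal order and a recovery branch.
SEQUENCE = ["reach", "pre_shape", "grasp", "turn"]

def next_primitive(current, succeeded, object_in_hand):
    if not object_in_hand and current in ("grasp", "turn"):
        return "reach"                       # recovery: the bulb was dropped
    if succeeded and current != "turn":
        return SEQUENCE[SEQUENCE.index(current) + 1]
    return current                           # keep executing this primitive

state = "reach"
for ok, in_hand in [(True, False), (True, False), (True, True), (False, False)]:
    state = next_primitive(state, ok, in_hand)
    print(state)   # pre_shape, grasp, turn, reach
```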

Team Leader: Jan Peters
Contacts: Simon Manschitz, Jan Peters
Publications: Check here!

3rd Hand (2013-2017; EU FP7 STREP)

Robots have been essential for keeping industrial manufacturing in Europe. Most factories have large numbers of robots in a fixed setup and few programs that produce the exact same product hundreds of thousands of times. The only common interaction between the robot and the human worker has become the so-called “emergency stop button”. As a result, re-programming robots for new or personalized products has become a key bottleneck for keeping manufacturing jobs in Europe; to date, the core requirement has been production in large numbers or at a high price. Robot-based small-series production requires a major breakthrough in robotics: the development of a new class of semi-autonomous robots that can decrease this cost substantially. Such robots need to be aware of the human worker, relieving him of monotonous repetitive tasks while keeping him in the loop where his intelligence makes a substantial difference. In the 3rd Hand project, we pursue this breakthrough by developing a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct, even for an untrained worker, will allow for efficient knowledge transfer between tasks, and will enable an effective collaboration between a human worker and a robot third hand. The main contributions of this project will be the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to: i) learn cooperative tasks from demonstration; ii) learn from instruction; and iii) transfer knowledge between tasks and environments. We will demonstrate its efficiency in the collaborative assembly of an IKEA-like shelf where the robot acts as a semi-autonomous 3rd Hand.

Team Leader: Guilherme Maeda
Contacts: Oliver Kroemer, Guilherme Maeda, Rudolf Lioutikov, Jan Peters
Publications: Check here!
Link: 3rd Hand

CoDyCo (2013-2017; EU FP7 STREP)

The CoDyCo project is an EU STREP project centered on "Whole-body Compliant Dynamical Contacts in Cognitive Humanoids". The aim of CoDyCo is to advance the current control and cognitive understanding of robust, goal-directed whole-body motion interaction with multiple contacts. CoDyCo will go beyond traditional approaches by: (1) proposing methodologies for performing coordinated interaction tasks with complex systems; (2) combining planning and compliance to deal with predictable and unpredictable events and contacts; (3) validating theoretical advances in real-world interaction scenarios. First, CoDyCo will advance the state of the art in the way robots coordinate physical interaction and physical mobility. Traditional industrial applications involve robots with limited mobility. Consequently, interaction (e.g. manipulation) was treated separately from whole-body posture (e.g. balancing), assuming the robot is firmly connected to the ground. Foreseen applications involve robots with augmented autonomy and physical mobility. Within this novel context, physical interaction influences stability and balance. To allow robots to surpass the barriers between interaction and posture control, CoDyCo will be grounded in the principles governing whole-body coordination with contact dynamics. Second, CoDyCo will go beyond traditional approaches in dealing with all perceptual and motor aspects of physical interaction, unpredictability included. Recent developments in compliant actuation and touch sensing allow safe and robust physical interaction even under unexpected contact, including contact with humans. The next advancement for cognitive robots, however, is the ability not only to cope with unpredictable contact, but also to exploit predictable contact in ways that will assist in goal achievement. Third, the achievement of the project objectives will be validated in real-world scenarios with the iCub humanoid robot engaged in whole-body goal-directed tasks. The evaluations will show the iCub exploiting rigid supportive contacts, learning to compensate for compliant contacts, and utilizing assistive physical interaction.
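
A textbook way to coordinate such whole-body tasks, shown here only as a toy sketch (generic prioritized null-space control, not CoDyCo's controller), is to give balance the highest priority and project a reaching task into its null space:

```python
# Illustrative prioritized whole-body control: balance first, then reach.
import numpy as np

def prioritized_control(J_bal, dx_bal, J_reach, dx_reach):
    J_bal_pinv = np.linalg.pinv(J_bal)
    q_dot = J_bal_pinv @ dx_bal                       # priority 1: balance
    N = np.eye(J_bal.shape[1]) - J_bal_pinv @ J_bal   # null-space projector
    q_dot += np.linalg.pinv(J_reach @ N) @ (dx_reach - J_reach @ q_dot)
    return q_dot                                      # priority 2: reach

J_bal = np.random.randn(2, 7)    # toy balance-task Jacobian, 7-DoF robot
J_reach = np.random.randn(3, 7)  # toy reaching-task Jacobian
q_dot = prioritized_control(J_bal, np.zeros(2), J_reach,
                            np.array([0.1, 0.0, 0.0]))
```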

Team Leader: Elmar Rückert
Contacts: Alexandros Paraschos, Roberto Calandra, Elmar Rückert, Serena Ivaldi, Jan Peters
Publications: Check here!
Link: CoDyCo project

CompLACS (2011-2015; EU FP7 STREP)

The CompLACS project is an EU STREP project on "Composing Learning Systems for Artificial Cognitive Systems". Cognitive architectures capable of operating autonomously in complex environments require constant interaction with that environment (e.g. with multiple users, in the case of web agents) and a high degree of modularity (e.g. a user-profiling module interacting with text-generation modules or recommendation systems). Understanding the behavior of complex adaptive systems, where multiple parts are both driven by data and co-adapting, is a key question for the design of real-world intelligent cognitive systems. The project aims to develop the key enabling machine learning technologies necessary for building artificial cognitive systems, as well as a principled method of breaking cognitive system design down into well-specified components that can be matched against specified sub-systems, together with guarantees on the behaviour of the resulting composition.

Team Leader: Gerhard Neumann
Contacts: Gerhard Neumann, Christian Daniel, Marc Deisenroth, Jan Peters
Publications: Check here!
Link: CompLACS project

RILCCA (2012-2013; Industry Project)

The Robot Interaction Learning of Cooperative and Competitive Actions (RILCCA) project is funded by a grant of the Daimler and Benz Foundation for postdocs and junior professors. In this project, we develop new algorithms that allow anthropomorphic robots to learn how to engage in joint actions with a human partner in order to learn manipulation tasks. The focus lies on learning models-of-interaction from observed data, e.g., from a recorded interaction between two persons. Using optical tracking technology, the movements of a pair of persons are first recorded and then processed using machine learning algorithms. The result is a model of how each person adapted his or her behavior to the movements of the respective other. Once a model is learned, it can be used by a robot to engage in a similar interaction with a human counterpart. For example, by observing how two workmen collaborate on a maintenance task using a motion tracking setup, a robot can learn which actions and responses are needed to assist in a similar maintenance task.
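
As a toy illustration of the general approach (an assumption for exposition, not the project's algorithm), one can fit a simple linear model from one partner's tracked features to the other's response and then use it to drive the robot:

```python
# Illustrative model-of-interaction learned by least squares.
import numpy as np

# Toy "motion capture" data: person A's features and person B's responses.
X = np.random.randn(200, 6)                      # person A, tracked features
W_true = np.random.randn(6, 3)
Y = X @ W_true + 0.05 * np.random.randn(200, 3)  # person B's responses

# Fit the model of interaction.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At run time, the robot responds to the human as person B responded to A.
x_human_now = np.random.randn(6)
robot_command = x_human_now @ W
```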

Contact: Heni Ben Amor
Publications: Check here!

GeRT (2010-2013; EU FP7 STREP)

The GeRT project is an EU STREP project; GeRT stands for Generalizing Robot manipulation Tasks. Its goal is to enable a robot to autonomously generalize its manipulation skills from known objects to previously unmanipulated objects in order to achieve everyday manipulation tasks. To this end, GeRT employs a set of demonstration programs for the same abstract task with different objects and varying scene arrangements. These programs are coded by hand and executed on the robotic system. The results from these example programs form the basis for generalizing the planning operators and for learning pre- and postconditions of operations. We are part of WP3 and WP4, as well as the leader of WP5.
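
A much-simplified illustration of precondition learning (not GeRT's actual method): intersect the predicates that held before each successful demonstration of an operator.

```python
# Illustrative precondition induction from demonstration programs.
demos_before_grasp = [
    {"reachable(obj)", "gripper_open", "visible(obj)"},
    {"reachable(obj)", "gripper_open", "on_table(obj)"},
    {"reachable(obj)", "gripper_open", "visible(obj)", "on_table(obj)"},
]

precondition = set.intersection(*demos_before_grasp)
print(precondition)   # {'reachable(obj)', 'gripper_open'}
```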

Team Leader: Jan Peters
Contacts: Oliver Kroemer, Heni Ben Amor, Jan Peters
Publications: Check here!
Link: GeRT project
