Triesch Lab - Videos

GOAL-Robots

This project aims to develop a new paradigm for building open-ended learning robots, called 'Goal-based Open-ended Autonomous Learning' (GOAL). GOAL rests on two key insights. First, to exhibit an autonomous open-ended learning process, robots should be able to self-generate goals, and hence tasks to practice. Second, new learning algorithms can leverage self-generated goals to dramatically accelerate skill learning. The new paradigm will allow robots to acquire a large repertoire of flexible skills under conditions unforeseeable at design time, with little human intervention, and then to exploit these skills to efficiently solve new user-defined tasks with little or no additional learning. This innovation will be essential for the design of future service robots addressing pressing societal needs. For more information, see the projects page or go to www.goal-robots.eu.
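
To make the two insights concrete, here is a minimal, purely illustrative Python sketch of a goal-generating learning loop. All names (GoalBabblingAgent, generate_goal, practice) and the toy 2D goals are invented for illustration; this is not the project's actual code, only a sketch of the idea that self-generated goals define the tasks an agent practices, and that the skill repertoire grows as a side effect of that practice.

    import random

    class GoalBabblingAgent:
        """Toy sketch: the agent invents its own goals, practices them,
        and keeps the skills it acquires (all names are hypothetical)."""

        def __init__(self):
            self.skills = {}       # goal -> policy learned for that goal
            self.competence = {}   # goal -> current success estimate

        def generate_goal(self):
            # Self-generate a goal, here simply a random 2D target
            # (a placeholder for a desired sensory outcome).
            return (round(random.uniform(0, 1), 1),
                    round(random.uniform(0, 1), 1))

        def practice(self, goal):
            # Placeholder skill learning: competence grows with practice.
            c = self.competence.get(goal, 0.0)
            self.competence[goal] = min(1.0, c + 0.1)
            self.skills[goal] = "policy-for-%s" % (goal,)

    agent = GoalBabblingAgent()
    for _ in range(100):
        goal = agent.generate_goal()   # the agent sets its own task ...
        agent.practice(goal)           # ... and practices it

    # A later user-defined task would be solved by recombining these skills.
    print(len(agent.skills), "self-generated skills acquired")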

Project Overview

To give an overview of our accomplishments in the second year of the funding period, we have compiled a five-minute video. It targets a general audience, from interested laypeople to colleagues in robotics. The video summarizes the motivation behind the GOAL-Robots project, illustrates our research platforms, and explains our latest progress towards building robots capable of open-ended learning, i.e., robots that define their own learning goals and practice the skills necessary to accomplish them.

GOAL-Robots Overview video

A Skinned Agent

Throughout the project we explore environments of varying complexity and examine how vision, proprioception, and interaction with the environment can lead to the acquisition of novel skills. Ultimately, the agent should set its own goals in terms of interesting or unexpected behaviour, as in the sketch below.

A Skinned Agent video
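
One common way to operationalize "interesting or unexpected" is to score candidate outcomes by prediction error (surprise) and pursue the most surprising one. The following Python fragment is a toy illustration of that idea; the outcome names and numbers are made up, and the squared-error surprise measure is an assumption, not the lab's actual formulation.

    import numpy as np

    def surprise(predicted, observed):
        # Squared prediction error as a toy "interestingness" signal
        # (an assumed measure, not the project's actual one).
        return float(np.sum((np.asarray(predicted) - np.asarray(observed)) ** 2))

    # Predicted vs. actually observed sensory outcomes (made-up numbers).
    candidates = {
        "reach-left":  surprise([0.2, 0.5], [0.2, 0.5]),  # fully predicted: boring
        "push-object": surprise([0.4, 0.1], [0.9, 0.3]),  # poorly predicted: interesting
    }

    # The most surprising outcome becomes the next self-generated goal.
    next_goal = max(candidates, key=candidates.get)
    print(next_goal)  # -> push-object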

Learning Vergence and Smooth Pursuit

Together with our long-term collaborators at the Hong Kong University of Science and Technology (HKUST), we developed methods for the autonomous learning of a repertoire of active vision skills. The work is formulated in the active efficient coding (AEC) framework, a generalization of classic efficient coding ideas to active perception. AEC postulates that active perception systems should not only adapt their representations to the statistics of sensory signals, but should also use their behavior, in particular movements of their sense organs, to promote efficient encoding of those signals. Along these lines, we have built active binocular vision systems for the iCub robot that learn a repertoire of visual skills that can form the basis for interacting with objects: fixating "interesting" locations in the world based on measures of surprise or learning progress (Zhu et al., 2017), precisely coordinating both eyes for stereoscopic vision, and tracking objects moving in three dimensions (Lelais et al., 2019). A toy illustration of the AEC principle is sketched below.

Learning Vergence and Smooth Pursuit video
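
To illustrate the AEC principle, here is a small self-contained Python sketch. It is an assumption-laden toy: the real systems learn a sparse-coding dictionary and a reinforcement-learning policy, whereas this version uses a fixed random basis of zero-disparity atoms and exhaustive search over vergence commands. What it demonstrates is the core AEC idea that the binocular coding error is smallest when behavior (here, vergence) aligns the two eyes' inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    SIZE, K = 16, 12                  # samples per eye, number of basis atoms

    # Fixed random basis standing in for a learned sparse-coding dictionary.
    # Each atom carries the SAME content in both eyes ("zero-disparity" atoms).
    halves = rng.standard_normal((SIZE, K))
    basis = np.vstack([halves, halves])        # shape (2*SIZE, K)

    def coding_error(left, right):
        # Reconstruction error of a binocular patch under the basis --
        # the quantity that AEC-style systems reduce through behavior.
        patch = np.concatenate([left, right])
        coeffs, *_ = np.linalg.lstsq(basis, patch, rcond=None)
        return float(np.sum((patch - basis @ coeffs) ** 2))

    scene = rng.standard_normal(200)           # a toy 1D "world"
    true_disparity = 3                         # object depth -> image shift

    # Evaluate candidate vergence commands: the right eye's patch shifts
    # with the command, cancelling the disparity when vergence is correct.
    errors = {}
    for vergence in range(6):
        shift = true_disparity - vergence
        total = 0.0
        for start in range(5, 105, 5):         # average over many patches
            left = scene[start:start + SIZE]
            right = scene[start + shift:start + shift + SIZE]
            total += coding_error(left, right)
        errors[vergence] = total

    # The coding error is typically smallest when the eyes are verged on
    # the object, i.e. at the true disparity.
    print(min(errors, key=errors.get))         # -> 3 (typically)

Choosing actions to minimize coding error is what couples perception and behavior in AEC; the sketch searches exhaustively, while the actual systems learn a policy from the same error signal.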

The Vault

This section collects some older videos, which may still be of interest to newcomers who want to join the group and learn about what we are (and were) interested in at the lab.