
UHAM Researchers Present at the International Conference on Intelligent Robots and Systems

Shuang Li is a fourth-year PhD student in Computer Science at Universität Hamburg. Her research interests are dexterous manipulation, vision-based teleoperation, and imitation learning in robotics. Shuang has been working on the project Transregio SFB “Cross-modal learning” and is involved in ULTRACEPT Work Package 4. Shuang is the course leader of ‘Introduction to Robotics’.

Hongzhuo Liang is a fifth-year PhD student in Computer Science at Universität Hamburg. His research interests are robotic grasping and manipulation based on multimodal perception. Hongzhuo has been working on the project Transregio SFB “Cross-modal learning”, STEP2DYNA (691154), and ULTRACEPT Work Package 4.

The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is one of the largest and most influential robotics research conferences worldwide. Established in 1988 and held annually, IROS provides an international forum for the robotics research community to explore the frontier of science and technology in intelligent robots and smart machines.

Researchers Shuang Li and Hongzhuo Liang from ULTRACEPT partner Universität Hamburg attended and presented at IROS 2020. In addition to technical sessions and multimedia presentations, the conference also held panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the discussions among conference attendees.

Due to COVID-19, the conference was hosted online, with free access to every Technical Talk, Plenary, and Keynote, and to over sixty Workshops, Tutorials, and Competitions. The content went online on 24th October 2020 and remained available until 24th January 2021.

A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU

Shuang Li presenting ‘A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU’

At IROS 2020, Shuang Li presented her conference paper:

S. Li et al., “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10900-10906, doi: 10.1109/IROS45743.2020.9340738.

Video footage of Shuang’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU (inertial measurement unit) based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A key-point based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
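The abstract's key technical idea, a reconstruction loss that pays extra attention to image regions around hand keypoints, can be illustrated with a short PyTorch sketch. Everything below (the TinyTransteleop encoder-decoder, layer sizes, 21 keypoints, 19 joint angles, and the distance-based weighting) is an illustrative assumption on our part, not the authors' implementation; the real code is linked below.

```python
# A minimal sketch, NOT the paper's code: an encoder-decoder maps a
# human-hand depth image to a robot-hand depth image plus joint angles,
# and the L1 reconstruction term is up-weighted near 2-D hand keypoints.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTransteleop(nn.Module):
    def __init__(self, num_joints=19):  # joint count is an assumption
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.joint_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_joints),
        )

    def forward(self, human_depth):
        z = self.encoder(human_depth)
        return self.decoder(z), self.joint_head(z)

def keypoint_weighted_recon_loss(pred, target, keypoints, radius=8, boost=4.0):
    """L1 reconstruction loss with extra weight near 2-D hand keypoints.

    keypoints: (B, K, 2) pixel coordinates (x, y) of hand joints.
    """
    b, _, h, w = pred.shape
    ys = torch.arange(h, device=pred.device).float().view(1, 1, h, 1)
    xs = torch.arange(w, device=pred.device).float().view(1, 1, 1, w)
    kx = keypoints[..., 0].view(b, -1, 1, 1)
    ky = keypoints[..., 1].view(b, -1, 1, 1)
    # Pixels within `radius` of any keypoint get (1 + boost) weight.
    near = (((xs - kx) ** 2 + (ys - ky) ** 2) < radius ** 2).any(1, keepdim=True)
    weight = 1.0 + boost * near.float()
    return (weight * (pred - target).abs()).mean()

# Toy usage with random tensors standing in for paired depth images.
model = TinyTransteleop()
human = torch.randn(2, 1, 64, 64)
robot_gt = torch.randn(2, 1, 64, 64)
joints_gt = torch.randn(2, 19)
kps = torch.randint(0, 64, (2, 21, 2)).float()  # 21 keypoints: an assumption
recon, joints = model(human)
loss = keypoint_weighted_recon_loss(recon, robot_gt, kps) + F.mse_loss(joints, joints_gt)
loss.backward()
```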

Further information about this paper, including links to the code, can be found here.

Robust Robotic Pouring using Audition and Haptics

Hongzhuo Liang presenting ‘Robust Robotic Pouring using Audition and Haptics’

At IROS 2020, Hongzhuo Liang presented his conference paper:

H. Liang et al., “Robust Robotic Pouring using Audition and Haptics,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10880-10887, doi: 10.1109/IROS45743.2020.9340859.

Video footage of Hongzhuo’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

Robust and accurate estimation of liquid height lies as an essential part of pouring tasks for service robots. However, vision-based methods often fail in occluded conditions, while audio-based methods cannot work well in a noisy environment. We instead propose a multimodal pouring network (MP-Net) that is able to robustly predict liquid height by conditioning on both audition and haptics input. MP-Net is trained on a self-collected multimodal pouring dataset. This dataset contains 300 robot pouring recordings with audio and force/torque measurements for three types of target containers. We also augment the audio data by inserting robot noise. We evaluated MP-Net on our collected dataset and a wide variety of robot experiments. Both network training results and robot experiments demonstrate that MP-Net is robust against noise and changes to the task and environment. Moreover, we further combine the predicted height and force data to estimate the shape of the target container.
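As a rough illustration of the audio-haptic fusion described in the abstract, the sketch below combines a small CNN over an audio spectrogram with a GRU over force/torque readings to regress the current liquid height. All names and shapes here (TinyPouringNet, a 6-D force/torque signal, a 64x100 spectrogram) are our own assumptions for illustration, not MP-Net's actual architecture; the real code is linked below.

```python
# A minimal sketch, NOT the paper's MP-Net: fuse audio and haptic
# features and regress liquid height with a single linear head.
import torch
import torch.nn as nn

class TinyPouringNet(nn.Module):
    def __init__(self, ft_dim=6, hidden=32):
        super().__init__()
        # Audio branch: small CNN over a (1, mel_bins, frames) spectrogram.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Haptic branch: GRU over the force/torque time series.
        self.haptic = nn.GRU(ft_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(32 + hidden, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, spectrogram, ft_seq):
        a = self.audio(spectrogram)         # (B, 32) audio embedding
        _, h_n = self.haptic(ft_seq)        # h_n: (1, B, hidden)
        fused = torch.cat([a, h_n.squeeze(0)], dim=1)
        return self.head(fused).squeeze(1)  # predicted liquid height

# Toy usage: 4 clips, 64 mel bins x 100 frames, 100 force/torque samples.
# (The paper also augments audio with inserted robot noise; omitted here.)
net = TinyPouringNet()
spec = torch.randn(4, 1, 64, 100)
ft = torch.randn(4, 100, 6)
height = net(spec, ft)  # shape (4,)
```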

Further information about this paper, including links to the code, can be found here.
