All posts by Gail Swift

Qian Feng: Centre-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors

Qian Feng is an external PhD student at the Technical University of Munich, working at project partner Agile Robots and contributing to ULTRACEPT’s Work Package 4.

The IEEE International Conference on Robotics and Automation (ICRA) is an annual academic conference covering advances in robotics. It is one of the premier conferences in its field, with an ‘A’ rating from the Australian Ranking of ICT Conferences obtained in 2010 and an ‘A1’ rating from the Brazilian Ministry of Education in 2012.

Qian Feng attended the IEEE International Conference on Robotics and Automation (ICRA) 2020. The conference was originally scheduled to take place in Paris, France, but due to COVID-19, the conference was held virtually from 31 May 2020 until 31 August 2020.

Qian Feng presenting online at ICRA 2020

Qian presented his conference paper:

Q. Feng, Z. Chen, J. Deng, C. Gao, J. Zhang and A. Knoll, “Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 610-617, doi: 10.1109/ICRA40945.2020.9196815.

Abstract

An unstable grasp pose can lead to slip; thus an unstable grasp pose can be predicted by slip detection. A re-grasp is required afterward in order to correct the grasp pose and finish the task. In this work, we propose a novel re-grasp planner with multi-sensor modules to plan grasp adjustments with the feedback from a slip detector. The re-grasp planner is trained to estimate the location of the centre of mass, which helps robots find an optimal grasp pose. The dataset in this work consists of 1,025 slip experiments and 1,347 re-grasps collected by one pair of tactile sensors, an RGB-D camera, and one Franka Emika robot arm equipped with joint force/torque sensors. We show that our algorithm can successfully detect and classify slip for 5 unknown test objects with an accuracy of 76.88%, and that the re-grasp planner increases the grasp success rate by 31.0% compared to the state-of-the-art vision-based grasping algorithm.
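
The core idea is sensor fusion: tactile and visual information are combined so the detector can flag slip and the planner can adjust the grasp towards the estimated centre of mass. As a rough illustration of that fusion step only (not the authors’ implementation), the minimal PyTorch sketch below concatenates pre-extracted tactile and visual features and classifies slip; the module name, feature dimensions, and layer sizes are placeholder assumptions.

```python
# Hypothetical sketch of a tactile-visual slip classifier (not the paper's code).
# Feature dimensions, layer sizes, and class labels are illustrative assumptions.
import torch
import torch.nn as nn

class SlipClassifier(nn.Module):
    def __init__(self, tactile_dim=64, visual_dim=128, n_classes=2):
        super().__init__()
        # Separate small encoders for each sensing modality.
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        # Fused features -> slip / no-slip logits.
        self.head = nn.Linear(128, n_classes)

    def forward(self, tactile, visual):
        fused = torch.cat([self.tactile_enc(tactile), self.visual_enc(visual)], dim=-1)
        return self.head(fused)

# Example: a batch of 8 pre-extracted feature vectors from both sensor streams.
logits = SlipClassifier()(torch.randn(8, 64), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 2])
```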

Qian Feng: Slip Detector
Qian Feng: Grasp Success Rate on Test Objects

 

When asked about his experience presenting and attending ICRA 2020, Qian said:

“Thanks to the virtual conference we were still able to present our work. It also meant that more people were able to join the conference to learn about and discuss our research. Everyone was able to access the presentation and get involved in the discussion in the virtual conference for 2 months, instead of the originally scheduled 5 minutes of discussion for the on-site conference. During this conference I shared my work with many researchers from the same field and exchanged ideas. I really enjoyed the conference and learnt a lot from the other attendees.”

UHAM Researchers Present at the International Conference on Intelligent Robots and Systems

Shuang Li is a fourth-year PhD student in Computer Science at Universität Hamburg. Her research interests are dexterous manipulation, vision-based teleoperation, and imitation learning in robotics. Shuang has been working on the project Transregio SFB “Cross-modal learning” and is involved in ULTRACEPT Work Package 4. Shuang is the course leader of ‘Introduction to Robotics’.

Hongzhuo Liang is a fifth-year PhD student in Computer Science at Universität Hamburg. His research interests are robotic grasping and manipulation based on multimodal perception. Hongzhuo has been working on the project Transregio SFB “Cross-modal learning” as well as STEP2DYNA (691154) and ULTRACEPT Work Package 4.

The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is one of the largest and most impactful robotics research conferences worldwide. Established in 1988 and held annually, IROS provides a forum for the international robotics research community to explore the frontier of science and technology in intelligent robots and smart machines.

Researchers Shuang Li and Hongzhuo Liang from ULTRACEPT partner Universität Hamburg attended and presented at IROS 2020. In addition to technical sessions and multimedia presentations, the IROS conference also held panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the discussions among conference attendees.

Due to COVID-19, the conference was hosted online with free access to every Technical Talk, Plenary, and Keynote, as well as over sixty Workshops, Tutorials, and Competitions. The content went online on 24th October 2020 and remained available until 24th January 2021.

A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU

 

Shuang Li presenting ‘A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU’

 

At IROS 2020, Shuang Li presented her conference paper:

S. Li et al., “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10900-10906, doi: 10.1109/IROS45743.2020.9340738.

Video footage of Shuang’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU (inertial measurement unit) based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A key-point based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
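
One detail worth unpacking is the key-point based reconstruction loss: alongside the usual pixel-wise term, a penalty on keypoint error encourages the translated depth images to respect hand anatomy. The sketch below is a hedged illustration of that idea, not the paper’s code; the loss weighting, keypoint count, and tensor shapes are assumptions.

```python
# Illustrative sketch of a key-point aware reconstruction loss (assumed form,
# not the Transteleop implementation): a pixel-wise depth term plus a weighted
# penalty on hand keypoint error.
import torch
import torch.nn.functional as F

def reconstruction_loss(pred_depth, target_depth, pred_kp, target_kp, w_kp=10.0):
    """L1 loss on reconstructed depth images plus an MSE term on keypoints."""
    pixel_term = F.l1_loss(pred_depth, target_depth)
    keypoint_term = F.mse_loss(pred_kp, target_kp)
    return pixel_term + w_kp * keypoint_term

# Dummy batch: 4 depth images (96x96) and 21 hand keypoints in 2D per image.
loss = reconstruction_loss(torch.rand(4, 1, 96, 96), torch.rand(4, 1, 96, 96),
                           torch.rand(4, 21, 2), torch.rand(4, 21, 2))
print(loss.item())
```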

Further information about this paper, including links to the code, can be found here.

Robust Robotic Pouring using Audition and Haptics

 

Hongzhuo Liang presenting ‘Robust Robotic Pouring using Audition and Haptics’

 

At IROS 2020, Hongzhuo Liang presented his conference paper:

H. Liang et al., “Robust Robotic Pouring using Audition and Haptics,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10880-10887, doi: 10.1109/IROS45743.2020.9340859.

Video footage of Hongzhuo’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

Robust and accurate estimation of liquid height lies as an essential part of pouring tasks for service robots. However, vision-based methods often fail in occluded conditions, while audio-based methods cannot work well in a noisy environment. We instead propose a multimodal pouring network (MP-Net) that is able to robustly predict liquid height by conditioning on both audition and haptics input. MP-Net is trained on a self-collected multimodal pouring dataset. This dataset contains 300 robot pouring recordings with audio and force/torque measurements for three types of target containers. We also augment the audio data by inserting robot noise. We evaluated MP-Net on our collected dataset and a wide variety of robot experiments. Both network training results and robot experiments demonstrate that MP-Net is robust against noise and changes to the task and environment. Moreover, we further combine the predicted height and force data to estimate the shape of the target container.
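
The key design choice is conditioning the height estimate on both modalities so that either one can compensate when the other degrades: occlusion does not affect audio or haptics, and acoustic noise can be offset by force/torque readings. The sketch below is a hypothetical illustration of such a multimodal regressor; the architecture, feature dimensions, and names are assumptions rather than MP-Net itself.

```python
# Hypothetical multimodal regressor that predicts liquid height from audio and
# force/torque features; architecture and dimensions are assumptions, not MP-Net.
import torch
import torch.nn as nn

class PouringHeightRegressor(nn.Module):
    def __init__(self, audio_dim=128, ft_dim=6, hidden=64):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.ft_enc = nn.Sequential(nn.Linear(ft_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # scalar liquid height estimate

    def forward(self, audio_feat, ft_feat):
        fused = torch.cat([self.audio_enc(audio_feat), self.ft_enc(ft_feat)], dim=-1)
        return self.head(fused)

# Example: a batch of 16 audio feature vectors and 6-axis force/torque readings.
height = PouringHeightRegressor()(torch.randn(16, 128), torch.randn(16, 6))
print(height.shape)  # torch.Size([16, 1])
```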

Further information about this paper, including links to the code, can be found here.

Yannick Jonetzko presents a paper at the International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP)

Yannick Jonetzko is a PhD candidate at the Universität Hamburg working on the use of tactile sensors in multimodal environments. In 2018 he visited Tsinghua University as part of the STEP2DYNA project and is now involved in the ULTRACEPT project, contributing to Work Package 4.

The International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP) took place from 25th to 27th December 2020 and was attended by ULTRACEPT researcher Yannick Jonetzko from project partner the Universität Hamburg. Due to travel restrictions, the conference was held online and Yannick’s work was presented via a pre-recorded video.

In the past few years, ICCSIP has matured into a well-established series of international conferences on cognitive information processing and related fields around the world. At the 2020 conference, over 60 researchers presented their work in multiple sessions on algorithms, applications, vision, manipulation, bioinformatics, and autonomous vehicles.

Yannick presented his conference paper ‘Multimodal Object Analysis with Auditory and Tactile Sensing using Recurrent Neural Networks’.

Abstract

Robots are usually equipped with many different sensors that need to be integrated. While most research is focused on the integration of vision with other senses, we successfully integrate tactile and auditory sensor data from a complex robotic system. Herein, we train and evaluate a neural network for the classification of the content of eight optically identical medicine containers. To investigate the relevance of the tactile modality in classification under realistic conditions, we apply different noise levels to the audio data. Our results show significantly higher robustness to acoustic noise with the combined multimodal network than with the unimodal audio-based counterpart.
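
Because both the auditory and tactile signals are time series, a recurrent model is a natural fit for fusing them per time step before classifying the container contents. The following is a small, hypothetical sketch of that idea; the feature sizes, sequence length, and single-LSTM design are assumptions and not the paper’s architecture.

```python
# Hypothetical recurrent classifier over fused auditory and tactile sequences;
# feature sizes, sequence length, and the single-LSTM design are assumptions.
import torch
import torch.nn as nn

class AudioTactileRNN(nn.Module):
    def __init__(self, audio_dim=40, tactile_dim=16, hidden=64, n_classes=8):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim + tactile_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # eight container contents

    def forward(self, audio_seq, tactile_seq):
        x = torch.cat([audio_seq, tactile_seq], dim=-1)  # fuse per time step
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])  # classify from the final hidden state

# Example: a batch of 4 recordings with 100 time steps each.
logits = AudioTactileRNN()(torch.randn(4, 100, 40), torch.randn(4, 100, 16))
print(logits.shape)  # torch.Size([4, 8])
```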

UPM’s Azreen Azman Completes a Twelve-Month Secondment in the United Kingdom

Azreen Azman is an associate professor at Universiti Putra Malaysia in Kuala Lumpur. He has just completed a six-month secondment at the University of Lincoln and a six-month secondment at Visomorphic Technology Ltd as part of the ULTRACEPT project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement. He has been involved in Work Packages 2 and 3.

Hazard perception and collision detection are important components of autonomous vehicle safety, and they become more challenging in low-light environments. During the twelve-month secondment, Azreen’s focus was to investigate methods for detecting objects on the road in low-light conditions from captured images or video, in order to recognise hazards and avoid collisions.

Azreen Azman attends his first project meeting at the University of Lincoln with Prof. Yue and Assoc. Prof. Shyamala

One of the first tasks Azreen conducted in Lincoln was to collect audio-visual data in different road conditions. Azreen had the opportunity to join his colleagues Siavash Bahrami and Assoc. Prof. Shyamala Doraisamy from UPM, who were also carrying out ULTRACEPT secondments at UoL and conducting audio-visual recordings of the road at the Millbrook Proving Ground in Bedford, United Kingdom. This provided a controlled environment in addition to other recordings conducted on normal roads.

Azreen Azman preparing for a recording session on a normal road
Azreen Azman preparing for a recording session at the Millbrook Proving Ground in Bedford

It is expected that the performance of deep learning-based object detection algorithms such as R-CNN variants and YOLO diminishes as the input images become darker, due to the reduced amount of light and increased noise in the captured images. In Azreen’s preliminary experiment, which used a Faster R-CNN model trained and tested on a self-collected set of road images, the object detection performance was significantly reduced to almost 81% for dark and noisy images, compared to daylight images.

To overcome this problem, an image enhancement and noise reduction method was applied to the dark images prior to the object detection module. In his investigations, Azreen trained LLNet, a deep autoencoder-based method for enhancing and denoising dark images. As a result, the Faster R-CNN was able to detect 29% more objects in the enhanced images compared to the dark images. The performance of the deep learning-based LLNet is better than the conventional Histogram Equalisation (HE) and Retinex methods. However, the patch prediction and image reconstruction steps are computationally expensive for real-time applications.
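
The overall pipeline is therefore two-stage: enhance the dark frame first, then pass the result to a standard detector. The sketch below illustrates that structure only; the gamma-correction enhancer stands in for LLNet, and the torchvision Faster R-CNN is constructed without pretrained weights, so both are illustrative assumptions rather than Azreen’s implementation.

```python
# Sketch of the enhance-then-detect pipeline. The gamma-correction function is a
# stand-in for LLNet, and the Faster R-CNN below is built without pretrained
# weights, so this only illustrates the structure, not the actual experiment.
import torch
import torchvision

def enhance_dark_image(img, gamma=0.4):
    """Placeholder enhancer: gamma correction brightens dark regions."""
    return img.clamp(0.0, 1.0) ** gamma

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn()
detector.eval()

dark_image = torch.rand(3, 480, 640) * 0.1   # simulate a low-light frame
enhanced = enhance_dark_image(dark_image)    # LLNet would replace this step
with torch.no_grad():
    detections = detector([enhanced])        # list of dicts: boxes, labels, scores
print(detections[0]["boxes"].shape)
```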

A sample of a dark and noisy image
A sample of an image improved by using LLNet

In August 2020, Azreen began his secondment at Visomorphic Technology Ltd, an industry partner for the ULTRACEPT project. In collaboration with the team, he continued working on the model to improve its efficiency for real-time application. His focus was to adopt the principles of the nocturnal insect vision system for image enhancement and object detection.

Azreen working at Visomorphic Technology Ltd

During Azreen’s stay in the UK, he attended and presented at the annual ULTRACEPT mid-term project meeting, which was held in Cambridge in February 2020. Azreen presented his work ‘Detection of objects on the road in low light condition using deep learning’. He also participated in ULTRACEPT Sandpit Session 1, facilitated by Qinbing Fu.

In addition, Azreen attended the first Lincoln Conference on Intelligent Robots and Systems, organised by the Lincoln Centre for Autonomous Systems (L-CAS), and a keynote session on hyper-heuristics delivered by Prof. Graham Kendall from the University of Nottingham, both held in October 2020.

Azreen Azman attending the ULTRACEPT Mid-term Meeting

‘The secondment has given me the opportunities and resources to conduct my research for the project and to improve my skills and networking through various meetings and discussions. Despite the challenges faced due to the ongoing pandemic, both of my hosts (University of Lincoln and Visomorphic Technology Ltd) have provided me with the support to work remotely while continuously engaging with other researchers virtually. I would like to thank the sponsors, including Universiti Putra Malaysia and the ULTRACEPT project’s Marie Skłodowska-Curie secondment grant, for these opportunities.’ Azreen Azman