Category Archives: CONFERENCES

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’ at ISAIC 2021

Yicheng Zhang is a PhD student at the University of Lincoln, working on ULTRACEPT's Work Package 3.

Recently Yicheng Zhang attended the 2nd International Symposium on Automation, Information and Computing (ISAIC 2021) organized by Beijing Jiaotong University. Due to the current travel restrictions, this year’s conference was moved online from 3rd to 6th of December 2021.

ISAIC is a flagship annual international conference on computational intelligence, promoting all aspects of theory, algorithm design, applications and related emerging techniques. As is tradition, ISAIC 2021 co-located a large number of topics within or related to computational intelligence, thereby providing a unique platform for promoting cross-fertilization and collaboration. ISAIC 2021 featured keynote speeches, invited speeches, oral presentations and poster sessions.

At the event, Yicheng presented his conference paper Yicheng Zhang, Cheng Hu, Mei Liu, Hao Luan, Fang Lei, Heriberto Cuayahuitl and Shigang Yue ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’. An open access version can be accessed here.

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’
Abstract

It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully proposed for collision detection, but for daytime and visible light. Whether it can be used for temperature-based collision detection remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme low light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; it can therefore be a critical collision detection algorithm for autonomous vehicles driving at night, helping them avoid fatal collisions with humans, animals, or other vehicles.
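The core cue the abstract describes, an expanding temperature difference between successive thermal frames, can be illustrated with a minimal sketch (plain Python; the function name, frame format, and threshold are illustrative assumptions, not the paper's actual multi-layer LGMD model):

```python
def lgmd_thermal_response(frames, spike_threshold=0.2):
    """Toy LGMD-style response driven by a thermal modality.

    'frames' is a sequence of 2-D temperature maps (lists of lists).
    The published model has several interacting layers; this sketch only
    illustrates the cue it relies on: the growing frame-to-frame
    temperature difference of an approaching warm object. The threshold
    value is an arbitrary illustration, not the paper's parameter.
    """
    responses = []
    prev = frames[0]
    n_pixels = sum(len(row) for row in prev)
    for frame in frames[1:]:
        # excitation: absolute temperature change summed over the map
        diff = sum(
            abs(a - b)
            for row_f, row_p in zip(frame, prev)
            for a, b in zip(row_f, row_p)
        )
        responses.append(diff / n_pixels)  # normalised membrane potential
        prev = frame
    # a collision cue is signalled whenever the response crosses threshold
    spikes = [r > spike_threshold for r in responses]
    return responses, spikes
```

Because an approaching object covers more photoreceptors in each successive frame, the difference energy keeps growing, which is exactly the expansion cue the network picks up.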

Yunlei Shi: Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks

Yunlei Shi is a 4th-year full-time Ph.D. student at the Universität Hamburg, working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project, where he contributes to Work Package 4.

Yunlei Shi attended the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) to present his research. IROS 2021 was the first IROS conference organized by a Central European country and, more remarkably, by the country that introduced the word “robot” to the world. IROS 2021 was held online from 27th September to 1st October 2021, hosted from Prague, Czech Republic.

Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference paper Yunlei Shi, Zhaopeng Chen, Yansong Wu, Dimitri Henkel, Sebastian Riedel, Hongxu Liu, Qian Feng, Jianwei Zhang “Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks”, (IROS) 2021, Prague, Czech Republic. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

Yunlei Shi presenting at IROS 2021

Abstract

Collaborative robots are expected to be able to work alongside humans and in some cases directly replace existing human workers, thus effectively responding to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained space, tend to be fairly inefficient. Therefore, faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual servoing based learning from demonstration (LfD) and force-based learning by exploration (LbE), to enable fast and intuitive programming of contact-rich tasks with minimal user effort required. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements the non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework has been compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. The efficiency and reliability of the framework have been validated through comparison experiments involving the teaching and execution of contact-rich tasks. The framework proposed in this paper performed the best in terms of teaching time, execution success rate, risk of damage, and ease of use.

Yunlei Shi: Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks
Fig. 2. A robot arm and a suction gripper performing a contact-rich tending task. (a) Gross motion is learned from human demonstration. (b) Fine motion is learned from exploration. (c) Example of a contact-rich tending task

Learn more about this conference paper by watching the demonstration on the TAMS Youtube channel.

Siavash Bahrami Awarded Best Student Paper at International Conference ICCST2021

Siavash Bahrami is a PhD candidate at Universiti Putra Malaysia (UPM), working on multimodal deep neural networks using acoustic and visual data to develop an active road safety system intended for autonomous and semi-autonomous vehicles. Siavash is contributing to ULTRACEPT's Work Package 2 and completed secondments at project partners the University of Lincoln and Visomorphic LTD.

The Ninth International Conference on Computational Science and Technology 2021 (ICCST2021) is an international scientific conference for research in the field of advanced computational science and technology. The conference was held virtually in Labuan, Malaysia, on the 28th – 29th August 2021.

Siavash Bahrami presenting online at ICCST2021

Siavash Bahrami was awarded ‘Best Student Paper’ for his paper titled “CNN Architectures for Road Surface Wetness Classification from Acoustic Signals”, presented during ICCST2021. The data used for training and testing the proposed CNN architectures were collected during Siavash's ULTRACEPT secondments in the UK. Despite the strains caused by the global pandemic, with the assistance of UoL and UPM project members, Siavash managed to complete his secondment and collect the data needed for both his PhD thesis and ULTRACEPT Work Package 2.

Best Student Paper Award ICCST2021, Siavash Bahrami

Abstract

The classification of road surface wetness is important for both the development of future driverless vehicles and the development of existing vehicle active safety systems. Wetness on the road surface has an impact on road safety and is one of the leading causes of weather-related accidents. Although machine learning algorithms such as recurrent neural networks (RNNs), support vector machines (SVMs), artificial neural networks (ANNs) and convolutional neural networks (CNNs) have been studied for road surface wetness classification, improvements in classification performance are still being widely investigated while keeping network and computational complexity low. In this paper, we propose new CNN architectures to further improve classification results for road surface wetness detection from acoustic signals. Two CNN architectures with differing layouts for their dropout and max-pooling layers were investigated. The positions and the number of the max-pooling layers were varied. To avoid overfitting, we used 50% dropout layers before the final dense layers in both architectures. The acoustic signals of tyre-to-road interaction were recorded via microphones mounted on two distinct cars in an urban environment. Mel-frequency cepstral coefficient (MFCC) features were extracted from the recordings as inputs to the models. Experimentation and comparative performance evaluations against several neural network architectures were performed. Recorded acoustic signals were segmented into equal frames and thirteen MFCCs were extracted for each frame to train the CNNs. Results show that the proposed CMCMDD1 architecture achieved the highest accuracy of 96.36% with the shortest prediction time.
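The preprocessing step described above, segmenting each recording into equal frames before extracting thirteen MFCCs per frame, can be sketched as follows (the function name and parameter values are illustrative assumptions, not the authors' settings; the MFCC computation itself would typically be done with an audio library and is omitted here):

```python
def segment_signal(signal, frame_len, hop_len):
    """Split an acoustic signal (a list of samples) into equal-length,
    possibly overlapping frames.

    Each resulting frame would then be passed to an MFCC extractor to
    produce one 13-coefficient feature vector per frame for the CNN.
    """
    frames = []
    start = 0
    # only keep complete frames, so every CNN input has the same shape
    while start + frame_len <= len(signal):
        frames.append(signal[start:start + frame_len])
        start += hop_len
    return frames

# example: 1 second of (silent) audio at 16 kHz, 25 ms frames, 10 ms hop
frames = segment_signal([0.0] * 16000, frame_len=400, hop_len=160)
```

Framing with a hop shorter than the frame length gives overlapping windows, a common choice for MFCC pipelines because it smooths the feature sequence over time.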

Siavash and UPM supervisor Dr Shyamala Doraisamy recording road sounds data whilst on secondment at UoL
CMCMDD architecture with two layers of convolution and kernel size

References:

Siavash Bahrami, Shyamala Doraisamy, Azreen Azman, Nurul Amelina Nasharuddin, and Shigang Yue. 2020. Acoustic Feature Analysis for Wet and Dry Road Surface Classification Using Two-stream CNN. In 2020 4th International Conference on Computer Science and Artificial Intelligence (CSAI 2020). Association for Computing Machinery, New York, NY, USA, 194–200. https://doi.org/10.1145/3445815.3445847

Mu Hua Presents ‘Investigating Refractoriness in Collision Perception Neural Model’ at IJCNN 2021

Mu Hua is a post-graduate student at the University of Lincoln, working on ULTRACEPT's Work Package 1.

IJCNN 2021

University of Lincoln researcher Mu Hua attended and presented at the International Joint Conference on Neural Networks 2021 (IJCNN 2021) which was held from 18th to 22nd July 2021. Although originally scheduled to be held in Shenzhen, China, due to the ongoing international travel disruption caused by Covid-19, the conference was moved online.

IJCNN 2021 is the flagship annual conference of the International Neural Network Society (INNS) – the premier organisation for individuals interested in a theoretical and computational understanding of the brain and applying that knowledge to develop new and more effective forms of machine intelligence. INNS was formed in 1987 by the leading scientists in the Artificial Neural Networks (ANN) field. The conference promotes all aspects of neural network theory, analysis and applications.

This year IJCNN received 1183 papers submitted from 77 different countries, of which 59.3% were accepted. All accepted papers are included in the program as virtual oral presentations. The top ten countries of the submitting authors are (in descending order): China, United States, India, Brazil, Australia, United Kingdom, Germany, Japan, Italy and France. The event was attended by more than 1166 participants and featured special sessions, plenary talks, competitions, tutorials, and workshops.

Representing the University of Lincoln, Mu Hua presented his paper Mu Hua, Qinbing Fu, Wenting Duan, Shigang Yue “Investigating Refractoriness in Collision Perception Neural Network” (IJCNN 2021), with a poster demonstrating that numerically modelling the refractory period, a common neuronal phenomenon, can be a promising way to enhance the stability of the current LGMD neural network for collision perception.

Figure 1: (a) Refractoriness schematic diagram. The orange curve shows the change of membrane potential; depolarization and repolarization are represented by dashed lines with arrows. The ARP corresponds to depolarization and part of repolarization, while the RRP is covered by hyper-polarization. (b) The curve of ( Pt(x, y) − Lt(x, y) ) when a single stimulus is applied at the 1st frame, which resembles the real membrane potential curve during the RP.
Figure 2: Snapshots of the 389th frame from the original video and the Gaussian-noise-contaminated video. The orange curve represents the LGMD membrane potential with our proposed RP mechanism, the blue one without the RP. While most of the blue curve stays at 1, the orange curve can easily be distinguished by its peak at the 401st frame, with violent fluctuation within the first 40 frames.

Abstract

Currently, collision detection methods based on visual cues are still challenged by several factors, including ultra-fast approaching velocity and noisy signals. Taking inspiration from nature, although the computational models of lobula giant movement detectors (LGMDs) in the locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constituting the RP, namely the absolute refractory period (ARP) and the relative refractory period (RRP), are computationally implemented through a ‘link (L) layer’ located between the photoreceptor and excitation layers to realise the dynamic characteristic of the RP in the discrete time domain. The L layer, consisting of local time-varying thresholds, represents a mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can only be augmented by a larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or salt-and-pepper noise. This research demonstrates that modelling refractoriness is effective in collision perception neuronal models, and promising for addressing the aforementioned collision detection challenges.
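The L layer's local time-varying thresholds can be sketched in a few lines (a hypothetical, simplified rendering of the mechanism described above; the function name, decay constant, and update-rule details are assumptions, not the paper's exact formulation):

```python
import math

def l_layer_step(luminance_change, thresholds, decay=0.5):
    """One discrete-time step of a simplified 'link (L) layer'.

    Each photoreceptor fires only if its input change exceeds its own
    local threshold; a large output raises that threshold (making the
    cell refractory), and every threshold shrinks exponentially over
    time. All constants here are illustrative.
    """
    outputs, new_thresholds = [], []
    for x, theta in zip(luminance_change, thresholds):
        if abs(x) > theta:           # activated individually and selectively
            out = abs(x)
            theta = max(theta, out)  # larger output augments the threshold
        else:
            out = 0.0                # suppressed: still within its RP
        theta *= math.exp(-decay)    # threshold decays exponentially
        outputs.append(out)
        new_thresholds.append(theta)
    return outputs, new_thresholds

# a strong stimulus fires the cell and raises its threshold, so a
# slightly weaker stimulus on the next step is suppressed
outs, th = l_layer_step([1.0], [0.0])
outs2, th2 = l_layer_step([0.5], th)
```

As the threshold decays back toward zero, the photoreceptor recovers its sensitivity, mirroring the transition from the absolute to the relative refractory period.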

This paper can be freely accessed on the University of Lincoln Institutional Repository Eprints.

Nikolas Andreakos Presents Paper at the 30th Annual Computational Neuroscience Meeting (CNS*2021)

Nikolas Andreakos is a PhD candidate at the University of Lincoln, who is working on developing computational models of associative memory formation and recognition in the mammalian hippocampus.

Recently Nikolas attended the 30th Annual Computational Neuroscience Meeting (CNS*2021). Due to the current travel restrictions, this year’s conference was moved online from 3rd to 7th of July 2021.

CNS 2021 online conference image

The purpose of the Organization for Computational Neurosciences is to create a scientific and educational forum for students, scientists, other professionals, and the general public to learn about, to share, contribute to, and advance the state of knowledge in computational neuroscience.

Computational neuroscience combines mathematical analyses and computer simulations with experimental neuroscience, to develop a principled understanding of the workings of nervous systems and apply it in a wide range of technologies.

The Organization for Computational Neurosciences promotes meetings and courses in computational neuroscience and organizes the Annual CNS Meeting which serves as a forum for young scientists to present their work and to interact with senior leaders in the field.

Poster Presentation

Nikolas presented his research ‘Modelling the effects of perforant path in the recall performance of a CA1 microcircuit with excitatory and inhibitory neurons’.

Nikolas Andreakos CNS 2021 poster

Abstract

From recollecting childhood memories to recalling whether we turned off the oven before leaving the house, memory defines who we are, and losing it can be very harmful to our survival. Recently, we quantitatively investigated the biophysical mechanisms leading to memory recall improvement in a computational CA1 microcircuit model of the hippocampus [1]. In the present study, we investigated the synergistic effects of the EC excitatory input (sensory input) and the CA3 excitatory input (contextual information) on the recall performance of the CA1 microcircuit. Our results showed that when the EC input was exactly the same as the CA3 input, the recall performance of our model was strengthened. When the two inputs were dissimilar (degree of similarity: 40% – 0%), recall performance was reduced. These results were positively correlated with how many “active cells” represented a memory pattern: when the number of active cells increased and the degree of similarity between the two inputs decreased, the recall performance of the model was reduced. The latter finding confirms our previous result that the number of cells coding a piece of information plays a significant role in the recall performance of our model.

References
1. Andreakos, N., Yue, S. & Cutsuridis, V. Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus. Brain Inf 8, 9 (2021). https://doi.org/10.1186/s40708-021-00131-7
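The ‘degree of similarity’ between the EC and CA3 inputs that the study varies can be illustrated with a small sketch (the function name and the binary pattern encoding are hypothetical, for illustration only):

```python
def degree_of_similarity(ec_input, ca3_input):
    """Fraction of the CA3 pattern's active cells that are also active
    in the EC pattern, for two binary (0/1) input patterns.

    100% means the sensory (EC) and contextual (CA3) inputs select
    exactly the same cells; 0% means they share no active cells.
    """
    active_ec = {i for i, v in enumerate(ec_input) if v}
    active_ca3 = {i for i, v in enumerate(ca3_input) if v}
    if not active_ca3:
        return 0.0
    return len(active_ec & active_ca3) / len(active_ca3)

# identical patterns give similarity 1.0; half-overlapping patterns 0.5
identical = degree_of_similarity([1, 1, 0, 0], [1, 1, 0, 0])
half = degree_of_similarity([1, 0, 0, 1], [1, 1, 0, 0])
```

Under this kind of measure, the study's finding reads as: recall is strongest at similarity 1.0 and degrades as the overlap between the two input patterns falls toward 0.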

Nikolas Andreakos CNS 2021 poster presentation

ULTRACEPT Researchers Present at IEEE ICRA 2021

The 2021 International Conference on Robotics and Automation (IEEE ICRA 2021) was held in Xi’an, China from 31st May to 4th June 2021. One of the premier conferences in the field of robotics and automation, the event gathered thousands of excellent researchers from all over the world. Due to the pandemic, the conference was held in a hybrid format, with both physical on-site and virtual cloud meetings. Four ULTRACEPT researchers attended, three in person and one online.

Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot

Yunlei Shi: Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot

Agile Robots researcher Yunlei Shi attended ICRA 2021 online and presented his paper ‘Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot’.

Yunlei Shi is a full-time Ph.D. student at the Universität Hamburg, working at project partner Agile Robots and contributing to ULTRACEPT's Work Package 4. In 2020 he visited Tsinghua University as part of the STEP2DYNA project.

Yunlei Shi presenting online at ICRA 2021

Yunlei presented his conference paper:

Yunlei Shi, Zhaopeng Chen, Hongxu Liu, Sebastian Riedel, Chunhui Gao, Qian Feng, Jun Deng, Jianwei Zhang, “Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot”, (ICRA) 2021, Xi’ an, China.

Abstract

Contact-rich manipulation tasks are commonly found in modern manufacturing settings. However, manually designing a robot controller is considered hard for traditional control methods, as the controller requires an effective combination of modalities with vastly different characteristics. In this paper, we first consider incorporating operational space visual and haptic information into a reinforcement learning (RL) method to solve the target uncertainty problems in unstructured environments. Moreover, we propose a novel idea of introducing a proactive action to solve a partially observable Markov decision process (POMDP) problem. With these two ideas, our method can either adapt to reasonable variations in unstructured environments or improve the sample efficiency of policy learning. We evaluated our method on a task that involved inserting a random-access memory (RAM) module using a torque-controlled robot, and tested the success rates of different baselines used in the traditional methods. We proved that our method is robust and can tolerate environmental variations.

Representation of policies and controller scheme. The blue region is the real-time controller, and the wheat region is the non-real-time trained policy.

More details about this paper can be viewed in this video on the Universität Hamburg’s Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Yunlei was very happy to attend this fantastic conference with support from the ULTRACEPT project.

A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics

Three researchers from the University of Lincoln, Tian Liu, Xuelong Sun, and Qinbing Fu, attended ICRA 2021 in person to present their co-authored paper, ‘A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics’.

ULTRACEPT researchers Tian Liu, Xuelong Sun and Qinbing Fu attending ICRA 2021

All three were very happy to physically attend this fantastic conference with support from the ULTRACEPT project.

Our co-authored paper, which presents the vision-pheromone-communication platform we developed, was published in the proceedings of this conference. Tian Liu delivered the presentation outlining the platform, which attracted attention from attendees and prompted some interesting questions from the audience. The event provided a great opportunity to raise the profile of our platform for future swarm robotics and social insect studies.

Tian Liu presenting at ICRA 2021

A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics, Tian Liu, Xuelong Sun, Cheng Hu, Qinbing Fu, and Shigang Yue, University of Lincoln

Keywords: Biologically-Inspired Robots, Multi-Robot Systems, Swarm Robotics

Abstract: This paper describes a versatile platform for swarm robotics research. It integrates multi-pheromone communication with a dynamic visual scene, along with real-time data transmission and localization of multiple robots. The platform was built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms, which focus only on pheromone communication, by adding visual interaction, but may also fill an important gap in closing the loop from bio-robotics to neuroscience. We built a controllable dynamic visual environment based on our previously developed ColCOSPhi (a multi-pheromone platform) by enclosing the arena with LED panels and interacting with the micro mobile robots through a visual sensor. In addition, a wireless communication system was developed to allow the transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and an insect-vision-inspired model was undertaken to verify the applicability of the presented platform and to investigate how complex scenarios can be facilitated by making use of it.

We picked up many interesting ideas and much inspiration from colleagues in the robotics field, not only from the excellent talks but also from the high-quality robot exhibitions by well-known companies in the industry.

Conference presentations attended by the researchers at ICRA 2021
Demonstration at the ICRA 2021 conference

On the last day of the conference, we attended a wonderful tour of the Shaanxi History Museum and the Terra-Cotta Warriors, from which we learned a lot about the impressive history and culture of the Qin dynasty. The visit also made us reflect on the important role played by science and technology in assisting archaeological excavation and the protection of cultural relics.

Thanks to the support of the ULTRACEPT project, we thoroughly enjoyed the whole event, which brought us not only new knowledge about robotics and history but also inspiration that will motivate our future research. In addition, our group's research gained wider visibility through this top international conference.

Qian Feng: Centre-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors

Qian Feng is an external PhD student at the Technical University of Munich, working at project partner Agile Robots and contributing to ULTRACEPT's Work Package 4.

The IEEE International Conference on Robotics and Automation (ICRA) is an annual academic conference covering advances in robotics. It is one of the premier conferences in its field, with an ‘A’ rating from the Australian Ranking of ICT Conferences obtained in 2010 and an ‘A1’ rating from the Brazilian ministry of education in 2012.

Qian Feng attended the IEEE International Conference on Robotics and Automation (ICRA) 2020. The conference was originally scheduled to take place in Paris, France, but due to COVID-19, the conference was held virtually from 31 May 2020 until 31 August 2020.

Qian Feng presenting online at ICRA 2020

Qian presented his conference paper:

Q. Feng, Z. Chen, J. Deng, C. Gao, J. Zhang and A. Knoll, “Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 610-617, doi: 10.1109/ICRA40945.2020.9196815.

Abstract

An unstable grasp pose can lead to slip; conversely, slip detection can be used to predict an unstable grasp pose. A re-grasp is then required to correct the grasp pose and finish the task. In this work, we propose a novel re-grasp planner with multi-sensor modules to plan grasp adjustments with the feedback from a slip detector. A re-grasp planner is trained to estimate the location of the centre of mass, which helps robots find an optimal grasp pose. The dataset in this work consists of 1,025 slip experiments and 1,347 re-grasps collected by one pair of tactile sensors, an RGB-D camera, and one Franka Emika robot arm equipped with joint force/torque sensors. We show that our algorithm can successfully detect and classify slip for 5 unknown test objects with an accuracy of 76.88%, and that the re-grasp planner increases the grasp success rate by 31.0% compared to the state-of-the-art vision-based grasping algorithm.

Qian Feng: Slip Detector
Qian Feng: Grasp Success Rate on Test Objects

When asked about his experience presenting and attending ICRA 2020, Qian said:

“Thanks to the virtual conference we were still able to present our work. It also meant that more people were able to join the conference to learn about and discuss our research. Everyone was able to access the presentation and get involved in the discussion in the virtual conference for 2 months, instead of the originally scheduled 5 minutes of discussion for the on-site conference. During this conference I shared my work with many researchers from the same field and exchanged ideas. I really enjoyed the conference and learnt a lot from the other attendees.”

UHAM Researchers Present at the International Conference on Intelligent Robots and Systems

Shuang Li is a fourth-year PhD student in Computer Science at Universität Hamburg. Her research interests include dexterous manipulation, vision-based teleoperation, and imitation learning in robotics. Shuang has been working on the project Transregio SFB “Cross-modal learning” and is involved in ULTRACEPT Work Package 4. Shuang is the course leader of ‘Introduction to Robotics’.

Hongzhuo Liang is a fifth-year PhD student in Computer Science at Universität Hamburg. His research interests are robotic grasping and manipulation based on multimodal perception. Hongzhuo has been working on the project Transregio SFB “Cross-modal learning” for STEP2DYNA (691154) and ULTRACEPT Work Package 4.

The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is one of the largest and most impactful robotics research conferences worldwide. Established in 1988 and held annually, IROS provides a forum for the international robotics research community to explore the frontier of science and technology in intelligent robots and smart machines.

Researchers Shuang Li and Hongzhuo Liang from ULTRACEPT partner Universität Hamburg attended and presented at IROS 2020. In addition to technical sessions and multi-media presentations, the IROS conference also held panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the fruitful discussions among conference attendees.

Due to COVID-19, the conference was hosted online with free access to every Technical Talk, Plenary, and Keynote and over sixty Workshops, Tutorials and Competitions. This went online on 24th October 2020 and was available until 24th January 2021.

A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU

Shuang Li presenting ‘A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU’

At IROS 2020, Shuang Li presented her conference paper:

S. Li et al., “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10900-10906, doi: 10.1109/IROS45743.2020.9340738.

Video footage of Shuang’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU (inertial measurement unit) based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A key-point based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.

Further information about this paper, including links to the code can be found here.

Robust Robotic Pouring using Audition and Haptics

Hongzhuo Liang presenting ‘Robust Robotic Pouring using Audition and Haptics’

At IROS 2020, Hongzhuo Liang presented his conference paper:

H. Liang et al., “Robust Robotic Pouring using Audition and Haptics,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10880-10887, doi: 10.1109/IROS45743.2020.9340859.

Video footage of Hongzhuo’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

Robust and accurate estimation of liquid height lies as an essential part of pouring tasks for service robots. However, vision-based methods often fail in occluded conditions, while audio-based methods cannot work well in a noisy environment. We instead propose a multimodal pouring network (MP-Net) that is able to robustly predict liquid height by conditioning on both audition and haptics input. MP-Net is trained on a self-collected multimodal pouring dataset. This dataset contains 300 robot pouring recordings with audio and force/torque measurements for three types of target containers. We also augment the audio data by inserting robot noise. We evaluated MP-Net on our collected dataset and a wide variety of robot experiments. Both network training results and robot experiments demonstrate that MP-Net is robust against noise and changes to the task and environment. Moreover, we further combine the predicted height and force data to estimate the shape of the target container.
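The audio augmentation step mentioned in the abstract can be sketched as follows (an illustrative example, not MP-Net's actual code): a recorded robot-noise clip is scaled and mixed into a pouring recording so that the result has a chosen signal-to-noise ratio.

```python
import numpy as np

def augment_with_robot_noise(audio, noise, snr_db):
    """Mix a robot-noise clip into a pouring recording at a target SNR.

    audio, noise : 1-D float arrays; noise is tiled/cropped to match audio
    snr_db       : desired signal-to-noise ratio in decibels
    """
    reps = int(np.ceil(len(audio) / len(noise)))
    noise = np.tile(noise, reps)[: len(audio)]
    p_signal = np.mean(audio ** 2)
    p_noise = np.mean(noise ** 2)
    # scale noise so that 10*log10(p_signal / p_scaled_noise) == snr_db
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return audio + scale * noise
```

Training on copies augmented at several SNR levels is a standard way to make an audio model robust to machine noise at test time.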

Further information about this paper, including links to the code, can be found here.

Yannick Jonetzko presents a paper at the International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP)

Yannick Jonetzko is a PhD candidate at the Universität Hamburg working on the use of tactile sensors in multimodal environments. In 2018 he visited Tsinghua University as part of the STEP2DYNA project and is now involved in the ULTRACEPT project, contributing to Work Package 4.

The International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP) took place from 25th to 27th December 2020 and was attended by ULTRACEPT researcher Yannick Jonetzko from project partner the Universität Hamburg. Due to the current travel restrictions, the conference was held online and Yannick’s work was presented via a pre-recorded video.

In the past few years, ICCSIP has matured into a well-established series of international conferences on cognitive information processing and related fields around the world. At the 2020 conference, over 60 researchers presented their work in multiple sessions on algorithms, applications, vision, manipulation, bioinformatics, and autonomous vehicles.

Yannick presented his conference paper ‘Multimodal Object Analysis with Auditory and Tactile Sensing using Recurrent Neural Networks’.

Abstract

Robots are usually equipped with many different sensors that need to be integrated. While most research is focused on the integration of vision with other senses, we successfully integrate tactile and auditory sensor data from a complex robotic system. Herein, we train and evaluate a neural network for the classification of the content of eight optically identical medicine containers. To investigate the relevance of the tactile modality in classification under realistic conditions, we apply different noise levels to the audio data. Our results show significantly higher robustness to acoustic noise with the combined multimodal network than with the unimodal audio-based counterpart.
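As a rough illustration of the multimodal idea (not the paper's actual network; the weights and dimensions here are placeholders), audio and tactile features can be fused per timestep before a recurrent encoding:

```python
import numpy as np

def fuse_and_encode(audio_seq, tactile_seq, W_in, W_rec):
    """Early-fusion recurrent encoding: concatenate per-timestep audio and
    tactile features, then run a plain tanh recurrence over the sequence.

    audio_seq   : (T, Da) audio features per timestep
    tactile_seq : (T, Dt) tactile features per timestep
    W_in        : (Da + Dt, H) input weights
    W_rec       : (H, H) recurrent weights
    Returns the final (H,) hidden state, to be fed to a classifier head.
    """
    h = np.zeros(W_rec.shape[0])
    for a, t in zip(audio_seq, tactile_seq):
        x = np.concatenate([a, t])          # early fusion of both modalities
        h = np.tanh(x @ W_in + h @ W_rec)   # vanilla recurrent update
    return h
```

Because the recurrence sees both modalities at every step, corrupted audio frames can be partially compensated by the tactile channel, which is the robustness effect the abstract reports.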

ULTRACEPT Researchers Attend IEEE ICARM 2020 Conference

The IEEE International Conference on Advanced Robotics and Mechatronics (ICARM) 2020 was held in Shenzhen, China, and attended by three University of Lincoln (UoL) ULTRACEPT researchers: Dr Qinbing Fu, Xuelong Sun, and Tian Liu. These researchers completed 12-month ULTRACEPT secondments with project partner Guangzhou University (GZHU) in China.

L to R: Tian Liu, Qinbing Fu, and Xuelong Sun attend IEEE ARM 2020 Conference

The IEEE ICARM conference took place between the 18th and 21st December 2020 and is a flagship conference on bio-mechatronics, bio-robotics, and neuro-robotics systems. The conference provides an international forum for researchers, educators, and engineers in the general areas of mechatronics, robotics, automation, and sensors to disseminate their latest research results and exchange views on the future research directions of these fields.

The UoL researchers attended to promote the publications they produced under the European Union’s Horizon 2020 research and innovation programme, Marie Sklodowska-Curie grant agreements STEP2DYNA (691154) and ULTRACEPT (778062).

Dr Qinbing Fu: Complementary Visual Neuronal Systems Model for Collision Sensing

Dr Qinbing Fu presented his research paper entitled “Complementary Visual Neuronal Systems Model for Collision Sensing”, which was included in the conference proceedings, on the Monday morning. Dr Fu also chaired the MoSHT3 regular session on the topic of Biomimetics.

Qinbing Fu presents his research at the IEEE ARM 2020 Conference

Q. Fu and S. Yue, “Complementary Visual Neuronal Systems Model for Collision Sensing,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 609-615, doi: 10.1109/ICARM49381.2020.9195303.

Abstract – Inspired by insects’ visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature by frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
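The coordination-and-competition idea can be caricatured in a few lines (a toy decision rule for illustration only, not the paper's neural model): translational evidence from the LPTC pair suppresses looming evidence from the LGMDs, so only frontal approach triggers the alarm.

```python
def collision_decision(lgmd1, lgmd2, lptc_r, lptc_l, threshold=0.5):
    """Toy coordination rule between collision-selective and
    translation-selective neurons.

    All inputs are scalar activations in [0, 1]; the threshold is arbitrary.
    Returns True if a frontal approach is signalled.
    """
    looming = max(lgmd1, lgmd2)          # approach-in-depth evidence
    translation = abs(lptc_r - lptc_l)   # lateral-motion evidence
    # translation competes with, and suppresses, the looming signal
    return (looming - translation) > threshold
```

A strongly looming stimulus with balanced LPTC responses fires the detector, while a laterally translating object (one LPTC dominant) is vetoed even if it partially excites the LGMDs.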

Dr Qinbing Fu presents his research at the IEEE ARM 2020 Conference

When asked about the conference experience, Dr Fu said:

2020 has been a very tough year for everyone around the world. The pandemic has absolutely affected people’s lives. As an academic researcher, it has become more difficult to exchange ideas closely with other colleagues. Almost all academic conferences across every discipline have moved to online presenting. This has made it challenging to disseminate research and exchange ideas.

China was suffering from the pandemic in early 2020. However, due to its successful control of COVID-19, after June 2020 most parts of life, including work, had returned to normal. As a result, the conference was successfully held in person as originally planned, although international guests were not able to attend due to travel restrictions.

The conference attendees appreciated how well the conference was organised in Shenzhen. Personally, I very much enjoyed attending this conference. Due to travel restrictions, it was not a large conference, but every detail was considered and arranged properly. There were many enjoyable moments and I learnt a lot. The plenary presentations were very high quality. Another special, memorable experience for me was the opportunity to chair a session for the first time during the conference. It was awesome!

Xuelong Sun: Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles

Xuelong Sun presented his co-authored paper ‘Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles’ as the lead author, Hamid Isakhani, was unable to attend due to travel restrictions. Their paper was shortlisted as a Best Conference Paper Finalist.

Best conference paper finalist at the IEEE ARM 2020 Conference

H. Isakhani, S. Yue, C. Xiong, W. Chen, X. Sun and T. Liu, “Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 602-608, doi: 10.1109/ICARM49381.2020.9195392.

Abstract – Gliding is the most efficient flight mode that is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of enduring transatlantic flights, which could potentially inspire numerous solutions to the problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings), for further wind tunnel based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.
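The flexural stiffness mentioned in the abstract is commonly estimated from a cantilever tip-load test using the standard Euler-Bernoulli beam result, tip deflection δ = FL³/(3EI). A minimal helper might look like this (an illustrative sketch with invented example numbers, not the paper's measurement procedure):

```python
def flexural_stiffness(force_n, span_m, tip_deflection_m):
    """Estimate flexural stiffness EI (N*m^2) of a wing prototype from a
    cantilever tip-load test.

    Euler-Bernoulli beam theory gives delta = F * L**3 / (3 * EI) for a
    point load F at the tip of a cantilever of span L, hence:
    """
    return force_n * span_m ** 3 / (3.0 * tip_deflection_m)

# Hypothetical example: a 30 mN tip load on a 100 mm wing deflecting 1 mm
ei = flexural_stiffness(0.03, 0.1, 0.001)
```

Comparing the EI of an artificial wing against measured values for real locust wings is the kind of like-for-like check the paper describes.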

Xuelong Sun presents his research at the IEEE ARM 2020 Conference

When asked about the conference, Xuelong said:

This has been a fantastic conference, even as we get through this special year. The keynote speakers delivered very impressive talks concerning control systems, AI, and robotics, which offered great food for thought. I was very pleased that our paper was shortlisted for the Best Student Paper Award.

In my presentation, I reported on the work we have completed on manufacturing bio-inspired wings, mimicking the locust, for future flying robots. We emphasised that the methods applied are affordable, and that the manufactured wings feature high flexibility and rigidity. Although we didn’t win the award, we were finalists, which has encouraged us to keep moving forward with our future research.

Manufactured bio-inspired wings for future flying robots, mimicking the locust

Tian Liu: Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment

Tian Liu presented his paper ‘Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment.’

T. Liu, X. Sun, C. Hu, Q. Fu, H. Isakhani and S. Yue, “Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 595-601, doi: 10.1109/ICARM49381.2020.9195311

Abstract – Social insects are known as the experts in handling complex tasks in a collective, smart way, although their small brains contain only limited computation resources and sensory information. It is believed that pheromones play a vital role in shaping social insects’ collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromone, attractive and repellent, enabling micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform, ColCOSΦ. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOSΦ platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in real insects’ research.
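The two-pheromone mechanism can be sketched with a toy greedy rule (illustrative only, not the ColCOSΦ implementation; the weighting and grid representation are assumptions): each robot climbs the attractive pheromone field while discounting cells marked by repellent pheromone, e.g. cells recently occupied by other robots.

```python
import numpy as np

def next_cell(pos, attractive, repellent, w_rep=1.0):
    """One greedy step for a robot reading two pheromone maps.

    pos        : (row, col) current cell
    attractive, repellent : 2-D arrays of equal shape holding pheromone levels
    w_rep      : how strongly the repellent pheromone is discounted
    """
    h, w = attractive.shape
    r, c = pos
    best, best_score = pos, -np.inf
    # examine the 3x3 neighbourhood (including staying put)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                score = attractive[nr, nc] - w_rep * repellent[nr, nc]
                if score > best_score:
                    best, best_score = (nr, nc), score
    return best
```

Run for every robot per timestep, with each robot depositing repellent pheromone where it sits, this kind of rule spreads the swarm over the attractive regions rather than piling all robots onto the single best cell.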

Tian Liu presents his research at the IEEE ARM 2020 Conference

When asked about the conference, Tian Liu said:

During the COVID-19 pandemic, this conference was a rare opportunity to listen in person to the keynote speakers’ presentations about control, artificial intelligence, and bio-inspiration. I also presented my own research on multiple pheromones and the ColCOSΦ experiment system, and had friendly exchanges with scholars in related fields. I believe this conference has enabled more people to learn about our research progress and results.