
Universität Hamburg Hosts ULTRACEPT Sandpit Session – Robotic Grasping based on Deep Learning: Towards the Robust Perception

To support the continued collaboration and knowledge exchange of ULTRACEPT researchers, the consortium hosts quarterly online ‘Sandpit Sessions’. These sessions give researchers an opportunity to share their work in an informal forum where they can raise and discuss issues and challenges and gain support and feedback from the group.

Researchers Hongzhuo Liang and Shuang Li from the Universität Hamburg (UHAM) recently hosted a Sandpit Session on the 5th March 2021. The theme of the session was Robotic Grasping based on Deep Learning: Towards the Robust Perception. 28 attendees across the consortium participated.

Hongzhuo Liang presenting at the ULTRACEPT Sandpit Session

Sandpit Session 2: Robotic Grasping based on Deep Learning: Towards the Robust Perception

  • Date: Friday, 5th March 2021
  • Time: UK 10:00; China 18:00; Germany 11:00; Argentina 07:00; Malaysia 18:00; Japan 19:00.
  • Facilitators: Hongzhuo Liang, PhD student, Universität Hamburg and Shuang Li, PhD student, Universität Hamburg
  • Location: MS Teams
Sandpit Schedule

  • 10:00-10:05 (UK time) Arrival and welcome (Shuang Li)
  • 10:05-10:35 Robotic Grasping based on Deep Learning: Towards the Robust Perception (Hongzhuo Liang)
    Autonomous robotic grasping is a challenging task that spans several research areas, e.g. perception, robotics, and psychology. In this talk, I will review the state of the art in robotic grasping and then report on the current progress of my own grasping work: from two-finger grasping to multi-finger grasping based on deep learning methods. The session is intended to encourage an open-minded, quick exchange of ideas.
  • 10:35-11:10 Group discussion about the session topic (facilitated by Shuang Li)
    How do bio-inspired methods help to design a better robotic grasping agent? A group discussion where attendees can raise questions and discuss the research topic and potential cooperation presented in the talk.
  • 11:10-11:25 Open forum discussion (facilitated by Shuang Li)
    An opportunity for attendees to ask the group for advice regarding any challenges they are facing with their own research.
  • 11:25-11:30 Final comments and volunteering of a facilitator for the next session (Shuang Li)
    We are planning our next sandpit session for May 2021.
Hongzhuo Liang and Shuang Li discussing their research ideas with the attendees

More detailed information about Hongzhuo Liang’s research publications and code can be found on his website.

To find out more about the fascinating work being carried out by the team at UHAM, check out their YouTube channel TAMS UHAM.

Qian Feng: Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors

Qian Feng is an external PhD student at the Technical University of Munich, working at project partner Agile Robots and contributing to ULTRACEPT’s Work Package 4.

The IEEE International Conference on Robotics and Automation (ICRA) is an annual academic conference covering advances in robotics. It is one of the premier conferences in its field, with an ‘A’ rating from the Australian Ranking of ICT Conferences obtained in 2010 and an ‘A1’ rating from the Brazilian ministry of education in 2012.

Qian Feng attended the IEEE International Conference on Robotics and Automation (ICRA) 2020. The conference was originally scheduled to take place in Paris, France, but due to COVID-19, the conference was held virtually from 31 May 2020 until 31 August 2020.

Qian Feng presenting online at ICRA 2020

Qian presented his conference paper:

Q. Feng, Z. Chen, J. Deng, C. Gao, J. Zhang and A. Knoll, “Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 610-617, doi: 10.1109/ICRA40945.2020.9196815.

Abstract

An unstable grasp pose can lead to slip, thus an unstable grasp pose can be predicted by slip detection. A re-grasp is required afterward in order to correct the grasp pose and finish the task. In this work, we propose a novel re-grasp planner with multi-sensor modules to plan grasp adjustments with the feedback from a slip detector. Then a re-grasp planner is trained to estimate the location of centre of mass, which helps robots find an optimal grasp pose. The dataset in this work consists of 1,025 slip experiments and 1,347 re-grasps collected by one pair of tactile sensors, an RGB-D camera, and one Franka Emika robot arm equipped with joint force/torque sensors. We show that our algorithm can successfully detect and classify the slip for 5 unknown test objects with an accuracy of 76.88% and a re-grasp planner increases the grasp success rate by 31.0%, compared to the state-of-the-art vision-based grasping algorithm.
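For readers who would like a concrete picture of how tactile and visual signals might be fused for slip detection, the short PyTorch sketch below shows one possible late-fusion classifier. The feature dimensions, layer sizes, and fusion scheme are illustrative assumptions only; they are not the architecture used in the paper.

```python
# Illustrative sketch only (not the paper's architecture): a late-fusion
# classifier that combines tactile and visual features to predict slip.
import torch
import torch.nn as nn

class TactileVisualSlipNet(nn.Module):
    def __init__(self, tactile_dim=64, visual_dim=128):
        super().__init__()
        # Hypothetical encoders for pre-extracted tactile and visual features.
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        # Joint head maps the fused embedding to a slip probability.
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tactile, visual):
        fused = torch.cat([self.tactile_enc(tactile), self.visual_enc(visual)], dim=-1)
        return torch.sigmoid(self.head(fused))  # probability that the grasp is slipping

if __name__ == "__main__":
    net = TactileVisualSlipNet()
    tactile = torch.randn(8, 64)       # e.g. flattened tactile-array readings
    visual = torch.randn(8, 128)       # e.g. features from an RGB-D crop of the grasp
    print(net(tactile, visual).shape)  # torch.Size([8, 1])
```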

Qian Feng: Slip Detector
Qian Feng: Grasp Success Rate on Test Objects

 

When asked about his experience presenting and attending ICRA 2020, Qian said:

“Thanks to the virtual conference we were still able to present our work. It also meant that more people were able to join the conference to learn about and discuss our research. Everyone was able to access the presentation and get involved in the discussion in the virtual conference for 2 months, instead of the originally scheduled 5 minutes of discussion for the on-site conference. During this conference I shared my work with many researchers from the same field and exchanged ideas. I really enjoyed the conference and learnt a lot from the other attendees.”

UHAM Researchers Present at the International Conference on Intelligent Robots and Systems

Shuang Li is a fourth-year PhD student in Computer Science at Universität Hamburg. Her research interests are dexterous manipulation, vision-based teleoperation, and imitation learning in robotics. Shuang has been working on the project Transregio SFB “Cross-modal learning” and is involved in ULTRACEPT Work Package 4. Shuang is the course leader of ‘Introduction to Robotics’.

Hongzhuo Liang is a fifth-year PhD student in Computer Science at Universität Hamburg. His research interests are robotic grasping and manipulation based on multimodal perception. Hongzhuo has been working on the project Transregio SFB “Cross-modal learning”, the STEP2DYNA project (691154), and ULTRACEPT Work Package 4.

The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is one of the largest and most impactful robotics research conferences worldwide. Established in 1988 and held annually, IROS provides a forum for the international robotics research community to explore the frontier of science and technology in intelligent robots and smart machines.

Researchers Shuang Li and Hongzhuo Liang from ULTRACEPT partner Universität Hamburg attended and presented at IROS 2020. In addition to technical sessions and multi-media presentations, the IROS conference also held panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the fruitful discussions among conference attendees.

Due to COVID-19, the conference was hosted online with free access to every Technical Talk, Plenary, and Keynote and over sixty Workshops, Tutorials and Competitions. This went online on 24th October 2020 and was available until 24th January 2021.

A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU

 

Shuang Li presenting ‘A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU’

 

At IROS 2020, Shuang Li presented her conference paper:

S. Li et al., “A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10900-10906, doi: 10.1109/IROS45743.2020.9340738.

Video footage of Shuang’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU (inertial measurement units) based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A key-point based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
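The ‘key-point based reconstruction loss’ mentioned in the abstract can be illustrated with a short sketch in which reconstruction errors near hand keypoints are weighted more heavily than elsewhere in the image. The Gaussian weighting, its parameters, and the 21-keypoint assumption below are illustrative choices, not the authors’ implementation.

```python
# Illustrative sketch: up-weight the reconstruction error near hand keypoints.
import torch

def keypoint_weighted_recon_loss(pred, target, keypoints, sigma=5.0, boost=4.0):
    """pred, target: (B, 1, H, W) depth images; keypoints: (B, K, 2) pixel coords (x, y)."""
    b, _, h, w = pred.shape
    ys = torch.arange(h, dtype=torch.float32).view(1, 1, h, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, 1, 1, w)
    kx = keypoints[..., 0].view(b, -1, 1, 1)
    ky = keypoints[..., 1].view(b, -1, 1, 1)
    # Gaussian bump around every keypoint, merged by taking the max over keypoints.
    bumps = torch.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2 * sigma ** 2))
    weight = 1.0 + boost * bumps.max(dim=1, keepdim=True).values
    return (weight * (pred - target) ** 2).mean()

if __name__ == "__main__":
    pred = torch.rand(2, 1, 96, 96)
    target = torch.rand(2, 1, 96, 96)
    kps = torch.randint(0, 96, (2, 21, 2)).float()  # 21 hypothetical hand keypoints
    print(keypoint_weighted_recon_loss(pred, target, kps).item())
```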

Further information about this paper, including links to the code, can be found here.

Robust Robotic Pouring using Audition and Haptics

 

Hongzhuo Liang presenting ‘Robust Robotic Pouring using Audition and Haptics’

 

At IROS 2020, Hongzhuo Liang presented his conference paper:

H. Liang et al., “Robust Robotic Pouring using Audition and Haptics,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2020, pp. 10880-10887, doi: 10.1109/IROS45743.2020.9340859.

Video footage of Hongzhuo’s work can be viewed on the UHAM Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Abstract

Robust and accurate estimation of liquid height lies as an essential part of pouring tasks for service robots. However, vision-based methods often fail in occluded conditions, while audio-based methods cannot work well in a noisy environment. We instead propose a multimodal pouring network (MP-Net) that is able to robustly predict liquid height by conditioning on both audition and haptics input. MP-Net is trained on a self-collected multimodal pouring dataset. This dataset contains 300 robot pouring recordings with audio and force/torque measurements for three types of target containers. We also augment the audio data by inserting robot noise. We evaluated MP-Net on our collected dataset and a wide variety of robot experiments. Both network training results and robot experiments demonstrate that MP-Net is robust against noise and changes to the task and environment. Moreover, we further combine the predicted height and force data to estimate the shape of the target container.
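As a rough illustration of how an audio stream and a force/torque stream can be fused to regress liquid height, the sketch below combines a small spectrogram encoder with a recurrent encoder over the force/torque signal. All layer sizes, input shapes, and the fusion scheme are assumptions for illustration; this is not MP-Net itself.

```python
# Minimal illustrative sketch (not MP-Net): fuse an audio spectrogram with a
# force/torque sequence to regress liquid height during pouring.
import torch
import torch.nn as nn

class AudioHapticHeightNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN over a (1, n_mels, n_frames) spectrogram.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(), nn.Linear(16 * 4 * 4, 64), nn.ReLU())
        # GRU over the 6-D force/torque stream.
        self.haptic_enc = nn.GRU(input_size=6, hidden_size=64, batch_first=True)
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, spec, ft_seq):
        a = self.audio_enc(spec)
        _, h = self.haptic_enc(ft_seq)
        return self.head(torch.cat([a, h[-1]], dim=-1))  # predicted liquid height

if __name__ == "__main__":
    net = AudioHapticHeightNet()
    spec = torch.randn(4, 1, 64, 128)   # batch of mel spectrograms
    ft = torch.randn(4, 200, 6)         # 200 force/torque samples per recording
    print(net(spec, ft).shape)          # torch.Size([4, 1])
```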

Further information about this paper, including links to the code, can be found here.

Yannick Jonetzko presents a paper at the International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP)

Yannick Jonetzko is a PhD candidate at the Universität Hamburg working on the usage of tactile sensors in multimodal environments. In 2018 he visited Tsinghua University as part of the STEP2DYNA project and is now involved in the ULTRACEPT project, contributing to Work Package 4.

The International Conference on Cognitive Systems and Information Processing 2020 (ICCSIP) took place from 25th to 27th December 2020 and was attended by ULTRACEPT researcher Yannick Jonetzko from project partner the Universität Hamburg. Due to the current travel restrictions, the conference was held online and Yannick’s work was presented via a pre-recorded video.

In the past few years, ICCSIP has matured into a well-established series of international conferences on cognitive information processing and related fields. At the 2020 conference, over 60 researchers presented their work in multiple sessions on algorithms, applications, vision, manipulation, bioinformatics, and autonomous vehicles.

Yannick presented his conference paper Multimodal Object Analysis with Auditory and Tactile Sensing using Recurrent Neural Networks.

Abstract

Robots are usually equipped with many different sensors that need to be integrated. While most research is focused on the integration of vision with other senses, we successfully integrate tactile and auditory sensor data from a complex robotic system. Herein, we train and evaluate a neural network for the classification of the content of eight optically identical medicine containers. To investigate the relevance of the tactile modality in classification under realistic conditions, we apply different noise levels to the audio data. Our results show significantly higher robustness to acoustic noise with the combined multimodal network than with the unimodal audio-based counterpart.
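The paper describes applying different noise levels to the audio data to test robustness; one common way to do this, sketched below, is to mix noise into each recording at a chosen signal-to-noise ratio. The noise type and SNR values here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of audio-noise augmentation at a target SNR (assumed
# setup; the paper's exact noise conditions may differ).
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    """Mix `noise` into `clean` so the result has the requested signal-to-noise ratio."""
    noise = np.resize(noise, clean.shape)
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))  # stand-in recording
    noise = rng.normal(size=16000)
    for snr in (20, 10, 0):  # progressively harder noise conditions
        noisy = add_noise_at_snr(audio, noise, snr)
        print(snr, "dB ->", round(float(np.mean(noisy ** 2)), 4))
```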

UPM’s Azreen Azman Completes a Twelve Month Secondment in the United Kingdom

Azreen Azman is an associate professor at Universiti Putra Malaysia in Kuala Lumpur. He has just completed a 6 month secondment at the University of Lincoln and a 6 month secondment at Visomorphic Technology Ltd as part of the ULTRACEPT project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement. He has been involved in Work Packages 2 and 3.

Hazard perception and collision detection are important components for the safety of an autonomous car, and they become more challenging in low light environments. During the twelve month secondment, Azreen’s focus was to investigate methods for detecting objects on the road in low light conditions using captured images or video, in order to recognise hazards and avoid collisions.

Azreen Azman attending the first project meeting at the University of Lincoln with Prof. Yue and Assoc. Prof. Shyamala

One of the first tasks Azreen conducted in Lincoln was to collect audio-visual data in different road conditions. Azreen had the opportunity to join his colleagues Siavash Bahrami and Assoc Prof Shyamala Doraisamy from UPM who were also carrying out ULTRACEPT secondments at UoL and conducting audio-visual recordings of the road at the Millbrook Proving Ground in Bedford, United Kingdom. This provided a controlled environment in addition to other recordings conducted on normal roads.

Azreen Azman preparing for a recording session on a normal road
Azreen Azman preparing for a recording session at the Millbrook Proving Ground in Bedford

It is anticipated that the performance of deep learning-based object detection algorithms such as R-CNN variants and YOLO diminishes as the input images become darker, due to the reduced amount of light and increased noise in the captured images. In Azreen’s preliminary experiment, which used a Faster R-CNN model trained and tested on a collection of self-collected road images, the object detection performance was significantly reduced, to almost 81%, for dark and noisy images as compared to the daylight images.

To overcome this problem, an image enhancement and noise reduction method was applied to the dark images prior to the object detection module. In his investigations, Azreen trained LLNet, a deep autoencoder-based image enhancement and noise reduction method, for dark image enhancement. As a result, Faster R-CNN was able to detect 29% more objects on the enhanced images compared to the dark images. The performance of the deep learning-based LLNet is better than the conventional Histogram Equalisation (HE) and Retinex methods. However, the patch prediction and image reconstruction steps are computationally expensive for real-time applications.
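The enhance-then-detect pipeline can be sketched as follows. The TinyEnhancer below is a hypothetical, untrained stand-in for LLNet (which is not an off-the-shelf library component); it is included only to show where enhancement sits relative to a standard Faster R-CNN detector from torchvision (version 0.13+ API assumed).

```python
# Sketch of a two-stage low-light pipeline: enhance the dark frame first, then
# run an off-the-shelf detector. TinyEnhancer is a hypothetical stand-in for a
# trained LLNet model; in a real system its weights would be loaded from disk.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class TinyEnhancer(nn.Module):
    """Placeholder autoencoder-style enhancer (untrained, for illustration only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    enhancer = TinyEnhancer().eval()
    # weights=None / weights_backbone=None: random weights, no download (torchvision >= 0.13).
    detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None).eval()
    dark_frame = torch.rand(3, 240, 320) * 0.1          # simulated under-exposed image
    with torch.no_grad():
        enhanced = enhancer(dark_frame.unsqueeze(0)).squeeze(0)
        detections = detector([enhanced])               # detector expects a list of CHW tensors
    print(sorted(detections[0].keys()))                 # ['boxes', 'labels', 'scores']
```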

A sample of a dark and noisy image
A sample of an image improved using LLNet

In August 2020, Azreen began his secondment at Visomorphic Technology Ltd, an industry partner for the ULTRACEPT project. In collaboration with the team, he continued working on the model to improve its efficiency for real-time application. His focus was to adopt the principles of the nocturnal insect vision system for image enhancement and object detection.

Azreen working at Visomorphic Technology Ltd

During Azreen’s stay in the UK, he attended and presented at the annual ULTRACEPT mid-term project meeting which was held in February 2020 and hosted in Cambridge. Azreen presented his work ‘Detection of objects on the road in low light condition using deep learning’. He also participated in ULTRACEPT Sandpit Session 1 facilitated by Qinbing Fu.

In addition, Azreen attended the first Lincoln Conference on Intelligent Robots and Systems, organised by the Lincoln Centre for Autonomous Systems (L-CAS), and the keynote session on hyper-heuristics delivered by Prof. Graham Kendall from the University of Nottingham, both held in October 2020.

Azreen Azman Attending the ULTRACEPT Mid-term Meeting

‘The secondment has given me the opportunities and resources to conduct my research for the project and to improve my skills and networking through various meetings and discussions. Despite the challenges faced due to the ongoing pandemic, both of my hosts (University of Lincoln and Visomorphic Technology Ltd) have provided me with the support to work remotely while continuously engaging with other researchers virtually. I would like to thank the sponsors, including Universiti Putra Malaysia and the ULTRACEPT Marie Sklodowska-Curie secondment grant, for these opportunities.’ Azreen Azman

 

Dr Qinbing Fu Completes 12 Month Secondment in China

Dr Qinbing Fu received his PhD from the University of Lincoln in October 2018. Following a secondment under the STEP2DYNA project, Dr Fu carried out a further secondment under the ULTRACEPT project from August 2019 to August 2020 at partner Guangzhou University, where he undertook research contributing to Work Packages 1 and 4. Dr Fu then went on to work as a postdoctoral researcher with Professor Shigang Yue until January 2021. Dr Fu’s ULTRACEPT contributions have involved directing research into the computational modelling of motion vision neural systems and applications in robotics. His research achievements and outputs for the project thus far are outlined in this blog post.

Research Outcomes

In support of the ULTRACEPT project, Dr Fu has published seven research papers including five journal papers and two conference papers. He was the first author on five of the publications and co-authored the other two. His main achievements have included:

  • The modelling of LGMD-1 and LGMD-2 collision perception neural network models, with applications in robot and vehicle scenarios;
  • The modelling of the Drosophila motion vision neural system for decoding the direction of a foreground translating object against a moving cluttered background;
  • A review of the related field of research;
  • The integration of multiple neural system models for collision sensing.
Dr Qinbing Fu's ULTRACEPT research activities: using the Colias robots for modelling work on collision detection visual systems

Dr Fu’s research outputs can be found on his personal web pages on Google Scholar and ResearchGate. In addition, Qinbing directed promising research on building visually dynamic walls in an arena to test the on-board visual system. These research ideas have been collated and summarised in his research papers.

Dr Fu’s research contributions have fully supported ULTRACEPT’s WP1 and WP4. This includes modelling work on collision detection visual systems with systematic experiments on vehicle scenarios and also the integration of multiple neural system models for motion perception.

Using the Colias robots for modelling work on collision detection visual systems.

Secondment at Guangzhou University, China

Dr Fu carried out his ULTRACEPT secondment at project partner GZHU in China, where he worked with Professor Jigen Peng. During this period he developed his capabilities in several areas, becoming a more mature researcher in the academic community: pursuing progressive research ideas, collaborating with group members to complete research papers, coordinating teamwork, disseminating the project, communicating with global partners, and writing project proposals. The ULTRACEPT secondment has undoubtedly been very successful for Dr Fu.

Dissemination Activities

Dr Fu has undertaken a number of dissemination activities to promote the ULTRACEPT research outcomes. On the 28th July 2020, he presented his research at the ULTRACEPT online Workshop 2 on the topic of “Adaptive Inhibition Matters to Robust Collision Perception in Highly Variable Environments”. At this event, he exchanged ideas with project partners.

Dr Fu presents at ULTRACEPT Workshop 2

Dr Fu also facilitated a ULTRACEPT online Sandpit Session on 27 November 2020, during which he gave a talk on “Past, Present, and Future Modelling on Bio-Inspired Collision Detection Visual Systems: Towards the Robust Perception”.

Dr Fu presents at ULTRACEPT’s First Sand Pit Session

On 18th December 2020, Dr Fu attended the 2020 IEEE International Conference on Advanced Robotics and Mechatronics (ICARM) held in Shenzhen, China where he presented his research paper entitled “Complementary visual neuronal systems model for collision sensing”. He also chaired a Session on “Biomimetics” during this conference.

Dr Qinbing Fu presents his research at the IEEE ARM 2020 Conference

ULTRACEPT Researchers Attend IEEE ICARM 2020 Conference

The IEEE International Conference on Advanced Robotics and Mechatronics (ICARM) 2020 was held in Shenzhen, China, and attended by three University of Lincoln (UoL) ULTRACEPT researchers: Dr Qinbing Fu, Xuelong Sun, and Tian Liu. These researchers completed 12 month ULTRACEPT secondments with project partner Guangzhou University (GZHU) in China.

L to R: Tian Liu, Qinbing Fu, and Xuelong Sun attend IEEE ARM 2020 Conference

The IEEE ARM Conference took place between the 18th and 21st December 2020 and is the flagship conference on bio-mechatronics, bio-robotics, and neuro-robotics systems. The conference provides an international forum for researchers, educators, and engineers in the general areas of mechatronics, robotics, automation, and sensors to disseminate their latest research results and exchange views on the future research directions of these fields.

The UoL researchers attended to promote their publications produced under the European Union’s Horizon 2020 research and innovation programme Marie Sklodowska-Curie grant agreements STEP2DYNA (691154) and ULTRACEPT (778062).

Dr Qinbing Fu: Complementary Visual Neuronal Systems Model for Collision Sensing

Dr Qinbing Fu presented his research paper entitled “Complementary Visual Neuronal Systems Model for Collision Sensing”, which was included in the conference proceedings, on the Monday morning. Dr Fu also chaired the MoSHT3 Regular Session on the topic of Biomimetics.

Qinbing Fu presents his research at the IEEE ARM 2020 Conference

Q. Fu and S. Yue, “Complementary Visual Neuronal Systems Model for Collision Sensing,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 609-615, doi: 10.1109/ICARM49381.2020.9195303.

Abstract – Inspired by insects’ visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied, intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature by frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
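The coordination-and-competition idea in the abstract can be made concrete with a toy fusion rule in which the approach (LGMD-like) evidence is suppressed when horizontal-translation (LPTC-like) evidence dominates. The update rule and constants below are illustrative assumptions, not the published model.

```python
# Toy sketch of combining LGMD-like and LPTC-like responses: the collision cue
# is suppressed whenever net horizontal translation dominates the scene.
import numpy as np

def fuse_responses(lgmd1, lgmd2, lptc_r, lptc_l, suppression=1.5):
    """All inputs are per-frame activations in [0, 1]; returns a fused collision signal."""
    approach = 0.5 * (lgmd1 + lgmd2)          # looming / approach evidence
    translation = np.abs(lptc_r - lptc_l)     # net horizontal translation evidence
    return np.clip(approach - suppression * translation, 0.0, 1.0)

if __name__ == "__main__":
    lgmd1 = lgmd2 = np.linspace(0.1, 0.9, 10)  # an object gradually approaching
    lptc_r = np.zeros(10)
    lptc_l = np.zeros(10)
    lptc_r[:5] = 0.6                           # early frames dominated by rightward translation
    print(np.round(fuse_responses(lgmd1, lgmd2, lptc_r, lptc_l), 2))
```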

Dr Qinbing Fu presents his research at the IEEE ARM 2020 Conference

When asked about the conference experience, Dr Fu said:

2020 has been a very tough year for everyone around the world. The pandemic has absolutely affected people’s lives. As an academic researcher, it has become more difficult to exchange ideas closely with other colleagues. Almost all academic conferences across every discipline have moved to online presenting. This has made it challenging to disseminate research and exchange ideas.

China was suffering from the pandemic in early 2020. However, due to its successful control of COVID-19, after June 2020 most parts of life, including work, had returned to normal. As a result, the conference was successfully held in person as originally planned, although international guests were not able to attend due to travel restrictions.

The conference attendees appreciated how well the conference was organised in Shenzhen. Personally, I very much enjoyed attending this conference. Due to travel restrictions, it was not a large conference, but every detail was considered and arranged properly. There were many enjoyable moments and I learnt a lot. The plenary presentations were very high quality. Another special, memorable experience for me was the opportunity to chair a session for the first time during the conference. It was awesome!

Xuelong Sun: Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles

Xuelong Sun presented the co-authored paper ‘Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles‘ because the lead author, Hamid Isakhani, was unable to attend due to travel restrictions. Their paper was awarded Best Conference Paper Finalist.

Best conference paper finalist at the IEEE ARM 2020 Conference

H. Isakhani, S. Yue, C. Xiong, W. Chen, X. Sun and T. Liu, “Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 602-608, doi: 10.1109/ICARM49381.2020.9195392.

Abstract – Gliding is the most efficient flight mode that is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, locust (Schistocerca gregaria) is a perfect example of a natural glider capable of endured transatlantic flights, which could potentially inspire numerous solutions to the problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings), for further wind tunnel based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare it to their counterparts in nature and literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.

Xuelong Sun presents his research at the IEEE ARM 2020 Conference

When asked about the conference, Xuelong said:

This has been a fantastic conference, although we are getting through this special year. The keynote speakers delivered very impressive talks concerning control systems, AI, and robotics, which offered great food for thought. I was very pleased that our paper was shortlisted for the Best Student Paper Award.

In my presentation, I reported on the work we have completed on manufacturing bio-inspired wings, mimicking the locust, for future flying robots. We emphasised that the methods applied are affordable, and that the manufactured wings feature high flexibility and rigidity. Although we didn’t win the award, we were finalists, which has encouraged us to keep moving forward with our future research.

The manufactured bio-inspired wings for future flying robots, mimicking the locust

Tian Liu: Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment

Tian Liu presented his paper ‘Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment.’

T. Liu, X. Sun, C. Hu, Q. Fu, H. Isakhani and S. Yue, “Investigating Multiple Pheromones in Swarm Robots – A Case Study of Multi-Robot Deployment,” 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 2020, pp. 595-601, doi: 10.1109/ICARM49381.2020.9195311

Abstract – Social insects are known as the experts in handling complex tasks in a collective smart way although their small brains contain only limited computation resources and sensory information. It is believed that pheromones play a vital role in shaping social insects’ collective behaviours. One of the key points underlying the stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone inspired studies usually focus on one single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromone, the attractive and the repellent pheromone, which enable micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform ColCOSΦ. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOSΦ platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in real insects’ research.
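To make the two-pheromone idea concrete, here is a toy grid-world sketch: a robot deposits a repellent pheromone at the cells it has visited while an attractive pheromone diffuses from a target region, and at each step the robot moves greedily on the combined field. Deposition amounts, evaporation and diffusion rates, and the greedy move rule are illustrative assumptions, not the ColCOSΦ implementation.

```python
# Toy two-pheromone grid world: attractive field diffuses from a target cell,
# repellent field accumulates where the robot has been (illustrative only).
import numpy as np

GRID = 20
attractive = np.zeros((GRID, GRID))
repellent = np.zeros((GRID, GRID))

def diffuse(field, rate=0.25):
    """Simple 4-neighbour diffusion that spreads pheromone to adjacent cells."""
    padded = np.pad(field, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    return (1.0 - rate) * field + (rate / 4.0) * neighbours

def step(pos, evap=0.02):
    """Deposit repellent at the robot's cell, update both fields, and move greedily."""
    global attractive, repellent
    attractive[10, 15] += 2.0                 # a target region keeps emitting attractant
    repellent[pos] += 1.0                     # visited cells repel future visits
    attractive = diffuse(attractive) * (1.0 - evap)
    repellent = diffuse(repellent) * (1.0 - evap)
    y, x = pos
    best, best_pos = -np.inf, pos
    for dy in (-1, 0, 1):                     # greedy move on the combined field
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < GRID and 0 <= nx < GRID and (dy, dx) != (0, 0):
                score = attractive[ny, nx] - repellent[ny, nx]
                if score > best:
                    best, best_pos = score, (ny, nx)
    return best_pos

if __name__ == "__main__":
    pos = (0, 0)
    for _ in range(200):
        pos = step(pos)
    print("final cell:", pos)                 # where the robot ended up on the combined field
```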

Tian Liu presents his research at the IEEE ARM 2020 Conference

When asked about the conference, Tian Liu said:

During the COVID-19 pandemic, this conference was a rare opportunity to listen in person to the keynote speakers’ presentations about control, artificial intelligence, and bioinspiration. I also presented my own research about multiple pheromones and the experiment system ColCOSΦ, and had a friendly exchange with scholars in related fields. I believe this conference has enabled more people to learn about our research progress and results.

Tian Liu Completes 12 Month Secondment at Guangzhou University

Tian Liu enrolled as a PhD scholar at the University of Lincoln in 2018. In 2018-2019 he visited Guangzhou University as part of the STEP2DYNA project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement. During this secondment Tian developed the ColCOSΦ experiment platform for social insect and swarm robotics research, and investigated how multiple virtual pheromones affect swarm robots. More recently, Tian completed a 12 month secondment under ULTRACEPT at Guangzhou University.

Tian Liu recently completed his second 12 month secondment at project partner Guangzhou University in China as part of the ULTRACEPT project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement. Tian visited Guangzhou from November 2019 to November 2020 and has been involved in Work Packages 1 and 4.

Tian reflects on what he has achieved during his time in Guangzhou

Most social insects, such as ants, only have a tiny brain. However, they can complete very difficult and complex tasks with a large number of individuals cooperating. Examples include building a large nest or collecting food through rugged routes. They are able to do this because the pheromones act as an important communication medium.

During this 12 month secondment, I continued to focus my attention on swarm robots with multiple pheromones. I believe that it is the interaction of multiple pheromones that enables insects to perform such demanding tasks, rather than the single-pheromone mechanism that is now so widely studied. I worked with ULTRACEPT researcher Xuelong Sun and Dr Cheng Hu to develop ColCOSΦ, which can easily implement multiple-pheromone research experiments. We verified the application and evaluated the effects of multiple pheromones in swarm robotics by implementing several case studies simulating ants foraging, hunting, and carrying out deployment tasks.

I showcased the outcomes of this research at both ICARM2019 and ICARM2020 international conferences.


Tian Liu presenting at ICARM 2020

Due to its excellent scalability, we also used ColCOSΦ for research experiments in related fields. For example, the platform can simulate traffic scenarios, so we can test our LGMD model (a collision detection model) using the micro robot (Colias) in a low-cost way.


Besides olfaction, visual information is also a very important input for insects, so we implemented a changeable visual environment on ColCOSΦ to investigate how to make full use of both olfactory and visual information in a swarm task. The research was collated into two articles, which have been submitted to ICRA 2021 with fellow ULTRACEPT researchers Xuelong Sun, Dr Qinbing Fu and Dr Cheng Hu.


The secondment has been an excellent experience and provided me with the opportunity to collaborate closely with my project colleagues.

Many thanks to the ULTRACEPT project for supporting my research and for allowing me to work with these outstanding research scholars.

Siavash Bahrami presents at ICIMT 2020

Siavash Bahrami is a PhD candidate at Universiti Putra Malaysia (UPM) who is working on multimodal deep neural networks using acoustic and visual data to develop an active road safety system intended for autonomous and semi-autonomous vehicles. As part of his work on ULTRACEPT Work Package 2, Siavash recently completed a 6 month secondment at the University of Lincoln and another at Visomorphic Technology Ltd.

Siavash presenting at ICIMT 2020

Recently Siavash presented a paper titled “Acoustic Feature Analysis for Wet and Dry Road Surface Classification Using Two-stream CNN” at the 12th International Conference on Information and Multimedia Technology (ICIMT 2020). The data utilised for training and testing the proposed CNN architectures were collected during Siavash’s secondment in the UK. Despite the strains caused by the global pandemic, Siavash managed to complete his secondment and collect all the data needed for his PhD thesis and the ULTRACEPT project, with the help of UoL and UPM project members.

Siavash presenting at ICIMT 2020

ICIMT 2020 was scheduled to take place between 11th and 13th December 2020 in Zhuhai, China, but due to the pandemic it was instead held as a virtual conference. The aim of ICIMT is to provide a platform for researchers, engineers, academics, and industry professionals from all over the world to present their research results and development activities in information and multimedia technology. The conference provides an opportunity for delegates to exchange new ideas and applications, to establish business or research relations, and to find global partners for future collaboration.

Professor Shigang Yue (ULTRACEPT Project Coordinator) and Associate Professor Dr. Shyamala Doraisamy (ULTRACEPT Project partner Lead for UPM) each chaired one of the conference sessions.

Siavash’s presentation at ICIMT 2020

Acoustic Feature Analysis for Wet and Dry Road Surface Classification Using Two-stream CNN – Abstract

Road surface wetness affects road safety and is one of the main reasons for weather-related accidents. Study on road surface classification is not only vital for future driverless vehicles but also important to the development of current vehicle active safety systems. In recent years, studies on road surface wetness classification using acoustic signals have been on the rise. Detection of road surface wetness from acoustic signals involves analysis of signal changes over time and frequency-domain caused by interaction of the tyre and the wet road surface to determine the suitable features. In this paper, two single stream CNN architectures have been investigated. The first architecture uses MFCCs and the other uses temporal and spectral features as the input for road surface wetness detection. A two-stream CNN architecture that merges the MFCCs and spectral feature sets by concatenating the outputs of the two streams is proposed for further improving classification performance of road surface wetness detection. Acoustic signals of wet and dry road surface conditions were recorded with two microphones instrumented on two different cars in a controlled environment. Experimentation and comparative performance evaluations against single stream architectures and the two-stream architecture were performed. Results show that the accuracy performance of the proposed two-stream CNN architecture is significantly higher compared to single stream CNN for road surface wetness detection.
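The fusion-by-concatenation idea can be illustrated with a short PyTorch sketch of a two-stream classifier: one stream processes the MFCC matrix, the other processes temporal/spectral feature tracks, and the two embeddings are concatenated before the wet/dry classification head. The feature dimensions and layer sizes below are assumptions, not those of the paper.

```python
# Illustrative two-stream fusion-by-concatenation classifier for wet/dry roads.
import torch
import torch.nn as nn

class TwoStreamRoadWetnessCNN(nn.Module):
    def __init__(self, n_mfcc=13, n_spectral=8, n_frames=100):
        super().__init__()
        # Stream 1: 2-D conv over the MFCC "image" (1 x n_mfcc x n_frames).
        self.mfcc_stream = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU())
        # Stream 2: 1-D conv over temporal/spectral features (n_spectral x n_frames).
        self.spectral_stream = nn.Sequential(
            nn.Conv1d(n_spectral, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(16 * 4, 64), nn.ReLU())
        # Fusion by concatenation, then a wet/dry classification head.
        self.classifier = nn.Linear(128, 2)

    def forward(self, mfcc, spectral):
        fused = torch.cat([self.mfcc_stream(mfcc), self.spectral_stream(spectral)], dim=-1)
        return self.classifier(fused)

if __name__ == "__main__":
    net = TwoStreamRoadWetnessCNN()
    mfcc = torch.randn(4, 1, 13, 100)      # batch of MFCC matrices
    spectral = torch.randn(4, 8, 100)      # batch of spectral/temporal feature tracks
    print(net(mfcc, spectral).shape)       # torch.Size([4, 2]) -> logits for wet vs dry
```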

The team at UPM recording road sounds

Read more about Siavash’s ULTRACEPT work in his blog post here.

Yair Barnatan from UBA attends XXXV Annual Meeting of the Argentinian Society for Neuroscience Research

Yair Barnatan is an ULTRACEPT PhD student at the University of Buenos Aires working in the field of neuroethology. He is currently focused on neuronal processing of optic flow in the crustacean visual system, unravelling which neurons are involved in this process and how.

Yair attended the XXXV Annual Meeting of the Argentinian Society for Neuroscience Research, SAN 2020. Due to the global pandemic, this event was held virtually from 7th to 9th October 2020.

This congress covered a wide variety of neuroscience topics, such as sensory and motor systems, neurodegenerative diseases, and learning and memory. At the meeting, Yair presented a poster entitled “Functional evidence of the crustacean lobula plate as optic flow processing center” (Barnatan, Y., Tomsic, D. & Sztarker, J.).

Yair Barnatan from UBA attends XXXV Annual Meeting of the Argentinian Society for Neuroscience Research

Abstract

When an animal rotates it produces wide field image motion over its retina, termed optic flow (OF). OF blurs the image compromising the ability to see. Image shifts are stabilized by compensatory behaviors collectively termed optomotor response (OR). In most vertebrates and decapod crustaceans such reflex behavior involves mainly eye movements that consists in a slow tracking phase of the wide field image motion followed by a fast-resetting phase. We used the mud crab Neohelice granulata to tackle a major question in crustacean’s visual processing: which region of the brain is the neural substrate for processing OF? It has long been known that dipteran lobula plate (3rd optic neuropil) is the center involved in processing OF information. Recently, a crustacean lobula plate was characterized by neuroanatomical techniques, sharing many canonical features with the dipteran neuropil. In this work we present a functional evaluation of the role of crab’s lobula plate on the compensatory eye movements to rotational OF by performing electrolytic lesion experiments. We show that lesioning the lobula plate greatly impairs OR while keeping intact other visually guided behaviors, such as avoidance response upon an approaching stimulus. Even when OR is present in some lobula plate lesioned animals, these show reduced speed of eye tracking. Altogether, these results present strong evidence about an evolutionary conserved site for processing optic flow shared by crustacean and insects.
