
A Robust Visual System for Looming Cue Detection Against Translating Motion

University of Lincoln PhD scholar Fang Lei recently published her paper: F. Lei, Z. Peng, M. Liu, J. Peng, V. Cutsuridis and S. Yue, "A Robust Visual System for Looming Cue Detection Against Translating Motion," IEEE Transactions on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2022.3149832. Fang has been involved in both the STEP2DYNA and ULTRACEPT projects, funded by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement.

About the paper

Collision detection is critical for autonomous vehicles and robots to serve human society safely. Detecting looming objects robustly and in a timely manner plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that leads to false LGMD1 spikes. This paper presents a new LGMD1 visual neural system model that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to the monotonous ON/OFF responses produced by a looming object, but not to the paired ON-OFF responses produced by a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method discriminates between looming and translational events more accurately, correctly detecting looming motion, and that the proposed model is more robust than comparative models.

Model

The proposed LGMD1 model is shown in Fig. 1. The model separates the ON and OFF channels for processing visual signals. Its computational architecture consists of six layers, which integrate neural information-processing mechanisms for extracting looming-motion cues.

Figure 1. LGMD1 model
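To give a flavour of the competition stage, the sketch below separates a frame difference into ON and OFF channels and lets each channel be inhibited by the other channel's local neighbourhood, so spatially paired ON-OFF responses (a translating edge) cancel while monotonous responses (a looming edge) survive. This is a loose illustration of the idea, not the published implementation; the function name, neighbourhood radius, and weight `w` are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def on_off_competition(prev_frame, curr_frame, radius=3, w=1.0):
    """One competition step between separated ON and OFF channels.

    A looming edge excites mainly one polarity at a given location,
    while a fast translating edge leaves spatially paired ON and OFF
    responses; cross-channel inhibition cancels the paired case.
    """
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    on = np.maximum(diff, 0.0)    # brightness increments -> ON channel
    off = np.maximum(-diff, 0.0)  # brightness decrements -> OFF channel

    size = 2 * radius + 1
    on_out = np.maximum(on - w * uniform_filter(off, size), 0.0)
    off_out = np.maximum(off - w * uniform_filter(on, size), 0.0)
    return on_out + off_out       # surviving (looming-like) excitation
```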

Results

Some experimental results of the LGMD1 model's neural response on real datasets are presented in the video.

Yunlei Shi presents poster at ROBIO 20/21

Yunlei Shi is a 4th year full-time PhD student at the Universität Hamburg, working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project, where he contributes to Work Package 4.

Yunlei Shi attended the 2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 20/21), which was held from the 27th to the 31st December 2021 at the Four Points by Sheraton Hainan, Sanya, China. The conference was held both in person and online. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

The theme of ROBIO 20-21 was "Robotics and Biomimetics to meet societal grand challenges", reflecting the fast-growing and timely interest in research, development and applications, and their impact on the world. Due to the COVID-19 pandemic, ROBIO 2020 and 2021 were combined and held jointly. The conference highlighted research results, new engineering developments, and applications related to meeting societal grand challenges such as the COVID-19 pandemic.

Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference poster: Yunlei Shi, Zhaopeng Chen, Lin Cong, Yansong Wu, Martin Craiu-Müller, Chengjie Yuan, Chunyang Chang, Lei Zhang, Jianwei Zhang, "Maximizing the Use of Environmental Constraints: A Pushing Based Hybrid Position/Force Assembly Skill for Contact-Rich Tasks", Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics, December 27-31, 2021, Sanya, China. https://doi.org/10.1109/ROBIO54168.2021.9739349

Yunlei Shi poster for ROBIO 20/21

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’ at ISAIC 2021

Yicheng Zhang is a PhD student at the University of Lincoln, working on ULTRACEPT's Work Package 3.

Recently, Yicheng Zhang attended the 2nd International Symposium on Automation, Information and Computing (ISAIC 2021), organized by Beijing Jiaotong University. Due to the current travel restrictions, this year's conference was moved online and held from the 3rd to the 6th of December 2021.

The ISAIC is a flagship annual international conference on computational intelligence, promoting all aspects of theory, algorithm design, applications and related emerging techniques. As is tradition, ISAIC 2021 co-located a large number of topics within or related to computational intelligence, thereby providing a unique platform for promoting cross-fertilization and collaboration. ISAIC 2021 featured keynote speeches, invited speeches, oral presentations and poster sessions.

At the event, Yicheng presented his conference paper: Yicheng Zhang, Cheng Hu, Mei Liu, Hao Luan, Fang Lei, Heriberto Cuayahuitl and Shigang Yue, 'Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network'. An open-access version can be accessed here.

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’
Abstract

It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully applied to collision detection, but only for daytime, visible-light conditions. Whether it can be used for temperature-based collision detection remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; it can therefore be a critical collision detection algorithm for autonomous vehicles driving at night, helping them avoid fatal collisions with humans, animals, or other vehicles.
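To make the idea concrete, here is a minimal sketch of one processing step of an LGMD-style detector driven by thermal frames rather than luminance. It is a simplification under assumed parameters (the function name, inhibition weight, and threshold are illustrative), not the model published in the paper. The function is called once per frame, with the returned inhibition map fed back in on the next call.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def thermal_lgmd_step(prev_temp, curr_temp, prev_inhibition, thresh=0.7):
    """One step of an LGMD-style detector driven by temperature maps.

    Exactly as a visual LGMD responds to expanding luminance edges,
    this responds to the expanding temperature differences between an
    approaching object and its background."""
    p = np.abs(curr_temp.astype(float) - prev_temp.astype(float))
    # Excitation competes with the (delayed) lateral inhibition map
    s = np.maximum(p - 0.6 * prev_inhibition, 0.0)
    k = s.sum() / s.size                    # pool to a membrane potential
    mp = 1.0 / (1.0 + np.exp(-k))           # sigmoid activation
    inhibition = uniform_filter(p, size=3)  # inhibition for the next frame
    return mp > thresh, mp, inhibition      # (spike, potential, state)
```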

Yunlei Shi: Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks

Yunlei Shi is a 4th year full-time PhD student at the Universität Hamburg, working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project, where he contributes to Work Package 4.

Yunlei Shi attended the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) to present his research. IROS 2021 was the first IROS organized by a Central European country and, more remarkably, by the country that introduced the word "robot" to the world. IROS 2021 was held online from 27th September to 1st October 2021, hosted from Prague, Czech Republic.

Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference paper: Yunlei Shi, Zhaopeng Chen, Yansong Wu, Dimitri Henkel, Sebastian Riedel, Hongxu Liu, Qian Feng, Jianwei Zhang, "Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks", IROS 2021, Prague, Czech Republic. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

Yunlei Shi presenting at IROS 2021

Abstract

Collaborative robots are expected to be able to work alongside humans and, in some cases, directly replace existing human workers, thus effectively responding to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained spaces, tend to be fairly inefficient. Therefore, faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual-servoing-based learning from demonstration (LfD) and force-based learning by exploration (LbE) to enable fast and intuitive programming of contact-rich tasks with minimal user effort. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements a non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework has been compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. Its efficiency and reliability have been validated through comparison experiments involving the teaching and execution of contact-rich tasks. The framework proposed in this paper performed best in terms of teaching time, execution success rate, risk of damage, and ease of use.

Fig. 2. A robot arm and a suction gripper performing a contact-rich tending task. (a) Gross motion is learned from human demonstration. (b) Fine motion is learned from exploration. (c) Example of a contact-rich tending task.

Learn more about this conference paper by watching the demonstration on the TAMS YouTube channel.

Enhancing LGMD’s Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections

ULTRACEPT University of Lincoln researcher Jiannan Zhao recently published a paper titled "Enhancing LGMD's Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections" in IEEE Transactions on Neural Networks and Learning Systems, one of the top-tier journals publishing technical articles on the theory, design, and applications of neural networks and related learning systems. The journal has a significant influence on the field of artificial neural networks and learning systems.

Research Summary

Collision detection is one of the most challenging tasks for unmanned aerial vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate a remarkable ability to navigate and avoid collisions in complex environments. A good example is provided by locusts, which can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, the LGMD is considered an ideal basis for building a UAV collision detection system. However, existing LGMD models cannot distinguish looming clearly from other visual cues, such as the complex background movements caused by agile UAV flight. To address this issue, this research proposed a new model implementing distributed spatial-temporal synaptic interactions, inspired by recent findings in locusts' synaptic morphology. We first introduced locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then, a radially extending temporal latency for inhibition was incorporated to compete with the distributed excitation and selectively suppress non-preferred visual motions. Through these distributed synaptic interactions, the spatial-temporal competition between excitation and inhibition in our model is tuned to the preferred image angular velocity, representing looming rather than background movement. A series of experiments systematically analysed the proposed model during agile UAV flights. Our results demonstrate that this new model considerably enhances looming selectivity in complex flying scenes and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.

Research Highlights

To overcome whole-field-of-view image motion during agile UAV flight, this research proposed novel synaptic computing strategies to filter on image angular velocity. Owing to the proposed spatial-temporal distributed synaptic interconnections, the LGMD neuron model is able to select looming patterns with linear synaptic computation only. The neural model is depicted in Figure 1, performance on video trials is shown in Figure 2, and UAV onboard experiments are demonstrated in the supplementary videos.

FIGURE 1. D-LGMD neural model. DPC: distributed presynaptic connection; FFI: feed-forward inhibition; FFI-GD: FFI-mediated grouping and decay; MP: membrane potential. This research focuses on remodelling the DPC structure, where temporally distributed presynaptic connections and radially distributed temporal latency form the selectivity for looming patterns.
FIGURE 2. The proposed model performed excellently in video trials captured during agile UAV flight. (a) Raw input. (b) Corresponding frame difference. (c) Results filtered by the DPC linear connections. (d) Results further enhanced by nonlinear ReLU and threshold processes.
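The spatial-temporal competition can be caricatured in code: excitation from the newest frame difference competes with inhibition gathered from older frames, whose spatial reach grows with their temporal latency. The sketch below is a loose simplification of the DPC idea under assumed parameters (the function name, Gaussian spread, and weight are illustrative), not the published model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dpc_layer(frame_diffs, sigma_e=1.0, w_i=0.8):
    """Spatio-temporally distributed excitation/inhibition competition.

    frame_diffs: list of 2-D frame-difference maps, newest last.
    Excitation spreads locally over the newest map; inhibition is drawn
    from older maps and reaches further in space the older (i.e. more
    delayed) the map is, so only preferred (looming-like) angular
    velocities keep their excitation ahead of the expanding inhibition."""
    excitation = gaussian_filter(np.abs(frame_diffs[-1]), sigma_e)
    inhibition = np.zeros_like(excitation)
    for delay, past in enumerate(reversed(frame_diffs[:-1]), start=1):
        # the inhibitory spread radius grows with the temporal latency
        inhibition += gaussian_filter(np.abs(past), sigma_e * (1 + delay))
    inhibition /= max(len(frame_diffs) - 1, 1)
    return np.maximum(excitation - w_i * inhibition, 0.0)  # ReLU stage
```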

Supplementary Video

Further demonstrations and analyses are provided in the video.

University of Lincoln Researchers Awarded Second Place at International Robot Competition

University of Lincoln researchers Tian Liu, Xuelong Sun, and Jiannan Zhao recently competed in the 2021 International Competition of Autonomous Running Robots (Running Robot). Running Robot is an international competition co-launched by the Beijing Association for Science and Technology, the Beijing Institute of Electronics, the School of Integrated Circuits at Tsinghua University, the Beijing Science and Technology Association Service Center, and the Korea Advanced Institute of Science and Technology, among others. The competition has been successfully held twice, attracting more than 40 well-known universities and more than 110 teams from countries and regions including Germany, Britain, South Korea, Pakistan, Russia and China. It has attracted the attention of CCTV, China International TV, China Education TV, Beijing TV, Xinhua News Agency, China News Agency, People's Daily, and other domestic media.

Team LinBot

Given the pandemic situation, alongside the physical competition held in Beijing from the 15th to the 17th October 2021, there was also a virtual competition making use of the robotic simulation software Webots. Xuelong Sun, Tian Liu and Jiannan Zhao from the University of Lincoln participated in this competition under the team name 'LinBot'. Robots were asked to fulfil multiple tasks on the road as quickly as possible within only eight minutes; LinBot completed all the tasks in about seven minutes, and this excellent performance in the virtual competition secured the team an impressive second place.

Xuelong said, "By solving all the challenging problems in this competition, I have learned a lot about biped robot control, object recognition, computer vision etc., and importantly how to cooperate with others in a team. Thanks for the support and help from the ULTRACEPT project and all the colleagues at the university."

Team LinBot awarded second place

Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic

ULTRACEPT researcher Dr Qinbing Fu recently published his journal article: Fu, Qinbing, Sun, Xuelong, Liu, Tian et al., "Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic", Frontiers in Robotics and AI, 8, p. 529872, 2021. ISSN 2296-9144. In this post, Dr Fu shares with us the highlights of this research.

Research Summary

Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This research investigates the robustness of two state-of-the-art neural network models, inspired by the locust's LGMD-1 and LGMD-2 visual pathways, as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios, where real physical crashes happen, have never been systematically investigated, due to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro-mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections, including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in alerting potential crashes in a timely manner across different dynamic robot scenes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios and, for the first time, demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This research also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.

Research Highlights

This research corroborates the robustness of the LGMD (Figure 1) neuronal system models in alerting potential crashes in dynamic multi-robot scenes in a timely manner. To sharpen the acuity of the LGMD-inspired visual systems in collision sensing, an original hybrid LGMD-1 and LGMD-2 neural network model (Figure 2) is proposed, with a non-linear mapping from network outputs to alert firing rate, which works effectively.

FIGURE 1. Schematic illustration of the LGMD-1 and LGMD-2 neuromorphology. Visual stimuli are received by the pre-synaptic dendrite structures of both neurons. The feed-forward inhibition (FFI) pathway connects to the LGMD-1. The DCMD (descending contra-lateral movement detector) is a one-to-one post-synaptic target neuron of the LGMD-1, conveying spikes to the motion control neural system. The post-synaptic partner neuron of the LGMD-2 remains unknown.
FIGURE 2. Schematic illustration of the proposed feed-forward collision prediction visual neural networks. There are three layers pre-synaptic to the two neurons: the photoreceptor (P), lamina (LA) and medulla (ME) layers. The pre-synaptic neural networks of LGMD-1 and LGMD-2 share the same visual processing in the first two (P and LA) layers. The processing differs, however, in the third (ME) layer, so as to separate their different selectivity. The ME layer consists of ON/OFF channels, wherein the ON channels are rigorously suppressed in the LGMD-2's circuit (dashed lines). The delayed information is formed by convolving surrounding non-delayed signals in space. The FFI is an individual inhibition pathway to the LGMD-1 alone. The PM is a mediation pathway to the medulla layer of the LGMD-2. The two LGMDs each pool their pre-synaptic signals to generate spikes that are passed to their post-synaptic neurons. Notably, the non-linearly mapped, hybrid firing rate is the network output that decides the corresponding collision avoidance response.
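As a rough illustration of that final stage, the sketch below fuses the two networks' sigmoid membrane potentials into a single alert firing rate through a non-linear map. The weighting, exponent, and threshold are assumptions for illustration, not the values used in the paper.

```python
import numpy as np

def hybrid_alert(smp1, smp2, a=0.5, k=4.0, theta=0.78):
    """Fuse LGMD-1 and LGMD-2 outputs into one alert firing rate.

    smp1, smp2: sigmoid membrane potentials of the two networks,
    each in (0.5, 1) when excited. A weighted mix is expanded
    non-linearly so that strong looming evidence from either pathway
    pushes the rate over the braking threshold."""
    mix = a * smp1 + (1.0 - a) * smp2
    rate = np.clip(2.0 * (mix - 0.5), 0.0, 1.0) ** (1.0 / k)
    return rate, rate > theta  # (firing rate, abrupt-brake command)
```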

This research complements previous experimentation on the proposed bio-inspired computation approach to collision prediction in more critical, real-physical scenarios.

This research exhibits an innovative, tractable, and affordable robotic approach to evaluate online visual systems in different dynamic scenes.

Research Platform

This research applies our developed robotic platform, shown below. The autonomous mobile robot used in this study is called Colias-IV (Hu et al., 2018), which comprises two main components providing different functions: the Colias Basic Unit (CBU) and the Colias Sensing Unit (CSU).

FIGURE 3. Overview of the robotic platform consisting of the multiple-pheromone module and micro-mobile robots. The pheromone module is composed of a camera system connected to a computer and a TV arena. The micro-mobile robot comprises a visual sensing board implementing the proposed visual systems, and a motion board for route following and emergency braking. Four colour sensors, marked in the bottom view of the robot, are used for sensing optically rendered pheromone cues displayed upon the LCD screen. The ID pattern on top of the robot is used to run a real-time localisation system.

Supplementary Video

A supplementary video explains these novel research outcomes.

Workshop 4

Focusing on noise test refinements of the circuit and chip component designs, and reporting developments on the coordination, integration and realisation of multiple visual systems and multiple-modality computation systems, relevant to WP1, WP3 and WP4.

The ULTRACEPT Workshop 4 was hosted by ULTRACEPT partner Universität Hamburg (UHAM). It took place over two days, on the 25th and 26th October 2021. Due to ongoing travel restrictions, the workshop was hosted online. 36 researchers attended the session.

ULTRACEPT Workshop 4 attendees

Details of the agenda are set out below.

Day 1

Date: Monday, 25 October 2021

Time: Germany 11:00; UK 10:00; China 17:00; Buenos Aires 06:00; Malaysia 17:00; Japan 18:00

Facilitator: Prof Jianwei Zhang

German time Item Lead
11:00-11:05 Arrival and welcome Prof Jianwei Zhang
11:05-12:05 Bio-inspiration and bio-understanding in collision avoidance

Dr. Liang Li, Max Planck Institute of Animal Behaviour.

45 minutes presentation & 15 minutes Q&A

Abstract: Animals that move within groups or through complex habitats must frequently contend with obstacles in their path. However, which variables animals perceive through onboard sensors, and how the perceived information is processed for motor control, are largely unexplored. To solve this, we need interdisciplinary studies between biology and robotics, including applying biological mechanisms in robotics and using robots to study biology. In this talk, I will first introduce bio-inspired formation control in collective fish-like robots to avoid collisions. Following this, I will report how bumblebees avoid environmental obstacles and navigate through narrow gaps. The mechanisms of visual-motor control can greatly inspire engineers to build intelligent robots that avoid collisions in complex environments. Then, I will report several ongoing studies using real and virtual robots to examine how robots can help us understand leader-follower behaviour without collisions. Finally, I will report how robots can generate biological hypotheses of sensory-motor control in schooling fish. Throughout this talk, I would like to highlight that applying robotics to study biology is as important as research on bio-inspiration, because it can help us generate hypotheses, explore potential mechanisms, and verify sensory-motor control in biological systems. Once we have a clear understanding of the mechanisms in living organisms, we can readily apply them to robotics.

 

Dr Liang Li, Max Planck Institute of Animal Behaviour & University of Konstanz. Liang Li received his B.E. degree in automation from Chongqing University, China, in 2011, and his PhD degree in general mechanics and foundation of mechanics from Peking University, China, in 2017. From February 2017 to June 2021, he was a Postdoctoral Research Fellow in the Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany. He is currently a Project Leader (Principal Investigator) in the same department. His research interests include bio-inspired robots, collective behaviour in hybrid animal-robot systems, bio-fluid dynamics in fish schools, and swarm intelligence in robots.

Dr Liang Li
12:05-12:30 Break
12:30-13:30 Multisensor based vehicle collision avoidance – algorithms, hardware design and applications

Prof. Ming Li, Wuhan University / In-Driving.

45 minutes presentation & 15 minutes Q&A

Prof. Li is currently an associate professor in the Department of Computer Science of Wuhan University. He received his PhD degree in photogrammetry and remote sensing from Wuhan University in 2007. From 2011 to 2012, he studied environmental modelling of unmanned systems at Jacobs University in Germany as a visiting scholar, jointly supported by the German DAAD and the China CSC. In 2013, he received funding from the China CSC and continued his work on developing unmanned vehicle systems at the Karlsruher Institut für Technologie in Germany.

He is engaged in research on unmanned driving environment perception technology and has organized the development of multiple generations of unmanned intelligent vehicle platforms. His team's first-generation unmanned vehicle, SmartV II, won first place in the comprehensive test and second place in the total score at the "Future Challenge" competition. The second-generation unmanned vehicle uses VeloSLAM to realize autonomous driving in complex urban environments, for example the complex Luxiang roundabout in Wuhan. The third-generation unmanned vehicle from his team was jointly developed with the Dongfeng Technology Center. It has reached the car factory's mass-production prototype test standard and was reported by Hubei TV as Dongfeng's first autonomous driving vehicle.

Prof Ming Li
13:30-14:00 BCI Technology for Human-robot Collaboration

Jianzhi Lyu, PhD student in computer science, Universität Hamburg

20 minutes presentation & 10 minutes Q&A

Abstract: To avoid collisions and make collaboration in a shared workspace safe, robots need to detect a human's movement intention as early as possible, thus allowing the time needed to replan and execute the robot's trajectory. In this paper, we present a setup for studying how information recorded from a motion-tracking system and the electroencephalogram (EEG) of the human brain can be exploited for dynamically adjusting the robot's trajectories. In particular, we employ a brain-computer interface (BCI) to detect the target of the human's overt attention and develop a controller which minimizes interference with the human's action yet maximizes performance in the robot's task. Moreover, EEG data are used to evaluate the operator's vigilance and adapt the parameters of the robot's movements accordingly.

Jianzhi Lyu
14:00-14:05 Day 1 close Prof Jianwei Zhang
Day 2

Date: Tuesday, 26 October 2021

Time: Germany 11:00; UK 10:00; China 17:00; Buenos Aires 06:00; Malaysia 17:00; Japan 18:00

Facilitator: Prof Jianwei Zhang

11:00 to 13:00: ULTRACEPT board meeting – only members and representatives to attend this meeting. Guests may join for the guest speaker at 13:30.
German time Item Lead
11:00-13:00 Board meeting
13:30-14:00 Omnidirectional Bipedal Walking in Cartesian Space through Reinforcement Learning and Optimized Quintic Splines

Marc Bestmann, PhD student in computer science, Universität Hamburg

20 minutes presentation & 10 minutes Q&A

Abstract: This presentation investigates design choices for reinforcement learning in the domain of bipedal walking. We demonstrate that an omnidirectional walk for a humanoid robot can be achieved by using a walk engine to generate reference actions. The walk engine used is based on parameterized quintic splines that are optimized with the Multi-objective Tree-structured Parzen Estimator (MOTPE). We show that using Cartesian policies improves the achieved reward in comparison to joint-space-based policies. Furthermore, it is demonstrated that the achieved reward is proportional to the quality of the reference motions. The learned policy is transferred to a different simulation and to the real robot.

Marc Bestmann
14:00-14:10 Workshop event close Prof Jianwei Zhang

The workshop was formally opened by UHAM ULTRACEPT lead Prof Jianwei Zhang. Following this was a presentation from guest speaker Dr Liang Li from the Max Planck Institute of Animal Behaviour on Bio-inspiration and bio-understanding in collision avoidance.

ULTRACEPT Workshop 4, Liang Li presenting

Following this was a presentation from guest speaker Prof. Ming Li, Wuhan University / In-Driving. He presented on Multisensor based vehicle collision avoidance – algorithms, hardware design and applications.

ULTRACEPT workshop 4, Ming Li presenting

The final presentation for day 1 of the ULTRACEPT workshop was from UHAM PhD student Jianzhi Lyu. Jianzhi presented BCI Technology for Human-robot Collaboration.

ULTRACEPT workshop 4, presentation by Jianzhi Lyu

Day 2 of the workshop began with an ULTRACEPT board meeting. This was followed by a presentation from UHAM PhD student Marc Bestmann, who presented his work on Omnidirectional Bipedal Walking in Cartesian Space through Reinforcement Learning and Optimized Quintic Splines.

ULTRACEPT workshop 4, Marc Bestmann presenting
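For readers curious about the walk engine behind Marc's talk: a quintic polynomial segment is the lowest-order polynomial that can match position, velocity, and acceleration at both of its endpoints, which is what makes quintic splines attractive for smooth reference trajectories. The sketch below solves for one such segment; it is illustrative only, since the actual engine parameterizes and optimizes many such segments with MOTPE.

```python
import numpy as np

def quintic_coeffs(p0, v0, a0, p1, v1, a1, T):
    """Coefficients c[0..5] of q(t) = sum(c[i] * t**i) on [0, T] that
    match position, velocity and acceleration at both endpoints, so
    stitched segments stay smooth up to acceleration."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],         # q(0)   = p0
        [0, 1, 0,    0,       0,        0],         # q'(0)  = v0
        [0, 0, 2,    0,       0,        0],         # q''(0) = a0
        [1, T, T**2, T**3,    T**4,     T**5],      # q(T)   = p1
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],    # q'(T)  = v1
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],   # q''(T) = a1
    ], dtype=float)
    return np.linalg.solve(A, np.array([p0, v0, a0, p1, v1, a1], float))

# e.g. a 5 cm rest-to-rest foot displacement over 0.4 s
c = quintic_coeffs(0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.4)
```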

Siavash Bahrami Awarded Best Student Paper at International Conference ICCST2021

Siavash Bahrami is a PhD candidate at Universiti Putra Malaysia (UPM), working on multimodal deep neural networks that use acoustic and visual data to develop an active road safety system intended for autonomous and semi-autonomous vehicles. Siavash is contributing to ULTRACEPT's Work Package 2 and completed secondments at partners the University of Lincoln and Visomorphic Ltd.

The International Conference on Computational Science and Technology 2021 (ICCST2021) is an international scientific conference for research in the field of advanced computational science and technology. The conference was held virtually in Labuan, Malaysia, on the 28th and 29th August 2021.

Siavash Bahrami presenting online at ICCST2021

Siavash Bahrami was awarded 'Best Student Paper' for his paper titled "CNN Architectures for Road Surface Wetness Classification from Acoustic Signals", presented during ICCST2021. The data used for training and testing the proposed CNN architectures were collected during Siavash's ULTRACEPT secondments in the UK. Despite the strains caused by the global pandemic, with the assistance of UoL and UPM project members, Siavash managed to complete his secondment and collect the data needed for both his PhD thesis and ULTRACEPT's Work Package 2.

Best Student Paper Award, ICCST 2021, Siavash Bahrami

The classification of road surface wetness is important both for the development of future driverless vehicles and for existing vehicle active safety systems. Wetness on the road surface has an impact on road safety and is one of the leading causes of weather-related accidents. Although machine learning algorithms such as recurrent neural networks (RNN), support vector machines (SVM), artificial neural networks (ANN) and convolutional neural networks (CNN) have been studied for road surface wetness classification, improving classification performance while keeping network and computational complexity low is still being widely investigated. In this paper, we propose new CNN architectures for further improving the classification results of road surface wetness detection from acoustic signals. Two CNN architectures with differing layouts for their dropout layers and max-pooling layers were investigated, varying the positions and number of the max-pooling layers. To avoid overfitting, we used 50% dropout layers before the final dense layers in both architectures. The acoustic signals of tyre-to-road interaction were recorded via microphones mounted on two distinct cars in an urban environment. Mel-frequency cepstral coefficient (MFCC) features were extracted from the recordings as inputs to the models. The recorded acoustic signals were segmented into equal frames, and thirteen MFCCs were extracted for each frame to train the CNNs. Experimentation and comparative performance evaluations against several neural network architectures were performed. Results show that the proposed CMCMDD1 architecture achieved the highest accuracy of 96.36% with the shortest prediction time.

Siavash and UPM supervisor Dr Shyamala Doraisamy recording road sounds data whilst on secondment at UoL
CMCMDD architecture with two layers of convolution and kernel size
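Reading the model name as its layer order (Conv, MaxPool, Conv, MaxPool, Dropout, Dense) suggests a compact architecture along the following lines. This Keras sketch is a guess at the general shape under assumed filter counts and kernel sizes; only the 13-MFCC input, the 50% dropout before the final dense layers, and the wet/dry output are taken from the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cmcmdd(n_frames=100, n_mfcc=13, n_classes=2):
    model = models.Sequential([
        layers.Input(shape=(n_mfcc, n_frames, 1)),  # 13 MFCCs per frame
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.5),         # 50% dropout before the dense layers
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # wet vs dry
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```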


University of Lincoln Researcher Completes 12 Month Secondment in China

University of Lincoln Masters researcher Mu Hua recently completed a 12-month secondment for ULTRACEPT at partner Guangzhou University in China.

During my one-year secondment at Guangzhou University, my research built on previous work on the locust's LGMD (Lobula Giant Movement Detector) neural networks for collision perception, including the LGMD1 from Prof Shigang Yue and the LGMD2 from Dr Qinbing Fu. My work mainly focused on improving the LGMD neural networks' ability to handle ultra-fast approaching objects.

Benefiting from many millennia of evolution, locusts are equipped with a vision system that improves their success rate of evading natural predators in the blink of an eye. Taking inspiration from nature through computational models of the LGMDs in the locust's visual pathways has had a positive impact on addressing collision perception problems. However, it is still challenging for current LGMD neural networks to accurately and reliably recognize an imminent collision when the approaching object is ultra-fast (see Fig. 1). The green dashed line is the threshold we set to indicate whether a collision is happening or not; the blue curve is the current LGMD1 response to the ultra-fast object. The neuron fires spikes and generates a 'false alert' while the approaching black ball is still far away.

Fig. 1. Comparison between our proposed method and previous works against the same ultra-fast approaching black ball.

Refractoriness, namely the refractory period, is a common mechanism in the nervous systems of many creatures and, together with other mechanisms, helps to stabilize a neuron. It was therefore introduced into the previous LGMD neural networks for further improvement. On the left of Figure 2, we show a comparison between our newly proposed LGMD1 neural network and the previous one from Prof Shigang Yue; on the right, we show the comparison between our proposed LGMD2 and the previous one from Dr Qinbing Fu.

Fig. 2. Comparison between our proposed method and previous works: the left panel is for the LGMD1 neural network, the right for the LGMD2.
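A minimal sketch of how a refractory period can be grafted onto an LGMD-style excitation map is shown below. The update rule and constants here are illustrative assumptions, not the exact mechanism used in this work: cells that have just responded strongly accumulate a refractory state that attenuates their next response.

```python
import numpy as np

def refractory_gate(excitation, refrac_state, tau=0.8, gain=1.5):
    """Attenuate cells that have just responded strongly.

    Called once per frame with refrac_state initialised to zeros.
    Brief bursts from a distant ultra-fast object are damped, while a
    persistently growing looming stimulus still drives the network
    above its firing threshold."""
    out = excitation / (1.0 + gain * refrac_state)  # refractory attenuation
    refrac_state = tau * refrac_state + out         # decay + recharge
    return out, refrac_state
```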

To better understand the refractoriness mechanism and explain the rationale for integrating it into the LGMD neural networks, we sought guidance from Prof. Jigen Peng and Prof. Huang, as well as our outstanding colleagues. Their mathematical inference supported the proposed method (see Fig. 3).

Prof. Huang delivering a presentation.

During my secondment, I gained knowledge of both bio-plausible neural networks and coding, and much experience in setting up experiments and analysing experimental results. Many thanks to the ULTRACEPT project for supporting my research at Guangzhou University, and to my host, Prof. Jigen Peng, for kindly providing me access to his well-equipped lab.

Researcher Mu Hua on secondment in Guangzhou