Yunlei Shi is a 4th year full-time Ph.D. student at the Universität Hamburg and working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project where he contributes to Work Package 4.
Yunlei Shi attended the 2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 20/21), which was held from 27th to 31st December 2021 at the Four Points by Sheraton Hainan, Sanya, China. The conference was held both in person and online. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.
The theme of ROBIO 20-21 was “Robotics and Biomimetics to meet societal grand challenges”, reflecting the fast-growing and timely interest in research, development and applications, and their impact on the world. Due to the COVID-19 pandemic, ROBIO 2020 and 2021 were combined and held jointly. The conference highlighted research results, new engineering developments, and applications related to meeting societal grand challenges such as the COVID-19 pandemic.
The ISAIC is a flagship annual international conference on computational intelligence, promoting all aspects of theory, algorithm design, applications and related emerging techniques. As is tradition, ISAIC 2021 co-located a large number of topics within or related to computational intelligence, thereby providing a unique platform for promoting cross-fertilization and collaboration. ISAIC 2021 featured keynote speeches, invited speeches, oral presentations and poster sessions.
It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully applied to collision detection, but only in daytime and visible light; whether it can be used for temperature-based collision detection remained unknown. In this study, we proposed an improved LGMD-based visual neural network for temperature-based collision detection in extreme light conditions. We show that this insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; it can therefore serve as a critical collision detection algorithm for autonomous vehicles driving at night, helping them avoid fatal collisions with humans, animals, or other vehicles.
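As a rough illustration of the idea, not the paper's actual model, the sketch below shows how an LGMD-style network can respond to an expanding temperature difference: frame differencing (P layer), lateral inhibition (I layer), and a sigmoid-squashed summation. Layer names follow the classic LGMD literature; all parameters and the tiny 8×8 thermal frames are illustrative assumptions.

```python
import math

def p_layer(prev_frame, frame):
    """Photoreceptor layer: absolute temperature change per pixel."""
    return [[abs(frame[y][x] - prev_frame[y][x])
             for x in range(len(frame[0]))] for y in range(len(frame))]

def i_layer(p, w=0.25):
    """Lateral inhibition: each pixel is suppressed by its 4-neighbours."""
    h, wdt = len(p), len(p[0])
    out = [[0.0] * wdt for _ in range(h)]
    for y in range(h):
        for x in range(wdt):
            inh = sum(p[y + dy][x + dx]
                      for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= y + dy < h and 0 <= x + dx < wdt)
            out[y][x] = max(0.0, p[y][x] - w * inh)
    return out

def lgmd_response(prev_frame, frame):
    """Summed excitation squashed to (0, 1); high values signal looming."""
    s = sum(map(sum, i_layer(p_layer(prev_frame, frame))))
    return 1.0 / (1.0 + math.exp(-s / 10.0 + 4.0))

# A warm object expanding against a cooler background between two frames
# excites the network; a static thermal scene does not.
cold = [[20.0] * 8 for _ in range(8)]
small_blob = [row[:] for row in cold]
big_blob = [row[:] for row in cold]
for y in range(3, 5):
    for x in range(3, 5):
        small_blob[y][x] = 35.0
for y in range(2, 6):
    for x in range(2, 6):
        big_blob[y][x] = 35.0

static = lgmd_response(small_blob, small_blob)
looming = lgmd_response(small_blob, big_blob)
```

The point of the toy example is only that the network's output depends on the expanding temperature edge, exactly the cue a thermal sensor preserves in darkness.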
Yunlei Shi attended the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) to present his research. IROS 2021 was the first edition of the conference organized by a Central European country and, more remarkably, by the country that introduced the word “robot” to the world. IROS 2021 was held online from 27th September to 1st October 2021, hosted from Prague, Czech Republic.
Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference paper: Yunlei Shi, Zhaopeng Chen, Yansong Wu, Dimitri Henkel, Sebastian Riedel, Hongxu Liu, Qian Feng, Jianwei Zhang, “Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks”, IROS 2021, Prague, Czech Republic. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.
Abstract
Collaborative robots are expected to be able to work alongside humans and in some cases directly replace existing human workers, thus effectively responding to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained space, tend to be fairly inefficient. Therefore, faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual servoing based learning from demonstration (LfD) and force-based learning by exploration (LbE) to enable fast and intuitive programming of contact-rich tasks with minimal user effort. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements a non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework has been compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. The efficiency and reliability of the framework have been validated through comparison experiments involving the teaching and execution of contact-rich tasks. The framework proposed in this paper performed best in terms of teaching time, execution success rate, risk of damage, and ease of use.
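The core idea of refining a demonstrated target pose from the detected contact state can be illustrated with a deliberately simplified sketch. This is a hypothetical 1D stand-in, not the authors' actual controller: a "peg depth" replaces the full robot pose, and the simulated environment, stiffness, gain, and force limit are all made-up assumptions.

```python
def contact_force(depth, surface=9.7, stiffness=100.0):
    """Simulated environment: force rises linearly once the peg passes the surface."""
    return max(0.0, (depth - surface) * stiffness)

def refine_target(demonstrated_depth, force_limit=5.0, gain=0.002, steps=50):
    """Back the demonstrated target off along the contact normal
    until the measured contact force is acceptable."""
    depth = demonstrated_depth
    for _ in range(steps):
        f = contact_force(depth)
        if f <= force_limit:
            break
        depth -= gain * (f - force_limit)  # retreat in proportion to excess force
    return depth

# A slightly-too-deep demonstration (vision-based LfD is only roughly
# accurate) is corrected so that contact force stays near the limit.
refined = refine_target(10.0)
```

The real framework replaces this hand-tuned correction loop with force-based reinforcement learning over full 6-DoF poses, but the demonstrated-position-plus-contact-refinement structure is the same.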
Learn more about this conference paper by watching the demonstration on the TAMS Youtube channel.
ULTRACEPT University of Lincoln researcher Jiannan Zhao recently published a paper titled “Enhancing LGMD’s Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections” in IEEE Transactions on Neural Networks and Learning Systems. This is one of the top-tier journals publishing technical articles on the theory, design, and applications of neural networks and related learning systems, and it has a significant influence on the field of artificial neural networks and learning systems.
Research Summary
Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate a remarkable ability to navigate and avoid collisions in complex environments. A good example is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect’s visual neuron, the LGMD is considered an ideal basis for building a UAV collision detection system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as the complex background movements caused by UAV agile flights. To address this issue, this research proposed a new model implementing distributed spatial-temporal synaptic interactions, inspired by recent findings on locusts’ synaptic morphology. We first introduced locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then a radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress non-preferred visual motions. With these distributed synaptic interactions, the spatial-temporal competition between excitation and inhibition in our model is therefore tuned to the preferred image angular velocity representing looming rather than background movements. A series of experiments systematically analysed the proposed model during UAV agile flights. Our results demonstrated that this new model enhances looming selectivity in complex flying scenes considerably and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.
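The radially extending inhibition latency can be illustrated with a toy 1D sketch, which is not the paper's full model: inhibition arriving from a neighbour at radial distance d is read from the excitation map d time steps in the past, so the excitation/inhibition competition becomes velocity-dependent. In this toy, edges moving at the speed matched by the delay gradient are cancelled while other speeds survive; the tuning in the real model arises from the full interplay with the distributed excitation, and every parameter here is an illustrative assumption.

```python
WIDTH = 20

def delayed_inhibition(history, x, max_d=3, w=0.3):
    """Sum neighbour excitation, each delayed in proportion to its distance."""
    total = 0.0
    for d in range(1, max_d + 1):
        past = history[-1 - d]          # inhibition from distance d lags by d steps
        for nx in (x - d, x + d):
            if 0 <= nx < WIDTH:
                total += (w / d) * past[nx]
    return total

def s_layer(history):
    """Current excitation minus distance-delayed inhibition, rectified."""
    current = history[-1]
    return [max(0.0, current[x] - delayed_inhibition(history, x))
            for x in range(WIDTH)]

def edge_history(speed, steps):
    """A single bright edge translating at `speed` pixels per frame."""
    frames = []
    for t in range(steps):
        frame = [0.0] * WIDTH
        frame[(speed * t) % WIDTH] = 1.0
        frames.append(frame)
    return frames

# An edge at the matched speed (1 px/frame) is largely cancelled by the
# radially-delayed inhibition; a faster edge (3 px/frame) escapes it.
s_matched = s_layer(edge_history(1, 10))
s_other = s_layer(edge_history(3, 6))
```

The delay gradient thus acts as a tuning fork for image angular velocity, which is the mechanism the paper exploits to separate looming from background motion.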
Research Highlights
To overcome whole-field-of-view image motion during UAV agile flight, this research proposed novel synaptic computing strategies to filter by image angular velocity. Owing to the proposed spatial-temporal distributed synaptic interconnections, the LGMD neuron model is able to select looming patterns with linear synaptic computation only. The neural model is depicted in Figure 1, video trial performance is shown in Figure 2, and the UAV onboard experiments are demonstrated in the supplementary videos.
Supplementary Video
Further demonstrations and analyses are provided in the video.
University of Lincoln researchers Tian Liu, Xuelong Sun, and Jiannan Zhao recently competed at the 2021 International Competition of Autonomous Running Robots (Running Robot). Running Robot is an international competition co-launched by the Beijing Association for Science and Technology, the Beijing Institute of Electronics, the School of Integrated Circuits at Tsinghua University, the Beijing Science and Technology Association Service Center, and the Korea Advanced Institute of Science and Technology, among others. The competition has been held successfully twice, attracting more than 40 well-known universities and more than 110 teams from countries and regions including Germany, Britain, South Korea, Pakistan, Russia and China. It has attracted the attention of CCTV, China International TV, China Education TV, Beijing TV, Xinhua News Agency, China News Agency, People’s Daily, and other domestic media.
Given the pandemic situation, alongside the physical competition held in Beijing from 15th to 17th October 2021, a virtual competition was also run using the robotic simulation software Webots. Xuelong Sun, Tian Liu and Jiannan Zhao from the University of Lincoln participated in this competition under the team name ‘LinBot’. Their excellent performance in the virtual competition secured them an impressive second place. Robots were asked to fulfil multiple tasks on the course as quickly as possible within an eight-minute limit; LinBot completed all the tasks in about seven minutes.
Xuelong said, “by solving all the challenging problems in this competition, I have learned a lot about biped robot controlling, object recognition, computer vision etc. And importantly how to cooperate with others in a team. Thanks for the support and help from the ULTRACEPT project and all the colleagues in the university.”
Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This research investigates the robustness of two state-of-the-art neural network models, inspired by the locust’s LGMD-1 and LGMD-2 visual pathways, as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios, in which real physical crashes happen, have never been systematically investigated, due to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision sensing modality in each micro mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes to alert potential crashes in a timely manner. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This research also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.
Research Highlights
This research corroborates the robustness of the LGMD (Figure 1) neuronal system models in alerting potential crashes in a timely manner in dynamic multi-robot scenes. To sharpen the acuity of the LGMD-inspired visual systems in collision sensing, an original hybrid LGMD-1 and LGMD-2 neural network model (Figure 2) is proposed, with a non-linear mapping from network outputs to alert firing rate, which works effectively.
This research complements previous experimentation on the proposed bio-inspired computation approach to collision prediction in more critical, real-physical scenarios.
This research exhibits an innovative, tractable, and affordable robotic approach to evaluating online visual systems in different dynamic scenes.
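The highlighted non-linear mapping, from hybrid LGMD-1/LGMD-2 network outputs to an alert firing rate that triggers abrupt braking, can be sketched as below. The weights, sigmoid shape, and braking threshold are illustrative assumptions, not the paper's calibrated values.

```python
import math

def alert_firing_rate(lgmd1_out, lgmd2_out, w1=0.5, w2=0.5, gain=8.0, bias=0.6):
    """Non-linear (sigmoid) mapping from combined network output to firing rate."""
    u = w1 * lgmd1_out + w2 * lgmd2_out
    return 1.0 / (1.0 + math.exp(-gain * (u - bias)))

def should_brake(lgmd1_out, lgmd2_out, threshold=0.7):
    """Abrupt braking is triggered once the alert firing rate crosses threshold."""
    return alert_firing_rate(lgmd1_out, lgmd2_out) >= threshold

# Weak excitation (distant motion) stays below the braking threshold;
# strong joint excitation (imminent collision) crosses it.
quiet = should_brake(0.2, 0.3)
alarm = should_brake(0.9, 0.95)
```

The sigmoid keeps weak, noisy excitation from firing the brake while saturating quickly for strong joint LGMD-1/LGMD-2 responses, which is the practical benefit of a non-linear rather than linear mapping.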
Research Platform
This research applies our developed robotic platform as shown below. The autonomous mobile robot used in this study is called Colias-IV (Hu et al., 2018), which includes mainly two components that provide different functions, namely the Colias Basic Unit (CBU), and the Colias Sensing Unit (CSU).
Supplementary Video
A supplementary video explains these novel research outcomes.
The workshop focussed on noise-test refinements of the circuit and chip component designs, and reported developments on the coordination, integration and realisation of multiple visual systems and multi-modality computation systems, relevant to WP1, WP3 and WP4.
The ULTRACEPT Workshop 4 was hosted by ULTRACEPT partner Universität Hamburg (UHAM). It took place over two days, on the 25th and 26th October 2021. Due to ongoing travel restrictions, the workshop was hosted online. 36 researchers attended the session.
Details of the agenda are set out below.
Day 1
Date: Monday, 25 October 2021
Time: Germany 11:00; UK 10:00; China 17:00; Buenos Aires 06:00; Malaysia 17:00; Japan 18:00
Facilitator: Prof Jianwei Zhang
German time
Item
Lead
11:00-11:05
Arrival and welcome
Prof Jianwei Zhang
11:05-12:05
Bio-inspiration and bio-understanding in collision avoidance
Dr. Liang Li, Max Planck Institute of Animal Behaviour.
45 minutes presentation & 15 minutes Q&A
Abstract: Animals that move within groups or through complex habitats must frequently contend with obstacles in their path. However, what variables animals perceive through onboard sensors and how the perceived information is processed for motor control are largely unexplored. To solve this, we need interdisciplinary studies between biology and robotics, including applying biological mechanisms in robotics and using robots to study biology. In this talk, I will first introduce bio-inspired formation control in collective fish-like robots to avoid collisions. Following this, I will report how bumblebees avoid environmental obstacles and navigate through narrow gaps. The mechanisms of visual-motor control can greatly inspire engineers to build intelligent robots that avoid collisions in complex environments. I will then report several ongoing studies using real and virtual robots to investigate how robots can help us understand leader-follower behaviour without collisions. Finally, I will report how robots can generate biological hypotheses of sensory-motor control in schooling fish. Throughout this talk, I would like to highlight that applying robotics to study biology is as important as research on bio-inspiration, because it can help us generate hypotheses, explore potential mechanisms, and verify sensory-motor control in biological systems. Once we have a clear understanding of the mechanisms in living organisms, we can readily apply them to robotics.
Dr Liang Li, Max Planck Institute of Animal Behaviour & University of Konstanz. Liang Li received his B.E. degree in automation from Chongqing University, China, in 2011, and his PhD degree in general mechanics and foundation of mechanics from Peking University, China, in 2017. From February 2017 to June 2021, he was a Postdoctoral Research Fellow in the Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany, where he is currently a Project Leader (Principal Investigator). His research interests include bio-inspired robots, collective behaviour in hybrid animal-robot systems, bio-fluid dynamics in fish schools, and swarm intelligence in robots.
Dr Liang Li
12:05-12:30
Break
12:30-13:30
Multisensor based vehicle collision avoidance – algorithms, hardware design and applications
Prof. Ming Li, Wuhan University / In-Driving.
45 minutes presentation & 15 minutes Q&A
Prof. Li is currently an associate professor in the Department of Computer Science of Wuhan University. He received his PhD degree in photogrammetry and remote sensing from Wuhan University in 2007. From 2011 to 2012, he studied environmental modelling of unmanned systems at Jacobs University in Germany as a visiting scholar, jointly supported by the German DAAD and the China CSC. In 2013, he received funding from the China CSC and continued his work on developing unmanned vehicle systems at the Karlsruher Institut für Technologie in Germany.
He is engaged in research on unmanned-driving environment perception technology and has organized the development of multiple generations of unmanned intelligent vehicle platforms. His team's first-generation unmanned vehicle, SmartV II, won first place in the comprehensive test and second place in the total score of the “Future Challenge” competition. The second-generation unmanned vehicle uses VeloSLAM to realise autonomous driving in complex urban environments, for example the complex Luxiang roundabout in Wuhan. The third-generation unmanned vehicle, jointly developed with the Dongfeng Technology Center, has reached the mass-production prototype test standard of the car factory and was reported by Hubei TV as Dongfeng's first autonomous driving vehicle.
Prof Ming Li
13:30-14:00
BCI Technology for Human-robot Collaboration
Jianzhi Lyu, PhD student in computer science, Universität Hamburg
20 minutes presentation & 10 minutes Q&A
Abstract: To avoid collisions and make collaboration in a shared workspace safe, robots need to detect the human’s movement intention as early as possible, thus allowing for the time needed to replan and execute the robot’s trajectory. In this paper, we present a setup for studying how information recorded from a motion-tracking system and the electroencephalogram (EEG) of the human brain can be exploited for dynamically adjusting the robot’s trajectories. In particular, we employ a brain-computer interface (BCI) to detect the target of the human’s overt attention and develop a controller which minimizes interference with the human’s action yet maximizes performance in the robot’s task. Moreover, EEG data are used to evaluate the operator’s vigilance and adapt parameters of the robot movements accordingly.
Jianzhi Lyu
14:00-14:05
Day 1 close
Prof Jianwei Zhang
Day 2
Date: Tuesday, 26 October 2021
Time: Germany 11:00; UK 10:00; China 17:00; Buenos Aires 06:00; Malaysia 17:00; Japan 18:00
Facilitator: Prof Jianwei Zhang
11:00 to 13:00: ULTRACEPT board meeting – only members and representatives to attend this meeting. Guests may join for the guest speaker at 13:30.
German time
Item
Lead
11:00-13:00
Board meeting
13:30-14:00
Omnidirectional Bipedal Walking in Cartesian Space through Reinforcement Learning and Optimized Quintic Splines
Marc Bestmann, PhD student in computer science, Universität Hamburg
20 minutes presentation & 10 minutes Q&A
Abstract: This presentation investigates design choices for reinforcement learning in the domain of bipedal walking. We demonstrate that an omnidirectional walk for a humanoid robot can be achieved by using a walk engine to generate reference actions. The walk engine is based on parameterized quintic splines that are optimized with the Multi-objective Tree-structured Parzen Estimator (MOTPE). We show that using Cartesian policies improves the achieved reward in comparison to joint-space based policies. Furthermore, it is demonstrated that the achieved reward is proportional to the reference motion's quality. The learned policy is transferred to a different simulation and to the real robot.
Marc Bestmann
14:00-14:10
Workshop event close
Prof Jianwei Zhang
The workshop was formally opened by UHAM ULTRACEPT lead Prof Jianwei Zhang. Following this was a presentation from guest speaker Dr Liang Li from the Max Planck Institute of Animal Behaviour on Bio-inspiration and bio-understanding in collision avoidance.
Following this was a presentation from guest speaker Prof. Ming Li, Wuhan University / In-Driving. He presented on Multisensor based vehicle collision avoidance – algorithms, hardware design and applications.
The final presentation for day 1 of the ULTRACEPT workshop was from UHAM PhD student Jianzhi Lyu. Jianzhi presented BCI Technology for Human-robot Collaboration.
Day 2 of the workshop began with an ULTRACEPT board meeting. This was followed by a presentation from UHAM PhD student Marc Bestmann, who presented his work on Omnidirectional Bipedal Walking in Cartesian Space through Reinforcement Learning and Optimized Quintic Splines.
Siavash Bahrami is a PhD candidate at Universiti Putra Malaysia (UPM), working on multimodal deep neural networks using acoustic and visual data to develop an active road safety system intended for autonomous and semi-autonomous vehicles. Siavash is contributing to ULTRACEPT's work package 2 and has completed secondments at project partners the University of Lincoln and Visomorphic LTD.
The Ninth International Conference on Computational Science and Technology 2021 (ICCST2021) is an international scientific conference for research in the field of advanced computational science and technology. The conference was held virtually in Labuan, Malaysia, on the 28th – 29th August 2021.
Siavash Bahrami was awarded ‘Best Student Paper’ for his paper titled “CNN Architectures for Road Surface Wetness Classification from Acoustic Signals”, presented during ICCST2021. The data utilised for training and testing the proposed CNN architectures were collected during Siavash’s ULTRACEPT secondments in the UK. Despite the strains caused by the global pandemic, with the assistance of UoL and UPM project members, Siavash managed to complete his secondment and collect the data needed for both his PhD thesis and the ULTRACEPT project work package 2.
The classification of road surface wetness is important for both the development of future driverless vehicles and the development of existing vehicle active safety systems. Wetness on the road surface has an impact on road safety and is one of the leading causes of weather-related accidents. Although machine learning algorithms such as recurrent neural networks (RNN), support vector machines (SVM), artificial neural networks (ANN) and convolutional neural networks (CNN) have been studied for road surface wetness classification, improvements in classification performance are still being widely investigated while keeping network and computational complexity low. In this paper, we propose new CNN architectures towards further improving classification results for road surface wetness detection from acoustic signals. Two CNN architectures with differing layouts for their dropout layers and max-pooling layers have been investigated, with the positions and number of max-pooling layers varied. To avoid overfitting, we used 50% dropout layers before the final dense layers in both architectures. The acoustic signals of tyre-to-road interaction were recorded via microphones mounted on two distinct cars in an urban environment. Mel-frequency cepstral coefficient (MFCC) features were extracted from the recordings as inputs to the models. Experimentation and comparative performance evaluations against several neural network architectures were performed. Recorded acoustic signals were segmented into equal frames and thirteen MFCCs were extracted for each frame to train the CNNs. Results show that the proposed CMCMDD1 architecture achieved the highest accuracy of 96.36% with the shortest prediction time.
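The building blocks whose layout the paper varies, convolution, max-pooling placement, and 50% dropout before the dense layers, can be illustrated with pure-Python stand-ins. These are not the paper's trained networks; the kernel, the 13-value frame, and all sizes are illustrative assumptions.

```python
import random

def conv1d(x, kernel):
    """Valid 1D convolution (cross-correlation) over a feature vector."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def max_pool1d(x, size=2):
    """Non-overlapping max-pooling; roughly halves the feature length for size=2."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def dropout(x, rate=0.5, training=False, rng=random):
    """50% dropout is active only during training (inverted scaling at train time)."""
    if not training:
        return list(x)
    return [0.0 if rng.random() < rate else v / (1.0 - rate) for v in x]

# One frame of 13 MFCC-like coefficients flowing through a conv -> pool ->
# dropout layer layout; at inference the dropout layer is a no-op.
mfcc_frame = [0.01 * i * i for i in range(13)]
features = max_pool1d(conv1d(mfcc_frame, [0.5, -1.0, 0.5]))
features = dropout(features, training=False)
```

Moving the max-pooling layers earlier shrinks the feature maps sooner, trading a little resolution for lower computational cost, which is the trade-off the two investigated architectures explore.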
University of Lincoln Masters researcher Mu Hua, recently completed a 12 month secondment for ULTRACEPT at partner Guangzhou University in China.
During my one-year secondment at Guangzhou University, my research built on previous work on the locust's LGMD (Lobula Giant Movement Detector) neural networks for collision perception, including LGMD1 from Prof Shigang Yue and LGMD2 from Dr Qinbing Fu. My work mainly focused on improving the LGMD neural networks' ability to detect ultra-fast approaching objects.
Benefiting from millions of years of evolution, locusts are equipped with a vision system that improves their success rate of evading natural predators in the blink of an eye. Taking inspiration from nature through the computational models of LGMDs in the locust's visual pathways has had a positive impact on addressing such problems. However, it remains challenging for current LGMD neural networks to accurately and reliably recognise an imminent collision when the approaching object is ultra-fast (see Fig. 1). The green dashed line is the threshold we set to indicate whether a collision is happening; the blue curve is the current LGMD1 response to the ultra-fast object. The neuron fires spikes and generates a ‘false alert’ while the approaching black ball is still far away.
Refractoriness, namely the refractory period, is a common mechanism in the nervous systems of many creatures that works alongside other mechanisms to help stabilise a neuron. It was therefore introduced into the previous LGMD neural networks for further improvement. On the left in Figure 2, we show a comparison between our newly proposed LGMD1 neural network and the previous one from Shigang Yue. On the right, we show the comparison between our proposed LGMD2 method and the previous one from Qinbing Fu.
To better understand the refractoriness mechanism and explain the rationale of integrating it into the LGMD neural networks, we sought guidance from Prof. Jigen Peng, Prof. Huang, and our outstanding colleagues. Their inference from the perspective of mathematics supported the proposed method (see Fig. 3).
During my secondment, I obtained knowledge on both bio-plausible neural networks and coding and gained much experience in setting up experiments and analysing the experimental results. Many thanks to the ULTRACEPT project for supporting my research at Guangzhou University, and to my host, Prof. Jigen Peng for kindly providing me access to his well-equipped lab.
Mu Hua is a post-graduate student at the University of Lincoln and working on ULTRACEPT’s work package 1.
University of Lincoln researcher Mu Hua attended and presented at the International Joint Conference on Neural Networks 2021 (IJCNN 2021) which was held from 18th to 22nd July 2021. Although originally scheduled to be held in Shenzhen, China, due to the ongoing international travel disruption caused by Covid-19, the conference was moved online.
IJCNN 2021 is the flagship annual conference of the International Neural Network Society (INNS) – the premiere organisation for individuals interested in a theoretical and computational understanding of the brain and applying that knowledge to develop new and more effective forms of machine intelligence. INNS was formed in 1987 by the leading scientists in the Artificial Neural Networks (ANN) field. The conference promotes all aspects of neural networks theory, analysis and applications.
This year IJCNN received 1183 paper submissions from over 77 different countries, of which 59.3% were accepted. All accepted papers were included in the program as virtual oral presentations. The top ten countries of the submitting authors were (in descending order): China, United States, India, Brazil, Australia, United Kingdom, Germany, Japan, Italy and France. The event was attended by more than 1166 participants and featured special sessions, plenary talks, competitions, tutorials, and workshops.
Representing the University of Lincoln, Mu Hua presented his paper (Mu Hua, Qinbing Fu, Wenting Duan, Shigang Yue, “Investigating Refractoriness in Collision Perception Neural Network”, IJCNN 2021) with a poster demonstrating that numerically modelling the refractory period, a common neuronal phenomenon, can be a promising way to enhance the stability of the current LGMD neural network for collision perception.
Abstract
Currently, collision detection methods based on visual cues are still challenged by several factors, including ultra-fast approaching velocity and noisy signals. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in the locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constituting the RP, namely the absolute refractory period (ARP) and the relative refractory period (RRP), are computationally implemented through a ‘link (L) layer’ located between the photoreceptor and excitation layers to realise the dynamic characteristic of the RP in the discrete time domain. The L layer, consisting of local time-varying thresholds, represents a mechanism that allows photoreceptors to be activated individually and selectively, by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can only be augmented by larger outputs, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or salt-and-pepper noise. This research demonstrates that modelling refractoriness is effective in collision perception neuronal models and promising for addressing the aforementioned collision detection challenges.
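The described L layer admits a compact sketch for a single photoreceptor: the cell fires only while its input exceeds a local threshold, the threshold is raised by the cell's own large outputs, and it otherwise shrinks exponentially toward a floor. This is a minimal illustration of the mechanism, not the paper's tuned model; the decay, floor, and boost constants are illustrative assumptions.

```python
def l_layer_step(intensity, threshold, decay=0.7, floor=0.1, boost=1.5):
    """One discrete-time update for a single photoreceptor.

    Returns (output, new_threshold): the cell fires only while its input
    exceeds the local threshold; the threshold is bumped by large outputs
    (refractoriness) and otherwise relaxes exponentially toward the floor.
    """
    output = intensity if intensity > threshold else 0.0
    new_threshold = max(floor, threshold * decay, boost * output)
    return output, new_threshold

# An ultra-fast stimulus produces a burst of large, rapid intensity changes.
# The first change passes; immediately repeated ones fall inside the
# refractory window and are suppressed, stabilising the response, while a
# later change (after the threshold has decayed) passes again.
threshold = 0.1
outputs = []
for intensity in [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]:
    out, threshold = l_layer_step(intensity, threshold)
    outputs.append(out)
```

The exponential shrinkage plays the role of the relative refractory period: suppression is strongest right after a spike and fades over the following steps, which is what damps the ‘false alert’ bursts described above.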