Fang Lei enrolled as a PhD scholar at the University of Lincoln in 2019. In early 2020 she visited Guangzhou University as part of the STEP2DYNA project funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement. During this secondment Fang Lei worked on developing bio-inspired visual systems for collision detection in dim light environments. More recently, Fang continued this work during her 12-month secondment at Guangzhou University under ULTRACEPT, from May 2020 to May 2021.
During the secondment at Guangzhou University, I worked on developing bio-inspired visual systems for collision detection in dim light environments. For the autonomous navigation of vehicles or robots, detecting moving objects in extremely low-light conditions is a challenging task due to very low signal-to-noise ratios (SNRs). However, nocturnal insects possess remarkable visual abilities, perceiving motion cues and detecting moving objects in very dim light. The many studies of insects' night vision provide a rich source of inspiration for enhancing motion cues and for modelling an artificial visual system that detects motion such as looming objects. Fig. 1 shows an example image of looming motion in a dim light environment, taken from the low-light video motion (LLVM) dataset captured with our experimental devices (see Fig. 2).
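One well-studied strategy in nocturnal insect vision is summing photoreceptor signals over time, trading temporal resolution for a higher SNR. The sketch below is only a hypothetical illustration of that general principle, not the model developed in this project: averaging N frames of independent noise improves the SNR by roughly √N. The synthetic scene, noise levels, and function names are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical dark scene: a faint bright square on a dark background,
# observed over 16 frames with strong sensor noise (values are illustrative).
clean = np.zeros((20, 20))
clean[8:12, 8:12] = 0.2                       # faint object intensity
frames = [clean + rng.normal(0.0, 0.5, clean.shape) for _ in range(16)]

def temporal_summation(frames):
    """Average frames over time; for independent noise the SNR grows ~sqrt(N)."""
    return np.mean(frames, axis=0)

def snr(img, mask):
    """Contrast of the object region against the background noise level."""
    return (img[mask].mean() - img[~mask].mean()) / img[~mask].std()

mask = clean > 0
summed = temporal_summation(frames)
print(f"single-frame SNR: {snr(frames[0], mask):.2f}")
print(f"summed-frame SNR:  {snr(summed, mask):.2f}")
```

In practice the biological strategy is spatio-temporal (pooling over neighbouring receptors as well as over time), which blurs fast or fine detail; any real enhancement model has to balance that trade-off.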
To develop further ideas and gain experience for my modelling work, I discussed it with colleagues and with Prof. Peng (see Fig. 3) and received very useful suggestions. Our discussions focused mainly on the biological modelling of the direction selectivity of the LGMD1 neuron. We also organised a weekly group seminar to discuss the problems we encountered in our research projects, and by sharing ideas there I gained a great deal of valuable experience in bio-inspired modelling.
My research on collision detection in dim light environments comprises two parts: modelling the direction selectivity of the LGMD1 neuron and enhancing motion cues. I developed a new LGMD1 model that is effective in distinguishing looming motion from translating motion. I published a conference paper and presented it at the online virtual conference IJCNN 2021 (see Fig. 4), and I submitted a journal paper to IEEE Transactions on Neural Networks and Learning Systems (TNNLS), which is under review. Additionally, I completed the modelling work on motion cue enhancement and proposed a new model. Fig. 5 shows the enhancement results for the dark image sequences captured during the testing experiments.
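The details of the new model are in the papers; as a rough, hypothetical illustration of why an LGMD-style network can separate looming from translation, the sketch below implements a generic frame-difference excitation with delayed, laterally spread inhibition, in the spirit of classic LGMD models rather than the model described above. For an expanding edge the excited area grows every frame, so the summed response climbs; for a translating object of fixed size, roughly the same amount of edge moves each frame, so the response stays flat. All parameters and stimuli here are assumptions for demonstration.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur used here as the lateral spread of inhibition."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def lgmd_response(frames, w_inh=0.7):
    """Per-frame LGMD-like output: |frame difference| (excitation) minus
    delayed, laterally spread inhibition, rectified and summed over the field."""
    responses = []
    prev_exc = np.zeros_like(frames[0])
    for t in range(1, len(frames)):
        exc = np.abs(frames[t] - frames[t - 1])   # luminance-change excitation
        inh = box_blur(prev_exc)                  # one-frame-delayed inhibition
        s = np.maximum(exc - w_inh * inh, 0.0)    # rectified summation layer
        responses.append(s.sum())
        prev_exc = exc
    return responses

def square_frame(size, center, half):
    """Binary image of a square of half-width `half` centred at `center`."""
    img = np.zeros((size, size))
    y, x = center
    img[y - half:y + half, x - half:x + half] = 1.0
    return img

# Looming: a square that grows each frame; translating: a fixed-size square
# that shifts sideways by the same amount each frame.
looming = [square_frame(40, (20, 20), 2 + t) for t in range(10)]
translating = [square_frame(40, (20, 8 + 2 * t), 4) for t in range(10)]

r_loom = lgmd_response(looming)
r_trans = lgmd_response(translating)
print("looming response:    ", [round(r, 1) for r in r_loom])
print("translating response:", [round(r, 1) for r in r_trans])
```

Running this, the looming sequence produces a response that grows from frame to frame while the translating one does not, which is the basic cue a collision-detecting neuron can threshold on.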
During this 12-month secondment, I gained a better understanding of bio-inspired modelling and plenty of practice in connecting theory with application. I also established good friendships with my colleagues through frequent communication in the weekly group seminars, which provides a basis for future cooperation. The secondment was a very precious experience for me. Many thanks to the ULTRACEPT project for supporting my research work and giving me the opportunity to work together with my colleagues.