The 15th International Conference on Artificial Intelligence Applications and Innovations (AIAI 2019) was held on 24th–26th May 2019 in Crete, Greece, at the Knossos Royal Beach Resort. The detailed program can be found here.
The conference was attended by the ULTRACEPT researchers Hongxin Wang and Jiannan Zhao from the University of Lincoln and Xingzao Ma from Lingnan Normal University.
AIAI Session 2: (AUV-LE) Autonomous Vehicles-Learning
Jiannan Zhao presented at this conference.
Room B: 10:30–11:45
Jiannan Zhao, Xingzao Ma, Qinbing Fu, Cheng Hu, Shigang Yue
Abstract. Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust’s brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divided the UAV’s field of view into four subfields, each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete for guiding the directional collision avoidance of the UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. an object approaching from the left side triggers a rightward shift of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.
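The competitive scheme described in the abstract can be pictured as follows. This is a hypothetical sketch only: the subfield split, the toy LGMD excitation measure, and the threshold are illustrative stand-ins, not the paper's actual model.

```python
# Hypothetical sketch of the competitive-LGMD (C-LGMD) idea: the field of
# view is split into four subfields, each scored by its own LGMD-like
# response, and the strongest response selects an escape along the
# opposite cardinal direction. All names and values are illustrative.
import numpy as np

ESCAPE = {"left": "right", "right": "left", "up": "down", "down": "up"}

def lgmd_response(frame_diff):
    """Toy LGMD excitation: mean absolute luminance change in a subfield."""
    return float(np.mean(np.abs(frame_diff)))

def competitive_lgmd(prev_frame, frame, threshold=0.2):
    """Return an escape command ('right', ...) or None if no collision cue."""
    diff = frame.astype(float) - prev_frame.astype(float)
    h, w = diff.shape
    subfields = {
        "left": diff[:, : w // 2],
        "right": diff[:, w // 2 :],
        "up": diff[: h // 2, :],
        "down": diff[h // 2 :, :],
    }
    responses = {k: lgmd_response(v) for k, v in subfields.items()}
    winner = max(responses, key=responses.get)
    if responses[winner] < threshold:
        return None  # no subfield signals an imminent collision
    return ESCAPE[winner]  # e.g. object on the left -> shift right
```

For example, a looming stimulus confined to the left half of the image yields the strongest response in the "left" subfield, so the sketch returns "right", mirroring the directional escape behaviour described above.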
AIAI Session 10: (AG-MV) Agents-Machine Vision
Hongxin Wang presented at this conference.
Room C: 12:00–13:15
Huatian Wang, Qinbing Fu, Hongxin Wang, Jigen Peng, Shigang Yue
Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which maintains a certain height above the undulating ground. To mimic this behaviour in a bio-plausible computational structure, this paper presents a new angular velocity decoding model based on the honeybee’s behavioural experiments. The model consists of three parts: the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction, and the decoding layer, which combines information from the previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. An angular velocity based control scheme is proposed to implement the model in a bee simulated by the game engine Unity. Successful terrain following above patterned ground and flight over irregularly textured terrain show its potential for micro unmanned aerial vehicles’ terrain following.
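The three-layer structure in the abstract can be sketched with a toy example. For a drifting grating, angular velocity equals temporal frequency divided by spatial frequency, so a spatial layer and a temporal layer feeding a combining layer suffice to illustrate the decoding idea. The function bodies below are simple Fourier stand-ins, not the paper's model.

```python
# Illustrative three-layer decoding sketch: a texture-estimation layer
# extracts spatial frequency, a motion-detection layer extracts temporal
# frequency, and a decoding layer combines them into an angular velocity
# estimate (for a drifting grating, velocity = temporal / spatial freq).
# All implementations here are toy stand-ins for illustration only.
import numpy as np

def texture_layer(row):
    """Estimate the dominant spatial frequency (cycles/pixel) of a 1-D slice."""
    spectrum = np.abs(np.fft.rfft(row - row.mean()))
    freqs = np.fft.rfftfreq(row.size)
    return freqs[np.argmax(spectrum)]

def motion_layer(signal, fps):
    """Estimate the dominant temporal frequency (Hz) at one pixel over time."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

def decoding_layer(spatial_freq, temporal_freq):
    """Combine both cues into a velocity estimate (pixels per second)."""
    return temporal_freq / spatial_freq if spatial_freq > 0 else 0.0
```

A grating of spatial frequency 0.125 cycles/pixel drifting at 16 pixels/s flickers at 2 Hz at any fixed pixel; the decoding layer recovers 2 / 0.125 = 16 pixels/s, independent of the grating's contrast, which mirrors the contrast-independence property the abstract highlights.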