Tag Archives: Publication

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’ at ISAIC 2021

Yicheng Zhang is a PhD student at the University of Lincoln working on ULTRACEPT's Work Package 3.

Recently Yicheng Zhang attended the 2nd International Symposium on Automation, Information and Computing (ISAIC 2021) organized by Beijing Jiaotong University. Due to the current travel restrictions, this year’s conference was moved online from 3rd to 6th of December 2021.

The ISAIC is a flagship annual international conference on computational intelligence, promoting all aspects of theory, algorithm design, applications and related emerging techniques. As a tradition, ISAIC 2021 co-located a large number of topics within or related to computational intelligence, thereby providing a unique platform for promoting cross-fertilization and collaboration. ISAIC 2021 featured keynote speeches, invited speeches, oral presentations and poster sessions.

At the event, Yicheng presented his conference paper: Yicheng Zhang, Cheng Hu, Mei Liu, Hao Luan, Fang Lei, Heriberto Cuayahuitl and Shigang Yue, ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’. An open-access version can be accessed here.

Yicheng Zhang presents ‘Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network’
Abstract

It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully applied to collision detection, but only in daytime, visible-light conditions. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.
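For readers who want a feel for how such a network operates, here is a minimal sketch (our own illustration under simplified assumptions, not the paper's network) of a generic LGMD-style step on thermal frames: differencing consecutive temperature maps provides excitation, a blurred one-frame-delayed copy provides competing lateral inhibition, and the pooled membrane potential flags a collision when it crosses a threshold. All parameter values are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(prev_temp, curr_temp, prev_inhibition, w_i=0.4, spike_thresh=0.02):
    """One frame of a generic LGMD-style pipeline on thermal images."""
    excitation = np.abs(curr_temp - prev_temp)        # expanding temperature edges
    inhibition = uniform_filter(excitation, size=3)   # lateral spread, used one frame later
    s_layer = np.maximum(excitation - w_i * prev_inhibition, 0.0)
    membrane_potential = s_layer.mean()               # pooled network activity
    return membrane_potential > spike_thresh, inhibition
```

The inhibition returned by each call is carried into the next one; for the first frame it can be initialised with np.zeros_like(frame).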

Yunlei Shi: Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks

Yunlei Shi is a 4th year full-time Ph.D. student at the Universität Hamburg working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project, where he contributes to Work Package 4.

Yunlei Shi attended the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) to present his research. IROS 2021 was the first IROS ever organized by a Central European country and, more remarkably, by the country that introduced the word “robot” to the world. IROS 2021 was held online from 27th September to 1st October 2021, hosted from Prague, Czech Republic.

Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference paper: Yunlei Shi, Zhaopeng Chen, Yansong Wu, Dimitri Henkel, Sebastian Riedel, Hongxu Liu, Qian Feng and Jianwei Zhang, “Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks”, IROS 2021, Prague, Czech Republic. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

Yunlei Shi presenting at IROS 2021

Abstract

Collaborative robots are expected to be able to work alongside humans and, in some cases, directly replace existing human workers, thus effectively responding to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained spaces, tend to be fairly inefficient. Therefore, faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual servoing based learning from demonstration (LfD) and force-based learning by exploration (LbE) to enable fast and intuitive programming of contact-rich tasks with minimal user effort. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements the non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework has been compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. Its efficiency and reliability have been validated through comparison experiments involving the teaching and execution of contact-rich tasks. The framework proposed in this paper performed best in terms of teaching time, execution success rate, risk of damage, and ease of use.

Fig. 2. A robot arm and a suction gripper performing a contact-rich tending task. (a) Gross motion is learned from human demonstration. (b) Fine motion is learned from exploration. (c) Example of a contact-rich tending task.

Learn more about this conference paper by watching the demonstration on the TAMS YouTube channel.

Enhancing LGMD’s Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections

ULTRACEPT University of Lincoln researcher Jiannan Zhao recently published a paper titled “Enhancing LGMD’s Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections” in IEEE Transactions on Neural Networks and Learning Systems, one of the top-tier journals publishing technical articles on the theory, design, and applications of neural networks and related learning systems. It has a significant influence in the field of artificial neural networks and learning systems.

Research Summary

Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate a remarkable ability to navigate and avoid collisions in complex environments. A good example is provided by locusts, which can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect’s visual neuron, the LGMD is considered an ideal basis for building a UAV collision detection system. However, existing LGMD models cannot clearly distinguish looming from other visual cues, such as the complex background movements caused by agile UAV flight. To address this issue, this research proposes a new model implementing distributed spatial-temporal synaptic interactions, inspired by recent findings on locusts’ synaptic morphology. We first introduce locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. A radially extending temporal latency for inhibition is then incorporated to compete with the distributed excitation and selectively suppress non-preferred visual motion. With these distributed synaptic interactions, the spatial-temporal competition between excitation and inhibition in our model is therefore tuned to the preferred image angular velocities representing looming rather than background movement. A series of experiments systematically analysed the proposed model during agile UAV flights. Our results demonstrate that the new model considerably enhances looming selectivity in complex flying scenes and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.
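As an illustration of the core idea, the sketch below (our reading of the summary, not the published model) pairs locally distributed excitation with inhibition whose temporal latency grows with radial distance, so that the two compete in both space and time. Kernel sizes, delays, and gains are all assumptions.

```python
import numpy as np
from collections import deque
from scipy.ndimage import uniform_filter

class DLGMDSketch:
    def __init__(self, radii=(1, 2, 3), gain=0.6):
        self.radii = radii                           # inhibition from radius r arrives r frames late
        self.history = deque(maxlen=max(radii) + 1)  # recent excitation frames, newest first
        self.gain = gain

    def step(self, frame_diff):
        # Locally distributed excitation: spatial spread of motion energy.
        excitation = uniform_filter(np.abs(frame_diff), size=3)
        self.history.appendleft(excitation)
        inhibition = np.zeros_like(excitation)
        for r in self.radii:
            if r < len(self.history):
                # Radially extending latency: the wider the spread, the older the signal.
                inhibition += uniform_filter(self.history[r], size=2 * r + 1)
        # Spatial-temporal competition between excitation and inhibition.
        return np.maximum(excitation - self.gain * inhibition, 0.0)
```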

Research Highlights

To overcome whole-field-of-view image motion during UAV agile flight, this research proposed novel synaptic computing strategies to filter on image angular velocity. Thanks to the proposed spatial-temporal distributed synaptic interconnections, the LGMD neuron model is able to select looming patterns with linear synaptic computation only. The neural model is depicted in Figure 1, performance on video trials is shown in Figure 2, and UAV onboard experiments are demonstrated in the supplementary videos.

FIGURE 1. D-LGMD neural model. DPC: distributed presynaptic connection; FFI: feed-forward inhibition; FFI-GD: FFI-mediated grouping and decay; MP: membrane potential. This research focuses on remodeling the DPC structure, where temporally distributed presynaptic connections and radially distributed temporal latency are considered to form the selectivity for looming patterns.
FIGURE 2. The proposed model performed excellently on video trials captured during UAV agile flight. (a) Raw input. (b) Corresponding frame difference. (c) Results filtered by the DPC linear connections. (d) Results further enhanced by nonlinear ReLU and threshold processes.

Supplementary Video

Further demonstrations and analyses are provided in the video.

Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic

ULTRACEPT researcher Dr Qinbing Fu recently published his journal article: Fu, Qinbing, Sun, Xuelong, Liu, Tian et al., ‘Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic’, Frontiers in Robotics and AI, 8, p. 529872, 2021. ISSN 2296-9144. In this post, Dr Fu shares with us the highlights of this research.

Research Summary

Collision prevention poses a major research and development obstacle for intelligent robots and vehicles. This research investigates the robustness of two state-of-the-art neural network models inspired by the locust’s LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios, where real physical crashes will happen, have never been systematically investigated due to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes to timely alert potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computational power. This research also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.

Research Highlights

This research corroborates the robustness of the LGMD (Figure 1) neural system models in timely alerting of potential crashes in dynamic multi-robot scenes. To sharpen the acuity of the LGMD-inspired visual systems in collision sensing, an original hybrid LGMD-1 and LGMD-2 neural network model (Figure 2) is proposed with a non-linear mapping from network outputs to alert firing rate, which works effectively.
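As a toy illustration of that last step (assumptions throughout, not the published model), the sketch below fuses the two network outputs and maps them non-linearly, here with a sigmoid, to an alert firing rate that triggers abrupt braking once it crosses a threshold.

```python
import math

def alert_firing_rate(lgmd1_out, lgmd2_out, k=10.0, bias=0.5):
    hybrid = max(lgmd1_out, lgmd2_out)                   # fuse the two collision cues
    return 1.0 / (1.0 + math.exp(-k * (hybrid - bias)))  # non-linear (sigmoid) mapping

def should_brake(firing_rate, alert_thresh=0.8):
    return firing_rate >= alert_thresh                   # abrupt braking on a confident alert
```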

FIGURE 1. Schematic illustration of the LGMD-1 and LGMD-2 neuromorphology. Visual stimuli are received by the pre-synaptic dendrite structures of both neurons. The feed-forward inhibition (FFI) pathway connects to the LGMD-1. The DCMD (descending contra-lateral movement detector) is a one-to-one post-synaptic target neuron of the LGMD-1, conveying spikes to the motion control neural system. The post-synaptic partner neuron of the LGMD-2 remains unknown.
FIGURE 2. Schematic illustration of the proposed feed-forward collision prediction visual neural networks. There are three layers pre-synaptic to the two neurons: the photoreceptor (P), lamina (LA), and medulla (ME) layers. The pre-synaptic neural networks of the LGMD-1 and LGMD-2 share the same visual processing in the first two (P and LA) layers. The processing differs, however, in the third (ME) layer, for the purpose of separating their different selectivity. The ME layer consists of ON/OFF channels, wherein the ON channels are rigorously suppressed in the LGMD-2’s circuit (dashed lines). The delayed information is formed by convolving surrounding non-delayed signals in space. The FFI is an individual inhibition pathway to the LGMD-1 only. The PM is a mediation pathway to the medulla layer of the LGMD-2. The two LGMDs pool their pre-synaptic signals respectively to generate spikes that are passed to their post-synaptic neurons. Notably, the non-linearly mapped, hybrid firing rate is the network output deciding the corresponding collision avoidance response.

This research complements previous experimentation on the proposed bio-inspired computation approach to collision prediction in more critical, real-physical scenarios.

This research exhibits an innovative, tractable, and affordable robotic approach to evaluate online visual systems in different dynamic scenes.

Research Platform

This research applies our developed robotic platform, shown below. The autonomous mobile robot used in this study is called Colias-IV (Hu et al., 2018), which mainly includes two components providing different functions: the Colias Basic Unit (CBU) and the Colias Sensing Unit (CSU).

FIGURE 3. Overview of the robotic platform, consisting of a multiple-pheromone module and micro mobile robots. The pheromone module is composed of a camera system connecting a computer and a TV arena. The micro mobile robot comprises a visual sensing board implementing the proposed visual systems, and a motion board for route following and emergency braking. Four colour sensors, marked in the bottom view of the robot, are used for sensing optically rendered pheromone cues displayed upon the LCD screen. The ID pattern on top of the robot is used to run a real-time localisation system.

Supplementary Video

A supplementary video explains these novel research outcomes.

Siavash Bahrami Awarded Best Student Paper at International Conference ICCST2021

Siavash Bahrami is a PhD candidate at Universiti Putra Malaysia (UPM), working on multimodal deep neural networks using acoustic and visual data to develop an active road safety system intended for autonomous and semi-autonomous vehicles. Siavash is contributing to ULTRACEPT's Work Package 2 and has completed secondments at project partners the University of Lincoln and Visomorphic LTD.

The Eighth International Conference on Computational Science and Technology (ICCST2021) is an international scientific conference for research in the field of advanced computational science and technology. The conference was held virtually in Labuan, Malaysia, on 28th – 29th August 2021.

Siavash Bahrami presenting online at ICCST2021

Siavash Bahrami was awarded ‘Best Student Paper’ for his paper titled “CNN Architectures for Road Surface Wetness Classification from Acoustic Signals”, presented during the Eighth International Conference on Computational Science and Technology (ICCST2021). The data utilised for training and testing the proposed CNN architectures were collected during Siavash’s ULTRACEPT secondments in the UK. Despite the strains caused by the global pandemic, with the assistance of UoL and UPM project members, Siavash managed to complete his secondment and collect the data needed for both his PhD thesis and ULTRACEPT Work Package 2.

Best Student Paper Award ICCST2021 Siavash Bahrami

The classification of road surface wetness is important for both the development of future driverless vehicles and the improvement of existing vehicle active safety systems. Wetness on the road surface has an impact on road safety and is one of the leading causes of weather-related accidents. Although machine learning algorithms such as recurrent neural networks (RNN), support vector machines (SVM), artificial neural networks (ANN) and convolutional neural networks (CNN) have been studied for road surface wetness classification, improvements in classification performance are still being widely investigated while keeping network and computational complexity low. In this paper, we propose new CNN architectures towards further improving the classification results of road surface wetness detection from acoustic signals. Two CNN architectures with differing layouts for their dropout layers and max-pooling layers were investigated, varying the positions and number of the max-pooling layers. To avoid overfitting, we used 50% dropout layers before the final dense layers in both architectures. The acoustic signals of tyre-road interaction were recorded via microphones mounted on two distinct cars in an urban environment. Mel-frequency cepstral coefficient (MFCC) features were extracted from the recordings as inputs to the models. Experimentation and comparative performance evaluations against several neural network architectures were performed. Recorded acoustic signals were segmented into equal frames, and thirteen MFCCs were extracted for each frame to train the CNNs. Results show that the proposed CMCMDD1 architecture achieved the highest accuracy of 96.36% with the shortest prediction time.
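For readers who want to experiment with a similar front end, the sketch below (ours, not the authors' code) extracts thirteen MFCCs per analysis frame from a recording, as described above, ready to be fed to a CNN. The librosa parameter values shown are assumptions.

```python
import librosa

def extract_mfcc_frames(wav_path, n_mfcc=13, n_fft=2048, hop_length=512):
    """Return one 13-dimensional MFCC vector per analysis frame."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)  # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    return mfcc.T                                       # shape: (num_frames, n_mfcc)
```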

Siavash and UPM supervisor Dr Shyamala Doraisamy recording road sounds data whilst on secondment at UoL
CMCMDD architecture with two layers of convolution and kernel size

References:

Siavash Bahrami, Shyamala Doraisamy, Azreen Azman, Nurul Amelina Nasharuddin, and Shigang Yue. 2020. Acoustic Feature Analysis for Wet and Dry Road Surface Classification Using Two-stream CNN. In 2020 4th International Conference on Computer Science and Artificial Intelligence (CSAI 2020). Association for Computing Machinery, New York, NY, USA, 194–200. https://doi.org/10.1145/3445815.3445847

Mu Hua Presents ‘Investigating Refractoriness in Collision Perception Neural Model’ at IJCNN 2021

Mu Hua is a postgraduate student at the University of Lincoln working on ULTRACEPT's Work Package 1.

IJCNN 2021

University of Lincoln researcher Mu Hua attended and presented at the International Joint Conference on Neural Networks 2021 (IJCNN 2021) which was held from 18th to 22nd July 2021. Although originally scheduled to be held in Shenzhen, China, due to the ongoing international travel disruption caused by Covid-19, the conference was moved online.

IJCNN 2021 is the flagship annual conference of the International Neural Network Society (INNS), the premier organisation for individuals interested in a theoretical and computational understanding of the brain and applying that knowledge to develop new and more effective forms of machine intelligence. INNS was formed in 1987 by leading scientists in the Artificial Neural Networks (ANN) field. The conference promotes all aspects of neural network theory, analysis and applications.

This year IJCNN received 1183 paper submissions from over 77 different countries, of which 59.3% were accepted. All accepted papers are included in the program as virtual oral presentations. The top ten countries of the submitting authors are (in descending order): China, United States, India, Brazil, Australia, United Kingdom, Germany, Japan, Italy, and France. The event was attended by more than 1166 participants and featured special sessions, plenary talks, competitions, tutorials, and workshops.

Representing the University of Lincoln, Mu Hua presented his paper, Mu Hua, Qinbing Fu, Wenting Duan and Shigang Yue, “Investigating Refractoriness in Collision Perception Neural Model” (IJCNN 2021), with a poster demonstrating that numerically modelling the refractory period, a common neuronal phenomenon, can be a promising way to enhance the stability of the current LGMD neural network for collision perception.

Figure 1: (a) Refractoriness schematic diagram. The orange curve shows the change of membrane potential. Depolarization and repolarization are represented by dashed lines with arrows. The ARP corresponds to depolarization and part of repolarization, while the RRP is covered by hyper-polarization. (b) The curve of (Pt(x, y) − Lt(x, y)) when a single stimulus is applied at the 1st frame, which resembles the real membrane potential curve during the RP.
Figure 2: Snapshots of the 389th frame from the original video and the Gaussian-noise-contaminated video. The orange curve represents the LGMD membrane potential with our proposed RP mechanism; the blue one, without RP. While most of the blue curve stays at 1, the orange curve can easily be distinguished by the peak at the 401st frame, with violent fluctuation within the first 40 frames.

Abstract

Currently, collision detection methods based on visual cues are still challenged by several factors, including ultra-fast approaching velocities and noisy signals. Taking inspiration from nature, the computational models of the lobula giant movement detectors (LGMDs) in the locust’s visual pathways have demonstrated positive impacts on addressing these problems, yet there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constituting the RP, namely the absolute refractory period (ARP) and the relative refractory period (RRP), are computationally implemented through a ‘link (L) layer’ located between the photoreceptor and excitation layers to realise the dynamic characteristics of the RP in the discrete time domain. The L layer, consisting of local time-varying thresholds, represents a mechanism that allows photoreceptors to be activated individually and selectively, by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can only be augmented by a larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or salt-and-pepper noise. This research demonstrates that the modelling of refractoriness is effective in collision perception neuronal models, and promising for addressing the aforementioned collision detection challenges.
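The description of the L layer translates naturally into a few lines of code. The sketch below is our reading of the abstract, not the authors' implementation: each photoreceptor passes its output only when it exceeds its local threshold, which can only be raised by a larger output and otherwise decays exponentially. The decay constant is an assumption.

```python
import numpy as np

def link_layer_step(photoreceptor, local_threshold, decay=0.9):
    """One discrete-time step of the sketched 'link (L) layer'."""
    # A photoreceptor is activated only if it exceeds its own local threshold.
    out = np.where(photoreceptor > local_threshold, photoreceptor, 0.0)
    # The threshold can only be augmented by a larger output,
    # and otherwise shrinks exponentially over time.
    local_threshold = np.maximum(local_threshold * decay, out)
    return out, local_threshold
```

The thresholds can be initialised to zeros and carried from frame to frame.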

This paper can be freely accessed on the University of Lincoln Institutional Repository Eprints.

A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments

Hongxin Wang received his PhD degree in computer science from the University of Lincoln, UK, in 2020. Following a secondment under the STEP2DYNA project, Dr Wang carried out a further secondment under the ULTRACEPT project from April 2020 to April 2021 at partner Guangzhou University, where he undertook research contributing to Work Packages 1 and 2. Dr Wang’s ULTRACEPT contributions have involved directing research into the computational modelling of motion vision neural systems for small target motion detection.

University of Lincoln researcher Hongxin Wang recently published a paper titled “A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments” in IEEE Transactions on Neural Networks and Learning Systems, one of the top-tier journals publishing technical articles on the theory, design, and applications of neural networks and related learning systems. It has a significant influence in the field of artificial neural networks and learning systems.

Fig. 1. Examples of small moving targets: (a) an unmanned aerial vehicle (UAV) on the left, and (b) a bird in the distance on the right, with their surrounding regions enlarged in the red boxes. Both the UAV and the bird appear as dim speckles only a few pixels in size, in which most visual features are difficult to discern. In particular, both show extremely low contrast against the complex background.

Monitoring moving objects against complex natural backgrounds is a huge challenge to future robotic vision systems, let alone detecting small targets with only one or a few pixels in size, for example, an unmanned aerial vehicle (UAV) or a bird in the distance, as shown in Fig. 1.

Traditional motion detection methods, such as optical flow, background subtraction, and temporal differencing, perform well on large objects which permit visualization with a high degree of resolution, and which present a clear appearance and structure, such as pedestrians, bikes, and vehicles. However, such methods are ineffective against targets as small as a few pixels. This is because visual features, such as texture, color, shape, and orientation, are difficult to determine in such small objects and cannot be used for motion detection. Effective solutions to detect small target motion against cluttered moving backgrounds on natural images are still rare.

Research in the field of visual neuroscience has contributed toward the design of artificial visual systems for small target detection. As a result of millions of years of evolution, insects have developed accurate, efficient, and robust capabilities for the detection of small moving targets. The exquisite sensitivity of insects to small target motion comes from a class of specific neurons called small target motion detectors (STMDs). Building a quantitative STMD model is the first step not only toward further understanding the biological visual system, but also toward providing robust and economical solutions for small target detection in artificial vision systems.

In this article, we propose an STMD-based model with time-delay feedback (feedback STMD) and demonstrate its critical role in detecting small targets against cluttered backgrounds. We have conducted systematic analysis as well as extensive experiments. The results show that the feedback STMD largely suppresses slow-moving background false positives while retaining the ability to respond to small targets with higher velocities. The behavior of the developed feedback model is consistent with that of animal visual systems, in which high-velocity objects always receive more attention. Furthermore, it also enables autonomous robots to effectively discriminate potentially threatening fast-moving small targets from complex backgrounds, a feature required, for example, in surveillance.
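As a loose illustration of the time-delay feedback idea (our own simplification, not the published model), the sketch below feeds the output from a few frames ago back subtractively: slow-moving background features change little over the delay and largely cancel themselves, while fast-moving small targets escape the suppression. The delay and gain values are placeholders.

```python
import numpy as np
from collections import deque

def feedback_stmd(frames, delay=5, gain=0.7):
    """Subtractive time-delay feedback over a sequence of response maps."""
    past_outputs = deque(maxlen=delay)
    outputs = []
    for frame in frames:
        # Oldest stored output is the one produced `delay` frames ago.
        fb = past_outputs[0] if len(past_outputs) == delay else 0.0
        out = np.maximum(frame - gain * fb, 0.0)  # feedforward minus delayed feedback
        past_outputs.append(out)
        outputs.append(out)
    return outputs
```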

Nikolas Andreakos Presents Paper at the 30th Annual Computational Neuroscience Meeting (CNS*2021)

Nikolas Andreakos is a PhD candidate at the University of Lincoln, who is working on developing computational models of associative memory formation and recognition in the mammalian hippocampus.

Recently Nikolas attended the 30th Annual Computational Neuroscience Meeting (CNS*2021). Due to the current travel restrictions, this year’s conference was moved online from 3rd to 7th of July 2021.


The purpose of the Organization for Computational Neurosciences is to create a scientific and educational forum for students, scientists, other professionals, and the general public to learn about, to share, contribute to, and advance the state of knowledge in computational neuroscience.

Computational neuroscience combines mathematical analyses and computer simulations with experimental neuroscience, to develop a principled understanding of the workings of nervous systems and apply it in a wide range of technologies.

The Organization for Computational Neurosciences promotes meetings and courses in computational neuroscience and organizes the Annual CNS Meeting which serves as a forum for young scientists to present their work and to interact with senior leaders in the field.

Poster Presentation

Nikolas presented his research ‘Modelling the effects of perforant path in the recall performance of a CA1 microcircuit with excitatory and inhibitory neurons’.

Nikolas Andreakos CNS 2021 poster

Abstract

From recollecting childhood memories to recalling if we turned off the oven before we left the house, memory defines who we are. Losing it can be very harmful to our survival. Recently, we quantitatively investigated the biophysical mechanisms leading to memory recall improvement in a computational CA1 microcircuit model of the hippocampus [1]. In the present study, we investigated the synergistic effects of the EC excitatory input (sensory input) and the CA3 excitatory input (contextual information) on the recall performance of the CA1 microcircuit. Our results showed that when the EC input was exactly the same as the CA3 input, the recall performance of our model was strengthened. When the two inputs were dissimilar (degree of similarity: 40% – 0%), the recall performance was reduced. These results were positively correlated with how many “active cells” represented a memory pattern. When the number of active cells increased and the degree of similarity between the two inputs decreased, the recall performance of the model was reduced. The latter finding confirms our previous results, where the number of cells coding a piece of information played a significant role in the recall performance of our model.

References
1. Andreakos, N., Yue, S. & Cutsuridis, V. Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus. Brain Inf 8, 9 (2021). https://doi.org/10.1186/s40708-021-00131-7

Nikolas Andreakos CNS 2021 poster presentation

ULTRACEPT Researchers Present at IEEE ICRA 2021

The 2021 International Conference on Robotics and Automation (IEEE ICRA 2021) was held in Xi’an, China from 31st May to 4th June 2021. As one of the premier conferences in the field of robotics and automation, the event gathered thousands of excellent researchers from all over the world. Due to the pandemic, the conference was held in a hybrid format, including physical on-site and virtual cloud meetings. Four ULTRACEPT researchers attended this event, three in person and one online.

Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot

Yunlei Shi: Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot

Agile Robots researcher Yunlei Shi attended ICRA 2021 online and presented his paper ‘Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot’.

Yunlei Shi is a full-time Ph.D. student at the Universität Hamburg working at project partner Agile Robots, contributing to ULTRACEPT’s Work Package 4. In 2020 he visited Tsinghua University as part of the STEP2DYNA project.

Yunlei Shi presenting online at ICRA 2021

Yunlei presented his conference paper:

Yunlei Shi, Zhaopeng Chen, Hongxu Liu, Sebastian Riedel, Chunhui Gao, Qian Feng, Jun Deng, Jianwei Zhang, “Proactive Action Visual Residual Reinforcement Learning for Contact-Rich Tasks Using a Torque-Controlled Robot”, (ICRA) 2021, Xi’an, China.

Abstract

Contact-rich manipulation tasks are commonly found in modern manufacturing settings. However, manually designing a robot controller is considered hard for traditional control methods as the controller requires an effective combination of modalities and vastly different characteristics. In this paper, we first consider incorporating operational space visual and haptic information into a reinforcement learning (RL) method to solve the target uncertainty problems in unstructured environments. Moreover, we propose a novel idea of introducing a proactive action to solve a partially observable Markov decision process (POMDP) problem. With these two ideas, our method can either adapt to reasonable variations in unstructured environments or improve the sample efficiency of policy learning. We evaluated our method on a task that involved inserting a random-access memory (RAM) using a torque-controlled robot and tested the success rates of different baselines used in the traditional methods. We proved that our method is robust and can tolerate environmental variations.
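The ‘residual’ ingredient of such approaches can be illustrated in a few lines. The sketch below shows generic residual reinforcement learning under our own assumptions, not the authors' controller: a hand-designed base action (for example, from visual servoing) is summed with a learned correction from the RL policy.

```python
import numpy as np

def residual_action(base_controller, policy, obs):
    u_base = base_controller(obs)  # structured prior, e.g. a visual-servoing command
    u_res = policy(obs)            # learned correction from the RL policy
    return np.clip(u_base + u_res, -1.0, 1.0)  # combined, saturated command
```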

Representation of policies and controller scheme. The blue region is the real-time controller, and the wheat region is the non-real-time trained policy.

More details about this paper can be viewed in this video on the Universität Hamburg’s Technical Aspects of Multimodal Systems (TAMS) YouTube channel.

Yunlei was very happy to attend this fantastic conference with support from the project ULTRACEPT.

A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics

Three researchers from the University of Lincoln, Tian Liu, Xuelong Sun, and Qinbing Fu, attended ICRA 2021 in person to present their co-authored paper, ‘A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics’.

ULTRACEPT researchers Tian Liu, Xuelong Sun and Qinbing Fu attending ICRA 2021

The three of us were very happy to physically attend this fantastic conference with support from the ULTRACEPT project.

Our co-authored paper, which presents our developed vision-pheromone-communication platform, was published in the proceedings of the conference. Tian Liu delivered the presentation outlining the platform, and it attracted the attention of attendees, who asked interesting questions. This event provided us with a great opportunity to raise the profile of our platform for future swarm robotics and social insect studies.

Tian Liu presenting at ICRA 2021

A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics, Tian Liu, Xuelong Sun, Cheng Hu, Qinbing Fu, and Shigang Yue, University of Lincoln

Keywords: Biologically-Inspired Robots, Multi-Robot Systems, Swarm Robotics

Abstract: This paper describes a versatile platform for swarm robotics research. It integrates multiple-pheromone communication with a dynamic visual scene, along with real-time data transmission and localization of multiple robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme coordinating olfactory and visual cues, it not only complements current swarm robotics platforms, which focus only on pheromone communication, by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOSPhi (a multi-pheromone platform) by enclosing the arena with LED panels and interacting with the micro mobile robots via a visual sensor. In addition, a wireless communication system has been developed to allow the transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and an insect-vision-inspired model has been undertaken to verify the applicability of the presented platform and to investigate how complex scenarios can be facilitated by making use of it.

We picked up many interesting ideas and much inspiration from colleagues in the robotics field, not only from the excellent talks but also from the high-quality robot exhibitions by famed companies in the industry.

Conference presentations attended by the researchers at ICRA 2021
Demonstration at the ICRA 2021 conference

On the last day of the conference, we enjoyed a wonderful tour of the Shaanxi History Museum and the Terracotta Warriors, from which we learned a lot about the impressive history and culture of the Qin dynasty. The tour also made us rethink the important role played by science and technology in assisting archaeological excavation and cultural relic protection.

Thanks to the support of the ULTRACEPT project, we really enjoyed the whole event, which brought us not only new knowledge about robotics and history, but also enlightening inspiration that will potentially motivate our future research. In addition, our group’s research has been publicised via this top international conference.

Qian Feng: Centre-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors

Qian Feng is an external PhD student at the Technical University of Munich working at project partner Agile Robots and contributing to ULTRACEPT’s Work Package 4.

The IEEE International Conference on Robotics and Automation (ICRA) is an annual academic conference covering advances in robotics. It is one of the premier conferences in its field, with an ‘A’ rating from the Australian Ranking of ICT Conferences obtained in 2010 and an ‘A1’ rating from the Brazilian ministry of education in 2012.

Qian Feng attended the IEEE International Conference on Robotics and Automation (ICRA) 2020. The conference was originally scheduled to take place in Paris, France, but due to COVID-19, the conference was held virtually from 31 May 2020 until 31 August 2020.

Qian Feng presenting online at ICRA 2020

Qian presented his conference paper:

Q. Feng, Z. Chen, J. Deng, C. Gao, J. Zhang and A. Knoll, “Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 610-617, doi: 10.1109/ICRA40945.2020.9196815.

Abstract

An unstable grasp pose can lead to slip; thus an unstable grasp pose can be predicted by slip detection. A re-grasp is required afterwards in order to correct the grasp pose and finish the task. In this work, we propose a novel re-grasp planner with multi-sensor modules to plan grasp adjustments with the feedback from a slip detector. A re-grasp planner is then trained to estimate the location of the centre of mass, which helps robots find an optimal grasp pose. The dataset in this work consists of 1,025 slip experiments and 1,347 re-grasps collected by one pair of tactile sensors, an RGB-D camera, and one Franka Emika robot arm equipped with joint force/torque sensors. We show that our algorithm can successfully detect and classify slip for 5 unknown test objects with an accuracy of 76.88%, and that the re-grasp planner increases the grasp success rate by 31.0% compared to the state-of-the-art vision-based grasping algorithm.

Qian Feng: Slip Detector
Qian Feng: Grasp Success Rate on Test Objects


When asked about his experience presenting and attending ICRA 2020, Qian said:

“Thanks to the virtual conference we were still able to present our work. It also meant that more people were able to join the conference to learn about and discuss our research. Everyone was able to access the presentation and get involved in the discussion in the virtual conference for 2 months, instead of the originally scheduled 5 minutes of discussion for the on-site conference. During this conference I shared my work with many researchers from the same field and exchanged ideas. I really enjoyed the conference and learnt a lot from the other attendees.”