
ULTRACEPT Researcher Vassilis Cutsuridis Attends GEM 2023 Conference

ULTRACEPT Experienced Researcher Dr Vassilis Cutsuridis is a Senior Lecturer in Computer Science and a member of the Machine Learning research group at the University of Lincoln. He recently attended the GEM 2023 conference, ‘Generative Episodic Memory: Interdisciplinary perspectives from neuroscience, psychology, and philosophy’, which took place from 12th to 14th June 2023 in Bochum, Germany.

The conference was organized and funded by the DFG-funded research group FOR 2812 “Constructing scenarios of the past: A new framework in episodic memory”. Episodic memories are widely regarded as memories of personally experienced events. Early concepts about episodic memory were based on the storage model, according to which experiential content is preserved in memory and later retrieved. However, overwhelming empirical evidence suggests that the content of episodic memory is – at least to a certain degree – constructed in the act of remembering. Even though very few contemporary researchers would oppose this view of episodic memory as a generative process, it has not become the standard paradigm of empirical memory research. This is particularly true for studies of the neural correlates of episodic memory. Further hindering progress are large conceptual differences regarding episodic memory across different fields, such as neuroscience, philosophy, and psychology. This interdisciplinary conference therefore aimed to bring together researchers from all relevant fields to advance the state of the art in research on generative episodic memory.

Dr Cutsuridis presented his research poster ‘Memory retrieval enhancement in a CA1 microcircuit model of the hippocampus’.

Abstract

Memory retrieval is important because it determines how already stored information can be accessed. Improving it would help in developing strategies for preventing memory loss. We selectively scaled excitatory and inhibitory responses of key CA1 neurons to evaluate memory retrieval as a function of stored patterns, pattern interference, contexts, network size, and engram cells in a computational circuit model of the hippocampus. Model excitatory and inhibitory cells fired at specific phases of a theta oscillation imposed by an external inhibitory signal targeting only inhibitory cells, which inhibited compartments of excitatory cells. Sensory and contextual inputs targeting cell dendrites caused cells to fire. Simulation results showed that scaling of excitatory synapses in proximal but not basal dendrites of bistratified cells inhibiting pyramidal cells made retrieval perfect. Scaling of inhibitory synapses in pyramidal cells made retrieval worse. Decreases in the number of memory engram cells improved memory retrieval in a pathway-dependent way. Increases in network size and stored patterns had a minimal effect on memory retrieval. Memory interference had a detrimental effect on memory retrieval, which was reversible as the number of engram cells decreased. Changes in contextual information made memory retrieval worse, confirming previous evidence that a more familiar context facilitates memory retrieval.

ULTRACEPT Researchers attend ICIV 2023

ULTRACEPT Experienced Researchers Dr Julieta Sztarker, from partner Universidad de Buenos Aires, and Dr Claire Rind, from partner Newcastle University, attended the International Conference on Invertebrate Vision (ICIV) 2023. The conference was held at Bäckaskog Castle in Sweden from 27th July to 3rd August 2023.

Dr Sztarker attended as a member of the Scientific Programme Committee, and presented her research ‘The neuropil processing optic flow in mud crabs is the lobula plate: optomotor responses are severely impaired in lesioned animals’.


Vassilis Cutsuridis presents at BIOSTEC 2023

ULTRACEPT researcher Dr Vassilis Cutsuridis is a Senior Lecturer in Computer Science, and a member of the Machine Learning research group at the University of Lincoln. He recently attended the 16th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2023) to present his paper titled ‘Machine Learning Algorithms for Mouse LFP Data Classification in Epilepsy’. The conference took place from 16th to 18th February 2023 in Lisbon, Portugal.

The purpose of BIOSTEC is to bring together researchers and practitioners, including engineers, biologists, health professionals and informatics/computer scientists, interested in both theoretical advances and applications of information systems, artificial intelligence, signal processing, electronics and other engineering tools in knowledge areas related to biology and medicine. BIOSTEC is composed of five co-located conferences, each specialized in a different knowledge area.

Abstract

Successful preictal, interictal and ictal activity discrimination is extremely important for accurate seizure detection and prediction in epileptology. Here, we introduce an algorithmic pipeline applied to local field potentials (LFPs) recorded from layers II/III of the primary somatosensory cortex of young mice for the classification of endogenous (preictal), interictal, and seizure-like (ictal) activity events using time series analysis and machine learning (ML) models. Using the HCTSA time series analysis toolbox, over 4000 features were extracted from the LFPs after applying over 7700 operations. Iterative application of correlation analysis and the random forest recursive feature elimination with cross-validation method reduced the dimensionality of the feature space to 22 features and 27 features in endogenous-to-interictal and interictal-to-ictal events discrimination, respectively. Application of nine ML algorithms on these reduced feature sets showed preictal activity can be discriminated from interictal activity by a radial basis function SVM with a 0.9914 Cohen kappa score with just 22 features, whereas interictal and seizure-like (ictal) activities can be discriminated by the same classifier with a 0.9565 Cohen kappa score with just 27 features. Our preliminary results show that ML application in cortical LFP recordings may be a promising research avenue for accurate seizure detection and prediction in focal epilepsy.
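The pipeline's overall shape (correlation filtering, then random forest recursive feature elimination with cross-validation, then an RBF-kernel SVM scored with Cohen's kappa) can be sketched with scikit-learn. The feature matrix, labels, thresholds and parameter values below are illustrative stand-ins, not the paper's data or settings:

```python
# Minimal sketch of the classification pipeline on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # stand-in for HCTSA-derived features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # illustrative labels (e.g. interictal vs ictal)

# Step 1: correlation analysis -- drop features highly correlated with ones already kept.
corr = np.corrcoef(X, rowvar=False)
keep = []
for j in range(X.shape[1]):
    if all(abs(corr[j, k]) < 0.95 for k in keep):
        keep.append(j)
X = X[:, keep]

# Step 2: random forest recursive feature elimination with cross-validation.
selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=0), step=5, cv=3)
X_sel = selector.fit_transform(X, y)

# Step 3: radial basis function SVM, evaluated with Cohen's kappa score.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
kappa = cohen_kappa_score(y_te, clf.predict(X_te))
```

The real study iterated the first two steps and compared nine classifiers; this sketch only shows one pass with a single classifier.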

Nikolas Andreakos Presents at 16th International Symposium of Cognition, Logic and Language

Nikolas Andreakos is a PhD candidate at the University of Lincoln, who is developing computational models of associative memory formation and recognition in the mammalian hippocampus.

Recently Nikolas attended the 16th International Symposium of Cognition, Logic and Language. The conference took place in a hybrid format on 25th August 2022, in Riga, Latvia and online.


About the 2022 Symposium, ‘Linkages between space and memory: processes and representations’

Navigation and episodic memory are core features of human cognitive processing. Although they seem very different, research has revealed behavioural and neural linkages between the two domains. This symposium aimed to integrate perspectives from research in the two areas, using techniques from psychology and neuroscience (as well as linguistics, computer science, AI, and philosophy, depending on participants’ interests). The symposium focused primarily on the following topics, but was not limited to them:

  • Spatial context and memory
  • Episodic memory and spatial learning
  • Developmental trajectory of navigation and episodic memory
  • Neural principles of navigation and episodic memory

Nikolas presented his research ‘Improving recall in hippocampal neural network models’.

Abstract

Relevant theory

Studies of memory capacity in neural networks have shown that the number of storable memories scales with the number of neurons and synapses in the network. As the memory capacity limit is reached, stored memories interfere and recall performance is reduced. A well-established bio-inspired neural network model of the hippocampus was employed to investigate how its recall performance (RP) could be improved, as more patterns of various overlaps are stored in its synapses, by modulating specific synaptic connections in the network.
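The capacity-and-interference relationship described above can be illustrated with a classical Hopfield-style toy network (a standard textbook result, not the multi-compartmental hippocampal model used in this work):

```python
import numpy as np

def hopfield_capacity(n_neurons: int) -> int:
    """Classical estimate: a Hopfield network of N binary neurons stores
    roughly 0.138 * N random patterns before recall degrades sharply."""
    return int(0.138 * n_neurons)

def recall_fraction(n_neurons: int, n_patterns: int,
                    rng=np.random.default_rng(1)) -> float:
    """Store random +/-1 patterns with Hebbian learning, then measure the
    fraction of bits recovered after one synchronous update step."""
    patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))
    W = patterns.T @ patterns / n_neurons        # Hebbian weight matrix
    np.fill_diagonal(W, 0)                       # no self-connections
    recalled = np.sign(patterns @ W)             # one update from each stored pattern
    return float((recalled == patterns).mean())  # fraction of bits recovered

# Within capacity, recall is near-perfect; far beyond it, interference between
# stored patterns degrades recall -- the effect the text describes.
```

This is only a toy analogue: the model in the study uses Hodgkin-Huxley cells, theta-modulated inhibition and separate sensory/contextual pathways, none of which appear in this sketch.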

Research design

The neural network model consisted of 100 multi-compartmental Hodgkin-Huxley based excitatory (pyramidal) cells and four types (axo-axonic, basket, bistratified, OLM) of inhibitory neurons firing at specific phases of a theta oscillation imposed by an external inhibitory signal targeting only the inhibitory cells in the network. Inhibitory cells inhibited specific compartments of the network’s excitatory cells. Two excitatory inputs (sensory and contextual inputs) targeted dendritic compartments of cells in the network and caused cells to fire.

Methods

Simulations were performed in NEURON and analysed in MATLAB.


Results

Results showed that when both sensory and contextual inputs were present during recall and were 100% similar, the network’s performance improved. When their similarity was reduced (40%) or they were completely dissimilar, RP dropped. Results also showed that the number of cells coding for a memory (engram cells) plays a crucial role in RP. As the number of engram cells coding for a memory increased, RP got worse. This finding has serious implications for the nature of memory.


 

When asked about his experience, Nikolas said:

“I really enjoyed attending and presenting at the event, especially the discussion I had with Prof Ranganath, who made some useful points about my research. Additionally, the atmosphere was friendly, and I had the opportunity to attend some really interesting presentations and extend my knowledge.”

Watch Nikolas’ presentation here:

 

Universidad Buenos Aires Researcher Yair Barnatan Presents at ICN 2022, Portugal

The International Conference for Neuroethology (ICN) is the official regular meeting of the International Society for Neuroethology (ISN). ISN is a scholarly society devoted to Neuroethology, formed in Kassel, Germany in 1981. The 2022 conference was held from the 24th to the 29th July, 2022 in Lisbon, Portugal.

Neuroethology is a relatively young science that emerged in the late 1960s and early 1970s. It focuses on the study of how nervous systems generate natural behavior in animals. These regular conference meetings enable researchers in the field to share their research and progress the work in this field.

ULTRACEPT researcher Yair Barnatan from project partner Universidad Buenos Aires presented his research poster ‘Functional evidence of the role of the crab lobula plate as optic flow processing center’ at the Vision and Photoreception session.

Yair Barnatan from UBA presenting his poster at ICN2022

Abstract

Functional evidence of the role of the crab lobula plate as optic flow processing center

Yair Barnatan; Dr. Daniel Tomsic; Dr. Julieta Sztarker.

Rotational motion produces a wide drift of the visual panorama over the retina of animals, termed optic flow (OF). Such motion is stabilized by compensatory behaviors (driven by the movement of the eyes, head or the whole body depending on the animal) collectively termed the optomotor response (OR). It has long been known that, in the visual system of flies, the lobula plate is the center involved in OF analysis and in guiding the OR. Recently, a crustacean lobula plate was characterized by neuroanatomical techniques in the mud crab Neohelice granulata, sharing many canonical features with the dipteran neuropil. This led to the question of whether a common functional role is also shared. In this work we tackle that question by performing electrolytic lesions followed by behavioral testing. Results show that crabs with lesioned lobula plates fail to execute the OR (or present a poor and unsynchronized response) in comparison to both control-lesioned animals (presenting a lesion of similar size but in another region of the optic neuropils) and non-lesioned animals. The lesion of the lobula plate caused a specific impairment in the OR, since avoidance responses to an approaching visual stimulus were not affected. These results provide strong evidence that a similar neuroanatomical structure in crabs and flies, the lobula plate, carries out the same function.

Yicheng Zhang Presents at ICANN 2022

ICANN 2022, the 31st International Conference on Artificial Neural Networks and Machine Learning, was organised by the Department of Computer Science and Creative Technologies of the University of the West of England at Frenchay Campus, Bristol, from 6 to 9 September 2022. It was held in hybrid mode, with delegates attending on-site and remotely via an immersive online space.


This conference featured two main tracks, brain-inspired computing and machine learning research, with strong cross-disciplinary interactions and applications. The event attracted a large number and wide range of new and established researchers from five continents and 27 countries in total. The research themes explored all innovative pathways in the wider area of neural networks and machine learning. 561 papers were submitted, with 259 selected for oral presentation at the conference.


ULTRACEPT researcher Yicheng Zhang presented his research in session 41 of the conference: Zhang, Y. et al. (2022). ‘O-LGMD: An Opponent Colour LGMD-Based Model for Collision Detection with Thermal Images at Night’. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. Lecture Notes in Computer Science, vol 13531. Springer, Cham. https://doi.org/10.1007/978-3-031-15934-3_21

O-LGMD: An Opponent Colour LGMD-Based Model for Collision Detection with Thermal Images at Night


Abstract

It is an enormous challenge for intelligent robots or vehicles to detect and avoid collisions at night because of poor lighting conditions. Thermal cameras capture night scenes with temperature maps, often showing different pseudo-colour modes to enhance the visual effects for the human eyes. Since the features of approaching objects could have been well enhanced in the pseudo-colour outputs of a thermal camera, it is likely that colour cues could help the Lobula Giant Motion Detector (LGMD) to pick up the collision cues effectively. However, there is no investigation published on this aspect and it is not clear whether LGMD-like neural networks can take pseudo-colour information as input for collision detection in extreme dim conditions. In this study, we investigate a few thermal pseudo-colour modes and propose to extract colour cues with a triple-channel LGMD-based neural network to directly process the pseudo-colour images. The proposed model consists of three sub-networks, each dealing with one specific opponent colour channel, i.e. black-white, red-green, or yellow-blue. A collision alarm is triggered if any channel’s output exceeds its threshold for a few successive frames. Our experiments demonstrate that the proposed bio-inspired collision detection system works well in quickly detecting colliding objects in direct collision course in extremely low lighting conditions. The proposed method showed its potential to be part of sensor systems for future robots or vehicles driving at night or in other extreme lighting conditions to help avoiding fatal collisions.
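The alarm rule described in the abstract (trigger when any opponent-colour channel's output exceeds its threshold for a few successive frames) can be sketched as follows. The threshold value, frame count and channel ordering are assumptions for illustration, not the paper's parameters:

```python
def collision_alarm(channel_outputs, threshold=0.7, n_successive=4):
    """Trigger an alarm when ANY opponent-colour channel (black-white,
    red-green, yellow-blue) stays above the threshold for n_successive
    consecutive frames.

    channel_outputs: iterable of (bw, rg, yb) model outputs per frame.
    Threshold and streak length are illustrative values.
    """
    streaks = [0, 0, 0]  # consecutive-frame counters, one per channel
    for frame in channel_outputs:
        for i, out in enumerate(frame):
            streaks[i] = streaks[i] + 1 if out > threshold else 0
            if streaks[i] >= n_successive:
                return True  # sustained excitation in one channel -> collision alarm
    return False

# A sustained rise in one channel triggers the alarm; brief spikes do not.
```

Requiring several successive supra-threshold frames is what suppresses single-frame noise spikes, which matters in the low-light thermal imagery the model targets.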


Yicheng was very grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

ULTRACEPT Researchers Present at IEEE WCCI/IJCNN 2022

IEEE WCCI 2022 is the world’s largest technical event on computational intelligence, featuring the three flagship conferences of the IEEE Computational Intelligence Society (CIS) under one roof: the 2022 International Joint Conference on Neural Networks (IJCNN 2022), the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2022), and the 2022 IEEE Congress on Evolutionary Computation (IEEE CEC 2022). The event was held online and in Padua, Italy.

ULTRACEPT researchers attended the event to share their research:

Shaping the Ultra-Selectivity of a Looming Detection Neural Network from Non-linear Correlation of Radial Motion

University of Lincoln researcher Mu Hua finished his postgraduate programme last July and is now an honorary researcher working on ULTRACEPT’s work package 1. He remotely attended the IEEE World Congress on Computational Intelligence 2022 and orally presented his latest work on the lobula plate/lobula columnar type 2 (LPLC2) neuropil discovered within the neural pathway of the fruit fly Drosophila.

Mu Hua presented his recent work on modelling the LPLC2 in his paper titled ‘Shaping the Ultra-Selectivity of a Looming Detection Neural Network from Non-linear Correlation of Radial Motion’.

H. Luan, M. Hua, J. Peng, S. Yue, S. Chen and Q. Fu, “Accelerating Motion Perception Model Mimics the Visual Neuronal Ensemble of Crab,” 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 2022, pp. 1-8, https://doi.org/10.1109/IJCNN55064.2022.9892540.

With a pre-recorded video, he explained how the proposed LPLC2 neural network achieves its ultra-selectivity for the initial location of stimuli within the receptive field, for object surface brightness, and for approaching motion patterns, through a high-level non-linear combination of motion cues, demonstrating its potential for distinguishing near misses so that true collisions can be recognised correctly and efficiently.

Figure 1. Schematic of visual system of Drosophila. ON channel is represented by solid line while OFF channel by dashed line.

Figure 1 shows the schematic of the visual system of Drosophila. The ON channel is represented by solid lines and the OFF channel by dashed lines. Blue, red, green and purple areas show different types of DSNs: the blue one represents neurons preferring upward motion whilst the purple one prefers downward motion; the red and green ones show preference for leftward and rightward movement respectively. The dot-headed line shows the pathway of visual signals received by photoreceptors in the retina layer, which are first processed in the lamina and subsequently separated in the medulla into parallel ON/OFF channels with polarised selectivity. After that, signals in the ON channel are passed to various types of DSNs (T4s) in the lobula plate layer for directional motion calculation.

These visual signals estimated by T4 interneurons are then combined with those of T5 neurons in the OFF channel, and further filtered through to the lobula plate tangential cells (LPTCs, represented by orange dots within the slightly transparent areas). Note that the number of lines does not reflect the actual number of neurons within the Drosophila visual system.

Figure 2. Illustration of directionally-selective neurons LPTCs being activated by edge expanding (top) and remaining silent against recession (bottom).

Figure 2 illustrates the directionally-selective LPTC neurons being activated by edge expansion (top) and remaining silent against recession (bottom). The black circle represents a dark looming motion pattern. As it expands, four sorts of T4/T5 interneurons, shown in four colours, sense motion along one of the four cardinal directions. Directional information is then estimated within the T4 or T5 pathway. The ON-channel motion estimation in T4 and the OFF-channel motion estimation in T5 are then summarised by their post-synaptic LPTC neurons. The particular placement of the LPTC neurons as shown is thought to shape the subsequent non-linear combination performed by the LPLC2 neurons.
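The effect of combining outward-directed motion signals non-linearly can be shown with a toy sketch. The quadrant structure and signal names below are illustrative simplifications of the LPLC2 idea, not the paper's actual model:

```python
def lplc2_response(up, down, left, right):
    """Toy LPLC2-style non-linear combination. Each quadrant of the receptive
    field multiplies the two outward-directed motion signals it sees (e.g. the
    upper-right quadrant combines upward and rightward motion). An object
    expanding from the centre drives all four directions at once, so every
    quadrant's product is non-zero; pure translation drives only one direction,
    so every product collapses to zero. Inputs are motion strengths in [0, 1]."""
    q_ur = up * right     # upper-right quadrant
    q_ul = up * left      # upper-left quadrant
    q_dr = down * right   # lower-right quadrant
    q_dl = down * left    # lower-left quadrant
    return q_ur + q_ul + q_dr + q_dl

# Centrifugal expansion from the receptive-field centre -> strong response.
loom = lplc2_response(0.8, 0.8, 0.8, 0.8)
# Pure rightward translation -> response stays at zero.
translate = lplc2_response(0.0, 0.0, 0.0, 0.8)
```

The multiplication is the non-linearity doing the work: summing the same signals would respond to translation too, whereas the product is selective for radial expansion.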

Snapshots of one of the experimental stimuli, where a square lays on a complex background.
Figure 3. Snapshots of one of the experimental stimuli, where a square lays on a complex background.


Figure 3 shows snapshots of one of the experimental stimuli, where a square lies on a complex background. From top to bottom, the motion pattern is approaching; played in reverse, it generates receding motion. Curves on the right show the output of the proposed model and of the classic LGMD1 neural network chosen for comparison. The curves demonstrate that our proposed neural network is only activated by the approaching motion pattern, which fits well with biological findings.

Abstract

In this paper, a numerical neural network inspired by the lobula plate/lobula columnar type II (LPLC2), the ultraselective looming sensitive neurons identified within the visual system of Drosophila, is proposed utilising non-linear computation. This method aims to be one of the explorations toward solving the collision perception problem resulting from radial motion. Taking inspiration from the distinctive structure and placement of directionally selective neurons (DSNs) named T4/T5 interneurons and their post-synaptic neurons, the motion opponency along four cardinal directions is computed in a non-linear way and subsequently mapped into four quadrants. More precisely, local motion excites adjacent neurons ahead of the ongoing motion, whilst transferring inhibitory signals to presently-excited neurons with slight temporal delay. From comparative experimental results collected, the main contribution is established by sculpting the ultra-selective features of generating a vast majority of responses to dark centroid-emanated centrifugal motion patterns whilst remaining nearly silent to those starting from other quadrants of the receptive field (RF). The proposed method also distinguishes relatively dark approaching objects against the brighter backgrounds and light ones against dark backgrounds via exploiting ON/OFF parallel channels, which well fits the physiological findings. Accordingly, the proposed neural network consolidates the theory of non-linear computation in Drosophila’s visual system, a prominent paradigm for studying biological motion perception. This research also demonstrates the potential of being fused with attention mechanisms towards the utility in devices such as unmanned aerial vehicles (UAVs), protecting them from unexpected and imminent collision by calculating a safer flying pathway.

A Bio-inspired Dark Adaptation Framework for Low-light Image Enhancement

Fang Lei is a PhD Scholar at the University of Lincoln. Fang presented her poster promoting her research ‘A Bio-inspired Dark Adaptation Framework for Low-light Image Enhancement’.

F. Lei, “A Bio-inspired Dark Adaptation Framework for Low-light Image Enhancement,” 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 2022, pp. 1-8, https://doi.org/10.1109/IJCNN55064.2022.9892877

Fang Lei’s poster at WCCI2022

Abstract

In low light conditions, image enhancement is critical for vision-based artificial systems since details of objects in dark regions are buried. Moreover, enhancing the low-light image without introducing too many irrelevant artifacts is important for visual tasks like motion detection. However, conventional methods always have the risk of “bad” enhancement. Nocturnal insects show remarkable visual abilities at night time, and their adaptations in light responses provide inspiration for low-light image enhancement. In this paper, we aim to adopt the neural mechanism of dark adaptation for adaptively raising intensities whilst preserving the naturalness. We propose a framework for enhancing low-light images by implementing the dark adaptation operation with proper adaptation parameters in R, G and B channels separately. Specifically, the dark adaptation in this paper consists of a series of canonical neural computations, including the power law adaptation, divisive normalization and adaptive rescaling operations. Experiments show that the proposed bioinspired dark adaptation framework is more efficient and can better preserve the naturalness of the image compared to existing methods.

Model

The proposed bio-inspired dark adaptation framework is shown in Fig.1. The key idea of the dark adaptation is to adaptively raise the intensities of dark pixels by a series of canonical neural computations (see Fig.2).

Figure 1. Proposed dark adaptation framework for low light image enhancement. The red (R), green (G), and blue (B) components of the input image are processed with the dark adaptation in three separate channels. Note that each channel has a different adaptation parameter.

Figure 2. Schematic illustration of the proposed dark adaptation. There are N cells that correspond to N pixels in the input image, denoted by I1 ∼ IN. n denotes the sensation parameter, and its value depends on the wavelength of perceived light. Ii and I′i indicate the ith cell and its enhanced output after the dark adaptation processing. For clear illustration, we only give one cell’s enhanced result.
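The three canonical computations named above (power-law adaptation, divisive normalisation, adaptive rescaling, applied per colour channel) might be sketched roughly as follows. The parameter values and the exact normalisation term are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def dark_adapt_channel(channel, n=0.7, eps=1e-6):
    """Sketch of the dark adaptation chain on one colour channel (values in [0, 1]).
    1. Power-law adaptation: compresses the range, lifting dark pixels.
    2. Divisive normalisation: each pixel is divided by a pooled activity term.
    3. Adaptive rescaling: the result is stretched back to [0, 1]."""
    adapted = np.power(channel, n)                      # power-law adaptation
    norm = adapted / (adapted.mean() + adapted + eps)   # divisive normalisation
    lo, hi = norm.min(), norm.max()
    return (norm - lo) / (hi - lo + eps)                # adaptive rescaling

def dark_adapt_rgb(img, n_rgb=(0.7, 0.68, 0.65)):
    """Process R, G and B in three separate channels, each with its own
    adaptation parameter, as Figure 1 describes. Parameter values are made up."""
    channels = [dark_adapt_channel(img[..., c], n) for c, n in zip(range(3), n_rgb)]
    return np.stack(channels, axis=-1)
```

Each operation is monotonic in pixel intensity, so the enhancement raises dark regions without reordering the lightness of pixels within a channel, which is what the LOE metric below rewards.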

Results

We compare the proposed method with existing low light image enhancement methods, including comparisons of visual performance, lightness order error (LOE), and average running time. The experimental results are shown below.

Figure 3. Visual comparison among the competitors on the low-light image dataset.

Table 1. Quantitative performance comparison on the low-light image dataset in terms of LOE. LOE values carry a factor of 10³; the lower the LOE, the better the enhancement preserves the naturalness of illumination.

Table 2. Average running time comparison of the six enhancement methods on the low-light image dataset. The sizes of images are 1280 pixels (horizontal) × 960 pixels (vertical) and 4000 pixels (horizontal) × 3000 pixels (vertical).

Fang Lei presenting at WCCI2022

Xuelong Sun presents research into the insect’s central complex in the midbrain for coordinating multimodal navigation behaviours at iNAV2022

The 4th Interdisciplinary Navigation Symposium, iNAV2022, was held fully virtually from 14th to 16th June 2022. The symposium focused mainly on the question: “How does the brain know where it is, where it is going, and how to get from one place to another?” Interestingly, the symposium took place in an 8-bit 2D environment hosted on Gather.town, making it one of the most interactive virtual academic conferences held during the pandemic.

ULTRACEPT researcher Xuelong Sun presented a poster about his research on the insect central complex in the midbrain for coordinating multimodal navigation behaviours: ‘How the insect central complex could coordinate multimodal navigation’, Xuelong Sun, Shigang Yue, Michael Mangan. The poster attracted several researchers with similar research interests, who have since discussed future directions in insect navigation with Xuelong. Xuelong also answered questions about the details of the neural networks applied in his work.

The plenary talks were of high quality and overlapped with Xuelong’s research on insect navigation. Prof. Barbara Webb from the University of Edinburgh delivered a great talk about modelling adaptation in insect navigation, which also mentioned Xuelong’s published paper. Xuelong said: “I am very happy that Prof. Webb mentioned my work. I have obtained many useful ideas and inspiration from the plenary talks and communications with peers, which will help my future research.”

ULTRACEPT Researchers Present at ICARM22

The IEEE International Conference on Advanced Robotics and Mechatronics (ICARM) is the flagship conference of both the IEEE-SMC TC on Bio-mechatronics and Bio-robotics Systems and the IEEE-RAS TC on NeuroRobotics Systems. ICARM 2022 took place at the Steigenberger Hotel, Guilin, China, from 9th to 11th July 2022. ULTRACEPT researchers Qinbing Fu, Xuelong Sun and Tian Liu attended the event with their co-authored paper titled “Efficient bio-robotic estimation of visual dynamic complexity”.

ICARM22 presentation

Qinbing presented “Efficient bio-robotic estimation of visual dynamic complexity” in a regular session of the conference. The presentation gave a great introduction to and demonstration of our multimodal swarm robotics platform, named VColCOSP, which appealed to academic peers who share similar research interests.

ULTRACEPT researcher Qinbing Fu presenting at ICARM22

Abstract

Visual dynamic complexity is a ubiquitous, hidden attribute of the visual world that every motion-sensitive vision system is faced with. However, it is implicit and intractable, and has never been quantitatively described, owing to the difficulty of defining temporal features correlated with spatial image complexity. Learning from biological visual processing, we propose a novel bio-robotic approach to estimate visual dynamic complexity, effectively and efficiently, which can be used as a new metric for assessing dynamic vision systems implemented in robots. Here we apply a bio-inspired neural network model to quantitatively estimate such complexity associated with the spatial-temporal frequency of a moving visual scene. The model is implemented in an autonomous micro-mobile robot navigating freely in an arena encompassed by visual walls displaying moving scenes. The response of the embedded visual module can make reasonable predictions of the surrounding dynamic complexity, since it can be mapped monotonically to varying moving frequencies of the visual scene. The experiments demonstrate that this “predictor” is effective across different visual scenarios and can be established as a new metric for assessing visual systems. To prove its viability, we utilise it to investigate the performance boundary of a collision detection visual system in a changing environment with increasing dynamic complexity.

The conference provided the ULTRACEPT researchers an opportunity to network and participate in knowledge exchange with researchers from other universities and academic institutions, including leading experts and professors in this field, paving the way for potential research cooperation in the future.

At this event, the ULTRACEPT group listened to high-quality plenary talks relevant to the group’s topics. Xuelong said: “the ideas presented by the speakers cover many aspects of the cutting-edge technologies of AI and autonomous robots, such as swarm intelligence, embodiment and cognition”. The group had an impressive experience and took away interesting and useful ideas and inspiration for future studies.

ICARM22 attended by ULTRACEPT researchers Xuelong Sun, Qinbing Fu, Tian Liu

Yunlei Shi presents poster at ROBIO 20/21

Yunlei Shi is a 4th year full-time Ph.D. student at the Universität Hamburg, working at project partner Agile Robots. In 2020 he was seconded to Tsinghua University as part of the STEP2DYNA project. His work continues in the ULTRACEPT project, where he contributes to Work Package 4.

Yunlei Shi attended the 2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 20/21) which was held 27th to the 31st December 2021 at the Four Points by Sheraton Hainan, Sanya, China. The conference was held both in person and online. Yunlei was grateful for the opportunity to attend this fantastic conference with support from ULTRACEPT.

The theme of ROBIO 20-21 was “Robotics and Biomimetics to meet societal grand challenges”, reflecting fast-growing and timely interest in research, development and applications and their impact on the world. Due to the COVID-19 pandemic, ROBIO 2020 and 2021 were combined and held jointly. The conference highlighted research results, new engineering developments, and applications related to meeting societal grand challenges such as the COVID-19 pandemic.

Yunlei represented Agile Robots, Universität Hamburg, and the ULTRACEPT project by presenting his conference poster: Yunlei Shi, Zhaopeng Chen, Lin Cong, Yansong Wu, Martin Craiu-Müller, Chengjie Yuan, Chunyang Chang, Lei Zhang, Jianwei Zhang, “Maximizing the Use of Environmental Constraints: A Pushing Based Hybrid Position/Force Assembly Skill for Contact-Rich Tasks”, Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics, December 27-31, 2021, Sanya, China. https://doi.org/10.1109/ROBIO54168.2021.9739349

Yunlei Shi poster for ROBIO 20/21