Colloque BioComp 2023


The registration link is the following (deadline: 6 November 2023 at noon):

The venue:


The colloquium will run from Wednesday 29 November (13:30) to Friday 2 December (12:00).

See the programme here.

Presentation abstracts

Timothée MASQUELIER (CerCo – Université Paul Sabatier)

Surrogate gradient learning in SNNs – theory and practice with PyTorch/SpikingJelly

The recent discovery of surrogate gradient learning (SGL) has been a game-changer for spiking neural networks (SNNs). In short, it reconciles SNNs with backpropagation, THE algorithm that caused the deep learning revolution. The trick is to approximate the true gradient by a surrogate one. Thanks to this trick, SNNs and conventional artificial neural networks (ANNs) can now be trained using the same algorithm, BP, and the same auto-differentiation-enabled tools (e.g., PyTorch or TensorFlow). We will work on a concrete example using PyTorch/SpikingJelly. Participants should have a Colab account and a basic knowledge of Python.
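The surrogate-gradient trick can be sketched in a few lines. The toy below uses plain NumPy rather than the PyTorch/SpikingJelly stack used in the tutorial, and the threshold, the fast-sigmoid surrogate shape, and the toy task are illustrative assumptions of mine: the forward pass uses the non-differentiable Heaviside step, while the backward pass substitutes a smooth surrogate derivative.

```python
import numpy as np

def spike_forward(u, theta=1.0):
    """Heaviside step: emit a spike when the membrane potential crosses threshold."""
    return (u >= theta).astype(float)

def spike_surrogate_grad(u, theta=1.0, beta=1.0):
    """Surrogate derivative (a fast-sigmoid shape); the true derivative
    of the step function is zero almost everywhere, which would block
    backpropagation entirely."""
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

# Toy problem: learn a weight w so that every input current produces a spike.
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, size=100)   # input currents
target = 1.0                          # desired output: a spike for each input
w = 0.1                               # too small: no spikes at first

for step in range(200):
    u = w * x                          # membrane potential (no leak, toy model)
    s = spike_forward(u)               # non-differentiable forward pass
    err = s - target
    # Backward pass: replace ds/du by the surrogate, then chain-rule as usual.
    grad_w = np.mean(err * spike_surrogate_grad(u) * x)
    w -= 0.5 * grad_w

print(spike_forward(w * x).mean())     # fraction of inputs that now spike
```

The same idea is what PyTorch's autograd machinery (or SpikingJelly's built-in surrogate functions) automates: a custom backward pass that ignores the step function's true, useless gradient.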

François DANNEVILLE (IEMN – Université de Lille)

Ultra-low power bioinspired Cochlea for biodiversity monitoring

This talk will present the architecture, design, and experimental characterization of an ultra-low-power bioinspired artificial cochlea intended for biodiversity monitoring. So far, it has been calibrated to extract cetacean acoustic features (sperm whales). The outputs are spikes (electrical pulses) available through a number of frequency channels (the cochlea's parallel outputs), generated whenever some energy of the acoustic signal is sensed on a channel. The main application is classification with spiking neural networks, though other applications are possible.
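The per-channel event-generation principle described above can be sketched digitally. This is my own illustrative approximation, using an FFT filterbank and an arbitrary energy threshold, not the analog IEMN circuit:

```python
import numpy as np

def cochlea_events(signal, fs, bands, frame_len=256, threshold=0.5):
    """Emit a spike on a frequency channel whenever the acoustic energy in
    that band, over one analysis frame, exceeds a threshold. A crude digital
    sketch of per-channel event generation, not the analog design."""
    window = np.hanning(frame_len)           # reduce spectral leakage
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    events = []                              # (time_s, channel) pairs
    for i in range(len(signal) // frame_len):
        frame = signal[i * frame_len:(i + 1) * frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2 / frame_len
        for ch, (lo, hi) in enumerate(bands):
            if power[(freqs >= lo) & (freqs < hi)].sum() > threshold:
                events.append((i * frame_len / fs, ch))
    return events

# A pure 5 kHz tone should only drive the channel covering 5 kHz.
fs = 48_000
t = np.arange(fs // 10) / fs                         # 100 ms of signal
tone = np.sin(2 * np.pi * 5_000 * t)
bands = [(1_000, 4_000), (4_000, 8_000), (8_000, 16_000)]
evts = cochlea_events(tone, fs, bands)
print(sorted({ch for _, ch in evts}))                # only channel 1 fires
```

The resulting (time, channel) event stream is exactly the kind of sparse, parallel output that can feed the spiking-network classifiers mentioned in the abstract.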

Rufin VANRULLEN (CerCo – Université Paul Sabatier)

Deep-Learning and Bio-Inspired Computation

Artificial neural networks (ANNs) were designed with loose inspiration from biology. Large-scale ANNs (so-called "deep learning" systems) have recently taken the spotlight, reaching (or surpassing) human-level performance in numerous tasks. I will give two examples of how biological inspiration can continue to improve these systems. First, the inclusion of feedback loops based on "predictive coding" principles in deep convolutional networks can improve their performance and robustness for visual classification, and make them susceptible to perceptual illusions just like human observers. Second, multimodal systems following the "global workspace" architecture of the brain can display perceptual and semantic grounding abilities, while being trained with much less supervision. I will finish by proposing future extensions of these architectures that could allow them to flexibly adapt to diverse cognitive situations, e.g. "system-2 AI".

Manon DAMPFHOFFER (CEA, Grenoble)

Do bio-inspired spiking neural networks really consume less energy than classical artificial neural networks?

Spiking neural networks (SNNs) promise lower energy consumption than traditional artificial neural networks (ANNs) thanks to their spike-based computation, inspired by biological neural networks. They are therefore good candidates for low-power embedded implementations on neuromorphic hardware. However, this energy efficiency has not yet been rigorously demonstrated: many studies ignore memory accesses and compare against naive ANN hardware implementations. We will propose a method for analyzing the energy consumption of an SNN on neuromorphic hardware relative to an ANN implementation. We will evaluate the accuracy / energy-efficiency trade-off of ANNs and SNNs in various application settings (image, audio). Finally, we will discuss the differences between digital and analog hardware implementations.
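The kind of first-order energy accounting at stake here can be sketched as follows. The per-operation energy figures and layer sizes below are illustrative assumptions of mine, not measurements; the point is that once memory accesses are counted, the SNN's advantage depends on spike rate and number of timesteps:

```python
# First-order energy model: count arithmetic operations AND memory accesses.
E_MAC = 4.6e-12    # J per multiply-accumulate    (illustrative figure)
E_ACC = 0.9e-12    # J per accumulate-only        (illustrative figure)
E_MEM = 5.0e-12    # J per weight fetch from SRAM (illustrative figure)

def ann_layer_energy(n_in, n_out):
    """Dense ANN layer: one MAC and one weight fetch per synapse, once."""
    synapses = n_in * n_out
    return synapses * (E_MAC + E_MEM)

def snn_layer_energy(n_in, n_out, spike_rate, timesteps):
    """Spiking layer: work is only done for synapses whose presynaptic
    neuron actually fired (one accumulate + one weight fetch per event)."""
    synapses = n_in * n_out
    events = synapses * spike_rate * timesteps
    return events * (E_ACC + E_MEM)

ann = ann_layer_energy(1024, 512)
snn = snn_layer_energy(1024, 512, spike_rate=0.05, timesteps=4)
print(snn / ann)   # < 1 means the SNN wins under this model
```

Under this model a sparse SNN (low spike rate, few timesteps) beats the ANN, but a dense or long-running one does not, which is why the accuracy/energy trade-off has to be evaluated case by case.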

Peter Ford DOMINEY (INSERM, Dijon)

Temporal processing hierarchies in brains and structured reservoir computing models

Structure in the environment is organized at multiple levels in a spatio-temporal processing hierarchy. This is illustrated in the domain of human narrative, which is organized at least in terms of brief acoustic events, words, sentences, paragraphs, chapters, etc. Recent neuroscience research has revealed that within the human brain, this corresponds to a hierarchy of temporal event coherence that can be mapped onto the physical structure of the cortex, with the hierarchy flowing roughly from rapid input-driven areas to associative areas with progressively longer integration time constants. To explain how the physical connectivity structure of the cortex gives rise to this temporal hierarchy, we have developed reservoir computing models in which the recurrent connectivity matrix is constrained by a distance rule. This enforces a physical hierarchy from input-driven to increasingly associative areas (dominated by recurrence), which, in turn, creates a temporal processing gradient. Simulations with such models have allowed us to reproduce a temporal integration hierarchy similar to that observed in the human cortex.
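The distance-rule construction can be sketched in NumPy. The length scale, network size, and tanh units below are my own illustrative choices, not the published model; the point is that input delivered at one end of the line takes longer to reach distant, recurrence-dominated units:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
# Neurons laid out on a line; connection probability decays with distance.
pos = np.arange(N)
dist = np.abs(pos[:, None] - pos[None, :])
p_connect = np.exp(-dist / 10.0)            # distance rule (length scale assumed)
mask = rng.random((N, N)) < p_connect
np.fill_diagonal(mask, False)
W = np.where(mask, rng.normal(0.0, 1.0, (N, N)), 0.0)

# Scale to a spectral radius below 1 so the dynamics stay stable.
rho = np.max(np.abs(np.linalg.eigvals(W)))
W *= 0.9 / rho

# Input is delivered only to the "input-driven" end of the line; activity
# must propagate through recurrence to reach the distant associative end.
w_in = np.zeros(N)
w_in[:20] = 1.0

x = np.zeros(N)
trace = []
for t in range(60):
    u = 1.0 if t == 0 else 0.0              # a single input pulse
    x = np.tanh(W @ x + w_in * u)
    trace.append(np.abs(x))
trace = np.array(trace)

# Units far from the input respond later: a processing gradient along the line.
t_peak_near = trace[:, :20].mean(axis=1).argmax()
t_peak_far = trace[:, -20:].mean(axis=1).argmax()
print(t_peak_near, t_peak_far)
```

The later peak at the far end is the seed of the temporal integration gradient: distance from the input, enforced purely by connectivity, translates into slower, more integrative dynamics.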

The human cortex operates on such a connectivity distance rule, but it is also supplemented by additional long-distance connections that in a sense violate the local connectivity rule, allowing input-driven areas to directly influence distant associative areas. Our models demonstrate how exceptions to the human temporal hierarchy can be accounted for by such long-distance connections. A deep understanding of the functional role of such hierarchies in cognitive processing remains to be established, but it is highly likely that there is a fundamental link between connectivity structure, temporal dynamics, and higher cognitive function. One key to unlocking this question may reside in the presence of high-dimensional non-linear representations within such architectures. We have recently characterized a hierarchy of this mixed selectivity in the human cortex, and explained its origin in the structured reservoir model. Thus, we show how the distance-rule connectivity yields a temporal integration hierarchy, and a corresponding mixed selectivity hierarchy, which may underlie human cognitive processing.

In the context of neuromorphic computing, this research – with architectures that exploit local connectivity – may be of particular interest, as certain neuromorphic materials (e.g. nanowire memristive networks) may naturally display this connectivity property.

Bruno MAISONNIER (AnotherBrain, Fondateur)

How to draw inspiration from cortical columns to build a system that works like the cortex: building a bio-inspired AI from cortical columns

Benoît MIRAMOND (LEAT – Université Côte d’Azur)

Next generation of Edge AI with spiking neural networks on event-based neuromorphic hardware

Neuromorphic computing has been studied for many years as a game changer for low-power edge AI, on the assumption that inspiration from the brain will natively bring a reduction in energy consumption. Neuromorphic computing mainly focuses on encoding and processing information with spikes. While this property plays an obvious role in biological functioning, it is far from obvious that it alone explains the efficiency of the brain.

It is therefore necessary to ask whether this characteristic, considered in isolation, brings a gain over the classical neural networks used in deep learning. This is the question the presentation seeks to answer, restricting the study to standard machine learning tasks on different types of data: static, dynamic, and event-based.

The presentation is based on the main achievements of the eBRAIN group at the LEAT lab in this domain, in terms of efficient training (the ATIF neuron), quantization (the QUALIA framework), and embedded deployment of SNNs onto neuromorphic hardware (the SPLEAT accelerator).

Laurent PERRINET (INT – Aix Marseille Université)

Event-based vision

Event-based cameras mimic the way biological retinas process visual information: each pixel independently reports brightness changes as asynchronous temporal events. This departs from conventional cameras, which capture static frames at fixed intervals. I will first discuss how the biological retina detects light-intensity changes and communicates them to the brain. Compared to traditional cameras, the event-based paradigm enables new vision applications with high speed, low latency, and energy efficiency. I will highlight recent work applying event cameras to tasks such as visual odometry, motion detection, and gesture recognition. The goal is to demonstrate the advantages, for computer vision, of emulating biological principles drawn from neuroscience.
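The pixel-level event-generation principle can be sketched as follows. This is a frame-based simulation of a DVS-style sensor, and the log-intensity contrast threshold is an illustrative assumption:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Generate ON/OFF events per pixel whenever the log intensity changes
    by more than `threshold` since that pixel's last event -- the basic
    principle of a DVS-style event camera (threshold value illustrative)."""
    log_ref = np.log(frames[0] + eps)        # per-pixel reference level
    events = []                              # (t, y, x, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - log_ref
        for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, y, x, polarity) for y, x in zip(ys, xs))
            log_ref[mask] = log_i[mask]      # reset reference where we fired
    return events

# A static scene emits no events; only the changing pixel reports anything.
frames = np.ones((5, 4, 4)) * 0.5
frames[3, 2, 2] = 1.0                        # pixel (2, 2) brightens at t = 3
evts = frames_to_events(frames)
print(evts)                                  # ON event at t = 3, OFF at t = 4
```

Two properties of the real sensor fall out of this sketch: static regions produce no data at all (hence the energy efficiency), and the log-intensity encoding makes the response contrast-based rather than absolute-brightness-based, as in the biological retina.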