Even with billions of neurons and trillions of synaptic connections, the biological brain maintains a sparse and highly adaptive activation profile. Our finely tuned biological sensors (especially the retina) also have an astonishingly high dynamic range. Despite all that structural complexity, the brain remains plastic and retains a lifelong ability to learn continuously from heterogeneous sensory inputs. How can we reproduce the sparsity, adaptability and plasticity of biological brains in artificial neural networks (ANNs)?

It turns out that the answer is tantalisingly simple, but to find it we have to go back to the neuroscience roots of ANNs. Classical (non-spiking) ANNs, which are at the core of most modern deep learning models, treat neurons as simple integrators of their inputs and completely ignore their internal state. In contrast, spiking neural networks are more biologically faithful approximations: they model both the membrane potential and the activation threshold of individual neurons. However, spiking networks typically assume that the threshold is static, which leads to problems during network operation. For example, if presynaptic neurons fire too infrequently, the membrane potential never becomes sufficiently depolarised to cross the static threshold, and the neuron remains permanently silent.
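To make the silent-neuron problem concrete, here is a minimal leaky integrate-and-fire sketch in Python. The leak factor, threshold and input values are illustrative choices, not taken from any particular model:

```python
def lif_response(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron with a static threshold.

    The parameter values are illustrative, not from any published model.
    """
    v = 0.0            # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x           # integrate input with leak
        if v >= threshold:         # static threshold crossing
            spikes.append(1)
            v = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

# Frequent weak inputs accumulate faster than they leak away,
# so the neuron fires periodically...
print(sum(lif_response([0.3] * 20)))   # several spikes
# ...but infrequent inputs decay before the next one arrives,
# so the neuron stays completely silent.
print(sum(lif_response([0.3 if i % 10 == 0 else 0.0 for i in range(20)])))
```

With the static threshold, no amount of waiting helps: unless the input rate is high enough, the potential simply never reaches the firing point.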

Biological neurons display an intricate dynamic interplay between these two core parameters (membrane potential and activation threshold). There is strong evidence that the activation threshold dynamically follows the membrane potential, keeping the neuron mostly silent yet ready to fire at any moment in response to a sudden surge of inputs. This is exactly what the MPATH model does: it emulates the adaptive interplay between the membrane potential and the activation threshold to produce a more realistic neural activation pattern. This allows the network to accommodate changes in its input in real time, enabling continuous learning and adaptation. MPATH is also flexible in that it can operate in both classical and spiking regimes, and the nature of neural activation in MPATH helps ensure the stability of Hebbian learning.
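The actual MPATH equations are given in the preprint; purely as an illustration of the threshold-tracking idea, the sketch below lets the threshold relax toward the membrane potential plus a small margin, so the neuron stays quiet during lulls but fires readily when a surge arrives. The update rule, parameter names and values here are assumptions for demonstration only, not the published model:

```python
def adaptive_spikes(inputs, theta0=1.0, track=0.8, leak=0.9, margin=0.1):
    """Toy neuron whose threshold tracks the membrane potential.

    NOT the published MPATH model: an illustrative rule in which the
    threshold decays toward (potential + margin) at rate (1 - track).
    """
    v, theta = 0.0, theta0
    spikes = []
    for x in inputs:
        v = leak * v + x                                     # integrate with leak
        theta = track * theta + (1 - track) * (v + margin)   # threshold follows v
        if v >= theta:                                       # adaptive crossing
            spikes.append(1)
            v = 0.0                                          # reset after firing
        else:
            spikes.append(0)
    return spikes

# After a quiet spell the threshold has drifted down toward the
# resting potential, so a sudden surge fires immediately, even though
# the potential never reaches the original static threshold of 1.0.
quiet_then_surge = [0.0] * 10 + [0.5, 0.5]
print(adaptive_spikes(quiet_then_surge))  # spikes on the last two steps
```

The design choice doing the work is that the threshold is a leaky follower of the potential: it keeps a margin above the recent activity level rather than sitting at a fixed value, which is what keeps the neuron both mostly silent and instantly responsive.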

While the paper is under review, you can read the preprint on arXiv.