Hardware Approximation of Exponential Decay for Spiking Neural Networks

Spiking neural networks (SNNs) may enable low-power intelligence at the edge by combining the merits of deep learning with the computational paradigms found in the human neocortex. The choice of neuron model remains an open research topic. Many spiking models implement neural dynamics from biology that involve one or more exponential decay functions. Previous work focused on modeling the exponential decay function on neuromorphic hardware accurately down to the least significant bit (LSB). In this paper, we explore the limits of error resilience in SNNs by aggressively approximating their exponential decay functions and allowing for loss within the available bit precision. Three approximation techniques are presented and implemented with varying degrees of precision, resulting in 10 different implementations. Their hardware cost and inference accuracy on benchmark networks and applications are compared. Where approximation reduces inference accuracy, we apply fine-tuning to recover it. We also make selected time constants programmable in hardware so that they can be treated as hyperparameters. Our results show resilience to lossy approximation, fast fine-tuning, and a low energy consumption of 47 fJ per operation.
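To make the role of exponential decay concrete, the sketch below illustrates one widely used hardware-friendly approximation: replacing the multiplication by exp(-dt/tau) in a leaky integrate-and-fire style update with a shift-and-subtract on integer state. This is an assumed, generic example (the function names, the choice k = 4, and the bit widths are illustrative); it is not a description of the specific approximation techniques evaluated in this paper.

```python
# Minimal sketch (assumed, not this paper's implementation): discrete-time
# membrane decay of a leaky integrate-and-fire neuron, and a common
# hardware-friendly approximation that replaces the multiply by
# alpha = exp(-dt/tau) with a power-of-two shift-and-subtract.
import math

def decay_exact(v, dt, tau):
    """Reference decay: v <- v * exp(-dt/tau)."""
    return v * math.exp(-dt / tau)

def decay_shift(v, k):
    """Approximate decay on integer state: v <- v - (v >> k).

    The effective decay factor is (1 - 2**-k), so k is chosen such that
    1 - 2**-k is close to exp(-dt/tau). Accuracy is lost in the low-order
    bits, which is the kind of lossy approximation explored here.
    """
    return v - (v >> k)

# Example: tau = 16*dt gives exp(-1/16) ~= 0.9394, close to 1 - 2**-4 = 0.9375.
v_float, v_int = 1000.0, 1000
for _ in range(10):
    v_float = decay_exact(v_float, dt=1.0, tau=16.0)
    v_int = decay_shift(v_int, k=4)
print(v_float, v_int)  # ~535.3 vs 528: error accumulates in the low-order bits
```

The shift-based update needs only a barrel shifter and a subtractor instead of a multiplier, which is why such approximations are attractive when area and energy per operation matter more than bit-exact decay.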