\documentclass[12pt]{article}
\usepackage{amsmath,amssymb,amsfonts}
\begin{document}
The brain, a network of spiking neurons, can learn complex dynamics by adapting its spontaneous chaotic activity. One of the dominant approaches to training such networks, the FORCE method, has recently been applied to spiking neural networks. This method employs a pool of randomly connected spiking neurons, called a reservoir, to generate chaotic activity, and uses the recursive least squares (RLS) algorithm to shape the network dynamics so that the output follows a teacher signal. Here, we propose a digital hardware architecture for spiking FORCE with several modifications to the original method. First, to reduce memory usage in the hardware implementation, we show that careful binarization of the reservoir weights preserves the network's initial chaotic activity. Second, we generate the connection matrix on the fly instead of storing the whole matrix. Third, we update the readout-layer weights with a single-processor systolic-array implementation of RLS based on the inverse QR decomposition, which is not only more hardware-friendly but also more numerically stable in reduced precision than the standard RLS implementation. Fourth, we implement the design in both single-precision and custom-precision floating-point number systems. Finally, we implement a network of 510 Izhikevich neurons on a Xilinx Artix-7 FPGA with 32-, 24-, and 18-bit floating-point numbers. To confirm the correctness of our architecture, we successfully train the hardware on three different teacher signals.
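The idea of regenerating the connection matrix on the fly, rather than storing all $N \times N$ entries, can be sketched in software with a seeded pseudorandom generator: each row is recomputed deterministically from its index whenever it is needed. This is a minimal illustration, not the paper's hardware scheme; the $\pm 1$ binarization, the sparsity $p$, the gain $g$, and the seed are all assumed values for the sketch.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper):
# reservoir size, connection probability, and coupling gain.
N, p, g = 510, 0.1, 1.5

def row_weights(i, seed=42):
    """Recreate row i of a binarized random reservoir matrix from (seed, i).

    Because the generator is seeded deterministically per row, identical
    calls return identical weights, so the full matrix never needs storing.
    """
    rng = np.random.default_rng([seed, i])    # reproducible per-row stream
    mask = rng.random(N) < p                  # sparse connectivity pattern
    signs = np.where(rng.random(N) < 0.5, 1.0, -1.0)
    return g * mask * signs / np.sqrt(p * N)  # binarized +-1 weights, scaled

# Regenerating a row twice gives the same result, so no storage is needed.
assert np.array_equal(row_weights(3), row_weights(3))
```

In hardware, the same role would be played by a small deterministic generator (e.g. an LFSR) addressed by the row index, trading memory for a little recomputation.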
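For context, the readout update that the inverse-QR systolic formulation replaces is the textbook RLS recursion used in FORCE training, which drives the readout $w^\top r$ toward the teacher value. The sketch below shows that baseline on a synthetic regression target; the toy size, the initial $P$ (i.e. the regularization constant), and the synthetic teacher are all illustrative assumptions.

```python
import numpy as np

def rls_step(w, P, r, target):
    """One standard RLS update of the readout weights w given rate vector r."""
    err = w @ r - target        # error against the teacher signal
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)     # RLS gain vector
    P = P - np.outer(k, Pr)     # update running inverse correlation matrix
    w = w - err * k             # correct the readout weights
    return w, P

rng = np.random.default_rng(0)
N = 20                          # toy reservoir size (assumed)
w_true = rng.standard_normal(N) # synthetic teacher readout for the demo
w = np.zeros(N)
P = np.eye(N) * 1e6             # P0 = I / lam with small lam = 1e-6 (assumed)

for _ in range(200):
    r = rng.standard_normal(N)  # stand-in for reservoir firing rates
    w, P = rls_step(w, P, r, w_true @ r)
```

The explicit $P$-matrix update above is exactly the part that loses accuracy in reduced-precision arithmetic, which is why the paper adopts an inverse-QR-decomposition variant that propagates a triangular square-root factor instead.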
\end{document}