A self-consistent analytical theory for rotator networks under stochastic forcing: Effects of intrinsic noise and common input
Abstract
Despite the incredible complexity of our brains’ neural networks, theoretical descriptions of neural dynamics have led to profound insights into possible network states and dynamics. It remains challenging to develop theories that apply to spiking networks and thus allow one to characterize the dynamic properties of biologically more realistic networks. Here, we build on recent work by van Meegen and Lindner, who have shown that “rotator networks,” while considerably simpler than real spiking networks and therefore more amenable to mathematical analysis, still allow one to capture dynamical properties of networks of spiking neurons. This framework can be easily extended to the case where individual units receive uncorrelated stochastic input, which can be interpreted as intrinsic noise. However, the assumptions of the theory no longer apply when the input received by the single rotators is strongly correlated among units. As we show, in this case, the network fluctuations become significantly non-Gaussian, which calls for a reworking of the theory. Using a cumulant expansion, we develop a self-consistent analytical theory that accounts for the observed non-Gaussian statistics. Our theory provides a starting point for further studies of more general network setups and information transmission properties of these networks.
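To make the setup described above concrete, the following is a minimal simulation sketch of a randomly coupled rotator network driven by both intrinsic (unit-specific) and common stochastic input, integrated with the Euler–Maruyama scheme. The sinusoidal coupling form, the parameter values, and the noise intensities `D_int` and `D_com` are illustrative assumptions and are not taken from the paper itself.

```python
import numpy as np

# Hypothetical rotator-network dynamics (illustrative only):
# dtheta_i = [omega_i + sum_j K_ij * sin(theta_j - theta_i)] dt
#            + sqrt(2*D_int) dW_i   (intrinsic noise, independent per unit)
#            + sqrt(2*D_com) dW_c   (common input, shared by all units)

rng = np.random.default_rng(0)

N = 200        # number of rotators
T = 20.0       # total simulation time
dt = 1e-3      # Euler-Maruyama time step
D_int = 0.1    # intrinsic noise intensity (uncorrelated across units)
D_com = 0.5    # common-input noise intensity (identical for all units)
g = 1.0        # overall coupling strength

omega = rng.normal(1.0, 0.2, size=N)                 # heterogeneous natural frequencies
K = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random coupling matrix
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)        # random initial phases

n_steps = int(T / dt)
phases = np.empty((n_steps, N))

for t in range(n_steps):
    # pairwise coupling term: sum_j K_ij * sin(theta_j - theta_i)
    coupling = np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
    xi_int = rng.normal(size=N)   # independent noise sample per rotator
    xi_com = rng.normal()         # single noise sample shared by the whole network
    theta = (theta
             + (omega + coupling) * dt
             + np.sqrt(2.0 * D_int * dt) * xi_int
             + np.sqrt(2.0 * D_com * dt) * xi_com)
    phases[t] = theta

# Network-averaged phase velocity fluctuations; with strong common input
# their statistics deviate from Gaussian, as discussed in the abstract.
mean_velocity = np.diff(phases.mean(axis=1)) / dt
print("std of network-mean phase velocity:", np.std(mean_velocity))
```

Increasing `D_com` relative to `D_int` in this sketch strengthens the correlations between the inputs of different units, which is the regime where, according to the abstract, the Gaussian assumptions of the original theory break down.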