Recurrent neural networks (RNNs) are the state-of-the-art machine learning technique for processing time series, yet their training algorithms, despite their success, are still not fully understood, and their biological plausibility is heavily contested. In this work we take a more theoretical approach and focus on the relationship between network structure and information processing. To do so we concentrate on Reservoir Computing, a specific RNN paradigm in which only the last layer is trained. By taking a geometric perspective on that training we connect the dynamics of the RNN to its machine learning performance, both in terms of memory – here, the number of past inputs that the network can store – and in terms of frequency specialization – the type of signals that the RNN can process successfully. To impose the desired dynamics on our RNNs we use notions from control theory – namely feedback loops and poles – and we formalize them through ideas from random matrix theory relating the structure of large networks to the eigenvalues of their adjacency matrices. We conclude by discussing the biological plausibility of our results, relating the emergence of feedback loops of the appropriate length to spike-timing-dependent plasticity and the memory results to previous experiments on context-dependent integration tasks.
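The Reservoir Computing setup described above – a fixed random recurrent network whose linear readout alone is trained – can be sketched as a minimal echo state network. This is an illustrative reconstruction, not the talk's actual method; all names, sizes, and the delayed-recall task are our own choices. Note the rescaling of the recurrent weights by their spectral radius, which connects to the eigenvalue perspective mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 200, 1000, 3  # reservoir size, series length, recall delay (illustrative values)

# Fixed random recurrent weights, rescaled so the spectral radius is 0.9 (< 1),
# a standard sufficient condition for fading memory in echo state networks.
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)  # fixed random input weights

u = rng.uniform(-1.0, 1.0, size=T)  # scalar input time series
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):  # reservoir dynamics: never trained
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only the linear readout is trained, here by ridge regression,
# on a memory task: reproduce the input from `delay` steps ago.
target = u[:-delay]
A = states[delay:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ target)

mse = np.mean((A @ w_out - target) ** 2)  # small MSE indicates the reservoir stored the input
```

Increasing `delay` until the error grows probes the memory capacity of the reservoir, which is the quantity the abstract relates to the network's dynamics.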
Thursday, 13 June 2019, at 13:30, in Mathematikon, INF 205, Konferenzraum, 5th floor
The lecture takes place at the invitation of Prof. Peter Albers