2020 Seminar

Recurrent Neural Networks (RNNs) are powerful tools for learning computational tasks from data. An important question is how the tasks to be performed are encoded and, simultaneously, how the input/output data are represented in a high-dimensional network. I will consider two related problems, both inspired by computational neuroscience: (1) how multiple low-dimensional maps (environments) can be embedded in an RNN, and (2) how multiplexed integrations of velocity signals can be carried out in the RNN to update positions in those maps. I will discuss the nature of the representations found in artificial RNNs and compare them to experimental recordings in the mammalian brain.
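The integration of a velocity signal to update a position estimate can be illustrated with a minimal sketch (not the speaker's model): a linear recurrent network with identity recurrence acts as a perfect integrator, accumulating velocity input along a hypothetical encoding direction and reading the position back out. All sizes and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                       # hidden units (illustrative size)
u = rng.standard_normal(n)   # hypothetical encoding direction in state space
W = np.eye(n)                # identity recurrence -> state is preserved exactly

dt = 0.01
T = 1000
velocity = np.sin(np.linspace(0, 2 * np.pi, T))  # example 1-D velocity trace

h = np.zeros(n)
positions = []
for v in velocity:
    h = W @ h + u * v * dt            # accumulate velocity along direction u
    positions.append((u @ h) / (u @ u))  # project state back onto u to decode position

true_pos = np.cumsum(velocity) * dt   # ground-truth integral of the velocity
print(np.allclose(positions, true_pos))
```

With identity recurrence the decoded position matches the true integral of the velocity; a trained RNN would instead have to discover recurrent weights whose dynamics approximate this integration, and to multiplex several such integrators across the maps it has stored.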
