This week we have *Dhruva Raman* from the *University of Cambridge* giving a talk on *Fundamental Bounds on Learning Performance in Neural Circuits* at *10h30* on *19th April* in the *Mila Auditorium*.
If you are interested in *meeting Dhruva*, please book a slot in this Google Sheet: https://docs.google.com/spreadsheets/d/13k6e1rCDM1tD__MvxS-UDHMS9L6XsWcTvqs_atbmNRY/edit?usp=sharing
See you there!
The Tea Talk Team
*TITLE* Fundamental Bounds on Learning Performance in Neural Circuits
*ABSTRACT* Biological neural circuits learn despite imperfect information on task performance and noisy biological components. How can these problems be mitigated? We use optimization theory to show how adding apparently redundant neurons and connections to a network can improve learning performance in the face of imperfect learning rules and corrupted error signals. The theory shows how large neural circuits can exploit additional connectivity to achieve faster and more precise learning. However, there is a limit to the benefit of adding connections. Biologically, synapses (connection strengths) are intrinsically unreliable, and we show that beyond a certain network size this unreliability outweighs the benefits of additional connectivity to learning performance. Consequently, there is an optimal size of network for a given task, which we can calculate in specific cases.
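For the curious, here is a minimal toy sketch of the trade-off the abstract describes; it is our own illustration, not Dhruva's model, and every parameter and noise value in it is an assumption chosen for demonstration. n redundant synapses jointly learn a scalar target from a corrupted error signal, while each synapse is also independently jittered at every step:

import numpy as np

rng = np.random.default_rng(0)

def steady_state_error(n, steps=500, lr=0.05, grad_noise=1.0,
                       syn_noise=0.001, target=1.0, trials=50):
    # n redundant synapses; the network output is the sum of their weights.
    errs = []
    for _ in range(trials):
        w = np.zeros(n)
        for _ in range(steps):
            err = w.sum() - target
            # Each synapse sees the true error plus independent corruption;
            # averaging over n synapses shrinks this noise as 1/sqrt(n).
            g = err + grad_noise * rng.standard_normal(n)
            w -= (lr / n) * g
            # Intrinsic synaptic unreliability: per-step jitter whose effect
            # on the summed output grows as sqrt(n).
            w += syn_noise * rng.standard_normal(n)
        errs.append((w.sum() - target) ** 2)
    return np.mean(errs)

for n in [1, 4, 16, 64, 256, 1024]:
    print(f"n = {n:4d}   mean squared error = {steady_state_error(n):.5f}")

With these (arbitrary) noise levels the error first falls and then rises with n, so an intermediate network size does best, mirroring the optimal-size result in the abstract.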
*BIO* Dhruva completed his undergraduate degree (MMath) at the University of Warwick (2008-2012), spent a year at the Systems Biology Doctoral Training Centre at the University of Oxford (2012-2013), and did his PhD in the Control Group at the University of Oxford under the supervision of Antonis Papachristodoulou (2013-2016). Since 2017 he has been a postdoc in the Control Group at the University of Cambridge under the supervision of Timothy O'Leary.