When:
Thursday, November 7, 2024
2:00 PM - 3:00 PM CT
Where: Technological Institute, F160, 2145 Sheridan Road, Evanston, IL 60208
Audience: Faculty/Staff - Student - Post Docs/Docs - Graduate Students
Contact:
Joan West
(847) 491-3645
Group: Physics and Astronomy Complex Systems Seminars
Category: Academic
Artificial Neural Networks (ANNs) are composed of very simple elements that implement Boolean logic. The basic model, the McCulloch-Pitts (MP) neuron, was introduced in 1943. This triggered a long effort to combine MP neurons into feed-forward networks whose connectivity could be adjusted to implement desired input-output maps. In 1986, this effort led to the back-propagation algorithm that underlies all current applications to deep learning. But this is not the topic of this talk.
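The MP neuron described above is a binary threshold unit; as a minimal illustrative sketch (the function names and the particular weights/thresholds are choices for this example, not content from the talk), Boolean gates can be realized as follows:

```python
# A McCulloch-Pitts neuron: a threshold unit over binary inputs.
# Weights and thresholds below are illustrative choices.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(x1, x2):
    # Both inputs must be on to reach threshold 2.
    return mp_neuron([x1, x2], [1, 1], 2)

def OR(x1, x2):
    # A single active input suffices to reach threshold 1.
    return mp_neuron([x1, x2], [1, 1], 1)
```

Composing such units in layers is what gives feed-forward networks their expressive power.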
We will concentrate on Recurrent Neural Networks (RNNs), whose lateral connectivity generates N-dimensional dynamical systems. Networks of this type were studied by Hopfield in his 1982 and 1984 papers, and by Ackley, Hinton, and Sejnowski in their 1985 paper; this work led to the 2024 Nobel Prize in Physics for Hopfield and Hinton.
The dynamics of these networks are controlled by a Hamiltonian that is fundamentally similar to that of a disordered Ising model: the Sherrington-Kirkpatrick (SK) spin glass. Statistical physicists working on models of the SK type used all the tools available to them – replicas, Langevin equations, path integrals – to analyze these models, which exhibited novel behaviors: memory storage, memory retrieval, constraint satisfaction, and generative modeling.
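As a concrete sketch of the memory-retrieval behavior mentioned above: a Hopfield network of N binary spins s_i = ±1 with symmetric couplings J_ij has energy E = -(1/2) Σ_ij J_ij s_i s_j, formally the same form as an SK-type Hamiltonian, and asynchronous updates descend this energy toward a stored pattern. The code below is a minimal illustration under these standard definitions (the Hebbian storage rule, the sweep count, and the corruption level are choices for this example, not details from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Store ±1 patterns via the Hebb rule: J = (1/N) sum_mu xi^mu (xi^mu)^T, zero diagonal."""
    N = patterns.shape[1]
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def energy(J, s):
    """Hopfield/SK-type energy E = -(1/2) s·J·s."""
    return -0.5 * s @ J @ s

def retrieve(J, s, sweeps=10):
    """Asynchronous dynamics: align each spin with its local field, in random order."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Store one random pattern, then recover it from a corrupted cue.
N = 100
xi = rng.choice([-1, 1], size=(1, N))
J = hebbian_weights(xi)
cue = xi[0].copy()
cue[:20] *= -1            # flip 20% of the bits
recovered = retrieve(J, cue)
```

Each accepted flip can only lower (or keep) the energy, which is why the dynamics settle into stored patterns as attractors.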
The fundamental idea behind the development of both layered and recurrent ANNs is that of ‘connectionism’: these networks store their long-term knowledge of the task they have been trained to perform as the strengths of the connections between simple ‘neural’ processing elements.
Sara Solla, Professor / Joint with Department of Neuroscience, Northwestern University
Host: Michelle Driscoll