
Summer School on Mathematical Neuroscience

2024-04-22 11:22:31

Organizers:

Jose A. Carrillo, University of Oxford

陶乐天, Peking University

周珍楠, Westlake University


Dates:

August 19 – August 23


Venue:

Room E10-212, Yungu Campus, Westlake University


Schedule:

| Time | Aug 19 (Mon) | Aug 20 (Tue) | Aug 21 (Wed) | Aug 22 (Thu) | Aug 23 (Fri) |
|------|--------------|--------------|--------------|--------------|--------------|
| 9:00 - 10:00 (starts 8:50 on Day 1) | Jose A. Canizo | Jose A. Canizo | Stephen Coombes | Stephen Coombes | Alejandro Ramos-Lora |
| 10:15 - 11:15 | Jose A. Canizo | Jose A. Canizo | Stephen Coombes | Stephen Coombes | Student presentations (10:15 - 12:00) |
| 14:00 - 15:00 | Douglas Zhou | Nicolas Torres | Songting Li | Zhuocheng Xiao | Free discussion |
| 15:15 - 16:15 | Pierre Roux | Pierre Roux | Yu Hu | Yu Hu | |
| 16:30 - 17:30 | Pierre Roux | Pierre Roux | Yu Hu | Yu Hu | |



Course Descriptions (in schedule order)

· Main Lectures

Jose Alfredo Canizo, University of Granada

Lecture 1

Title: Introduction to mathematical models for neuron dynamics

Abstract: This lecture will start by introducing the physiological behavior of neurons and describing some well-known models for the electrical activity of a single neuron: the Hodgkin-Huxley model and its FitzHugh-Nagumo simplification. Other models, such as the integrate-and-fire model and its modifications, will also be covered. We will then turn to models of several interconnected neurons, from which one can derive "mesoscopic" or "mean-field" models described by partial differential equations. Some of these models and their basic properties will be introduced, and we will discuss the general procedure for deriving mean-field models from their microscopic counterparts.
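For reference, standard textbook forms of two of the single-neuron models named above are shown below; the notation is generic and may differ from the lecturer's.

```latex
% FitzHugh-Nagumo simplification: fast voltage-like variable v, slow recovery variable w
\dot v = v - \tfrac{v^3}{3} - w + I, \qquad \dot w = \varepsilon\,(v + a - b\,w).

% Leaky integrate-and-fire: linear integration below threshold, then reset
\tau_m \dot V = -(V - V_L) + R\,I(t); \qquad \text{if } V(t^-) = V_{\mathrm{th}}, \text{ then } V(t^+) = V_r .
```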

Lecture 2

Title: Mean-field models for neuron dynamics

Abstract: In this lecture we will focus on two types of mean-field models for neuron dynamics: the time-elapsed model, and the nonlinear integrate-and-fire mean field model. We will discuss their basic mathematical properties, and introduce the main tools used in their study: well-posedness of PDEs; semigroup theory; entropy methods; and perturbation methods. This lecture ties in well with the talk by Alejandro Ramos, who will discuss recent research results on this topic.
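As a pointer to the literature, the two mean-field models referred to above are often written in the following forms; the notation below is one common convention, not necessarily the one used in the lectures.

```latex
% Time-elapsed (age-structured) model: n(s,t) is the density of neurons whose last
% spike occurred s time units ago; N(t) is the network firing rate
\partial_t n(s,t) + \partial_s n(s,t) + p\big(s, N(t)\big)\, n(s,t) = 0,
\qquad
n(0,t) = N(t) = \int_0^{\infty} p\big(s, N(t)\big)\, n(s,t)\, \mathrm{d}s .

% Nonlinear noisy leaky integrate-and-fire (NNLIF) mean-field model:
% p(v,t) is the density of neurons at potential v, with reset at V_R and firing at V_F
\partial_t p(v,t) + \partial_v\!\big[(-v + b\,N(t))\, p(v,t)\big] - a\,\partial_{vv} p(v,t)
= N(t)\,\delta_{v = V_R}, \quad v \le V_F,
\qquad
p(V_F,t) = 0, \quad N(t) = -a\,\partial_v p(V_F,t).
```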




Pierre Roux, École Centrale de Lyon (France)

Lecture 1

Title: Partial differential equations modelling networks of neurons I. Fokker-Planck type models

Abstract: Understanding the macroscopic behavior of a large number of neurons is challenging both theoretically and experimentally. Proceeding from particle systems to mean-field limits, physicists and mathematicians have constructed a variety of models aimed at understanding specific aspects of brain activity, like oscillations, firing patterns and phase transitions. In this first lecture, we will focus on models in the form of partial differential equations of Fokker-Planck type. We will see how to obtain information about their mathematical structure, their limitations as models and what they can say about oscillations in neural networks and the role of grid cells in the spatial navigation of mammals.
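To make the particle-to-mean-field passage concrete, here is a minimal sketch (my own illustration with arbitrary parameter values, not course material) of a noisy leaky integrate-and-fire network with all-to-all coupling; as the number of neurons grows, the histogram of membrane potentials approximates the density evolved by the corresponding Fokker-Planck equation.

```python
# Euler-Maruyama simulation of N noisy leaky integrate-and-fire neurons with
# mean-field coupling; all parameters are illustrative.
import numpy as np

N, T, dt = 5000, 2.0, 1e-4       # number of neurons, total time, time step
V_F, V_R = 1.0, 0.0              # firing threshold and reset potential
b, sigma = 0.5, 0.4              # coupling strength and noise amplitude

rng = np.random.default_rng(0)
v = rng.uniform(V_R, 0.5 * V_F, size=N)   # initial membrane potentials
rate = 0.0                                # empirical population firing rate

for _ in range(int(T / dt)):
    drift = -v + b * rate                              # leak plus mean-field input
    v += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    fired = v >= V_F
    rate = fired.mean() / dt                           # feeds back at the next step
    v[fired] = V_R                                     # reset neurons that fired

density, edges = np.histogram(v, bins=50, range=(-0.5, V_F), density=True)
print("empirical density of membrane potentials computed on", len(density), "bins")
```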

Lecture 2

Title: Partial differential equations modelling networks of neurons II. Hyperbolic and kinetic models

Abstract: Understanding the macroscopic behavior of a large number of neurons is challenging both theoretically and experimentally. Proceeding from particle systems to mean-field limits, physicists and mathematicians have constructed a variety of models aimed at understanding specific aspects of brain activity, like oscillations, firing patterns and phase transitions. In this second lecture, we will focus on models in the form of hyperbolic and kinetic partial differential equations. We will talk about the different mathematical tools one can use to handle solutions which are not always regular and how to map even irregular behaviors to real-life phenomena.



Stephen Coombes, University of Nottingham

Lecture 1

Title: Networks of Oscillators & Applications in Neuroscience

Abstract: The tools of weakly coupled phase oscillator theory have had a profound impact on the neuroscience community, providing insight into a variety of network behaviours ranging from central pattern generation to synchronisation. However, there are many instances where this theory is expected to break down, say in the presence of strong coupling.  To gain insight into the behaviour of neural networks when phase-oscillator descriptions are not appropriate we turn instead to the study of tractable piece-wise linear (pwl) systems.  I will describe a variety of pwl neural oscillators and show how to analyse periodic orbits.  Building on this approach I will show how to analyse network states, with a focus on synchrony.  I will make use of an extension of the master stability function approach utilising saltation matrices and show how this framework is very amenable to explicit calculations when considering networks of pwl oscillators.  I will also use the first lecture to set the scene with a brief introduction to neurodynamics (covering modelling challenges in neuroscience and the use of techniques from dynamical systems theory).
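As background for the weakly coupled phase-oscillator picture the lecture starts from, the sketch below is a generic Kuramoto-type illustration with arbitrary parameters (not the lecturer's code): sinusoidally coupled phase oscillators integrated with forward Euler, with the usual order parameter as a measure of synchrony.

```python
# Weak-coupling phase description: N phase oscillators with sinusoidal coupling.
import numpy as np

N, K, dt, steps = 200, 1.5, 0.01, 5000
rng = np.random.default_rng(1)
omega = rng.normal(0.0, 0.5, N)          # heterogeneous natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)   # random initial phases

for _ in range(steps):
    # (1/N) * sum_j sin(theta_j - theta_i), written with broadcasting
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += (omega + K * coupling) * dt

r = np.abs(np.exp(1j * theta).mean())    # 0 = incoherent, 1 = fully synchronized
print(f"order parameter r = {r:.3f}")
```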

Lecture 2

Title: Modelling large scale brain dynamics

Abstract: Phenomenological neural mass and field models have been used since the 1970s to model the coarse-grained activity of large populations of neurons. I will discuss a simple spiking neuron network model that admits an exact mean-field description for synaptic interactions. This has many of the features of a neural mass and field model coupled to an additional dynamical equation for population synchrony. I will describe the ways in which this next generation neural mass and field model is able to recreate some of the rich repertoire of behavior seen in large-scale brain dynamics. To set the scene for this new work I will also review historical results for neural field models of the type originally developed by Wilson & Cowan, Amari, and Nunez.
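For orientation, neural field models of the Wilson-Cowan/Amari type mentioned above are commonly written in the following form (one standard convention; the lectures may use a different one):

```latex
% Amari-type neural field: u(x,t) is coarse-grained activity at cortical position x,
% w is the synaptic connectivity kernel, f is a firing-rate nonlinearity
\partial_t u(x,t) = -u(x,t) + \int_{\Omega} w(x - y)\, f\big(u(y,t)\big)\, \mathrm{d}y .
```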



Yu Hu, Hong Kong University of Science and Technology

Title: Theoretical Methods for Linking Dynamics and Connectivity in Recurrent Neural Circuits

Abstract: A central theme in theoretical neuroscience is understanding the relationship between neural dynamics and connectivity within circuits. Generally, this is quite challenging due to the nonlinearity and recurrent connections. I will discuss two approaches to address this challenge. One is the linear response approximation for spiking neuron networks. The other is to seek a statistical-level relationship between dynamics and connectivity, where the connectivity is modeled as a random graph or with higher-order structures such as motifs. We will discuss applications of these methods in recent theoretical efforts, driven by experiments that record many neurons simultaneously, to describe neural population dynamics at the global level, such as the dimensionality and geometry of neural activity space.

Lecture 1

We will focus on the marginal description of network dynamics, that is, the average correlation across a recurrent network. To identify the connectivity structures relevant to the average correlation, we will introduce linear response theory for approximating pairwise correlations in spiking neuron networks, and cumulant statistics for quantifying connectivity motifs.
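One commonly quoted result of this linear-response approach (stated here in a generic form; the lectures may use different conventions and symbols) expresses the matrix of spike-train cross-spectra in terms of a baseline spectrum and an effective interaction kernel built from single-neuron response functions and the connectivity:

```latex
% C^0(\omega): uncoupled (baseline) spectra;  K(\omega): effective interaction matrix
C(\omega) \;\approx\; \big(I - K(\omega)\big)^{-1}\, C^{0}(\omega)\, \big(I - K(\omega)^{*}\big)^{-1},
\qquad
\big(I - K\big)^{-1} = \sum_{n \ge 0} K^{n}.
```

The Neumann-series expansion on the right is what links average correlations to sums over connectivity motifs (chains, convergent and divergent pairs, and so on).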

Lecture 2

We will turn to joint or global descriptions of network dynamics, such as the dimension and the covariance eigenvalue spectrum, which can only be observed given simultaneous recordings of the network. Under the same linear response framework, we will study how random connectivity and second-order motifs affect these joint descriptions using random matrix theory.
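A commonly used summary of such joint descriptions (again stated generically, not as the lectures' definition) is the participation-ratio dimension of the neural covariance matrix C with eigenvalues \lambda_i, which random-matrix calculations can relate to the statistics of the connectivity:

```latex
D_{\mathrm{PR}} \;=\; \frac{\big(\sum_i \lambda_i\big)^{2}}{\sum_i \lambda_i^{2}}
\;=\; \frac{(\operatorname{tr} C)^{2}}{\operatorname{tr}(C^{2})} .
```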


· Research Talks

Douglas Zhou, Shanghai Jiao Tong University

Title: Causal connectivity measures for spiking neural network reconstruction

Abstract: Grasping causal connectivity within a network is essential for uncovering its functional dynamics. However, the inferred causal connections are fundamentally influenced by the choice of causality measure, which may not always correspond to the network's actual structural connectivity. The relationship between causal and structural connectivity, particularly how different causality measures influence the inferred causal links, warrants further exploration. In this talk, we first theoretically establish a mapping between structural and causal connectivity based on voltage signals from simulated Hodgkin-Huxley and integrate-and-fire neuronal networks. To extend our results to practical cases, we then examine nonlinear networks with pulse signal outputs, such as spiking neural networks, employing four prevalent causality measures: time-delayed correlation coefficient, time-delayed mutual information, Granger causality, and transfer entropy in the context of pairwise treatment. We conduct a theoretical analysis to elucidate the interconnections among these measures when applied to pulse signals. Through case studies involving both a simulated Hodgkin-Huxley network and an empirical mouse brain network, we validate the quantitative relationships between these causality measures. Our findings demonstrate a strong correspondence between the causal connectivity derived from these measures and the actual structural connectivity, thus establishing a direct link between them. We underscore that structural connectivity in networks with pulse outputs can be reconstructed pairwise, circumventing the need for global information from all network nodes and effectively avoiding the curse of dimensionality. Our approach provides a robust and practical methodology for reconstructing networks based on pulse outputs, offering significant implications for understanding and mapping neural circuitry.
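As a small illustration of the pairwise treatment described above, the toy sketch below (my own, not the speaker's code; the helper name and all parameters are hypothetical) computes the time-delayed correlation coefficient, the simplest of the four causality measures, between two binned spike trains.

```python
# Time-delayed correlation coefficient between two binned spike trains.
import numpy as np

def delayed_corrcoef(x, y, max_lag):
    """Correlation of x(t) with y(t + lag), for lag = 0 .. max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    coeffs = []
    for lag in range(max_lag + 1):
        xs, ys = x[:len(x) - lag], y[lag:]
        coeffs.append(np.dot(xs, ys) / (len(xs) * xs.std() * ys.std() + 1e-12))
    return np.array(coeffs)

# y is a noisy copy of x delayed by 3 bins, so the coefficient peaks near lag = 3
rng = np.random.default_rng(0)
x = (rng.random(10_000) < 0.05).astype(int)
y = np.roll(x, 3) | (rng.random(10_000) < 0.01)
print("estimated delay:", np.argmax(np.abs(delayed_corrcoef(x, y, 10))))
```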


Nicolas Torres, University of Granada

Title: Analysis of an elapsed time model with discrete and distributed delays. New insights and theory

Abstract: The elapsed time equation is an age-structured model that describes the dynamics of interconnected spiking neurons through the time elapsed since their last discharge, leading to many interesting questions on the evolution of the system from a mathematical and biological point of view. In this talk, we first deal with the case when transmission after a spike is instantaneous, and with the case when there is a distributed delay that depends on the previous history of the system, which is a more realistic assumption. We then study the well-posedness and the numerical analysis of these elapsed time models. For existence and uniqueness we improve on previous works by relaxing some hypotheses on the nonlinearity, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the nonlinear problem. We show some numerical simulations to compare the behavior of the system under different parameters, leading to solutions with different asymptotic profiles. Moreover, we present some new perspectives that are of interest for determining the asymptotic behavior of the system.
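For readers who want to experiment, here is a minimal numerical sketch of the explicit upwind scheme for the elapsed-time equation under instantaneous transmission and a simple threshold firing rate; it is an illustration with arbitrary parameters, not the speaker's code.

```python
# Explicit upwind discretization of  dn/dt + dn/ds + p(s, N(t)) n = 0,
# with inflow boundary condition n(0, t) = N(t).
import numpy as np

S, ns, T = 4.0, 400, 20.0
ds = S / ns
dt = 0.5 * ds                       # respects the CFL condition dt <= ds
s = (np.arange(ns) + 0.5) * ds      # grid of elapsed times (cell centers)

n = np.exp(-s)
n /= n.sum() * ds                   # initial age density, normalized

def firing_rate(s, N, s0=0.5):
    # Hypothetical hazard: neurons may fire once the elapsed time exceeds s0.
    # N is unused in this linear example; a nonlinear model would let s0 depend on N.
    return (s > s0).astype(float)

N = (firing_rate(s, 0.0) * n).sum() * ds
for _ in range(int(T / dt)):
    p = firing_rate(s, N)
    upwind = np.diff(np.concatenate(([N], n)))   # n_j - n_{j-1}, with n(0,t) = N(t)
    n = n - (dt / ds) * upwind - dt * p * n
    N = (p * n).sum() * ds                       # firing rate = flux of resets to s = 0

print(f"approximate stationary firing rate: {N:.4f}")
```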


Songting Li, Shanghai Jiao Tong University

Title: Mathematical analysis of the structure and dynamics of large-scale cortical networks

Abstract: The cortex possesses complex structural and dynamical features. In this lecture, I will introduce our recent results on the mathematical modeling and analysis of the cortical network structure and dynamics. By quantifying the structural diversity of a cortical network using Shannon’s entropy, I will first show that the structure of the cortical network follows the maximum entropy principle under the constraints of limited wiring material and spatial embedding, which may serve as a universal principle across multiple species. In addition, based on the connectome data of the macaque monkey cortex, a large-scale network model is developed that can capture a prominent dynamical phenomenon known as time-scale localization, i.e., different cortical areas exhibit different intrinsic timescales. Finally, I will explain the underlying mechanism by performing the asymptotic analysis of the network model, and discuss some related biological interpretations.
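For reference, the structural diversity referred to above is measured with Shannon's entropy, which for a discrete distribution {p_i} over connectivity configurations reads

```latex
H = -\sum_i p_i \log p_i ,
```

and the maximum entropy principle in the abstract states that the observed cortical connectivity maximizes H subject to constraints such as limited wiring material and spatial embedding.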


Zhuocheng Xiao, NYU Shanghai

Title: Minimizing information loss reduces spiking neuronal networks to differential equations

Abstract: Spiking neuronal networks (SNNs) are widely used in computational neuroscience, from biologically realistic modeling of regional networks in cortex to phenomenological modeling of the whole brain. Despite their prevalence, a systematic mathematical theory for finite-sized SNNs remains elusive, even for homogeneous networks commonly used to represent local neural circuits. The primary challenges are twofold: 1) the rich, parameter-sensitive SNN dynamics, and 2) the singularity and irreversibility of spikes. These challenges pose significant difficulties when relating SNNs to systems of differential equations, leading previous studies to impose additional assumptions or to focus on individual dynamic regimes. In this study, we introduce a Markov approximation of homogeneous SNN dynamics to minimize information loss when translating SNNs into ordinary differential equations. Our only assumption for the Markov approximation is the fast self-decorrelation of synaptic conductances. The ordinary differential equation system derived from the Markov model (termed "dsODE") effectively captures high-frequency partial synchrony and the metastability of finite-neuron networks produced by interacting excitatory and inhibitory populations. Besides accurately predicting dynamical statistics, such as firing rates, our theory also quantitatively captures the geometry of attractors and bifurcation structures under varying parameters. Thus, our work provides a comprehensive mathematical framework that can systematically map parameters of single-neuronal physiology, network coupling, and external stimuli to homogeneous SNN dynamics.


Alejandro Ramos-Lora, University of Granada

Title: Microscopic and mesoscopic descriptions of the NNLIF model. Blow-up and global behavior

Abstract: The Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models describe the activity of neural networks. These models have been studied at the microscopic level, using Stochastic Differential Equations, and at the mesoscopic level, through mean-field limits leading to Fokker-Planck type equations. In this talk, I will present numerical and analytical results addressing two important questions about this model: what happens after the blow-up, and what is the global long-time behavior of the system? To answer the first question, we have performed numerical simulations on the behavior of the classical and physical solutions to the Stochastic Differential Equations. To answer the second question, we present a new discrete system whose analytical study allows us to predict the behavior of the nonlinear system with large delay. In addition, we provide an analytical proof that the nonlinear system follows the behavior of the discrete system for loosely connected networks, together with extensive numerical evidence that supports the connection between the discrete and nonlinear systems in all cases.


· Student Presentations

豆旭桉, Peking University

袁昕, Peking University

王若禹, Peking University

杨尚霖, University of California, Irvine

张瑞霖, Peking University

鲍萧戈, Fudan University

王明璋, Shanghai Jiao Tong University