Organizers
Jose A. Carrillo, University of Oxford
Louis Tao, Peking University
Zhennan Zhou, Westlake University
Time and Venue
August 25 – August 29
E10-405, Yungu Campus, Westlake University
Schedule
| Time | Aug 25 Monday | Aug 26 Tuesday | Aug 27 Wednesday | Aug 28 Thursday | Aug 29 Friday |
| --- | --- | --- | --- | --- | --- |
| 9:00–9:15 | Opening | | | 9:00–9:50 Nicolas Torres | Free discussion |
| 9:15–10:15 | Douglas Zhou | Daniele Avitabile | Zhennan Zhou | 10:00–10:50 Datong Zhou | |
| 10:15–10:30 | Group photo | Break | Break | | |
| 10:30–11:30 | Douglas Zhou | Daniele Avitabile | Zhennan Zhou | 11:00–11:50 Nicolas Zadeh | |
| 11:30–14:00 | Poster Flash | Lunch Break | Poster Awards Announcement, Lunch Break | Lunch Break | Lunch Break |
| 14:00–15:00 | Douglas Zhou | Daniele Avitabile | Yu Hu | Free discussion | |
| 15:00–15:15 | Break | Break | Break | | |
| 15:15–16:15 | Charlotte Dion-Blanc | Valentin Schmutz | Songting Li | | |
| 16:15–16:30 | Break | Break | Break | | |
| 16:30–17:30 | Charlotte Dion-Blanc | Valentin Schmutz | Zhuocheng Xiao | | |
| 17:30–18:30 | | | | Dinner | |
| 18:30 | Happy Hour at Hotel Lounge | | | | |
5 Main Lectures:
Douglas Zhou (Shanghai Jiao Tong University)
Mathematical Modeling, Theoretical Analysis, and Applications of Spatial Neurons with Dendritic Structures
Understanding how neurons integrate spatially distributed synaptic inputs through their complex dendritic structures is fundamental to deciphering the principles of brain information processing. In this lecture, we develop a mathematical framework for spatial neurons with dendritic morphologies, combining geometric modeling, cable theory, and asymptotic analysis to quantitatively describe dendritic integration. Using both simplified "ball-and-stick" and multi-compartment models, we derive a unified bilinear dendritic integration rule and characterize the spatial dependence of the integration coefficients. Our theoretical predictions closely match electrophysiological experiments, validating the accuracy of our models. Furthermore, we propose efficient numerical algorithms and biologically interpretable reduced neuron models (e.g., DIF and DHH), which preserve key dendritic computational capabilities such as direction selectivity and coincidence detection. As applications of our mathematical framework, we further construct a reduced point neuron model and a biologically inspired Dendritic Bilinear Neural Network (DBNN), both of which faithfully capture the subthreshold voltage responses and spike outputs of realistic neurons and demonstrate strong performance on spatiotemporal classification tasks. In addition, we propose a two-step clamp method and an intercept method to precisely estimate local and effective synaptic conductances, respectively. Our work bridges biophysical modeling and neural computation, providing insights into dendritic function, guiding experimental design, and enabling a principled mapping between biological and artificial neurons.
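For concreteness, the bilinear rule mentioned above can be stated schematically as follows (a hedged sketch in my own notation; the lecture derives the precise form and the spatial dependence of the coefficient):

```latex
% Schematic bilinear dendritic integration rule (notation illustrative):
% V_E(t), V_I(t): somatic responses to an excitatory / inhibitory input alone.
V_{EI}(t) \;\approx\; V_E(t) + V_I(t) + k(t)\, V_E(t)\, V_I(t)
% The shunting coefficient k(t) depends on the dendritic input locations.
```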
Daniele Avitabile (Vrije Universiteit Amsterdam)
Numerical Analysis and Simulation of Neural Field Equations
In these lectures, we explore both the theoretical foundations and practical implementation of numerical simulations for neural field equations defined on arbitrary cortical geometries. We introduce projection schemes, including collocation and Galerkin approaches, and discuss general theoretical frameworks for analyzing the convergence properties of Finite Element and Spectral method variants. The course will culminate in a hands-on session where participants will simulate neural fields on one-dimensional and two-dimensional cortices. Although the tutorial is language agnostic, Matlab, Python, and Julia are recommended for their ease of use given the short duration of the lecture. Time permitting, we will also examine models with stochastic inputs and delve into forward uncertainty quantification in this context.
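As a taste of the hands-on session, the following minimal sketch (in Python, one of the recommended languages; the kernel, nonlinearity, and parameters are placeholder choices of mine, not the course's) integrates a 1D Amari-type neural field on a periodic cortex with Fourier collocation:

```python
import numpy as np

# Minimal spectral (Fourier collocation) sketch of a 1D Amari-type neural field
#   du/dt = -u + (w * f(u))  on a periodic domain,
# with a Mexican-hat kernel w and a sigmoidal firing rate f.
# Illustrative only; all parameters are placeholder choices.

L, N, dt, T = 20 * np.pi, 512, 0.05, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

w = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4)       # Mexican-hat kernel
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx      # kernel symbol (dx = quadrature weight)

f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * (u - 0.3)))  # firing-rate nonlinearity

u = 0.5 * np.exp(-x**2)                           # localized initial condition
for _ in range(int(T / dt)):
    conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))  # w * f(u) via FFT
    u += dt * (-u + conv)                         # forward Euler step

print("max activity:", u.max())
```

The FFT evaluates the nonlocal convolution at the collocation points in O(N log N) operations; Galerkin variants instead project f(u) onto the chosen trial space.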
Charlotte Dion-Blanc (Sorbonne Université)
Hawkes Processes for Neuroscience
In neuroscience, spikes (action potentials) are fundamental components of information processing over time. Biologists can simultaneously record the activity of multiple neurons, producing several spike trains.
These data can initially be modeled using a homogeneous Poisson process. However, this stochastic process, characterized by a constant intensity, fails to capture temporal dependencies between spikes and does not account for the non-stationarity of neuronal activity.
The Hawkes process generalizes the Poisson process by defining a point process with a stochastic intensity function. At any time, the intensity depends on the entire history of the process up to that point. Moreover, the shape of the intensity function is governed by an interaction function, which quantifies the strength and duration of the temporal dependencies.
Hawkes processes are particularly well-suited for modeling large neural networks comprising M neurons, especially in the high-dimensional setting where M goes to infinity. In this case, the model involves M×M interaction functions that characterize the connections between neurons in the network.
Thus, Hawkes processes offer a powerful framework for modeling event-based data. They provide a tractable and flexible modeling approach for understanding neuronal connectivity.
In this talk, I will first introduce the Hawkes process, following a brief review of Poisson processes. I will then present both parametric and nonparametric estimation procedures in high-dimensional settings. The main objective of these methods is to construct connectivity graphs that accurately reflect the underlying biological processes.
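To make the modeling object concrete, here is a minimal sketch (my illustration, with placeholder parameters) that simulates a univariate Hawkes process with an exponential interaction function via Ogata's thinning algorithm; the multivariate case for M neurons replaces the single kernel by M×M interaction functions:

```python
import numpy as np

# Minimal sketch: simulate a univariate Hawkes process by Ogata's thinning.
# Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
# Parameters are illustrative (alpha/beta < 1 ensures stationarity).

rng = np.random.default_rng(0)
mu, alpha, beta, T = 0.5, 0.8, 1.2, 100.0

events, t = [], 0.0
while t < T:
    # Intensity is non-increasing between events, so its current value
    # upper-bounds it until the next candidate point.
    lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
    t += rng.exponential(1.0 / lam_bar)           # candidate point
    if t >= T:
        break
    lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
    if rng.uniform() <= lam_t / lam_bar:          # accept with prob lambda(t)/lam_bar
        events.append(t)

print(f"{len(events)} spikes; empirical rate {len(events) / T:.3f} "
      f"vs stationary rate {mu / (1 - alpha / beta):.3f}")
```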
Valentin Schmutz (University College London)
Concentration of measure in biological neural networks I: From spikes to rates
Mean-field limits of large networks of neurons have provided us with a mathematical understanding of many emergent behaviours in biological neural networks. These limits can be seen as special cases of the concentration of measure phenomenon. The goal of this two-part lecture is to show that concentration of measure can explain new emergent behaviours in large networks that do not satisfy standard mean-field assumptions.
To illustrate this idea, I will consider a simple two-layer neural network model where neurons in the first layer are modelled as nonlinear Hawkes processes. In the first part, I will show that this stochastic spiking network can behave like a deterministic rate-based system, even when pairwise firing rate correlations in the first layer are vanishing.
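Schematically, the first-layer dynamics can be written as follows (notation mine, as a hedged sketch of a nonlinear Hawkes network, not the paper's exact formulation):

```latex
% Schematic nonlinear Hawkes intensity for neuron i (notation illustrative):
\lambda_i(t) \;=\; f\!\Bigl(\,\sum_{j} w_{ij} \int_0^{t^-} h(t-s)\,\mathrm{d}N_j(s)\Bigr)
% N_j: presynaptic spike trains; h: synaptic kernel; w_{ij}: weights;
% f: a positive nonlinearity.
```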
Reference: S., Brea, Gerstner (2025) Emergent rate-based dynamics in duplicate-free populations of spiking neurons.
Concentration of measure in biological neural networks II: From spikes to potentials
In the second part, I will prove a much less expected result. In the family of limits considered in the first part (which are not classical mean-field limits), large networks not only behave like deterministic systems but also enter a linear signal transmission regime in which each spiking neuron appears to transmit its subthreshold membrane potential fluctuations perfectly. This mathematical result suggests a probabilistic mechanism through which populations of neurons could use sparse spikes to reliably transmit time-continuous, high-dimensional signals.
Reference: S. (2025) Spikes can transmit neurons’ subthreshold membrane potentials.
Zhennan Zhou (Westlake University)
From Spikes to PDEs: A Multiscale Journey into the Mathematics of Neural Networks
This tutorial explores the fascinating link between the microscopic world of individual spiking neurons and the macroscopic dynamics of large-scale neural populations. We will embark on a multiscale modeling journey, demonstrating how various nonlinear partial differential equation (PDE) models can be rigorously derived from underlying agent-based systems. The focus will be on understanding how collective phenomena, such as network oscillations and spatial patterns, emerge from simple neuronal interactions.
We will begin with a detailed analysis of the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, a cornerstone Fokker-Planck equation in computational neuroscience. We will discuss its derivation, key analytical results concerning network synchrony and stability, and explore biologically relevant extensions such as refractory periods and learning rules. To broaden our perspective, we will then examine two contrasting models: a stochastic neural field model used to explain the hexagonal firing patterns of grid cells, and a jump-based LIF model that results in a hyperbolic transport equation. Finally, we will introduce the kinetic Voltage-Conductance model, highlighting the mathematical challenges and richer dynamics that arise from its degenerate diffusion structure. The tutorial will conclude by discussing the limitations of the mean-field approach, current research trends, and the many open questions that make this a vibrant field of study.
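To make the agent-based starting point concrete, the following minimal Monte Carlo sketch (my illustration; all parameters are placeholders) simulates the particle system behind the NNLIF equation: leaky integrate-and-fire neurons coupled through their empirical population firing rate, with threshold-and-reset dynamics:

```python
import numpy as np

# Minimal agent-based sketch behind the NNLIF mean-field model: N leaky
# integrate-and-fire neurons, dV = (-V + b * r(t)) dt + sigma dW, reset
# V_F -> V_R; r(t) is the population firing rate estimated per time step.
# All parameter values are illustrative.

rng = np.random.default_rng(1)
N, dt, T = 10_000, 1e-3, 2.0
VF, VR, b, sigma = 2.0, 1.0, 0.5, 1.0    # threshold, reset, coupling, noise

V = rng.normal(0.0, 0.5, N)
rate = 0.0
for _ in range(int(T / dt)):
    drift = -V + b * rate                # nonlinear coupling via firing rate
    V += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    spiking = V >= VF
    rate = spiking.sum() / (N * dt)      # empirical rate feeds back next step
    V[spiking] = VR                      # instantaneous reset

print(f"steady-state population firing rate ≈ {rate:.2f}")
```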
6 Presentations:
Yu Hu (Hong Kong University of Science and Technology)
How recurrent network interactions shape the geometry of neuron population activity
Recent advances in the simultaneous recording of large numbers of neurons have driven significant interest in understanding the multi-dimensional structures of neural population activity. A key question is how these dynamic features arise mechanistically and how they relate to circuit connectivity. We propose the covariance eigenvalue distribution, or spectrum, as a robust measure that characterizes the geometry of neural population activity beyond its dimensionality (Hu & Sompolinsky 2022). Under linear dynamics, analytical results can be obtained that explicitly demonstrate how connectivity statistics, including motifs, affect the spectrum. In particular, the spectrum exhibits a long tail of large eigenvalues, which quantitatively aligns with the low-dimensional activity observed in various experimental datasets and provides an estimate of effective recurrent connection strength. We will also discuss recent extensions of this theory to nonlinear dynamics, which show how the dimension and spectrum evolve across the transition to chaos and offer additional theoretical support and insights for applying covariance spectrum analysis to data.
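As a minimal numerical illustration of the linear theory (my sketch, with placeholder parameters): for white-noise-driven linear dynamics with random connectivity W, the long-time-window covariance takes the form C = (I - W)^{-1} (I - W)^{-T} up to a constant, and its eigenvalue spectrum already displays the long tail and low effective dimension described above:

```python
import numpy as np

# Minimal sketch: long-time-window covariance spectrum of a linear recurrent
# network driven by unit white noise, C = (I - W)^{-1} (I - W)^{-T}, for a
# random Gaussian connectivity with strength g < 1. Parameters illustrative.

rng = np.random.default_rng(2)
N, g = 1000, 0.8                          # network size, recurrent strength
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

A = np.linalg.inv(np.eye(N) - W)
C = A @ A.T                               # long-time-window covariance
eigs = np.sort(np.linalg.eigvalsh(C))[::-1]

# Long tail of large eigenvalues -> low effective dimensionality
d_eff = eigs.sum() ** 2 / (eigs ** 2).sum()   # participation-ratio dimension
print(f"largest eigenvalue {eigs[0]:.1f}, effective dimension {d_eff:.1f} of {N}")
```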
Songting Li (Shanghai Jiao Tong University)
Signal integration and propagation in the large-scale cortical network
In the brain, while early sensory areas encode and process external inputs rapidly, higher-association areas are endowed with slow dynamics that benefit hierarchical signal integration over time. This property raises the question of why diverse timescales are well localized rather than mixed up across the cortex, despite high connection density and an abundance of feedback loops that support reliable signal propagation. In this talk, we address this question by developing and analyzing a large-scale network model of the primate cortex, and we identify a novel dynamical regime termed "interference-free propagation". In this regime, the mean components of the synaptic currents to each downstream area are imbalanced so that signals propagate reliably, while the temporally fluctuating components governed by upstream areas' timescales are largely canceled out, so that each downstream area retains its own localized timescale. We further extend the model to nonlinear spiking neural networks and verify our theory with experimental data. Our results provide new insight into the operational regime of the cortex, in which timescale localization for hierarchical signal integration coexists with reliable signal propagation.
Zhuocheng Xiao (New York University Shanghai)
Multiscale cortical modeling efficiently infers unknown brain architecture and parameters
Modeling the human cortex is challenging due to its structural and dynamical complexity. Spiking neuron models can incorporate many details of cortical circuits, but they are computationally costly and difficult to scale up, limiting their scope to small patches of cortex and restricting the range of phenomena that can be studied. Alternatively, reduced, phenomenological models are easier to construct and simulate, but there is no free lunch: the more simplified, the less directly a model relates to experimental data.
This talk presents a multiscale modeling strategy aimed at striking a balance between biological realism and computational efficiency, enabling effective inference of unknown cortical architectures and physiological parameters. By exploiting structural modularity and dynamical similarities inherent to cortical organization, our approach integrates local circuit dynamics with a coarse-grained global representation of brain areas. Specifically, we precompute a library of local neural circuit responses driven by various lateral and external inputs. This library then facilitates rapid evolution toward large-scale cortical steady states via efficient lookup methods.
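The precompute-then-look-up pattern can be caricatured in a few lines; everything below is a hedged stand-in (the local_response surrogate, grids, and coupling matrix are invented for illustration and are unrelated to the actual V1 model):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hedged sketch of the precompute-then-look-up idea: tabulate a local
# circuit's steady firing rate over a grid of (lateral, external) inputs,
# then iterate coupled areas to a large-scale fixed point via interpolation.
# local_response is a placeholder surrogate, not an actual circuit model.

def local_response(lat, ext):
    return np.tanh(0.8 * lat + ext)      # stand-in for a costly simulation

lat_grid = np.linspace(0, 2, 41)
ext_grid = np.linspace(0, 2, 41)
table = local_response(lat_grid[:, None], ext_grid[None, :])  # precomputed library
lookup = RegularGridInterpolator((lat_grid, ext_grid), table)

K = 0.3 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # inter-area coupling
ext = np.clip(np.array([1.0, 0.2, 0.1]), 0, 2)         # external drive per area
r = np.zeros(3)
for _ in range(200):                     # fixed-point iteration to steady state
    lat = np.clip(K @ r, 0, 2)
    r = lookup(np.stack([lat, ext], axis=-1))

print("steady-state rates:", np.round(r, 3))
```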
We demonstrate the utility and robustness of this method by modeling an input layer of the primate primary visual cortex (V1). The computational advantages allow our model to scale readily to extensive cortical areas, facilitating efficient inference of anatomical connectivity and physiological interactions underlying binocular visual integration—features previously overlooked by other studies and now testable through modern experimental neuroscience techniques. The fidelity of our inference is ensured by the model's accurate reproduction of essential V1 functionalities, including orientation selectivity, contrast sensitivity, and temporal and spatial frequency selectivity.
Nicolas Torres (Université Côte d'Azur)
On the local stability of the elapsed-time model in terms of the transmission delay and interconnection strength
The elapsed-time model describes the behavior of interconnected neurons through the time elapsed since their last spike. It is an age-structured nonlinear equation, in which age corresponds to the time elapsed since the last discharge, and it captures many interesting dynamics depending on the type of interaction between neurons. We investigate the linearized stability of this equation under a discrete delay, which accounts for a possible synaptic delay due to the time needed to transmit a nerve impulse from one neuron to the rest of the ensemble.
We state a stability criterion that determines whether a steady state is linearly stable or unstable, depending on the delay and the interaction between neurons. Our approach relies on studying the asymptotic behavior of related Volterra-type integral equations in terms of their Laplace transforms. The analysis is complemented with numerical simulations illustrating the change of stability of a steady state in terms of the delay and the intensity of the interconnections.
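For reference, a schematic statement of the delayed elapsed-time model under discussion (notation mine, following Pakdaman-Perthame-Salort-type formulations):

```latex
% Elapsed-time model with discrete transmission delay d (schematic):
\partial_t n(t,a) + \partial_a n(t,a) + p\bigl(N(t-d),a\bigr)\,n(t,a) = 0,
\qquad
N(t) \;=\; n(t,0) \;=\; \int_0^{\infty} p\bigl(N(t-d),a\bigr)\,n(t,a)\,\mathrm{d}a
% a: time elapsed since the last discharge; N(t): population activity;
% p: firing rate, through which the interconnection strength enters.
```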
Datong Zhou (Sorbonne Université)
Coupling and Tensorization of Kinetic Theory and Graph Theory
We study a non-exchangeable multi-agent system and rigorously derive a strong form of the mean-field limit. The convergence of the connection weights and the initial data implies convergence of large-scale dynamics toward a deterministic limit given by the corresponding extended Vlasov PDE, at any later time and any realization of randomness. This is established on what we call a bi-coupling distance defined through a convex optimization problem, which is an interpolation of the optimal transport between measures and the fractional overlay between graphs. The proof relies on a quantitative stability estimate of the so-called observables, which are tensorizations of agent laws and graph homomorphism densities. This reveals a profound relationship between mean-field theory and graph limiting theory, intersecting in the study of non-exchangeable systems.
Nicolas Zadeh (Université Libre de Bruxelles)
Existence and uniqueness of weak solutions for a class of kinetic Fokker-Planck equations related to neuroscience
Models of neuronal electrical activity, such as the integrate-and-fire [Lapicque, 1907] and resonate-and-fire [Izhikevich, 2001] models, describe a spiking mechanism triggered upon reaching a threshold, followed by a reset. The mean-field description of networks of such neurons leads to nonlinear Fokker-Planck equations posed on a half-space, featuring a measure-valued source term that accounts for the reset. The theory of strong solutions for these equations has seen significant developments since [Cáceres, Carrillo, Perthame, 2011], with ongoing progress (see, e.g., [Dou, Zhou, 2025]).
In this work, we focus on the corresponding kinetic equations—specifically, kinetic Kramers/Fokker-Planck-Kolmogorov equations in half-space domains with measure right-hand sides. We explain the analytical challenges associated with the existence and uniqueness of weak solutions, relate our approach to the state of the art in the theory of similar equations in physics (notably [Carrillo, 1998]; [Albritton, Armstrong, Mourrat, Novack, 2024]), and connect our framework to classical results for equations with measure data ([Stampacchia, 1963]; [Boccardo, Gallouët, 1989]; [Malusa, Porzio, 2007]).
We also incorporate into our methods recent advances on regularity and embedding theorems in kinetic theory (see [Bramanti, Cupini, Lanconelli, Priola, 2023]; [Pascucci, Pesce, 2024]), defining and constructing suitable weak solutions in this context.
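As a schematic prototype of the class of equations described above (my hedged reading; the notation is illustrative, not the talk's exact setting):

```latex
% Kinetic Kramers / Fokker-Planck equation on a half-space with measure data
% (schematic prototype; notation illustrative):
\partial_t f + v\,\partial_x f \;=\; \partial_v\!\bigl(v\,f\bigr) + \partial_{vv} f + \mu,
\qquad (t,x,v) \in (0,T)\times(-\infty,V_F)\times\mathbb{R}
% f: phase-space density; V_F: firing threshold; \mu: measure-valued
% right-hand side encoding the reset.
```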