
Workshop on Stochastic Modelling and Machine Learning

2023-08-01 08:44:01

Time: August 15, 2023, 9:30-15:00

Venue: E4-233, Yungu Campus, Westlake University


9:30-10:30

Speaker: Simone Scotti, University of Pisa

Title: Parsimonious SPX and VIX calibration using (rough) stochastic volatility with jump clusters

Abstract: This presentation is based on three different papers that share the idea of reproducing SPX and VIX implied volatilities, together with many empirical facts, in a parsimonious way. The three models are exponential affine, so the Fourier-Laplace transforms of the log returns and of the squared volatility index can be computed explicitly in terms of solutions of deterministic Riccati (Volterra) equations.
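For readers unfamiliar with this framework, here is a generic sketch of the affine structure; the functions R and the kernel K are illustrative placeholders, not the specific dynamics of the three models.

```latex
% Classical exponential-affine case (illustrative notation): the Fourier-Laplace
% transform of the log return X_T is affine in the initial variance V_0,
% with \phi, \psi solving deterministic Riccati ODEs.
\mathbb{E}\!\left[ e^{u X_T} \right]
  = \exp\!\big( \phi(T) + \psi(T)\, V_0 \big),
  \qquad \psi'(t) = R\big(u, \psi(t)\big), \quad \psi(0) = 0 .

% In the (rough) Volterra case, the Riccati ODE is replaced by a deterministic
% Riccati-Volterra convolution equation with a kernel K:
\psi(t) = \int_0^t K(t-s)\, R\big(u, \psi(s)\big)\, ds .
```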

The first model is driven by a branching alpha-stable process and therefore also reproduces the empirical findings of Todorov and Tauchen on the infinite activity of jumps.

The other two models are based on extensions of Hawkes processes, since the intensity of the jumps coincides with the volatility process itself. In the last case, the volatility is rough.

We calibrate a parsimonious specification of our model characterized by a power kernel and an exponential law for the jumps. We show that this parsimonious setup is able to simultaneously capture, with high precision, the behavior of the implied volatility smile for both S&P 500 and VIX options. In particular, we observe that in our setting the usual shift in the implied volatility of VIX options is explained by a very low value of the power in the kernel. Our findings demonstrate the relevance, within an affine framework, of rough volatility and self-exciting jumps for capturing the joint evolution of the S&P 500 and the VIX.
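A minimal simulation sketch of a self-exciting jump volatility of this flavour is given below; the discretisation scheme, parameter values, and the exact way jumps feed back into the variance are assumptions made for illustration, not the calibrated specification of the papers.

```python
import numpy as np

# Minimal discrete-time sketch of a self-exciting jump volatility: the jump
# intensity equals the variance process itself, jump sizes are exponential,
# and past jumps feed back into the variance through a power-law kernel
# K(t) = t^(alpha - 1). All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

T, n = 1.0, 2000
dt = T / n
alpha = 0.6          # kernel power (alpha < 1 gives a "rough" kernel)
v0 = 0.04            # initial / baseline variance
jump_mean = 0.02     # mean of the exponential jump sizes

times = dt * np.arange(1, n + 1)
kernel = times ** (alpha - 1.0)

jump_sizes = np.zeros(n)
v = np.zeros(n)
v[0] = v0

for i in range(1, n):
    # variance = baseline + kernel-weighted sum of past jump sizes
    v[i] = v0 + np.dot(kernel[:i][::-1], jump_sizes[:i])
    # self-excitation: the probability of a jump over [t, t + dt] is v[i] * dt
    if rng.random() < v[i] * dt:
        jump_sizes[i] = rng.exponential(jump_mean)

print(f"number of jumps: {np.count_nonzero(jump_sizes)}, terminal variance: {v[-1]:.4f}")
```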


10:45-11:45

Speaker: Yiliu Wang, Westlake University

Title: Two problems in relational learning

Abstract: Relational learning is learning in a setting where we have a set of items together with relationships among them. In this talk, I will give an overview of the topic, introduce the framework, and discuss two specific problems. I will first talk about a simple model for group outcomes, the beta model for hypergraphs. The model treats relational data as hypergraphs, where nodes represent items and hyperedges group items into sets. It is motivated by real-life examples such as crowdsourcing platforms, recommender systems, and knowledge graphs. We study the problem in the statistical inference setting under maximum likelihood estimation, where the goal is to estimate individual item values from the group outcomes. We will discuss parameter existence, uniqueness, and estimation error under different experimental settings.
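As an illustration of this kind of estimation problem, here is a minimal sketch in which each group outcome is Bernoulli with success probability given by a logistic function of the sum of the item values; this logistic specification and all parameter values are assumptions for illustration, not necessarily the exact model studied in the talk.

```python
import numpy as np

# Illustrative maximum-likelihood estimation of individual item values from
# binary group outcomes: each item i has a value beta_i, and a group (hyperedge)
# succeeds with probability sigmoid(sum of beta_i over its members).
# This logistic specification is an assumption made only for illustration.
rng = np.random.default_rng(1)

n_items, n_groups, group_size = 20, 500, 3
beta_true = rng.normal(0.0, 1.0, n_items)

groups = [rng.choice(n_items, size=group_size, replace=False) for _ in range(n_groups)]
probs = 1.0 / (1.0 + np.exp(-np.array([beta_true[g].sum() for g in groups])))
outcomes = (rng.random(n_groups) < probs).astype(float)

# gradient ascent on the log-likelihood
beta_hat = np.zeros(n_items)
lr = 0.05
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.array([beta_hat[g].sum() for g in groups])))
    grad = np.zeros(n_items)
    for g, y, pi in zip(groups, outcomes, p):
        grad[g] += y - pi          # d(log-likelihood)/d(beta_i) for each i in g
    beta_hat += lr * grad / n_groups

print("normalized estimation error:", np.linalg.norm(beta_hat - beta_true) / np.sqrt(n_items))
```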

Then I will talk about an online variant in which an agent draws samples sequentially. At each time step, the agent chooses a group of items subject to constraints and receives some form of feedback. This setting has many real-world examples, such as sensory processing, project portfolio selection, and digital advertising. We formulate the problem within the framework of combinatorial multi-armed bandits, where the goal is to select a set of items with maximum performance while incurring minimum regret. Algorithms and regret upper bounds will be discussed under different feedback assumptions.
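A minimal sketch of a standard combinatorial upper-confidence-bound strategy with semi-bandit feedback is shown below; the feedback model, the constraint (a fixed group size), and the algorithm are standard illustrative choices and may differ from those discussed in the talk.

```python
import numpy as np

# Combinatorial UCB sketch with semi-bandit feedback: at each round the agent
# picks the K items with the highest upper confidence bounds and observes a
# Bernoulli reward for every chosen item. Standard illustrative choices only;
# the talk's algorithms and feedback assumptions may differ.
rng = np.random.default_rng(2)

n_items, K, horizon = 10, 3, 5000
mu = rng.uniform(0.1, 0.9, n_items)          # unknown mean reward of each item
best_value = np.sort(mu)[-K:].sum()          # value of the best group of size K

counts = np.zeros(n_items)
means = np.zeros(n_items)
regret = 0.0

for t in range(1, horizon + 1):
    # optimistic index: empirical mean plus confidence radius (unpulled items first)
    radius = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
    ucb = np.where(counts > 0, means + radius, np.inf)
    chosen = np.argsort(ucb)[-K:]
    rewards = (rng.random(K) < mu[chosen]).astype(float)   # per-item feedback
    counts[chosen] += 1
    means[chosen] += (rewards - means[chosen]) / counts[chosen]
    regret += best_value - mu[chosen].sum()

print(f"cumulative regret after {horizon} rounds: {regret:.1f}")
```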


14:00-15:00

Speaker: Emmanuel Gobet, École Polytechnique

Title: Structured Dictionary Learning of Rating Migration Matrices for Credit Risk Modeling

Abstract: The rating migration matrix is a crucial tool for assessing credit risk. Modeling and predicting these matrices is therefore an issue of great importance for risk managers in any financial institution. As a challenger to the usual parametric modeling approaches, we propose a new structured dictionary learning model with auto-regressive regularization that is able to meet key expectations and constraints: a small amount of data, fast evolution of these matrices in time, and economic interpretability of the calibrated model. To demonstrate the model's applicability, we present a numerical test on real data and a comparison study with the widely used parametric Gaussian copula model: it turns out that our new approach based on dictionary learning significantly outperforms the Gaussian copula model.
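A minimal sketch of plain dictionary learning on vectorised migration matrices (alternating least squares between non-negative codes and dictionary atoms) is given below; the structural constraints on the matrices and the auto-regressive regularization of the proposed model are not reproduced here, and the synthetic data is purely illustrative.

```python
import numpy as np

# Plain dictionary-learning sketch on vectorised migration matrices: each
# observed matrix is approximated as a non-negative combination of a few
# dictionary atoms, learned by alternating least squares. The structural
# constraints (e.g. rows summing to one) and the auto-regressive
# regularization of the actual model are NOT reproduced here.
rng = np.random.default_rng(3)

n_ratings, n_dates, n_atoms = 8, 40, 3
dim = n_ratings * n_ratings

# synthetic data: each date's matrix is a mixture of hidden atoms plus noise
atoms_true = np.abs(rng.normal(size=(n_atoms, dim)))
weights_true = np.abs(rng.normal(size=(n_dates, n_atoms)))
X = weights_true @ atoms_true + 0.01 * rng.normal(size=(n_dates, dim))

D = np.abs(rng.normal(size=(n_atoms, dim)))        # dictionary atoms (rows)
W = np.abs(rng.normal(size=(n_dates, n_atoms)))    # codes (one row per date)

for _ in range(200):
    # coding step: least-squares codes, clipped to remain non-negative
    W = np.clip(X @ D.T @ np.linalg.inv(D @ D.T), 0.0, None)
    # dictionary step: least-squares atoms given the current codes
    D = np.linalg.lstsq(W, X, rcond=None)[0]

print("relative reconstruction error:", np.linalg.norm(X - W @ D) / np.linalg.norm(X))
```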