Anna Korba

About me

Since September 2020, I have been an assistant professor at ENSAE/CREST.

My main line of research is statistical machine learning. My current work focuses on kernel methods, optimal transport and ranking data.

Short bio

From December 2018 to August 2020, I was a postdoctoral researcher at the Gatsby Unit, working with Arthur Gretton. The Gatsby Unit is part of the Centre for Computational Statistics and Machine Learning (CSML) at University College London (UCL).

From October 2015 to October 2018, I was a PhD student at Télécom ParisTech, in the S2A (Signal, Statistics and Learning) team, supervised by Stephan Clémençon.

Before that, in 2015, I graduated from the MVA Master's program (Machine Learning and Computer Vision) at ENS Cachan and obtained the engineering degree of ENSAE. More details can be found in my resume [EN] [FR].

News

  • October 2020: Gave a talk at IHP on our two recent NeurIPS 2020 papers (slides):

    Title: Sampling as optimization of the relative entropy over the space of measures: a non-asymptotic analysis of SVGD and the Forward-Backward scheme.

    Abstract: We consider the problem of sampling from a log-concave probability distribution π ∝ exp(−V) on ℝ^d. This target distribution π can be seen as the minimizer, over the space of probability distributions, of the relative entropy functional with respect to π. A general strategy to minimize a function is to run the gradient flow dynamics. On the space of probability measures, Wasserstein gradient flows define such curves of steepest descent for the objective functional. In this talk, I will discuss recent works [1,2] on two algorithms that result from different time-space discretizations of this gradient flow. The first one [1] provides a novel finite-time analysis for the Stein Variational Gradient Descent (SVGD) algorithm, which optimizes a set of particles to approximate π. It implements a forward discretization of the Wasserstein gradient flow of the relative entropy, where the gradient is smoothed through a kernel integral operator. The second one [2] proposes a Forward-Backward discretization scheme for this gradient flow. Using techniques from convex optimization and optimal transport, we show that it has convergence guarantees similar to those of the proximal gradient algorithm in Euclidean spaces.

    [1] A Non-Asymptotic Analysis of Stein Variational Gradient Descent. A. Korba, A. Salim, M. Arbel, G. Luise, A. Gretton. NeurIPS 2020.
    [2] The Wasserstein Proximal Gradient Algorithm. A. Salim, A. Korba, G. Luise. NeurIPS 2020.
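
    To make the SVGD update analyzed in [1] concrete, here is a minimal NumPy sketch of the particle iteration described in the abstract (a forward step along the kernel-smoothed gradient of the relative entropy). The RBF kernel, fixed bandwidth, step size and Gaussian toy target are illustrative choices only, not those of the paper.

    import numpy as np

    def rbf_kernel(X, h):
        # Pairwise RBF kernel matrix K[j, i] = exp(-||x_j - x_i||^2 / h)
        # and grad_K[j, i, :] = gradient of k(x_j, x_i) with respect to x_j.
        diff = X[:, None, :] - X[None, :, :]
        K = np.exp(-np.sum(diff ** 2, axis=-1) / h)
        grad_K = -2.0 / h * diff * K[:, :, None]
        return K, grad_K

    def svgd_step(X, grad_log_pi, step, h=1.0):
        # One SVGD update:
        # x_i <- x_i + step * (1/n) * sum_j [ k(x_j, x_i) grad log pi(x_j) + grad_{x_j} k(x_j, x_i) ].
        n = X.shape[0]
        K, grad_K = rbf_kernel(X, h)
        scores = grad_log_pi(X)                     # rows are grad log pi(x_j)
        phi = (K.T @ scores + grad_K.sum(axis=0)) / n
        return X + step * phi

    # Toy usage: standard Gaussian target on R^2, so grad log pi(x) = -x.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2)) + 3.0             # particles initialized away from the target
    for _ in range(500):
        X = svgd_step(X, lambda x: -x, step=0.1)
    print(X.mean(axis=0), X.std(axis=0))            # should move toward mean 0 and std close to 1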

  • September 2020: Gave a (pre-recorded) talk at the Second Symposium on Machine Learning and Dynamical Systems at the Fields Institute on our NeurIPS 2019 paper [3] (slides).

    [3] Maximum Mean Discrepancy Gradient Flow. M. Arbel, A. Korba, A. Salim, A. Gretton. NeurIPS 2019.