Pierre Glaser

I am a machine learning PhD student at the Gatsby Computational Neuroscience Unit. More about me here.



Recent Publications


May 2023 - Accepted as a spotlight at UAI 2023! (pdf)

Fast and Scalable Score-Based Kernel Calibration Tests

We introduce the Kernel Calibration Conditional Stein Discrepancy test (KCCSD test), a nonparametric, kernel-based test for assessing the calibration of probabilistic models with well-defined scores. In contrast to previous methods, our test avoids the need for possibly expensive expectation approximations while providing control over its type-I error.

We achieve these improvements by using a new family of kernels for score-based probabilities that can be estimated without samples from the probability density, and by using a conditional goodness-of-fit criterion for the KCCSD test's U-statistic.

The tractability of the KCCSD test opens calibration measures up to promising new use cases, such as regularization during model training. We demonstrate the properties of our test on various synthetic settings.
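To give a flavour of the score-based kernel idea, here is a minimal sketch of the classical Langevin Stein kernel and its U-statistic estimator, which depend on the model only through its score function. This is a generic illustration rather than the exact kernel family introduced in the paper; the RBF base kernel and the function names are my own assumptions.

```python
# Minimal sketch: a score-based (Stein) kernel needs only the model score
# grad_x log p(x), not density evaluations or samples from p. Generic
# illustration, not the paper's kernel family.
import numpy as np


def stein_kernel_matrix(X, scores, sigma=1.0):
    """Langevin Stein kernel k_p(x, y) built on an RBF base kernel.

    X:      (n, d) array of test points.
    scores: (n, d) array of model scores grad_x log p evaluated at X.
    """
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise x - y
    sqdist = np.sum(diff ** 2, axis=-1)           # (n, n)
    K = np.exp(-sqdist / (2 * sigma ** 2))        # base RBF kernel

    grad_x_K = -diff / sigma ** 2 * K[..., None]  # d/dx k(x, y)
    grad_y_K = -grad_x_K                          # d/dy k(x, y)
    trace_term = K * (d / sigma ** 2 - sqdist / sigma ** 4)

    term1 = (scores[:, None, :] * scores[None, :, :]).sum(-1) * K
    term2 = (scores[:, None, :] * grad_y_K).sum(-1)
    term3 = (grad_x_K * scores[None, :, :]).sum(-1)
    return term1 + term2 + term3 + trace_term


def ksd_u_statistic(X, scores, sigma=1.0):
    """U-statistic estimate of the kernel Stein discrepancy."""
    n = X.shape[0]
    H = stein_kernel_matrix(X, scores, sigma)
    return (H.sum() - np.trace(H)) / (n * (n - 1))


# Example: samples from N(0, I) tested against the standard normal model,
# whose score is simply -x, so the statistic should be close to zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
print(ksd_u_statistic(X, -X))
```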



October 2022 - Under Review (ArXiv)

Learning Unnormalized Models for Simulation-Based Inference

We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available. Both methods learn a conditional energy-based model (EBM) of the likelihood using synthetic data generated by the simulator, conditioned on parameters drawn from a proposal distribution. The learned likelihood can then be combined with any prior to obtain a posterior estimate, from which samples can be drawn using MCMC.

Our methods uniquely combine a flexible energy-based model with the minimization of a KL loss. This is in contrast to other synthetic likelihood methods, which either rely on normalizing flows or minimize score-based objectives, choices that come with known pitfalls.

We demonstrate the properties of both methods on a range of synthetic datasets, and apply them to a neuroscience model of the pyloric network in the crab, where our methods outperform prior art at a fraction of the simulation budget.
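As a rough, generic sketch of the final step described above (combining a learned unnormalized likelihood with a prior and sampling the posterior with MCMC), the snippet below uses a placeholder quadratic energy, a standard-normal prior, and random-walk Metropolis. None of these choices come from the paper; they only illustrate the plumbing.

```python
# Sketch: exp(-E(x | theta)) plays the role of an unnormalized likelihood.
# The energy, prior, and sampler below are placeholders, not the paper's models.
import numpy as np

rng = np.random.default_rng(0)


def energy(theta, x_obs):
    # Placeholder for a trained conditional EBM E(x | theta) evaluated at the
    # observation.
    return 0.5 * np.sum((x_obs - theta) ** 2)


def log_prior(theta):
    # Placeholder standard-normal prior over simulator parameters.
    return -0.5 * np.sum(theta ** 2)


def log_unnormalized_posterior(theta, x_obs):
    return log_prior(theta) - energy(theta, x_obs)


def random_walk_metropolis(x_obs, dim, n_steps=5000, step=0.5):
    """Draw posterior samples from the unnormalized density with MCMC."""
    theta = np.zeros(dim)
    log_p = log_unnormalized_posterior(theta, x_obs)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(dim)
        log_p_prop = log_unnormalized_posterior(proposal, x_obs)
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        samples.append(theta)
    return np.array(samples)


x_obs = np.array([1.0, -0.5])
posterior_samples = random_walk_metropolis(x_obs, dim=2)
print(posterior_samples.mean(axis=0))
```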



October 2021 - Accepted at NeurIPS 2021 (ArXiv)

KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support

We study the gradient flow for a relaxed approximation to the Kullback-Leibler (KL) divergence between a moving source and a fixed target distribution. This approximation, termed the KALE (KL approximate lower-bound estimator), solves a regularized version of the Fenchel dual problem defining the KL over a restricted class of functions.

When using a Reproducing Kernel Hilbert Space (RKHS) to define the function class, we show that the KALE continuously interpolates between the KL and the Maximum Mean Discrepancy (MMD). Like the MMD and other Integral Probability Metrics, the KALE remains well defined for mutually singular distributions. Nonetheless, the KALE inherits from the limiting KL a greater sensitivity to mismatch in the support of the distributions, compared with the MMD.

These two properties make the KALE gradient flow particularly well suited when the target distribution is supported on a low-dimensional manifold. Under an assumption of sufficient smoothness of the trajectories, we show the global convergence of the KALE flow. We propose a particle implementation of the flow given initial samples from the source and the target distribution, which we use to empirically confirm the KALE's properties.
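As a rough sketch of the construction (eliding the exact constants and scaling used in the paper), the KALE replaces the Fenchel dual formulation of the KL with a version restricted to an RKHS $\mathcal{H}$ and penalized by the RKHS norm:

$$
\mathrm{KALE}_\lambda(P \,\|\, Q) \;=\; \sup_{f \in \mathcal{H}} \; \mathbb{E}_{P}[f] \;-\; \mathbb{E}_{Q}\!\left[e^{f} - 1\right] \;-\; \frac{\lambda}{2}\,\|f\|_{\mathcal{H}}^{2}.
$$

Sending the regularization weight $\lambda$ to zero recovers the (restricted) KL, while large $\lambda$, after suitable rescaling, yields an MMD-like quantity; this is the interpolation mentioned above.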

News