Machine Learning Talks on Campus - Past events

Machine Learning Talks on Campus is an information service about talks, workshops and other events in the local community.

If you want to receive our news, please subscribe to our mailing list.

If you want to announce a talk for the following week, please send an email to machine-learning@uni-heidelberg.de by Wednesday night.


Past events


4:15 PM

Phinli: a physics informed surrogate model for elliptic PDEs and its Bayesian inverse problem analysis
J Andrés Christen (CIMAT-CONAHCYT, México)

The talk addresses Bayesian inference in inverse problems with uncertainty quantification, involving a computationally expensive forward map associated with solving a partial differential equation. To mitigate the computational cost, the paper proposes a new surrogate model informed by the physics of the problem, specifically when the forward map involves solving a linear elliptic partial differential equation. The study establishes the consistency of the posterior distribution for this surrogate model and demonstrates its effectiveness through numerical examples with synthetic data. The results indicate a substantial improvement in computational speed, reducing the processing time from several months with the exact forward map to a few minutes, while maintaining negligible loss of accuracy in the posterior distribution.

Rob Scheichl
INF 205 Mathematikon, Seminarraum A
1 PM - 2 PM

Simulation-based inference and the places it takes us
Prof. Jakob Macke (Uni Tübingen) 

Many fields of science make extensive use of mechanistic forward models which are implemented through numerical simulators. Simulation-based inference aims to make it possible to perform Bayesian inference on such models by using only model simulations, without requiring access to likelihood evaluations. I will speak about recent work on developing simulation-based inference methods using flexible density estimators parameterised with neural networks, on improving their robustness and efficiency, and on applications to modelling problems in neuroscience, computational imaging and astrophysics. Finally, I will talk about the prospect of building large-scale models of neural circuits in Drosophila melanogaster by combining connectomics and simulation-based machine learning.
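As a minimal illustration of the likelihood-free idea behind simulation-based inference (not the neural density estimators of the talk), a toy rejection-ABC sampler infers a Gaussian mean using only simulator calls; the simulator, prior, and tolerance are illustrative choices:

```python
import random

def simulator(theta, n=50, rng=random):
    # Forward model: n draws from N(theta, 1); we only sample from it,
    # never evaluate a likelihood.
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def summary(xs):
    return sum(xs) / len(xs)  # sample mean as summary statistic

def abc_rejection(observed, prior_draw, eps=0.1, n_accept=200, seed=0):
    rng = random.Random(seed)
    s_obs = summary(observed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)                  # sample from the prior
        s_sim = summary(simulator(theta, rng=rng))
        if abs(s_sim - s_obs) < eps:             # keep parameters whose
            accepted.append(theta)               # simulations match the data
    return accepted

# Toy run: data generated at theta = 2.0, uniform prior on [-5, 5].
data = simulator(2.0, rng=random.Random(1))
post = abc_rejection(data, lambda r: r.uniform(-5, 5))
print(sum(post) / len(post))  # posterior mean, close to 2.0
```

Neural simulation-based inference replaces the hard accept/reject rule with a learned density estimator, which is what makes the approach scale beyond simple summary statistics.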

CZS Heidelberg Initiative for Model-Based AI 
Seminar Room A + B (0.202 + 0.203), Mathematikon,
INF 205, Heidelberg
9:30 AM - 11 AM

Practical Equivariances via Relational Conditional Neural Processes
Dr. Manuel Haussmann, University of Southern Denmark


Conditional Neural Processes (CNPs) are a class of meta-learning models popular for combining the runtime efficiency of amortized inference with reliable uncertainty quantification. Many relevant machine learning tasks, such as in spatiotemporal modeling, Bayesian Optimization, and continuous control, inherently contain equivariances – for example to translation – which the model can exploit for maximal performance. However, prior attempts to include equivariances in CNPs do not scale effectively beyond two input dimensions.

In this talk, I will introduce the theory behind CNPs and discuss our recent proposal on how to incorporate equivariances into any neural process model and how we can ensure scalability to higher dimensions.
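As a toy illustration of translation equivariance, the hypothetical predictor below is a plain kernel smoother that, like the models discussed here, sees its inputs only through the differences x_target - x_i, so shifting context and target together leaves predictions unchanged. This is a drastic simplification, not an actual CNP:

```python
import math

def kernel_predict(context, x_target, lengthscale=0.5):
    # Nadaraya-Watson style predictor: depends on x_target only via the
    # differences x_target - x_i, hence translation equivariant by design.
    num = den = 0.0
    for x_i, y_i in context:
        w = math.exp(-((x_target - x_i) / lengthscale) ** 2)
        num += w * y_i
        den += w
    return num / den

ctx = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.5)]
delta = 3.0  # shift applied to every input location
y0 = kernel_predict(ctx, 0.5)
y1 = kernel_predict([(x + delta, y) for x, y in ctx], 0.5 + delta)
print(abs(y0 - y1) < 1e-12)  # True: shifted inputs give the same prediction
```

The challenge the talk addresses is retaining exactly this property inside a learned neural process while keeping the model scalable beyond two input dimensions.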

Fred Hamprecht
INF 205 Mathematikon
Seminarraum 10
11:15 AM - 12:15 PM

Topologically penalized regression on manifolds
Wolfgang Polonik, UC Davis


We study a regression problem on a compact manifold. In order to take advantage of the underlying geometry and topology of the data, we propose to perform the regression task on the basis of eigenfunctions of the Laplace-Beltrami operator of the manifold, regularized with topological penalties. We will discuss the approach and the penalties, provide some supporting theory, and demonstrate the performance of the methodology on several data sets, illustrating the relevance of our approach in the case where the target function is "topologically smooth". This is joint work with O. Hacquard, K. Balasubramanian, G. Blanchard and C. Levrard.

Enno Mammen
SR 8 (4th floor)
10 AM - 11 AM

Semi-supervised learning: The provable benefits of unlabeled data for sparse Gaussian classification 
Boaz Nadler, Weizmann Institute of Science, Israel


The premise of semi-supervised learning (SSL) is that combining labeled and unlabeled data enables learning significantly more accurate models. Despite empirical successes, the theoretical understanding of SSL is still far from complete. In this talk, we consider SSL for high dimensional sparse Gaussian classification. A key challenge here is feature selection, detecting the few variables informative for the classification problem. 

For this SSL setting, we derive information-theoretic lower bounds as well as computational lower bounds based on the low-degree likelihood ratio framework. Our key contribution is the identification of a regime in the problem parameters (dimension, sparsity, number of labeled and unlabeled samples) where a polynomial-time SSL algorithm that we propose succeeds, but any computationally efficient supervised or unsupervised scheme that separately uses only the labeled or unlabeled data would fail. This result highlights the provable benefits of combining labeled and unlabeled data for feature selection in high dimensions.

Fred Hamprecht
SR 8 (4th floor)
12:45 PM - 5 PM

HI4AI – Human Intelligence for Artificial Intelligence in Medicine 
Digital patient twins are virtual representations of patients, based on health and environmental data, that model individual and population behavior to improve health care through guidance by artificial intelligence (AI). The thematic research network (TRN) HI4AI extends this concept with the digital clinician twin. It aims to foster inter- and transdisciplinary communication to gain new insight into clinical reasoning/human intelligence (HI) and enable its effective support by AI.


12:45 – 13:00 Coffee & Registration

13:00 – 13:45 Introduction to the TRN
PD Dr. Holger A. Lindner (Experimental Anesthesiology, UMM): HI4AI in medicine - overview
Prof. Dr. Stefan Riezler (Statistical NLP, IWR): Grounding large language models in clinical measurements and expert knowledge
Prof. Dr. Jan Rummel (Experimental Psychology and Cognitive Self-Regulation): Self-regulation of cognition
Prof. Dr. Vera Araujo-Soares (CPD): The role of co-design and process evaluation for future implementation

13:45 – 14:30 Spotlight talks
• AI ethics
  - PD Dr. Markus Herrmann (NCT): AI ethics and medical decision making
• Medical decision support by AI
  - M.Sc. Markus Buchwald (EMCL, IWR): Learning to defer
• Hospital admission and outpatient care
  - Dipl.-Inform. Sebastian Schöning (Fraunhofer IPA): TBA
  - Prof. Emanuel Schwarz, Ph.D. (ZI): Advancing personalized psychiatry through artificial intelligence
• Acute care
  - Intensive care, TBA
• Chronic medical conditions
  - PD Dr. med. Sebastian Belle (Transl. Oncology, UMM): Automation process in endoscopic adenoma therapy - optimized long-term therapy strategies

14:30 – 14:50 Coffee break


14:50 – 15:40 Breakout discussions
• Hospital admission and outpatient care 
• Acute care 
• Chronic medical conditions

15:40 – 16:00 Coffee break 

16:00 – 16:45 Results from breakout discussions
• Hospital admission and outpatient care 
• Acute care 
• Chronic medical conditions

16:45 – 17:00 Résumé: expectations, gaps & collaborative next steps 
Lindner, Riezler, Rummel, Araujo-Soares


The TRN HI4AI invites the participation of stakeholders across diverse disciplines. If you are interested in contributing to this network, please apply by email to PD Dr. Holger Lindner (holger.lindner@medma.uni-heidelberg.de) with “HI4AI” in the subject line and
● state your motivation in 80 words or less and
● indicate your order of preference for participation in one of the breakout discussions:
  - Hospital admission and outpatient care
  - Acute care
  - Chronic medical conditions
● On-site child care may be offered by the Medical Faculty Mannheim depending on demand. Please indicate your interest.
Registration deadline: January 31st, 2024. Please note that participation is limited.


Organized by PD Dr. Holger A. Lindner, Prof. Stefan Riezler, Prof. Jan Rummel, Prof. Vera Araujo-Soares, PD Dr. Dr. Verena Schneider-Lindner, and Christopher Jones, M.Sc.

Funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Science Baden-Württemberg within the framework of the Excellence Strategy of the Federal and State Governments.

CUBEX ONE, Mannheim Medical Technology Campus 
Franz Volhard Straße 5, 68167 Mannheim 
2:15 PM

Coarse-grained molecular dynamics for proteins with neural networks: Challenges and breakthroughs
Aleksander Durumeric, Freie Universität Berlin

Neural network force fields have enabled molecular dynamics (MD) simulations at unprecedented accuracy by efficiently emulating expensive ab initio calculations. However, these advances have not yet accelerated the long-timescale modelling of biomolecular complexes, where the computational cost of classical force fields is difficult to reduce. One leading approach for adapting neural network force fields to this context focuses on creating force fields at a reduced (i.e. coarse-grained) resolution. We here discuss how this task differs from that at the atomistic resolution and review recent advances by myself and colleagues which have brought an accurate and extrapolative coarse-grained neural network force field for proteins within reach, with a focus on the collection and processing of training data.

Tristan Bereau
Institute for Theoretical Physics Philosophenweg 19, Seminar Room
2:15 PM

Machine Learning in condensed matter: from molecules and materials to quantum systems
Huziel E. Sauceda, Universidad Nacional Autonoma de Mexico


Machine learning (ML) encompasses a wide range of algorithms and models that have been applied prominently in condensed matter. Applications range from atomistic simulations, generative models of quantum and classical distributions, and predictors of physicochemical properties to ansätze for differential equations, among many others. In this talk, we will present some examples of how ML models have advanced our understanding of molecular systems and their complex interactions. In particular, we will focus on how combining machine-learned force fields and quantum interatomic dilation not only reveals the intricate nature of molecular systems, but also shows the limitations of many electronic structure methods. Additionally, we will briefly show some of the current applications of ML to quantum systems in our group, with particular emphasis on describing excited states in a second-quantized representation and their paramount importance for describing experimental results.

Tristan Bereau
Institute for Theoretical Physics, Philosophenweg 19, Seminar room
9:30 AM - 11 AM

Enhancing Accuracy in Deep Learning Using Random Matrix Theory
L. Berlyand, Penn State, USA


We discuss applications of random matrix theory (RMT) to the training of deep neural networks (DNNs). Our focus is on pruning of DNN parameters, guided by the Marchenko-Pastur spectral approach. Our numerical results show that this pruning leads to a drastic reduction of parameters while not reducing the accuracy of DNNs and CNNs. Moreover, pruning the fully connected DNNs actually increases the accuracy and decreases the variance for random initializations. We next show how these RMT techniques can be used to remove 20% of parameters from state-of-the-art DNNs such as ResNet and ViT while reducing accuracy by at most 2% and, in some instances, even increasing accuracy.

Finally, we provide a theoretical understanding of these results by proving the Pruning Theorem, which establishes a rigorous relation between the accuracy of the pruned and non-pruned DNNs.

Joint work with E. Sandier (U. Paris 12), Y. Shmalo (PSU student) and L. Zhang (Jiao Tong U.)
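The Marchenko-Pastur idea behind the pruning can be sketched on a toy weight matrix: singular values of pure iid noise concentrate below sigma * (sqrt(n) + sqrt(m)), so components below that edge are treated as noise and discarded. The matrix sizes, noise level, and low-rank "signal" below are illustrative assumptions, not the trained DNN layers of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix = low-rank signal + iid noise.
n, m, r = 300, 200, 5
signal = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
sigma = 0.5
W = signal + sigma * rng.standard_normal((n, m))

# Marchenko-Pastur bulk edge for the noise part: singular values of an
# n x m iid-noise matrix concentrate below sigma * (sqrt(n) + sqrt(m)).
edge = sigma * (np.sqrt(n) + np.sqrt(m))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
keep = s > edge                     # components above the edge carry signal
W_pruned = (U[:, keep] * s[keep]) @ Vt[keep]

print(int(keep.sum()))              # retained components (about r)
print(np.linalg.norm(W_pruned - signal) / np.linalg.norm(signal))
```

The pruned matrix is close to the clean signal despite discarding most singular components, which is the effect the talk's experiments observe at the scale of full networks.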

Fred Hamprecht
Im Neuenheimer Feld 205
Konferenzraum / 5. Stock, Raum 5/104
69120 Heidelberg

Mathematical Data Science and Optimization


09:00 AM Konstantin Rusch (Massachusetts Institute of Technology, ETH Zurich)
Physics-inspired Machine Learning 
Combining physics with machine learning is a rapidly growing field of research. Thereby, most work focuses on leveraging machine learning methods to solve problems in physics. Here, however, we focus on the converse, i.e., physics-inspired machine learning, which can be described as incorporating structure from physical systems into machine learning methods to obtain models with better inductive biases. More concretely, we propose several physics-inspired deep learning architectures for sequence modelling based on nonlinear coupled oscillators, Hamiltonian systems and multi-scale dynamical systems. The proposed architectures tackle central problems in the field of recurrent sequence modeling, namely the vanishing and exploding gradients problem as well as the issue of insufficient expressive power. Moreover, we discuss physics-inspired learning on graphs, wherein the dynamics of the message-passing propagation are derived from physical systems. We further prove that these methods mitigate the over-smoothing issue, thereby enabling the construction of deep graph neural networks (GNNs). We extensively test all proposed methods on a variety of versatile synthetic and real-world datasets, ranging from image recognition, speech recognition, natural language processing (NLP), medical applications, and scientific computing for sequence models, to citation networks, computational chemistry applications, and networks of articles and websites for graph learning models.
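A minimal sketch in the spirit of the coupled-oscillator recurrent models mentioned above; all shapes, constants, and the discretisation are illustrative assumptions, not the exact scheme of the talk. The damping terms keep the hidden states bounded, which is the mechanism these architectures use against exploding dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator_step(y, z, u, W, Wz, V, b, dt=0.05, gamma=1.0, eps=1.0):
    # One step of a coupled-oscillator recurrent cell: hidden state y and
    # velocity z follow a damped, nonlinearly driven oscillator,
    # discretised with a symplectic-style Euler update.
    z_new = z + dt * (np.tanh(W @ y + Wz @ z + V @ u + b) - gamma * y - eps * z)
    y_new = y + dt * z_new
    return y_new, z_new

h, d, T = 8, 3, 100                  # hidden size, input size, sequence length
W = rng.standard_normal((h, h)) / np.sqrt(h)
Wz = rng.standard_normal((h, h)) / np.sqrt(h)
V = rng.standard_normal((h, d)) / np.sqrt(d)
b = np.zeros(h)
y, z = np.zeros(h), np.zeros(h)
for t in range(T):
    u = rng.standard_normal(d)       # random input sequence
    y, z = oscillator_step(y, z, u, W, Wz, V, b)
print(float(np.abs(y).max()))        # states stay bounded over the sequence
```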


10:00 AM Johannes Hertrich (TU Berlin)
Sliced MMD Gradient Flows with Negative Distance Kernel for Generative Modeling and Inverse Problems 
We consider gradient flows with respect to the maximum mean discrepancy (MMD) with negative distance kernel, which is also known as energy distance. In order to achieve computational efficiency, we prove that for certain kernels the MMD coincides with its sliced version. Therefore, all computations can be performed in a one-dimensional setting, where the MMD with negative distance kernel can be evaluated by a simple sorting algorithm with improved computational complexity. This enables us to simulate MMD particle flows in high dimensions for a large number of particles. We approximate these particle flows by neural networks and apply them for generative modeling and posterior sampling in Bayesian inverse problems. From a theoretical viewpoint, we study Wasserstein gradient flows with respect to our MMD functionals. Interestingly, particles might "explode" in this setting, i.e., the flow turns atomic measures into absolutely continuous ones and vice versa. We analytically derive the Wasserstein flows for some special cases and propose a numerical approximation of suitable forward and backward time discretizations by generative neural networks.
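The one-dimensional sorting trick can be sketched directly: for sorted samples all pairwise distances collapse into a weighted sum, so the energy distance (the MMD with negative distance kernel) needs three sorts instead of O(n^2) pairwise terms. This is a toy 1D check only; the talk's sliced, high-dimensional flows are not reproduced here:

```python
import random

def pair_sum_sorted(xs):
    # sum_{i<j} |x_i - x_j| in O(n log n): after sorting, the i-th value
    # enters the sum with coefficient (2i - n + 1).
    xs = sorted(xs)
    n = len(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs))

def energy_distance_sorted(xs, ys):
    # Energy distance 2E|X-Y| - E|X-X'| - E|Y-Y'| via three sorts, using
    # S(X u Y) = S(X) + S(Y) + sum_{i,j} |x_i - y_j|.
    n, m = len(xs), len(ys)
    cross = pair_sum_sorted(xs + ys) - pair_sum_sorted(xs) - pair_sum_sorted(ys)
    return (2.0 / (n * m)) * cross \
        - (2.0 / n**2) * pair_sum_sorted(xs) \
        - (2.0 / m**2) * pair_sum_sorted(ys)

def energy_distance_naive(xs, ys):
    n, m = len(xs), len(ys)
    e = lambda a, b: sum(abs(x - y) for x in a for y in b)
    return 2.0 * e(xs, ys) / (n * m) - e(xs, xs) / n**2 - e(ys, ys) / m**2

rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(100)]
ys = [rng.gauss(1, 1) for _ in range(100)]
print(abs(energy_distance_sorted(xs, ys) - energy_distance_naive(xs, ys)) < 1e-6)  # True
```

This improved complexity is what makes simulating the MMD particle flows feasible for large numbers of particles in high dimensions.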


02:00 PM Johannes Wiesel (Carnegie Mellon University)
The out-of-sample prediction error of the square-root lasso and related estimators    
We study the classical problem of predicting an outcome variable, Y, using a linear combination of a d-dimensional covariate vector, X. We are interested in linear predictors whose coefficients solve: inf_β (E[(Y - ⟨β, X⟩)^r])^(1/r) + 𝛿 ||β||, where r > 1 and 𝛿 > 0 is a regularisation parameter. We provide conditions under which linear predictors based on these estimators minimize the worst-case prediction error over a ball of distributions determined by a type of max-sliced Wasserstein metric. A detailed analysis of the statistical properties of this metric yields a simple recommendation for the choice of regularization parameter. The suggested order of 𝛿, after a suitable normalization of the covariates, is typically d/n, up to logarithmic factors. Our recommendation is computationally straightforward to implement, pivotal, has provable out-of-sample performance guarantees, and does not rely on sparsity assumptions about the true data generating process.
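As an illustration, the r = 2, ℓ1-norm instance of this estimator (the square-root lasso) can be fit on toy data with a generic optimiser; the choice of norm, the data, and the solver are assumptions made here for concreteness, not details from the talk:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: sparse ground truth, d = 5 covariates, n = 200 samples.
n, d = 200, 5
X = rng.standard_normal((n, d))
beta_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def sqrt_lasso_objective(beta, delta):
    # (E[(Y - <beta, X>)^2])^(1/2) + delta * ||beta||_1 :
    # the r = 2, l1-norm instance of the estimator in the abstract.
    rmse = np.sqrt(np.mean((y - X @ beta) ** 2))
    return rmse + delta * np.sum(np.abs(beta))

# Regularisation level of the order suggested in the abstract, d/n.
delta = d / n
res = minimize(sqrt_lasso_objective, np.zeros(d), args=(delta,), method="Powell")
print(np.round(res.x, 2))  # close to beta_true, with small coefficients shrunk
```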


03:00 PM Caroline Geiersbach (WIAS Berlin)
Stochastic Algorithms for Physics-Based Systems under Uncertainty
In this talk, I will present a class of problems from stochastic optimization, where the constraints contain a family of partial differential equations (PDEs). These problems have a wide range of applications, from engineering, to materials science, to economics. One central challenge is that analysis of these systems typically needs to be done in infinite dimensions; optimization theory is delicate due to the function space setting. Optimization procedures need to account for numerical error from the discretization of the PDE. When solving these systems using stochastic algorithms, error due to the stochastic dimension also needs to be correctly controlled. I will focus on the usage of the stochastic gradient method for solving various problems in this class and discuss what is known about convergence and efficiency. There are many perspectives for this topic. Recent developments in the field of data science can be carried over to the infinite-dimensional context. Vice versa, the problems studied in physics-based systems present new challenges for data science.

Jan Johannes
Uni Heidelberg,
Mathematikon Raum 5.104,
Im Neuenheimer Feld 205,  69120 Heidelberg

Mathematical Data Science and Optimization

09:00 AM Johannes Maly  (University of Munich)
Explicit regularization, implicit bias, and the effect of quantization
A core challenge in mathematical data science is to understand and leverage intrinsic structures of sets. With reference to different branches of my research I will describe how in the last decade the focus shifted from explicit structural regularization in inverse problems and related fields to implicit regularization in massively overparametrized machine learning models. I will furthermore discuss the effect of coarse quantization, i.e., representation of real numbers by a finite alphabet of small size, on established results. The parts of my work that will serve as illustration encompass compressed sensing, covariance estimation from one-bit samples, and the implicit bias of gradient descent in matrix factorization and regression problems.


10:00 AM Lisa Kreusser (University of Bath)
Unlocking the Full Potential of Data: From Applied Analysis and Optimisation to Applications 
Recent and rapid breakthroughs in contemporary biology, climate science, and data science have unveiled a spectrum of intricate mathematical challenges which can be tackled through the fusion of applied and numerical analysis, as well as optimisation. In this talk, I will begin by delving into a class of interacting particle models with anisotropic interaction forces and their corresponding continuum limit. These models find their inspiration in the simulation of fingerprint patterns, which play a critical role in databases in forensic science and biometric applications. I will showcase our recent findings, including the development of a mean-field optimal control algorithm  to tackle an inverse problem arising in parameter identification. Transitioning from interaction-focused models to the realm of transport networks, I will introduce an optimization approach tailored for a unique coupling of differential equations that arises in the context of biological network formation. Additionally, I will provide insights into my recent research in data science, encompassing topics such as image segmentation, non-convex optimisation algorithms for machine learning, generative models such as Wasserstein Generative Adversarial Networks (WGANs), and semi-supervised learning techniques. Finally, I will give an overview of ongoing projects that explore the synergies between optimisation, numerical analysis and data science. These projects include optimisation algorithms designed for training deep neural networks, the development of robust machine learning algorithms for differential equations, the study of physics-informed generative models, the rigorous investigation of score-based diffusion models, and the design of data assimilation algorithms for uncertain models.


02:00 PM Jakob Zech (Heidelberg University) 
Neural operator surrogates 
In this talk we discuss the use of neural network based operator surrogates to approximate smooth maps between infinite-dimensional Hilbert spaces. Such surrogates have a wide range of applications and can be used in uncertainty quantification and parameter estimation problems in fields such as classical mechanics, fluid mechanics, electrodynamics, earth sciences etc. In this case, the operator input represents the problem configuration and models initial conditions, material properties, forcing terms and/or the domain of a partial differential equation (PDE) describing the underlying physics. The output of the operator is the corresponding PDE solution. We will also present an alternative approach using interpolation, which allows for deterministic construction and eliminates the need for training the network weights. In both cases, algebraic and dimension-independent convergence rates are obtained.


03:00 PM Diyora Salimova (University of Freiburg)
Deep neural network approximations for partial differential equations
Most of the numerical approximation methods for partial differential equations (PDEs) in the scientific literature suffer from the so-called curse of dimensionality (CoD) in the sense that the number of computational operations and/or the number of parameters employed in the corresponding approximation scheme grows exponentially in the PDE dimension and/or the reciprocal of the desired approximation precision. Recently, certain deep learning-based approximation methods for PDEs have been proposed and various numerical simulations for such methods suggest that deep neural network (DNN) approximations might have the capacity to indeed overcome the CoD in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy. In this talk, we show that solutions of suitable Kolmogorov PDEs can be approximated by DNNs without the CoD.

Jan Johannes
Uni Heidelberg,
Mathematikon Raum 5.104,
Im Neuenheimer Feld 205,  69120 Heidelberg

How to enhance chemical databases for atomistic machine learning?
Luis Itza Vazquez-Salazar, University of Basel

Machine learning (ML) has revolutionized the field of atomistic simulations. It is now possible to obtain high-quality predictions of chemical properties at a low computational cost. Given that the computational effort to evaluate such a statistical model is independent of the quality of the input data, the most significant bottleneck for devising yet better ML models is the considerable amount of data required to train them. Although the community consensus is that more data naturally leads to better performance, it has been found that this working hypothesis is not necessarily correct for predicting chemical properties. Consequently, there is a need to identify how to obtain suitable data for training ML models while retaining the best performance of the model.
In this contribution, we will discuss the use of uncertainty quantification (UQ) methods for atomistic neural networks, such as Deep Evidential Regression and Regression Prior Networks, for identifying outliers in chemical space. Furthermore, results from using different data augmentation (DA) methods, such as sampling from conformational space and Atom-in-Molecule (AMONs) fragments, to improve the prediction of specific chemical moieties will be discussed. Additionally, the application of UQ techniques to potential energy surfaces will be illustrated. Combining UQ and DA methods sets the stage for a workflow to obtain more robust and data-efficient chemical databases while retaining prediction accuracy.
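As a generic stand-in for the UQ methods named above (not Deep Evidential Regression or Regression Prior Networks themselves), a small bootstrap ensemble can flag out-of-distribution inputs by predictive disagreement; the one-dimensional toy "chemical space" below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data covering a limited region of input space (here [-2, 2]).
x_train = rng.uniform(-2, 2, 200)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(200)

# A small ensemble of polynomial models fit on bootstrap resamples;
# disagreement across the ensemble serves as a generic uncertainty proxy.
models = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), len(x_train))
    models.append(np.polyfit(x_train[idx], y_train[idx], deg=7))

def predictive_std(x):
    preds = np.array([np.polyval(c, x) for c in models])
    return preds.std(axis=0)

in_dist = predictive_std(np.array([0.5]))   # inside the training region
outlier = predictive_std(np.array([4.0]))   # far outside it
print(float(in_dist[0]), float(outlier[0])) # uncertainty is far larger off-distribution
```

An inflated predictive spread on a candidate molecule is then the signal used to decide which regions of chemical space need more (or better) training data.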
Tristan Bereau
Philosophenweg 19, 69120 Heidelberg, seminar room