Talks & Keynotes

Geometric Deep Learning: from Euclid to Drug Design

Keynote


For nearly two millennia, the word "geometry" was synonymous with Euclidean geometry, as no other types of geometry existed. Euclid's monopoly came to an end in the 19th century, when multiple examples of non-Euclidean geometries were constructed. However, these studies quickly diverged into disparate fields, with mathematicians debating the relations between different geometries and what defines one. A way out of this pickle was shown by Felix Klein in his Erlangen Programme, which proposed approaching geometry as the study of invariants, or symmetries, using the language of group theory. In the 20th century, these ideas were fundamental in developing modern physics, culminating in the Standard Model.


The current state of deep learning somewhat resembles the situation in the field of geometry in the 19th century: on the one hand, in the past decade, deep learning has brought a revolution in data science and made possible many tasks previously thought to be beyond reach -- including computer vision, playing Go, and protein folding. On the other hand, we have a zoo of neural network architectures for various kinds of data, but few unifying principles. As in times past, it is difficult to understand the relations between different methods, inevitably resulting in the reinvention and re-branding of the same concepts.


Geometric Deep Learning aims to bring geometric unification to deep learning in the spirit of the Erlangen Programme. Such an endeavour serves a dual purpose: it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers, and it gives a constructive procedure to incorporate prior knowledge into neural networks and build future architectures in a principled way. In this talk, I will give an overview of the mathematical principles underlying Geometric Deep Learning on grids, graphs, and manifolds, and show some of the exciting and groundbreaking applications of these methods in the domains of computer vision, social science, biology, and drug design.


(based on joint work with J. Bruna, T. Cohen, P. Veličković)
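To make the notion of a symmetry prior concrete, here is a minimal NumPy sketch (an illustration of the general idea, not material from the talk): circular convolution commutes with cyclic shifts, which is exactly the translation equivariance that CNNs build into their architecture.

```python
import numpy as np

# Check translation equivariance: convolving a shifted signal gives the
# same result as shifting the convolved signal. This symmetry prior is
# what a convolutional layer hard-wires by construction.
rng = np.random.default_rng(0)
x = rng.normal(size=32)   # a 1-D signal
w = rng.normal(size=5)    # a convolution filter

def circ_conv(x, w):
    # Circular convolution, so that cyclic shifts act exactly.
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                     for i in range(n)])

lhs = circ_conv(np.roll(x, 3), w)   # shift the input, then convolve
rhs = np.roll(circ_conv(x, w), 3)   # convolve, then shift the output
assert np.allclose(lhs, rhs)        # equivariance holds exactly
```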

Unravelling A Geometric Conspiracy

Talk


The Hamiltonian Monte Carlo method has proven a powerful approach to efficiently exploring complex probability distributions. That power, however, is something of a geometric conspiracy: a sequence of delicate mathematical machinations that together make it possible to explore distributions not just in theory but also in practice. In this talk I will discuss the coincident geometrical properties that ensure the scalable performance of Hamiltonian Monte Carlo and present recent work developing new geometric theories that generalize each of these properties individually, providing a foundation for generalizing the method without compromising its performance.
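As background for the geometric properties at stake, the sketch below shows the leapfrog integrator at the heart of standard HMC (a textbook version in NumPy, assuming a unit mass matrix; not the speaker's code). Its exact reversibility and volume preservation are two of the coincident properties the abstract alludes to.

```python
import numpy as np

def leapfrog(q, p, grad_log_prob, step_size, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator.

    The update is symplectic (volume-preserving) and, after negating the
    momentum, exactly reversible -- geometric properties that make the
    resulting Markov chain leave the target distribution invariant.
    """
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_log_prob(q)      # initial half step
    for i in range(n_steps):
        q += step_size * p                       # full position step
        if i < n_steps - 1:
            p += step_size * grad_log_prob(q)    # full momentum step
    p += 0.5 * step_size * grad_log_prob(q)      # final half step
    return q, -p                                 # negate for reversibility

# Example: one trajectory on a standard Gaussian, where grad log p(q) = -q.
q0, p0 = np.ones(2), np.array([1.0, -0.5])
q1, p1 = leapfrog(q0, p0, lambda q: -q, step_size=0.1, n_steps=20)
```

A full HMC transition would follow this trajectory with a Metropolis accept/reject step that corrects for the integrator's discretization error.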

Graph Representation Learning for Biomedical Discovery

Talk


Networks are pervasive in biology and medicine, from molecular interaction maps to dependencies between diseases in a person, all the way to populations encompassing human interactions. In this talk, we put forward the observation that long-standing principles in the field of Network Biology -- while often unspoken in machine learning research -- can provide the conceptual grounding for representation learning, explain its current successes and limitations, and inform future advances. We will describe how we have developed meta-learning algorithms that enable graph neural networks for problems at the scientific frontier where labeled examples are incredibly scarce. We will conclude with applications in drug development and precision medicine where our algorithmic predictions were validated in human cells and led to the discovery of a new class of drugs.

Robust learning-based methods for shape correspondence

Talk


In this talk I will describe several recent works aimed at developing accurate and robust methods for non-rigid 3D shape matching and comparison. I will first discuss several ways to model this problem, including supervised, unsupervised, and weakly supervised training losses. In addition, I will highlight several recent architectures that are well adapted to computing dense correspondences across a variety of settings. My ultimate goal will be to show that these techniques are becoming remarkably robust, widely applicable, and useful.
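For orientation, one classical unsupervised criterion for a map \(T\colon X \to Y\) between shapes is its metric distortion (a standard notion in the shape-matching literature, written in my own notation; the talk's losses need not take this exact form):
\[
E(T) \;=\; \sum_{i,j} \bigl|\, d_X(x_i, x_j) - d_Y\bigl(T(x_i), T(x_j)\bigr) \bigr|,
\]
which vanishes exactly when \(T\) preserves geodesic distances; supervised losses instead penalize deviation from known ground-truth correspondences.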

Curve Based Approximation of Measures on Manifolds

Talk


The approximation of probability measures on compact metric spaces and in particular on Riemannian manifolds by atomic or empirical ones is a classical task in approximation and complexity theory with a wide range of applications.


Instead of point measures, we are concerned with the approximation by measures supported on Lipschitz curves. Special attention is paid to push-forward measures of Lebesgue measures on the interval by such curves. Using the discrepancy as a distance between measures, we prove optimal approximation rates in terms of the Lipschitz constants of the curves.
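For readers unfamiliar with the objects involved, the standard definitions read as follows (notation mine): given a Lipschitz curve \(\gamma\colon [0,1] \to \mathcal{M}\) and the Lebesgue measure \(\lambda\) on \([0,1]\), the push-forward measure is
\[
\gamma_{\#}\lambda(A) \;=\; \lambda\bigl(\gamma^{-1}(A)\bigr) \quad \text{for all Borel sets } A \subseteq \mathcal{M},
\]
and the discrepancy between probability measures \(\mu\) and \(\nu\) is
\[
D(\mu, \nu) \;=\; \sup_{B \in \mathcal{B}} \bigl|\mu(B) - \nu(B)\bigr|,
\]
where the supremum runs over a fixed family \(\mathcal{B}\) of test sets, e.g. geodesic balls.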


Having established the theoretical convergence rates, we are interested in the numerical minimization of the discrepancy between a given probability measure and the set of push-forward measures of Lebesgue measures on the interval by Lipschitz curves. We present various numerical examples.

Towards Learning Simulation

Talk


Description to follow.

Challenges in Persistent Homology and Applications

Keynote


Persistent homology offers a computable, non-linear dimension reduction technique for data analysis. We will give a brief introduction and then move quickly to some brand-new techniques which have recently been applied to the study of immune cell infiltration in tumours.
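As a one-line reminder of the underlying construction (standard material, not specific to the talk): persistent homology tracks homology classes through a filtration
\[
K_{t_0} \subseteq K_{t_1} \subseteq \cdots \subseteq K_{t_n},
\]
for example Vietoris-Rips complexes built from data at increasing scales; each class is recorded by its birth scale \(b\) and death scale \(d\), and the resulting multiset of intervals \([b, d)\), the barcode or persistence diagram, is the computable summary used in data analysis.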

Deep 3D Generative Modeling

Talk


Deep learning has taken the Computer Graphics world by storm. While remarkable progress has been reported in the context of supervised learning, the state of unsupervised learning remains quite primitive. In this talk, we will discuss recent advances where we have combined knowledge from traditional computer graphics and image formation models to enable deep generative modeling workflows. We will describe how we have combined modeling and rendering, in the unsupervised setting, to enable controllable and realistic image and animation production. The work is done in collaboration with various students and research colleagues.

Tropical Support Vector Machines

Talk


Support Vector Machines (SVMs) are one of the most popular supervised learning models, classifying data with a hyperplane in a Euclidean space. Similarly, tropical SVMs classify data points using a tropical hyperplane under the tropical metric with the max-plus algebra. In this talk, we show generalization error bounds for tropical SVMs over the tropical projective space. While the generalization error bounds attained via VC dimensions in a distribution-free manner still depend on the dimension, we also show theoretically, using extreme value statistics, that tropical SVMs for classifying data points drawn from two Gaussian distributions, as well as from empirical data sets of different neuron types, are fairly robust against the curse of dimensionality. Extreme value statistics also underlie the anomalous scaling behaviors of the tropical distance between random vectors with additional noise dimensions.
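For orientation, the standard tropical notions involved are as follows (textbook definitions, included here for convenience): in the tropical projective torus \(\mathbb{R}^d / \mathbb{R}\mathbf{1}\), the tropical metric is
\[
d_{\mathrm{tr}}(x, y) \;=\; \max_i (x_i - y_i) \;-\; \min_i (x_i - y_i),
\]
and the tropical hyperplane with apex \(\omega \in \mathbb{R}^d\) is the set of points \(x\) at which the maximum in \(\max_i(\omega_i + x_i)\) is attained at least twice; a tropical SVM separates classes with such a hyperplane instead of an affine one.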

Geometric Representation Learning in Relational Domains

Talk


Learning from relational knowledge representations is often characterized by complex relational patterns involving large amounts of uncertainty. Moreover, domains such as the Web, bioinformatics, or natural language understanding can consist of billions of entities and relationships. In these settings, representation learning has become an invaluable approach for making statistical inferences as it allows us to learn high-quality models and scale to large datasets with billions of relations.

Recently, new attention has been given to an important aspect of such methods, namely the geometry of the representation space. Methods such as hyperbolic embeddings and Riemannian generative models show that non-Euclidean geometries can provide significant advantages for modeling relational data, e.g., with regard to interpretability, scalability, and latent semantics.

In this talk, I will provide an overview of our recent work on such geometric approaches to representation learning. I will first discuss how structural properties of relational data (such as latent hierarchies) are connected to the geometry of the embedding space and how methods such as hyperbolic embeddings allow us to learn parsimonious representations in these cases. Moreover, I will show how the embeddings can be used to discover latent hierarchies and be applied to diverse tasks in NLP and bioinformatics. In addition, I will discuss how we can model flexible probability distributions over such geometric representations through Riemannian continuous normalizing flows.
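As a minimal illustration of the non-Euclidean geometry involved (the standard distance formula, sketched in NumPy; not the speaker's code), geodesic distances in the Poincaré ball model blow up near the boundary, which is what lets trees and latent hierarchies embed with low distortion:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points in the open unit (Poincare) ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

# Distances grow rapidly toward the boundary: there is room for
# exponentially many "leaves" roughly equidistant from a "root" at 0.
root = np.zeros(2)
print(poincare_distance(root, np.array([0.5, 0.0])))    # moderate
print(poincare_distance(root, np.array([0.999, 0.0])))  # very large
```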

Relational Structure Discovery

Talk


Graphs are a powerful abstraction: they allow us to efficiently describe data in the form of entities and their pairwise relationships. The past four years have seen an incredible proliferation of graph neural networks (GNNs): neural network architectures that are effective at learning and reasoning with data provided in the form of a graph. Rarely, however, do we ask where the entities and relations on which we deploy our models come from in the first place, and how we can infer effective relational abstractions from data in cases where they are not available. This talk focuses on the question of how we can build effective relational machine learning models in the absence of annotated links or relations, or even in the absence of abstractions such as entities or objects in the first place. I will give a brief introduction to GNNs and cover work on GNN-based link prediction, on Neural Relational Inference, and more recent work on object discovery and relational learning with raw perceptual inputs, such as images or videos.
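To ground the brief GNN introduction, a toy message-passing layer might look as follows (a dense NumPy sketch of the generic scheme, not code from the talk; practical systems use sparse operations and learned message functions):

```python
import numpy as np

def gnn_layer(H, A, W_self, W_nbr):
    """One message-passing layer on a graph.

    H: (n, d) node features; A: (n, n) binary adjacency matrix.
    Each node averages its neighbours' features; the node's own state
    and the aggregated message are then mixed by linear maps and a ReLU.
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # guard isolated nodes
    messages = (A @ H) / deg                             # mean over neighbours
    return np.maximum(H @ W_self + messages @ W_nbr, 0.0)

# Tiny example: a path graph on 3 nodes with 2-D features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3, 2)
rng = np.random.default_rng(0)
H_next = gnn_layer(H, A, rng.normal(size=(2, 4)), rng.normal(size=(2, 4)))
```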

Sparse Principal Component Analysis on Shape Manifolds

Talk


This talk discusses the construction of a low-dimensional nonlinear space capturing the variability of a non-rigid shape from a data set of example poses. The core of the approach is a Sparse Principal Geodesic Analysis (SPGA) on the Riemannian manifold of discrete shells, in which a pose of a non-rigid shape is a point. The SPGA is invariant to rigid body motions of the poses and supports large deformations. Since the Riemannian metric measures the membrane and bending distortions of the shells, the sparsity term forces the modes to describe largely decoupled and localized deformations.
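Schematically, a sparse PGA of this kind can be phrased as an optimization in the tangent space at the mean pose \(\bar{s}\) (my notation, a generic sparse-PCA-style functional rather than the paper's exact formulation):
\[
\min_{\{v_j\},\, \{\alpha_{kj}\}} \;\sum_k \Bigl\| \log_{\bar{s}}(s_k) - \sum_j \alpha_{kj} v_j \Bigr\|^2 \;+\; \mu \sum_j \|v_j\|_1,
\]
where the \(\ell_1\) penalty on the modes \(v_j\) is what drives them toward localized, largely decoupled deformations.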