Talks & Keynotes

Sensible Learning Tools for Shape Analysis

Talk

From 3D modeling to autonomous driving, a variety of applications can benefit from data-driven reasoning about geometric problems. The available data and preferred shape representation, however, vary widely from one application to the next. The one commonality among most of these settings is that they are not easily approached using the data-driven methods that have become de rigueur in other branches of computer vision and machine learning. In this talk, I will summarize recent efforts in my group to develop learning architectures and methodologies paired to specific applications, from point cloud processing to mesh and implicit surface modeling. In each case, we will see how mathematical structures and application-specific demands drive our design of the learning methodology, rather than bending application details or eliding geometric details to apply a standard data analysis technique.

Computational topology and applications to biological data

Talk

Biological processes are governed by interactions at multiple scales (genomic, molecular, cellular), which are now captured by multiple modalities (multi-indexed data) at different spatial resolutions. Understanding complex biological phenomena requires mathematical approaches to elucidate dynamics, predict mechanisms and reveal function. With the wealth of state-of-the-art data at unprecedented depth and scale, new approaches are required to extract meaningful and interpretable biological insights. This talk will present computational topology methods, relying on persistent homology, that provide insight into and quantification of geometric structures arising at multiple scales in biology, such as protein structure and the cancer/immune microenvironment.
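The persistent homology mentioned above has a particularly simple dimension-zero incarnation that can be computed from scratch. The sketch below is my own illustration, not the speaker's code: it tracks how connected components of a point cloud merge as a distance threshold grows, producing a barcode of (birth, death) pairs via a union-find.

```python
# Illustrative sketch: 0-dimensional persistent homology of a point
# cloud, computed from scratch. Every component is born at filtration
# value 0; each merge at pairwise distance d records a death, yielding
# a barcode of (birth, death) intervals.
import math

def h0_barcode(points):
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj            # merge: one component dies at d
            bars.append((0.0, d))
    return bars                        # n-1 finite bars; one class persists

# Two well-separated pairs of points: two short bars within the pairs,
# one long bar that dies when the clusters finally merge.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (5.1, 0.0)]
print(h0_barcode(pts))
```

One long bar surviving far past the short ones is the signature of two well-separated clusters; the same principle, in higher homological degrees, detects loops and voids in the tumour microenvironment data the talk discusses.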

Universes as Big Data: Physics, Geometry and Machine Learning

Talk

The search for the Theory of Everything has led to superstring theory, which in turn has led physics first to algebraic and differential geometry and topology, then to computational geometry, and now to data science.

With a concrete playground of the geometric landscape, accumulated through the collaboration of physicists, mathematicians and computer scientists over the last four decades, we show how the latest techniques in machine learning can help explore problems of interest to theoretical physics and to pure mathematics.

At the core of our programme is the question: how can AI help us with mathematics?

Type-checking graph neural networks

Talk

Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural network will be better at learning to execute a reasoning task (in terms of sample complexity) if its individual components align well with the target algorithm. Specifically, GNNs are claimed to align with dynamic programming (DP), a general problem-solving strategy which expresses many polynomial-time algorithms. However, has this alignment truly been demonstrated and theoretically quantified? Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP, going well beyond the initial observations over individual algorithms such as Bellman-Ford. Exposing this connection, we easily verify several prior findings in the literature. But more generally, our approach allowed us to immediately detect a misalignment of previous proposals for using GNN architectures for edge-centric tasks, and propose a fixed implementation, in a manner not at all unlike type checking. Our proposal demonstrates strong empirical results on the CLRS algorithmic reasoning benchmark, and we hope our exposition will serve as a foundation for building stronger algorithmically aligned GNNs in the future.
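The Bellman-Ford algorithm named above is the canonical example of the GNN/dynamic-programming alignment. As a hedged illustration of my own (not the speakers' implementation), the DP update can be written exactly as a message-passing layer: each node aggregates messages h[u] + w(u, v) from its in-neighbours with a min, i.e. a GNN layer over the tropical (min, +) semiring.

```python
# Illustrative sketch: Bellman-Ford as message passing. Each "layer"
# sends the message h[u] + w(u, v) along every edge and aggregates
# incoming messages (plus the node's own value) with min.
INF = float("inf")

def bellman_ford(n, edges, source):
    h = [INF] * n
    h[source] = 0.0
    for _ in range(n - 1):                    # n-1 rounds of message passing
        msgs = [[] for _ in range(n)]
        for u, v, w in edges:
            msgs[v].append(h[u] + w)          # message along directed edge (u, v)
        h = [min([h[v]] + msgs[v]) for v in range(n)]   # min-aggregation + self
    return h

# Small directed graph: 0 -> 1 (1.0), 1 -> 2 (2.0), 0 -> 2 (5.0);
# the shortest route from 0 to 2 goes via node 1.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
print(bellman_ford(3, edges, 0))   # [0.0, 1.0, 3.0]
```

Swapping the (min, +) aggregation for a learned aggregator is precisely where the alignment question, and the type-checking analogy of the talk, enters.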

Challenges in Persistent Homology and Applications

Keynote

Persistent homology offers a computable, non-linear dimension reduction technique for data analysis. We will give a brief introduction and move quickly to some brand-new techniques which have recently been applied to the study of immune cell infiltration in tumours.

Deep 3D Generative Modeling

Talk

Deep learning has taken the Computer Graphics world by storm. While remarkable progress has been reported in the context of supervised learning, the state of unsupervised learning, in contrast, remains quite primitive. In this talk, we will discuss recent advances where we have combined knowledge from traditional computer graphics and image formation models to enable deep generative modeling workflows. We will describe how we have combined modeling and rendering, in the unsupervised setting, to enable controllable and realistic image and animation production. The work is done in collaboration with various students and research colleagues.

Graph Spectral and Signal Processing tools for extracting structure from Scientific Data

Talk

In this talk, I will show how to leverage data geometry and topology, embedded within modern machine learning frameworks, to understand complex high dimensional scientific data. First, I will show how graphs can model underlying manifolds from which data are sampled and how graph spectral tools such as diffusion operators and signal processing tools such as filters can shed light on characteristics of the underlying manifold including geodesic distances, density, and curvature. Next, I will show how to combine graph diffusion geometry with topology to extract multi-granular features from the data for predictive analysis. Then, I will move up from the local geometry of individual data points to the global geometry of complex objects like data clouds, using graph signal processing to derive representations of these entities and optimal transport for distances between them. Finally, I will demonstrate how two neural networks use geometric inductive biases for generation and inference: GRASSY (geometric scattering synthesis network) for generating new molecules and molecular fold trajectories, and TrajectoryNet for performing dynamic optimal transport between time-course samples to understand the dynamics of cell populations. Throughout the talk, I will include examples of how these methods shed light on the inner workings of biomedical and cellular systems including cancer, immunology and neuroscientific systems. I will finish by highlighting future directions of inquiry.
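The diffusion operators mentioned above have a very compact concrete form. The sketch below is my own toy illustration, not the speaker's code: it builds the row-normalized Gaussian affinity P = D^-1 K on a small point set and applies a few diffusion steps, which act as a low-pass filter on signals over the data graph.

```python
# Illustrative sketch: a diffusion operator on a point cloud, built as a
# row-normalized Gaussian kernel. Powers of P smooth signals along the
# underlying manifold, the basic mechanism of diffusion geometry.
import math

def diffusion_operator(points, sigma=1.0):
    n = len(points)
    K = [[math.exp(-math.dist(p, q) ** 2 / (2 * sigma ** 2)) for q in points]
         for p in points]
    # row-normalize so each row is a probability distribution over neighbours
    return [[K[i][j] / sum(K[i]) for j in range(n)] for i in range(n)]

def diffuse(P, f):                    # one diffusion step: f -> P f
    n = len(f)
    return [sum(P[i][j] * f[j] for j in range(n)) for i in range(n)]

pts = [(0.0,), (0.5,), (1.0,), (5.0,)]   # three nearby points, one outlier
P = diffusion_operator(pts, sigma=0.7)
f = [1.0, 0.0, 0.0, 0.0]                 # a spike at the first point
for _ in range(3):
    f = diffuse(P, f)                    # the spike smooths over near points
print([round(x, 3) for x in f])          # the far point receives almost nothing
```

That the outlier stays untouched while nearby points share mass is exactly the locality that makes eigenvectors and filters of P reveal geodesic distances, density, and curvature.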

From Equivariance to Naturality

Talk

In this talk I will explain how groups, representations, and equivariant maps, the fundamental concepts of geometric deep learning, are special cases of the concepts category, functor, and natural transformation. Like equivariant maps, natural transformations capture the idea that the way we process an input should be essentially independent of which one of a number of equivalent (isomorphic) ways to encode an input we choose. Being more general, the categorical concepts open up new possibilities for "structure preserving machine learning" beyond what is currently considered in geometric DL. We will discuss examples such as natural graph networks and natural causal models.
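As a toy illustration of the special case the talk starts from (my own example, not the speaker's), the sketch below checks permutation equivariance directly: a node-wise map built from a permutation-invariant aggregate satisfies "relabel, then process" equals "process, then relabel", which is the naturality square for the groupoid of finite-set isomorphisms.

```python
# Illustrative toy: checking permutation equivariance, the simplest
# instance of naturality. f processes each element using only a
# permutation-invariant aggregate (the sum), the pattern behind
# DeepSets-style and GNN layers.
import itertools

def f(xs):
    total = sum(xs)                 # permutation-invariant aggregate
    return [x + total for x in xs]  # node-wise update

def permute(perm, xs):
    return [xs[i] for i in perm]

# values chosen to be exactly representable, so float sums are exact
xs = [1.0, 2.0, 4.0]
equivariant = all(
    f(permute(p, xs)) == permute(p, f(xs))
    for p in itertools.permutations(range(len(xs)))
)
print(equivariant)   # True
```

Replacing "permutation" with "isomorphism in an arbitrary category" is exactly the generalization from equivariant maps to natural transformations that the talk develops.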

Fisher information geometry of beta and Dirichlet distributions

Talk

The Fisher information metric is a Riemannian metric defined on the parameter space of families of probability distributions. It provides geometric tools that are useful to perform statistics and learning on probability distributions, inside a given parametric family. It has been well known since the 1980s that, in the case of normal distributions, Fisher information geometry amounts to hyperbolic geometry. In this talk we will investigate the geometry of Dirichlet distributions, and beta distributions as a particular case. We show that it is negatively curved and geodesically complete. This guarantees the uniqueness of the notion of mean and makes it a suitable geometry to apply learning algorithms such as K-means clustering.
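The hyperbolic case mentioned above can be stated concretely. The following is the classical computation behind the "normal = hyperbolic" statement, included here for orientation; it is standard background, not part of the talk's new results.

```latex
% For the family $N(\mu, \sigma^2)$ with parameters $(\mu, \sigma)$,
% the Fisher information matrix and the induced line element are
\[
  g(\mu,\sigma) =
  \begin{pmatrix}
    \dfrac{1}{\sigma^{2}} & 0 \\[4pt]
    0 & \dfrac{2}{\sigma^{2}}
  \end{pmatrix},
  \qquad
  ds^{2} = \frac{d\mu^{2} + 2\,d\sigma^{2}}{\sigma^{2}},
\]
% which, after the rescaling $\mu \mapsto \mu/\sqrt{2}$, is the Poincaré
% upper half-plane metric, of constant negative curvature $-1/2$.
```

For beta and Dirichlet distributions no such closed-form identification is available, which is what makes the curvature and completeness results of the talk nontrivial.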

Geometric Deep Learning: from Euclid to Drug Design

Keynote

For nearly two millennia, the word "geometry" was synonymous with Euclidean geometry, as no other types of geometry existed. Euclid's monopoly came to an end in the 19th century, when multiple examples of non-Euclidean geometries were constructed. However, these studies quickly diverged into disparate fields, with mathematicians debating the relations between different geometries and what defines one. A way out of this pickle was shown by Felix Klein in his Erlangen Programme, which proposed approaching geometry as the study of invariants, or symmetries, using the language of group theory. In the 20th century, these ideas were fundamental to the development of modern physics, culminating in the Standard Model.


The current state of deep learning somewhat resembles the situation in the field of geometry in the 19th century: on the one hand, in the past decade, deep learning has brought a revolution in data science and made possible many tasks previously thought to be beyond reach -- including computer vision, playing Go, and protein folding. On the other hand, we have a zoo of neural network architectures for various kinds of data, but few unifying principles. As in times past, it is difficult to understand the relations between different methods, inevitably resulting in the reinvention and re-branding of the same concepts.


Geometric Deep Learning aims to bring geometric unification to deep learning in the spirit of the Erlangen Programme. Such an endeavour serves a dual purpose: it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers, and gives a constructive procedure to incorporate prior knowledge into neural networks and build future architectures in a principled way. In this talk, I will overview the mathematical principles underlying Geometric Deep Learning on grids, graphs, and manifolds, and show some of the exciting and groundbreaking applications of these methods in the domains of computer vision, social science, biology, and drug design. 


(based on joint work with J. Bruna, T. Cohen, P. Veličković)

Unravelling A Geometric Conspiracy

Talk

The Hamiltonian Monte Carlo method has proven a powerful approach to efficiently exploring complex probability distributions.  That power, however, is something of a geometric conspiracy: a sequence of delicate mathematical machinations that ensure a means to explore distributions not just in theory but also in practice.  In this talk I will discuss the coincident geometrical properties that ensure the scalable performance of Hamiltonian Monte Carlo and present recent work developing new geometric theories that generalize each of these properties individually, providing a foundation for generalizing the method without compromising its performance.
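Two of the "coincident geometric properties" referred to above are the reversibility and volume preservation of the leapfrog integrator at the heart of Hamiltonian Monte Carlo. The sketch below is my own minimal illustration (a 1D standard-normal target, not the speaker's code): it implements one leapfrog trajectory and checks that the Hamiltonian is nearly conserved and that the dynamics are exactly reversible.

```python
# Illustrative sketch: the leapfrog (Stormer-Verlet) integrator used by
# Hamiltonian Monte Carlo, for the target p(q) ~ exp(-q^2/2), so the
# potential is U(q) = q^2/2 and its gradient is q.
import math

def grad_U(q):
    return q                                 # gradient of U(q) = q^2 / 2

def leapfrog(q, p, eps, steps):
    p -= 0.5 * eps * grad_U(q)               # initial half kick
    for _ in range(steps - 1):
        q += eps * p                         # drift
        p -= eps * grad_U(q)                 # full kick
    q += eps * p                             # final drift
    p -= 0.5 * eps * grad_U(q)               # final half kick
    return q, p

def hamiltonian(q, p):
    return 0.5 * q * q + 0.5 * p * p         # potential + kinetic energy

q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, eps=0.1, steps=50)
print(abs(hamiltonian(q1, p1) - hamiltonian(q0, p0)))  # small energy error
```

Flipping the momentum and integrating again returns the chain to its starting point, which together with volume preservation is what makes the Metropolis correction in HMC exact; generalizing these properties individually is the subject of the talk.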

Graph Representation Learning for Biomedical Discovery

Talk

Networks are pervasive in biology and medicine, from molecular interaction maps to dependencies between diseases in a person, all the way to populations encompassing human interactions. In this talk, we put forward an observation that long-standing principles in the Network Biology field--while often unspoken in machine learning research--can provide the conceptual grounding for representation learning, explain its current successes and limitations, and inform future advances. We will describe how we have developed meta-learning algorithms to enable graph neural networks for problems at the scientific frontier where labeled examples are incredibly scarce. We will conclude with applications in drug development and precision medicine where our algorithmic predictions were validated in human cells and led to the discovery of a new class of drugs.