Dr Maximilian Nickel



Facebook AI Research

Max is a Research Scientist at Facebook AI Research in New York, working on foundational methods for learning from structured and temporal information and on their applications in artificial intelligence, network science, and computational social science. Before joining FAIR in 2016, he was a postdoctoral fellow at MIT, where he was affiliated with the Laboratory for Computational and Statistical Learning and the Center for Brains, Minds and Machines. In 2013, Max earned his PhD summa cum laude from the Ludwig Maximilian University of Munich. From 2010 to 2013 he worked as a research assistant at Siemens Corporate Technology.


Geometric Representation Learning in Relational Domains

Learning from relational knowledge representations often involves complex relational patterns and large amounts of uncertainty. Moreover, domains such as the Web, bioinformatics, or natural language understanding can consist of billions of entities and relationships. In these settings, representation learning has become an invaluable approach for making statistical inferences, as it allows us to learn high-quality models and to scale to large datasets with billions of relations.

Recently, new attention has been given to an important aspect of such methods, i.e., the geometry of the representation space. Methods such as hyperbolic embeddings and Riemannian generative models show that non-Euclidean geometries can provide significant advantages for modeling relational data, e.g., with regard to interpretability, scalability, and latent semantics.

In this talk, I will provide an overview of our recent work on such geometric approaches to representation learning. I will first discuss how structural properties of relational data (such as latent hierarchies) are connected to the geometry of the embedding space and how methods such as hyperbolic embeddings allow us to learn parsimonious representations in these cases. Moreover, I will show how the embeddings can be used to discover latent hierarchies and can be applied to diverse tasks in NLP and bioinformatics. In addition, I will discuss how we can model flexible probability distributions over such geometric representations through Riemannian continuous normalizing flows.
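To make the idea of hyperbolic embeddings concrete, the following is a minimal sketch (not the speaker's implementation) of the geodesic distance in the Poincaré ball model, the distance function typically optimized when embedding hierarchies into hyperbolic space; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between points u, v in the Poincaré ball
    (illustrative sketch; assumes ||u|| < 1 and ||v|| < 1).

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / denom)

origin = np.array([0.0, 0.0])
leaf = np.array([0.9, 0.0])

# Distances grow rapidly toward the ball's boundary, so points near the
# boundary have "room" for exponentially many mutually distant neighbors;
# this matches the exponential growth of nodes in a tree and is why
# hierarchies embed parsimoniously in hyperbolic space.
print(poincare_distance(origin, leaf))
```

Here the hyperbolic distance from the origin to a point at Euclidean norm 0.9 is already about 2.94, much larger than the Euclidean distance of 0.9, which illustrates the boundary effect that makes low-dimensional hyperbolic embeddings effective for tree-like data.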
