Adaptive frame averaging for invariant and equivariant representations
Prof. Bruno Ribeiro
Abstract

Many machine learning tasks require learning functions that are invariant or equivariant to known symmetries of the input data. Unfortunately, it remains challenging to design neural network architectures that simultaneously (a) respect the symmetries, (b) are expressive enough to achieve small generalization error, and (c) are computationally efficient. Murphy et al. [1, 2] have shown that it is possible to sacrifice (c), computational efficiency, for (a) and (b) through symmetrization (the Reynolds operator). Recently, Puny et al. [3] proposed Frame Averaging, a method that improves the efficiency of symmetrization. However, in some tasks Frame Averaging can lead to large generalization errors, a consequence of a fundamental trade-off between computational efficiency and generalization error in symmetrization. This project will study this trade-off and propose methods that achieve both computational efficiency and better generalization error in the affected tasks.
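
To make the trade-off concrete, the following is a brief sketch of the two operators mentioned above, following the standard definitions used in [1, 3]; the notation here is illustrative rather than taken verbatim from either paper. Given a function $\phi$ and a finite group $G$ acting on the input space, symmetrization via the Reynolds operator averages over the entire group,
\[
\langle \phi \rangle_G(X) \;=\; \frac{1}{|G|} \sum_{g \in G} \phi\big(g^{-1} \cdot X\big),
\]
which is exactly $G$-invariant but requires $|G|$ evaluations of $\phi$ (e.g., $n!$ evaluations for permutations of $n$ elements). Frame Averaging instead averages over a frame $F(X) \subseteq G$, a set-valued function satisfying the equivariance condition $F(g \cdot X) = g\,F(X)$:
\[
\langle \phi \rangle_F(X) \;=\; \frac{1}{|F(X)|} \sum_{g \in F(X)} \phi\big(g^{-1} \cdot X\big),
\qquad
\langle \Phi \rangle_F(X) \;=\; \frac{1}{|F(X)|} \sum_{g \in F(X)} g \cdot \Phi\big(g^{-1} \cdot X\big),
\]
for the invariant and equivariant cases, respectively. When $|F(X)| \ll |G|$, the average is far cheaper than the full Reynolds operator; this project studies how the choice of frame interacts with generalization error.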
References
[1] Murphy, Ryan L., Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs. In International Conference on Learning Representations (ICLR), 2019.
[2] Murphy, Ryan L., Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational Pooling for Graph Representations. In International Conference on Machine Learning (ICML), pp. 4663-4673. PMLR, 2019.
[3] Puny, Omri, Matan Atzmon, Heli Ben-Hamu, Edward J. Smith, Ishan Misra, Aditya Grover, and Yaron Lipman. Frame Averaging for Invariant and Equivariant Network Design. In International Conference on Learning Representations (ICLR), 2022.