Computing Techniques Seminar on Deep Learning – Nov. 30

There will be a Computing Techniques Seminar, “Steerable Message Passing Neural Networks on Graphs and their Applications,” on Thursday, November 30, from 2:30 to 3:30 p.m. in Wilson Hall Curia II (WH2SW). The seminar will be given by Shubhendu Trivedi of the Toyota Technological Institute at Chicago (University of Chicago).

Abstract: Supervised learning over graphs, i.e., where each datapoint is a graph, is an increasingly important problem. The usual approach is to embed each graph into an appropriate feature space and then train a classifier on top of it. Recently, various models have been proposed that seek to imitate classical Convolutional Neural Networks (CNNs), with the goal of learning discriminative graph features appropriate to the domain jointly with the classifier. Message-Passing Neural Networks (MPNNs) resemble CNNs in that, at each layer, each neuron aggregates information over a local neighborhood, so deeper layers see more of the input than earlier ones. However, MPNNs differ crucially from CNNs in their internal symmetries: the internal representations of MPNNs do not change in a controlled manner when vertices are relabeled. In this work, we propose a steerable MPNN in which the internal representations are equivariant, transforming according to some representation of the symmetric group, while the final embedding of the graph is truly invariant to relabeling of vertices. We test the model on various molecular modeling datasets and obtain strong results in comparison to MPNNs and other graph CNNs. We also discuss applications to point clouds and two applications in the physical sciences.
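To illustrate the invariance property the abstract refers to, the sketch below shows a toy sum-aggregation message-passing layer followed by a sum readout, whose graph-level embedding does not change when the vertices are relabeled. This is only a minimal illustration of permutation invariance, not the steerable architecture presented in the talk; the graph, weights, and helper names (`mpnn_layer`, `graph_embedding`) are made up for the example.

```python
# Toy sketch: a sum-aggregation message-passing layer with a sum readout,
# demonstrating invariance of the graph embedding to vertex relabeling.
# Not the speaker's steerable MPNN; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mpnn_layer(A, X, W):
    """One message-passing step: each vertex sums its own and its
    neighbors' features, then applies a shared linear map and ReLU."""
    return np.maximum((A + np.eye(A.shape[0])) @ X @ W, 0.0)

def graph_embedding(A, X, W):
    """Sum-pool vertex features after one message-passing layer."""
    return mpnn_layer(A, X, W).sum(axis=0)

# A small random undirected graph with 5 vertices and 3-d vertex features.
n, d = 5, 3
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                      # symmetric adjacency, no self-loops
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

# Relabel vertices with a random permutation P: A -> P A P^T, X -> P X.
P = np.eye(n)[rng.permutation(n)]
A_perm, X_perm = P @ A @ P.T, P @ X

# The graph-level embedding is unchanged by the relabeling.
print(np.allclose(graph_embedding(A, X, W),
                  graph_embedding(A_perm, X_perm, W)))   # True
```

The vertex-level features here are permutation-equivariant (they get reordered along with the vertices), and only the summed readout is invariant; the model in the talk instead keeps internal representations that transform according to representations of the symmetric group.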