NeurIPS 2020

Graphon Neural Networks and the Transferability of Graph Neural Networks

Meta Review

This article provides a theoretical analysis of the behaviour of graph neural networks (GNNs) applied to a deterministic sequence of graphs converging to a graphon. It bounds the difference between the output of a GNN on a finite graph and on its graphon limit, as well as between the outputs of the same GNN on graphs of different sizes. As such, it provides useful insights into GNNs and into the transferability of parameters learned on a small graph to larger graphs.

An important point raised by R3, not adequately addressed in the authors' response, needs to be addressed in the final version: the analysis is based on a deterministic sequence of graphs, not on random graphs drawn from a graphon. Hence, the claim that this result shows a GNN trained on a single graph can be transferred to other graphs drawn from the same graphon is incorrect (the distinction is sketched after the quoted passages below). The authors should remove the following sentences and clarify the contributions of the paper.

Lines 4-5: "As a byproduct, coefficient can also be transferred to different graphs, thereby motivating the analysis of transferability across graphs."

Lines 26-29: "In GNNs, there are two typical scenarios where transferability is desirable. The first involves applications in which we would like to reproduce a model trained on a graph to multiple other graphs without retraining. This would be the case, for instance, of replicating a GNN model for analysis of NYC air pollution data on the air pollution sensor network in Philadelphia."

Lines 248-251: "This result is useful in two important ways. First, it means that, provided that its design parameters are chosen carefully, a GNN trained on a given graph can be transferred to multiple other graphs in the same graphon family with performance guarantees. This is desirable in problems where the same task has to be replicated on different networks, because it eliminates the need for retraining the GNN on every graph."
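To make the distinction concrete, here is a brief sketch in illustrative notation (the symbols below are standard in graphon theory and are not taken from the paper). The analysis covers a deterministic sequence of graphs whose induced graphons converge to a limit, whereas the removed claims concern graphs sampled at random from that limit:

% Illustrative notation only: W_{G_n} denotes the step graphon induced by G_n,
% \|\cdot\|_\square the cut norm, and \mathbb{G}(n, W) the W-random graph model.
\[
\text{Setting covered by the analysis:}\qquad
\|W_{G_n} - W\|_\square \xrightarrow[n \to \infty]{} 0
\quad\text{for a fixed, deterministic sequence } \{G_n\},
\]
\[
\text{Setting invoked by the removed claims:}\qquad
G_n \sim \mathbb{G}(n, W)
\quad\text{(graphs sampled at random from } W\text{)}.
\]

Extending the bounds from the first setting to the second would require an additional high-probability argument relating the sampled graphs to the graphon, which is outside the scope of the present analysis.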