Graph Isomorphism Network

[[concept]]
Graph isomorphism network

A graph isomorphism network (GIN) layer is defined as

$$(x^{\ell})_v = \sigma\!\left(W\left((1+\varepsilon)\,(x^{\ell-1})_v + \sum_{u \in N(v)} (x^{\ell-1})_u\right)\right)$$
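
A minimal sketch of one such update in dense matrix form (pure NumPy; the names `X`, `A`, `W`, the `eps` default, and the choice of ReLU for $\sigma$ are all assumptions for illustration):

```python
import numpy as np

def gin_layer(X, A, W, eps=0.0):
    """One GIN update for all nodes at once.

    X : (n, d)  node features x^{l-1}
    A : (n, n)  binary adjacency matrix of the graph
    W : (d, d') weight matrix; ReLU stands in for sigma
    """
    # (1 + eps) * x_v + sum of neighbour features, computed for every node simultaneously
    H = (1.0 + eps) * X + A @ X
    return np.maximum(H @ W, 0.0)

# Toy usage on a path graph 0 - 1 - 2
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gin_layer(X, A, W, eps=0.1).shape)  # (3, 4)
```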

We can write this layer in terms of an aggregation function $\varphi$ over the neighborhood multiset:

$$\varphi\big(\{(x^{\ell-1})_u : u \in N(v)\}\big) = \sum_{u \in N(v)} (x^{\ell-1})_u$$

and a combine (update) function $\phi$:

$$\phi\big((x^{\ell-1})_v,\ \varphi(\cdot)\big) = W\big((1+\varepsilon)\,(x^{\ell-1})_v + \varphi(\cdot)\big)$$

so that the full layer is $(x^{\ell})_v = \sigma\big(\phi\big((x^{\ell-1})_v,\ \varphi(\{(x^{\ell-1})_u : u \in N(v)\})\big)\big)$.
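
The same update written per node, split explicitly into the two functions above (a sketch; the function names and the use of ReLU for $\sigma$ are illustrative, not from the source):

```python
import numpy as np

def aggregate(neighbor_feats):
    """varphi: sum over the multiset of neighbour features, shape (k, d) -> (d,)."""
    return np.sum(neighbor_feats, axis=0)

def combine(x_v, agg, W, eps=0.0):
    """phi: W((1 + eps) x_v + aggregated neighbour features)."""
    return W @ ((1.0 + eps) * x_v + agg)

def gin_node_update(x_v, neighbor_feats, W, eps=0.0):
    """(x^l)_v = sigma(phi((x^{l-1})_v, varphi({(x^{l-1})_u : u in N(v)})))."""
    return np.maximum(combine(x_v, aggregate(neighbor_feats), W, eps), 0.0)
```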

The motivation for this architecture is that injective GNNs are as powerful as the WL test, and this holds here as long as $\phi$ and $\varphi$ as defined above are injective.

Notes

$\phi \circ \varphi$ is a perceptron or MLP over the neighborhood multiset (a sketch with an explicit MLP follows the bullets below)

  • By the universal approximation theorem (Hornik et al., 1989), an MLP can model the required injective functions
  • Injective $\phi$ and $\varphi$ exist for multisets $X$ (proven in the paper where GINs are introduced)
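
A sketch of the variant these notes point toward, with the single linear map replaced by a two-layer MLP over the aggregated multiset (layer sizes, names, and the absence of an outer $\sigma$ are assumptions, not the paper's exact parameterisation):

```python
import numpy as np

def mlp(H, W1, b1, W2, b2):
    """Two-layer MLP applied row-wise; stands in for the single W above."""
    return np.maximum(H @ W1 + b1, 0.0) @ W2 + b2

def gin_layer_mlp(X, A, W1, b1, W2, b2, eps=0.0):
    """GIN update MLP((1 + eps) x_v + sum_{u in N(v)} x_u) for all nodes at once."""
    H = (1.0 + eps) * X + A @ X
    return mlp(H, W1, b1, W2, b2)
```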

Review

#flashcards/math/dsg

What is the motivation for the Graph Isomorphism Network (GIN) architecture?
-?-
Injective GNNs are as powerful as the WL test, and we can define the update for GINs to be injective.

Mentions

File
GINs are maximally powerful for anonymous input graphs
2025-03-03 graphs lecture 11
2025-04-16 lecture 21