
Computational capabilities of graph neural networks

Journal Article


Full text available (Open Access)

Abstract


  • In this paper, we consider the universal approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can process structured data inputs, e.g., acyclic, cyclic, directed, or undirected graphs. This class of neural networks implements a function $\tau(\boldsymbol{G}, n) \in \mathbb{R}^m$ that maps a graph $\boldsymbol{G}$ and one of its nodes $n$ onto an $m$-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and it includes most of the practically useful functions on graphs; the only known exception arises when the input graph contains particular patterns of symmetries under which unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for classic feedforward neural networks. Some experimental examples illustrate the computational capabilities of the proposed model.
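The computation the abstract describes can be sketched informally: each node holds a state vector that is repeatedly updated from its neighbours' states by a contractive transition function until a fixed point is reached, and a readout then maps each node's state to an output in $\mathbb{R}^m$. The following minimal sketch illustrates that idea; the graph, the damped-average transition, and all dimensions below are hypothetical illustrative choices, not the paper's actual parameterization.

```python
# Illustrative sketch of a GNN-style fixed-point computation (assumptions,
# not the paper's implementation): states are iterated to (near) convergence,
# then a readout maps each node's state to a scalar output.

def gnn_forward(adj, labels, state_dim=2, iters=50, damping=0.5):
    """adj: {node: [neighbours]}; labels: {node: float label}."""
    # Initialise every node's state to zeros.
    state = {n: [0.0] * state_dim for n in adj}
    for _ in range(iters):
        new_state = {}
        for n, neigh in adj.items():
            # Transition function: damped average of neighbour states plus
            # the node's own label. With damping < 1 this is a contraction,
            # so the iteration converges to a unique fixed point.
            agg = [0.0] * state_dim
            for u in neigh:
                for k in range(state_dim):
                    agg[k] += state[u][k]
            deg = max(len(neigh), 1)
            new_state[n] = [damping * agg[k] / deg + (1 - damping) * labels[n]
                            for k in range(state_dim)]
        state = new_state
    # Readout: here simply the sum of the state components of each node.
    return {n: sum(v) for n, v in state.items()}

# Tiny undirected triangle graph with one labelled node.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
labels = {0: 1.0, 1: 0.0, 2: 0.0}
out = gnn_forward(adj, labels)
```

In the paper the transition and readout are learned neural networks rather than the fixed linear update used here, but the structure of the computation (iterate node states to a fixed point, then read out per node) is the same.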

Authors


  •   Scarselli, Franco (external author)
  •   Gori, Marco (external author)
  •   Tsoi, Ah Chung
  •   Hagenbuchner, M.
  •   Monfardini, Gabriele (external author)

Publication Date


  • 2009

Citation


  • Scarselli, F., Gori, M., Tsoi, A., Hagenbuchner, M. & Monfardini, G. (2009). Computational capabilities of graph neural networks. IEEE Transactions on Neural Networks, 20 (1), 81-102.

Scopus EID


  • 2-s2.0-58649092639

RO Full-text URL


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=2715&context=infopapers

RO Metadata URL


  • http://ro.uow.edu.au/infopapers/1695

Number Of Pages


  • 21

Start Page


  • 81

End Page


  • 102

Volume


  • 20

Issue


  • 1
