All talk recordings are now available. You can find the links in the schedule.


Learning and reasoning with graph-structured representations is gaining increasing interest in both academia and industry, due to its fundamental advantages over more traditional unstructured methods in supporting interpretability, causality, and transferability / inductive generalization. Recently, there has been a surge of new techniques in the context of deep learning, such as graph neural networks, for learning graph representations and performing reasoning and prediction, and these have achieved impressive progress. However, there is still a long way to go toward satisfactory results in long-range multi-step reasoning, scalable learning on very large graphs, and flexible modeling of graphs in combination with other dimensions such as temporal variation and other modalities such as language and vision. New advances in theoretical foundations, models, and algorithms, as well as empirical discoveries and applications, are therefore all highly desirable. The aims of this workshop are to bring together researchers to dive deeply into some of the most promising methods under active exploration today, discuss how we can design new and better benchmarks, identify impactful application domains, and encourage discussion and foster collaboration.

Important Dates

  • Submission deadline: Friday, 26 April 2019, 23:59 AoE (extended from Thursday, 18 April 2019)
  • Author notification: Wednesday, 15 May 2019 (updated from Friday, 10 May 2019)
  • Camera ready deadline: Friday, 31 May 2019
  • Workshop: 15 June 2019

Call for Papers

We invite the submission of papers on topics including, but not limited to:

  • Deep learning methods on graphs/manifolds/relational data (e.g., graph neural networks)
  • Deep generative models of graphs (e.g., for drug design)
  • Unsupervised graph/manifold/relational embedding methods (e.g., hyperbolic embeddings)
  • Optimization methods for graphs/manifolds/relational data
  • Relational or object-level reasoning in machine perception
  • Relational/structured inductive biases for reinforcement learning, modeling multi-agent behavior and communication
  • Neural-symbolic integration
  • Theoretical analysis of capacity/generalization of deep learning models for graphs/manifolds/relational data
  • Benchmark datasets and evaluation metrics

Submissions should be no more than 4 pages, excluding references and supplementary materials. All submissions must be in PDF format as a single file (including supplementary materials), use the workshop template, and be submitted through CMT via this link.

The review process is single-round and double-blind (submissions must be anonymized). Previously published work and concurrent submissions are acceptable.

All accepted papers will be presented as posters during the workshop and listed on the website. Additionally, a small number of accepted papers will be selected to be presented as contributed or spotlight talks.

For any questions, please email



Program Committee

  • Alvaro Sanchez-Gonzalez, DeepMind
  • Andrea Tacchetti, DeepMind
  • Andreea Deac, University of Cambridge
  • Beliz Gunel, Stanford University
  • Ben J. Day, University of Cambridge
  • Cătălina Cangea, University of Cambridge
  • Christopher Morris, TU Dortmund University
  • Ekansh Sharma, University of Toronto
  • Frederic Sala, Stanford University
  • Guang-He Lee, MIT
  • Guillem Cucurull, Element AI
  • Haggai Maron, Weizmann Institute of Science
  • Hanjun Dai, Georgia Tech
  • Hao He, MIT
  • Harris Chan, University of Toronto
  • Jakub Tomczak, Qualcomm AI Research
  • Jiaxuan You, Stanford University
  • Joey Bose, McGill University
  • Kun Xu, IBM T.J. Watson Research Center
  • Lisa Zhang, University of Toronto
  • Luca Venturi, New York University
  • Marc Law, NVIDIA
  • Matthias Fey, TU Dortmund University
  • Min Jae Song, New York University
  • Nick Choma, New York University
  • Pau Riba, Computer Vision Center
  • Perouz Taslakian, Element AI
  • Petar Veličković, DeepMind
  • Rafael Gomez-Bombarelli, MIT
  • Renjie Liao, University of Toronto
  • Rex Ying, Stanford University
  • Rianne van den Berg, Google Brain
  • Shagun Sodhani, MILA
  • Thomas Kipf, University of Amsterdam
  • Victor Bapst, DeepMind
  • Will Hamilton, McGill University
  • Yoon Kim, Harvard University
  • Yujia Li, DeepMind
  • Yuntian Deng, Harvard University
  • Zhengdao Chen, New York University
  • Zhourong Chen, Hong Kong University of Science and Technology