PreGNN: Hardware Acceleration to Take Preprocessing off the Critical Path in Graph Neural Networks

Donghyun Gouk, Seungkwan Kang, Miryeong Kwon, Junhyeok Jang, Hyunkyu Choi, Sangwon Lee, Myoungsoo Jung

Research output: Contribution to journal › Article › peer-review


In this paper, we observe that the main performance bottleneck of emerging graph neural networks (GNNs) is not the inference algorithms themselves, but their graph data preprocessing. To take such preprocessing off the critical path in GNNs, we propose PreGNN, a novel hardware automation architecture that accelerates all GNN preprocessing tasks from beginning to end. Specifically, PreGNN generates graphs in parallel, samples neighbor nodes of a given graph, and prepares graph datasets entirely in hardware. To reduce the long latency of GNN preprocessing over hardware, we also propose simple, efficient combinational logic that performs radix sort and arranges the data in a self-governing manner. The evaluation results show that PreGNN shortens the end-to-end latency of GNN inference by 10.7× while reducing energy consumption by 3.3×, compared to a GPU-only system.
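The abstract mentions that PreGNN sorts graph data with radix sort implemented in combinational logic. The hardware design itself is not detailed here; as an illustrative point of reference only, a software least-significant-digit (LSD) radix sort over non-negative integer keys (e.g., node IDs) might look like the following sketch:

```python
def radix_sort(keys, bits=32, radix_bits=8):
    """LSD radix sort for non-negative integer keys.

    Illustrative software analogue only: PreGNN performs sorting in
    dedicated combinational logic, which this sketch does not model.
    Processes `radix_bits` bits per pass, using stable bucketing so
    earlier passes' ordering is preserved.
    """
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)
        # Concatenate buckets in order to complete this digit's pass.
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

Because each pass only examines a fixed-width digit and bucketing is stable, the total work is linear in the number of keys per pass, which is one reason radix sort maps well onto simple, pipelined hardware.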

Original language: English
Pages (from-to): 117-120
Number of pages: 4
Journal: IEEE Computer Architecture Letters
Issue number: 2
Publication status: Published - 2022

Bibliographical note

Funding Information:
This work was supported by Samsung Science and Technology Foundation under Grant SRFC-IT2101-04.

Publisher Copyright:
© 2002-2011 IEEE.

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture


