Simplifying and Empowering Transformers for Large-Graph Representations. Q. Wu, W. Zhao, C. Yang, H. Zhang, F. Nie, H. Jiang, Y. Bian, J. Yan. Advances in Neural Information Processing Systems 36, 2024.
Refresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training. K. Huang, H. Jiang, M. Wang, G. Xiao, D. Wipf, X. Song, Q. Gan, Z. Huang, et al. arXiv preprint arXiv:2301.07482, 2023.
MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale. H. Jiang, R. Liu, X. Yan, Z. Cai, M. Wang, D. Wipf. arXiv preprint arXiv:2310.12457, 2023.
DiskGNN: Bridging I/O Efficiency and Model Accuracy for Out-of-Core GNN Training. R. Liu, Y. Wang, X. Yan, Z. Cai, M. Wang, H. Jiang, B. Tang, J. Li. arXiv preprint arXiv:2405.05231, 2024.
FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training. K. Huang, H. Jiang, M. Wang, G. Xiao, D. Wipf, X. Song, Q. Gan, Z. Huang, et al. Proceedings of the VLDB Endowment 17 (6), 1473–1486, 2024.