G-MATCH: Graph-Structured Memory with Interpretable Motif Matching for Future Node Affinity Prediction


Weijue Huang

Abstract

Predicting future node affinity in dynamic graphs is essential for applications such as recommender systems. However, existing methods, including state-of-the-art approaches such as NAVIS, rely on continuous vector representations that struggle to explicitly capture, reason over, and memorize discrete, structured interaction motifs, limiting both interpretability and generalization. To address this, we introduce G-MATCH (Graph-structured Memory with Attentive Template Consensus for Heterogeneous interactions), a novel paradigm that reformulates a node's state as a dynamic, heterogeneous graph of learnable interaction motifs. G-MATCH incorporates four key innovations: (1) state evolution via graph matching and dynamic motif creation and pruning, (2) a global motif bank for cross-node knowledge transfer, (3) interpretable affinity prediction through motif attribution, and (4) optimization with a listwise ranking loss and structured regularization. Extensive experiments on future affinity prediction tasks from the Temporal Graph Benchmark (TGB) and on link prediction datasets converted to the affinity setting demonstrate that G-MATCH consistently outperforms all strong baselines, including NAVIS, achieving an average improvement of +4.2% in NDCG@10. The model also excels in few-shot and limited-information settings. Ablation studies confirm the critical role of each component, and case studies highlight its unique capability for explainable, motif-level reasoning. The transition from linear states to graph-structured memory marks a significant advance, enabling superior performance and unprecedented interpretability in modeling complex node interactions.
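The abstract reports gains in NDCG@10, the standard ranking metric for TGB's node affinity task. As a point of reference for readers unfamiliar with the metric, the sketch below computes NDCG@k from a list of ground-truth relevance scores ordered by the model's predicted ranking; the example relevance values are illustrative, not drawn from the paper.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the predicted ranking divided by the ideal DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: the predicted ranking places the most relevant item (rel=3) second.
print(round(ndcg_at_k([1, 3, 0, 2], k=10), 4))  # 0.7884
```

A perfectly ordered list scores 1.0, so a +4.2% average improvement in NDCG@10 corresponds to the model's top-10 predictions sitting measurably closer to the ideal ordering of future interaction partners.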


How to Cite

G-MATCH: Graph-Structured Memory with Interpretable Motif Matching for Future Node Affinity Prediction. (2026). Journal of Sustainability, Policy, and Practice, 2(2), 7-17. https://schoalrx.com/index.php/jspp/article/view/92
