AMOSL: Adaptive Modality-Wise Structure Learning in Multi-View Graph Neural Networks for Enhanced Unified Representation

Authors

Peiyu Liang, Hongchang Gao and Xubin He, Temple University, USA

Abstract

While Multi-view Graph Neural Networks (MVGNNs) excel at leveraging diverse modalities for learning object representations, existing methods assume identical local topology structures across modalities, overlooking real-world discrepancies. This causes MVGNNs to struggle with modality fusion and representation denoising. To address these issues, we propose Adaptive Modality-wise Structure Learning (AMoSL). AMoSL captures node correspondences between modalities via optimal transport and learns them jointly with graph embeddings. To enable efficient end-to-end training, we employ an efficient solution to the resulting complex bilevel optimization problem. Furthermore, AMoSL adapts to downstream tasks through unsupervised learning on inter-modality distances. The effectiveness of AMoSL is demonstrated by its ability to train more accurate graph classifiers on six benchmark datasets.
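The abstract does not specify AMoSL's exact transport formulation; the sketch below only illustrates the general idea it invokes, entropy-regularized optimal transport (Sinkhorn iterations) producing soft node correspondences between two modalities of the same graph. The function name, cost choice, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b : marginal distributions over the nodes of the two modality graphs
    C    : pairwise cost matrix between node embeddings
    reg  : entropic regularization strength (illustrative choice)
    Returns the transport plan T, whose (i, j) entry is the soft
    correspondence weight between node i of modality 1 and node j of modality 2.
    """
    K = np.exp(-C / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)             # scale columns to match marginal b
        u = a / (K @ v)               # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

# Toy example: node embeddings from two modalities of the same 5-node graph.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(5, 8))          # node embeddings, modality 1
X2 = rng.normal(size=(5, 8))          # node embeddings, modality 2
C = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
C = C / C.max()                       # normalize to keep the kernel stable
T = sinkhorn(np.full(5, 0.2), np.full(5, 0.2), C)
print(T.round(3))                     # soft node-correspondence matrix
```

In AMoSL this alignment is not computed once but learned jointly with the graph embeddings, which is what gives rise to the bilevel optimization problem mentioned above.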

Keywords

Multi-view Graph Neural Network, Graph Classification, Graph Mining, Optimal Transport

Full Text: Volume 14, Number 10