Ericsson Cognitive Labs

Publications

This page lists all publications from Ericsson Cognitive Labs, with links to each manuscript and its code to ensure reproducibility. Tags are also provided so you can see our collaborators on each publication.

Highlighted

Evaluating Neighbor Explainability for Graph Neural Networks
Oscar Llorente Gonzalez, Rana Fawzy, Jared Keown, Michal Horemuz, Péter Vaderna, Sándor Laki, Roland Kotroczó, Rita Csoma, János Márk Szalai-Gindl
xAI   ·   10 Jul 2024   ·   doi:10.1007/978-3-031-63787-2_20
Explainability in Graph Neural Networks (GNNs) is a young field that has grown rapidly in recent years. In this publication we address the problem of determining how important each neighbor is to a GNN when classifying a node, and how to measure performance on this specific task. To do this, several known explainability methods are reformulated to obtain neighbor importance, and four new metrics are presented. Our results show that there is almost no difference between the explanations provided by gradient-based techniques in the GNN domain. In addition, many explainability techniques fail to identify important neighbors when GNNs without self-loops are used.

All


2024

Evaluating Neighbor Explainability for Graph Neural Networks
Oscar Llorente Gonzalez, Rana Fawzy, Jared Keown, Michal Horemuz, Péter Vaderna, Sándor Laki, Roland Kotroczó, Rita Csoma, János Márk Szalai-Gindl
xAI   ·   10 Jul 2024   ·   doi:10.1007/978-3-031-63787-2_20
Explainability in Graph Neural Networks (GNNs) is a young field that has grown rapidly in recent years. In this publication we address the problem of determining how important each neighbor is to a GNN when classifying a node, and how to measure performance on this specific task. To do this, several known explainability methods are reformulated to obtain neighbor importance, and four new metrics are presented. Our results show that there is almost no difference between the explanations provided by gradient-based techniques in the GNN domain. In addition, many explainability techniques fail to identify important neighbors when GNNs without self-loops are used.

2023