Ericsson Cognitive Labs

Publications

This page lists all publications from Ericsson Cognitive Labs, with links to each manuscript and to the code needed to reproduce the results. Each publication is also tagged so you can see the collaborators involved.

Highlighted

Evaluating Neighbor Explainability for Graph Neural Networks
Oscar Llorente Gonzalez, Rana Fawzy, Jared Keown, Michal Horemuz, Péter Vaderna, Sándor Laki, Roland Kotroczó, Rita Csoma, János Márk Szalai-Gindl
arXiv   ·   14 Nov 2023   ·   doi:10.48550/arXiv.2311.08118
Explainability in Graph Neural Networks (GNNs) is a young field that has grown rapidly in recent years. In this publication we address the problem of determining how important each neighbor is to the GNN when classifying a node, and how to measure performance for this specific task. To do this, various known explainability methods are reformulated to obtain neighbor importance, and four new metrics are presented. Our results show that there is almost no difference between the explanations provided by gradient-based techniques in the GNN domain. In addition, many explainability techniques fail to identify important neighbors when GNNs without self-loops are used.

All

Evaluating Neighbor Explainability for Graph Neural Networks
Oscar Llorente Gonzalez, Rana Fawzy, Jared Keown, Michal Horemuz, Péter Vaderna, Sándor Laki, Roland Kotroczó, Rita Csoma, János Márk Szalai-Gindl
arXiv   ·   14 Nov 2023   ·   doi:10.48550/arXiv.2311.08118
Explainability in Graph Neural Networks (GNNs) is a young field that has grown rapidly in recent years. In this publication we address the problem of determining how important each neighbor is to the GNN when classifying a node, and how to measure performance for this specific task. To do this, various known explainability methods are reformulated to obtain neighbor importance, and four new metrics are presented. Our results show that there is almost no difference between the explanations provided by gradient-based techniques in the GNN domain. In addition, many explainability techniques fail to identify important neighbors when GNNs without self-loops are used.
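As a rough illustration of the kind of gradient-based neighbor importance the paper evaluates, the sketch below scores each neighbor of a target node by the gradient of the target-class logit with respect to that neighbor's features. The one-layer GCN-style model, the toy graph, and all names are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of gradient-based neighbor importance for node
# classification (illustrative only, not the paper's implementation).
import torch

def neighbor_importance(adj, x, weight, target_node, target_class):
    """Score each neighbor of `target_node` by the L2 norm of the gradient
    of the target-class logit with respect to that neighbor's features."""
    x = x.clone().requires_grad_(True)
    # One GCN-style layer without self-loops: mean-aggregate neighbor features.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    logits = (adj @ x / deg) @ weight            # [num_nodes, num_classes]
    logits[target_node, target_class].backward()
    grads = x.grad                               # same shape as x
    neighbors = adj[target_node].nonzero(as_tuple=True)[0]
    scores = grads[neighbors].norm(dim=1)        # one score per neighbor
    return neighbors, scores

# Toy example: 5 nodes, 4 features, 3 classes.
torch.manual_seed(0)
adj = torch.tensor([[0, 1, 1, 0, 0],
                    [1, 0, 0, 1, 0],
                    [1, 0, 0, 0, 1],
                    [0, 1, 0, 0, 0],
                    [0, 0, 1, 0, 0]], dtype=torch.float)
x = torch.randn(5, 4)
weight = torch.randn(4, 3)
print(neighbor_importance(adj, x, weight, target_node=0, target_class=2))
```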
Model Uncertainty based Active Learning on Tabular Data using Boosted Trees
Sharath M Shankaranarayana
arXiv   ·   30 Oct 2023   ·   doi:10.48550/arXiv.2310.19573
Supervised machine learning relies on the availability of good labelled data for model training. Labelled data is acquired through human annotation, which is a cumbersome and costly process, often requiring subject matter experts. Active learning is a sub-field of machine learning that helps obtain labelled data efficiently by selecting the most valuable data instances for model training and querying the labels for only those instances from the human annotator. Recently, a lot of research has been done in the field of active learning, especially for deep neural network based models. Although deep learning shines when dealing with image/text/multimodal data, gradient boosting methods…
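For readers unfamiliar with the setup, the sketch below shows a generic uncertainty-based query step with a boosted-tree model in scikit-learn. It uses plain predictive entropy as the acquisition score, which is a stand-in rather than the model-uncertainty estimate proposed in the paper; the data split and names are assumptions for the example.

```python
# Illustrative active-learning query step with a boosted-tree classifier
# (generic sketch, not the paper's exact method).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(50, 8))        # small labelled seed set
y_labelled = rng.integers(0, 2, size=50)
X_pool = rng.normal(size=(500, 8))           # unlabelled pool

model = GradientBoostingClassifier().fit(X_labelled, y_labelled)

# Predictive entropy as the acquisition score: higher entropy = less certain.
proba = model.predict_proba(X_pool)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)

# Query the top-k most uncertain pool instances for human annotation.
k = 10
query_idx = np.argsort(entropy)[-k:]
print("Indices to send to the annotator:", query_idx)
```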
A matter of attitude: Focusing on positive and active gradients to boost saliency maps
Oscar Llorente Gonzalez, Jaime Boal, Eugenio F. Sánchez-Úbeda
arXiv   ·   22 Sep 2023   ·   doi:10.48550/arXiv.2309.12913
Saliency maps have become one of the most widely used interpretability techniques for convolutional neural networks (CNNs) due to their simplicity and the quality of the insights they provide. However, there are still some doubts about whether these insights are a trustworthy representation of what CNNs use to come up with their predictions. This paper explores how rescuing the sign of the gradients from the saliency map can lead to a deeper understanding of multi-class classification problems. Using both pretrained and trained-from-scratch CNNs, we show that considering the sign and the effect not only of the correct class but also the influence of the other classes allows us to better identify the pixels of the image that the network is really focusing on. Furthermore, how occluding or altering those pixels is expected to affect the outcome also becomes clearer.
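To make the idea of keeping the gradient's sign concrete, the sketch below computes a saliency map that retains only positive input gradients, i.e. pixels whose increase would raise the target-class score. The tiny CNN and random input are placeholders, not the models or code used in the paper.

```python
# Minimal positive-gradient saliency sketch (placeholder model and input).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)
logits = cnn(image)
target_class = logits.argmax(dim=1).item()

# Gradient of the target-class logit with respect to the input pixels.
logits[0, target_class].backward()
grad = image.grad[0]                                      # [3, 32, 32]

# Standard saliency uses |grad|; here only the positive part is kept,
# i.e. pixels whose increase would raise the target-class score.
positive_saliency = grad.clamp(min=0).max(dim=0).values   # [32, 32]
print(positive_saliency.shape)
```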