The Impact and Detection of Adversarial Attacks on Graph Neural Networks
Keywords:
Graph Neural Network

Abstract
Graph neural networks (GNNs) are among the most widely used learning models owing to their strong performance in modeling and analyzing graphs across many applications.
Despite the effectiveness of GNNs in node classification, edge prediction, and even whole-graph classification, even slight changes to a graph's topology or node features can degrade the performance and stability of these networks and lead to undesirable results.
In this research, we study the structure of GNNs and how to train them to classify the nodes of a well-known citation network graph, and we examine the impact of adversarial attacks on these networks and how such attacks can be detected.
Experimental results demonstrate the negative impact of adversarial attacks on GNN performance, which allows attackers to exploit security vulnerabilities and limits the applicability of these networks.
The results also show that these adversarial attacks on the network can be detected.
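To make the threat model concrete, the following is a minimal sketch (not the paper's actual experiment) of why structural perturbations affect GNN outputs: a single-layer GCN propagation over a toy graph, before and after an attacker flips one edge. The graph, features, and weights are all illustrative assumptions.

```python
import numpy as np

# Toy "citation" graph with 4 nodes; A is the symmetric adjacency matrix.
# All values here are illustrative, not from the paper's experiments.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features

def gcn_layer(A, X, W):
    """One GCN propagation step: D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))                 # fixed random weights

clean = gcn_layer(A, X, W)

# Adversarial structure perturbation: the attacker inserts one edge (1, 3).
A_adv = A.copy()
A_adv[1, 3] = A_adv[3, 1] = 1.0
perturbed = gcn_layer(A_adv, X, W)

# The single edge flip changes the aggregated representations,
# which is what shifts downstream node-classification decisions.
print(np.abs(clean - perturbed).max())
```

Even this one-edge change alters the normalized aggregation for the affected nodes and their neighbors, illustrating the sensitivity to topology that the abstract describes.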