Defending OC-SVM based IDS from poisoning attacks

Abstract

Machine learning techniques are widely used to detect intrusions in the cyber security field. However, most machine learning models are vulnerable to poisoning attacks, in which malicious samples are injected into the training dataset to manipulate the classifier's performance. In this paper, we first evaluate the accuracy degradation of OC-SVM classifiers under three different poisoning strategies on the ADLA-FD public dataset and a real-world dataset. Secondly, we propose a sanitization mechanism based on the DBSCAN clustering algorithm. In addition, we investigate the influence of different distance metrics and different dimensionality reduction techniques, and we evaluate the sensitivity of the DBSCAN parameters. The experimental results show that the poisoning attacks can degrade the performance of the OC-SVM classifier to a large degree, with accuracy dropping to 0.5 in most settings. The proposed sanitization method filters out poisoned samples effectively for both datasets; the accuracy after sanitization is very close to, or even higher than, the original value.
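
The pipeline the abstract describes (cluster the training set with DBSCAN, discard suspected poisoned samples, then train the OC-SVM on the remainder) can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: the eps, min_samples, and nu values are placeholders rather than the paper's settings, and the keep-the-largest-cluster filtering rule is one plausible reading of the sanitization step, not the paper's exact criterion. The distance-metric and dimensionality-reduction variants the paper studies are omitted.

```python
# Minimal sketch of DBSCAN-based sanitization before OC-SVM training,
# assuming scikit-learn. All parameter values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import OneClassSVM

def sanitize_and_train(X_train, eps=1.0, min_samples=5):
    """Drop suspected poisoned samples with DBSCAN, then fit an OC-SVM."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X_train)
    # Keep only the largest cluster: DBSCAN noise (label -1) and small
    # satellite clusters are treated as suspected poison. This rule is an
    # assumption for illustration, not the paper's exact criterion.
    ids, counts = np.unique(labels[labels != -1], return_counts=True)
    X_clean = X_train[labels == ids[np.argmax(counts)]]
    # Train the one-class classifier on the sanitized data only.
    return OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_clean)

# Toy usage: a dense benign cluster plus a small injected poison cluster.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 4))
poison = rng.normal(8.0, 0.5, size=(25, 4))   # simulated poisoning samples
model = sanitize_and_train(np.vstack([benign, poison]))
print(model.predict(benign[:5]))  # +1 = inlier, -1 = outlier
```

In this toy setup the injected samples form a small, dense group far from the benign data, so the largest-cluster rule discards them before the OC-SVM ever sees them; how well this transfers depends on the DBSCAN parameters, which is precisely the sensitivity question the paper investigates.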


This paper was published in UvA-DARE.
