Robust filtering schemes for machine learning systems to defend Adversarial Attack

Abstract

Defenses against adversarial attacks are essential to ensure the reliability of machine learning models as their applications expand across different domains. Existing ML defense techniques have several limitations in practical use. I proposed a trustworthy framework that employs an adaptive strategy to inspect both inputs and decisions. In particular, data streams are examined by a series of diverse filters before being sent to the learning system, and the system's output is then cross-checked through another diverse set of filters before the final decision is made. My experimental results illustrated that the proposed active learning-based defense strategy could mitigate adaptive or advanced adversarial manipulations, both at the input and at the model's decision, with higher accuracy across a wide range of ML attacks. Moreover, inspecting the output decision boundary with a classification technique automatically reaffirms the reliability and increases the trustworthiness of any ML-based decision support system. Unlike other defense strategies, my defense technique does not require adversarial sample generation, and updating the decision boundary used for detection makes the defense robust to traditional adaptive attacks.
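
As a rough illustration of the pipeline described in the abstract, the sketch below shows how input-side screening filters and an output-side decision check might be combined around a classifier. This is a minimal sketch, not the implementation evaluated in this work: the filter functions, thresholds, and the nearest-prototype decision inspector are all hypothetical placeholders standing in for the diverse filters the abstract refers to.

```python
# Minimal sketch of a filter-before / cross-check-after pipeline.
# All filters, thresholds, and the toy model below are hypothetical.
import numpy as np

def norm_filter(x, max_l2=10.0):
    # Screen out inputs whose L2 norm is implausibly large.
    return np.linalg.norm(x) <= max_l2

def range_filter(x, lo=0.0, hi=1.0):
    # Screen out inputs outside the expected feature range.
    return np.all((x >= lo) & (x <= hi))

def decision_inspector(x, label, prototypes):
    # Cross-check the model's label against a simple nearest-prototype
    # classifier that approximates the decision boundary.
    dists = {c: np.linalg.norm(x - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get) == label

def robust_predict(x, model, input_filters, prototypes):
    # Stage 1: input-side filtering before the learning system sees x.
    if not all(f(x) for f in input_filters):
        return None  # flagged as suspicious at the input stage
    # Stage 2: the learning system's prediction.
    label = model(x)
    # Stage 3: output-side cross-check of the decision.
    if not decision_inspector(x, label, prototypes):
        return None  # decision rejected; escalate for review
    return label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = {0: np.full(4, 0.2), 1: np.full(4, 0.8)}
    model = lambda x: int(x.mean() > 0.5)      # toy stand-in classifier
    filters = [norm_filter, range_filter]
    x_clean = rng.uniform(0.6, 0.9, size=4)    # benign-looking input
    x_adv = x_clean + 5.0                      # crude out-of-range perturbation
    print(robust_predict(x_clean, model, filters, prototypes))  # -> 1
    print(robust_predict(x_adv, model, filters, prototypes))    # -> None (filtered)
```

In this sketch a rejected sample returns None rather than a label, which matches the abstract's emphasis on withholding untrustworthy decisions rather than forcing a prediction.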

This paper was published in University of Memphis Digital Commons.
