Please use this identifier to cite or link to this item:
Title: Adversarial Machine Learning Applied to Intrusion and Malware Scenarios: A Systematic Review
Authors: Martins, Nuno
Cruz, Jose Magalhaes
Cruz, Tiago 
Abreu, Pedro Henriques 
Keywords: Cybersecurity; adversarial machine learning; intrusion detection; malware detection
Issue Date: 2020
Publisher: IEEE
Serial title, monograph or event: IEEE Access
Volume: 8
Abstract: Cyber-security is the practice of protecting computing systems and networks from digital attacks, which are a rising concern in the Information Age. With the growing pace at which new attacks are developed, conventional signature-based attack detection methods are often not enough, and machine learning poses as a potential solution. Adversarial machine learning is a research area that examines both the generation and detection of adversarial examples, which are inputs specially crafted to deceive classifiers, and has been extensively studied specifically in the area of image recognition, where minor modifications are performed on images that cause a classifier to produce incorrect predictions. However, in other fields, such as intrusion and malware detection, the exploration of such methods is still growing. The aim of this survey is to explore works that apply adversarial machine learning concepts to intrusion and malware detection scenarios. We concluded that a wide variety of attacks were tested and proven effective in malware and intrusion detection, although their practicality was not tested in intrusion scenarios. Adversarial defenses were substantially less explored, although their effectiveness was also proven at resisting adversarial attacks. We also concluded that, contrary to malware scenarios, the variety of datasets in intrusion scenarios is still very small, with the most used dataset being greatly outdated.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2974752
Rights: openAccess
Appears in Collections:FCTUC Eng.Informática - Artigos em Revistas Internacionais

This item is licensed under a Creative Commons License.