A survey of game theoretic approaches for adversarial machine learning in cybersecurity tasks

Prithviraj Dasgupta, Joseph B. Collins

Research output: Contribution to journal › Article


Abstract

Machine learning techniques are used extensively to automate various cybersecurity tasks. Most of these techniques use supervised learning algorithms that are trained to classify incoming data into categories using data encountered in the relevant domain. A critical vulnerability of these algorithms is their susceptibility to adversarial attacks, in which a malicious entity called an adversary deliberately alters the training data to misguide the learning algorithm into making classification errors. Adversarial attacks can render the learning algorithm unsuitable for use and leave critical systems vulnerable to cybersecurity attacks. This article provides a detailed survey of the state-of-the-art techniques that use the computational framework of game theory to make a machine learning algorithm robust against adversarial attacks. We also discuss open problems and challenges, along with possible directions for further research that would make machine learning-based systems more robust and reliable for cybersecurity tasks.
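The training-data alteration the abstract describes can be illustrated with a minimal, hypothetical sketch: a toy nearest-centroid classifier is trained on clean data and on data where an adversary has flipped the labels of the malicious samples (a simple label-flipping poisoning attack; the data, feature values, and classifier here are illustrative assumptions, not from the article).

```python
# Minimal illustration of training-data poisoning (label flipping).
# The toy data and nearest-centroid classifier are hypothetical.

def centroid(points):
    """Mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """Fit a nearest-centroid classifier: one centroid per class label."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, features):
    """Assign the label of the nearest class centroid (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist2)

# Clean training data: 'benign' traffic near 0, 'malicious' near 10.
clean = [([0.0], "benign"), ([1.0], "benign"),
         ([9.0], "malicious"), ([10.0], "malicious")]

# The adversary flips labels so malicious samples are marked benign.
poisoned = [(f, "benign" if l == "malicious" else "malicious")
            for f, l in clean]

clean_model = train(clean)
poisoned_model = train(poisoned)

sample = [9.5]  # clearly malicious under the clean model
print(classify(clean_model, sample))     # -> malicious
print(classify(poisoned_model, sample))  # -> benign: the attack misguides the classifier
```

The game-theoretic defenses the survey covers model this interaction as a game between the learner and such an adversary, rather than assuming the training data is trustworthy.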

Original language: English (US)
Pages (from-to): 31-43
Number of pages: 13
Journal: AI Magazine
Volume: 40
Issue number: 2
DOIs
State: Published - Jul 5 2019

ASJC Scopus subject areas

  • Artificial Intelligence
