Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning

Prithviraj Dasgupta, Joseph Collins

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.
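In a Stackelberg formulation of this kind, the learner acts as the leader, committing to a (possibly mixed) prediction strategy, while an adversary of uncertain type observes the commitment and best-responds. The following is a minimal illustrative sketch of that Bayesian Stackelberg structure; the payoff tables, adversary type names, and the discretized search over commitments are hypothetical assumptions for illustration, not the construction in the paper (which in practice would be solved with an LP/MILP):

```python
# Illustrative one-shot Bayesian Stackelberg game: a learner (leader)
# commits to a mixed strategy over two prediction actions; an adversary
# (follower) drawn from two hypothetical types best-responds to that
# commitment. All numbers and names below are made-up assumptions.

# PAYOFFS[type][leader_action][follower_action] = (leader_util, follower_util)
PAYOFFS = {
    "evader":   [[(2, -1), (0, 1)],
                 [(1, 0),  (3, -2)]],
    "poisoner": [[(1, 1),  (2, -1)],
                 [(0, 2),  (2, 0)]],
}
TYPE_PROB = {"evader": 0.6, "poisoner": 0.4}  # prior over adversary types

def follower_best_response(p, table):
    """Follower action maximizing its expected utility against mix (p, 1-p)."""
    def follower_util(a):
        return p * table[0][a][1] + (1 - p) * table[1][a][1]
    return max((0, 1), key=follower_util)

def leader_value(p):
    """Leader's expected utility for committing to mix (p, 1-p),
    averaged over adversary types, each best-responding."""
    total = 0.0
    for t, prob in TYPE_PROB.items():
        a = follower_best_response(p, PAYOFFS[t])
        total += prob * (p * PAYOFFS[t][0][a][0] + (1 - p) * PAYOFFS[t][1][a][0])
    return total

# Crude discretized search over leader commitments (for illustration only).
best_p = max((i / 100 for i in range(101)), key=leader_value)
```

In the repeated version the paper proposes, this leader-commit/follower-respond interaction would recur over rounds, letting the prediction mechanism update against the adversary's observed behavior.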

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Number of pages: 2
ISBN (Electronic): 9781577357940
State: Published - 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9 2017 - Nov 11 2017

Publication series

Name: AAAI Fall Symposium - Technical Report
Volume: FS-17-01 - FS-17-05


Other: 2017 AAAI Fall Symposium
Country/Territory: United States

ASJC Scopus subject areas

  • Engineering (all)
