Investigating reinforcement learning in multiagent coalition formation

Xin Li, Leen Kiat Soh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Scopus citations


In this paper we investigate the use of reinforcement learning to address the multiagent coalition formation problem in dynamic, uncertain, real-time, and noisy environments. To adapt to these complex environmental factors, we equip each agent with case-based reinforcement learning, an integration of case-based reasoning and reinforcement learning. An agent uses case-based reasoning to derive a coalition formation plan in real time from its past experience, and then instantiates the plan, adapting to the dynamic and uncertain environment through reinforcement learning on its coalition formation experience. In this paper we focus on multiple aspects of applying reinforcement learning to multiagent coalition formation. We distinguish two types of reinforcement learning: case-oriented reinforcement learning and peer-related reinforcement learning, corresponding to a strategic, offline learning scenario and a tactical, online learning scenario, respectively. Because an agent may learn about others' joint or individual behavior during coalition formation, we further identify joint-behavior reinforcement learning and individual-behavior reinforcement learning. We embed the learning approach in a multi-phase coalition formation model and have implemented the approach.
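The peer-related, individual-behavior learning described in the abstract can be illustrated with a minimal sketch: an agent maintains a rating for each known peer and nudges it toward the observed coalition reward after each episode. The class and parameter names below (`CoalitionAgent`, `alpha`, `ratings`) are hypothetical illustrations, not the paper's actual implementation.

```python
class CoalitionAgent:
    """Sketch of peer-related reinforcement learning for coalition
    formation (illustrative only; names and update rule are assumptions,
    not taken from the paper)."""

    def __init__(self, peers, alpha=0.2):
        self.alpha = alpha                      # learning rate
        self.ratings = {p: 0.5 for p in peers}  # neutral initial rating per peer

    def form_coalition(self, size):
        # Tactical, online decision: greedily pick the highest-rated peers.
        ranked = sorted(self.ratings, key=self.ratings.get, reverse=True)
        return ranked[:size]

    def reinforce(self, coalition, reward):
        # Individual-behavior update: move each member's rating toward
        # the observed coalition reward (an exponential moving average).
        for p in coalition:
            self.ratings[p] += self.alpha * (reward - self.ratings[p])


# Example: a successful coalition (reward 1.0) raises members' ratings,
# so they are more likely to be selected again.
agent = CoalitionAgent(["a", "b", "c"])
coalition = agent.form_coalition(2)
agent.reinforce(coalition, 1.0)
```

Each successful episode raises the selected peers' ratings from 0.5 toward 1.0, so future coalitions favor peers with a good track record, while a failed coalition (low reward) would lower them.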

Original language: English (US)
Title of host publication: AAAI Workshop - Technical Report
Number of pages: 7
State: Published - 2004
Event: 19th National Conference on Artificial Intelligence - San Jose, CA
Duration: Jul 25 2004 - Jul 26 2004


Other: 19th National Conference on Artificial Intelligence
City: San Jose, CA

ASJC Scopus subject areas

  • General Engineering


