Abstract
In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents capable of making local decisions to maximize their own profit in a multiagent environment. Each supplier's lack of information about customers and competing suppliers makes it difficult to make the decisions that maximize its return; likewise, customers struggle to schedule their energy consumption without information about suppliers and electricity prices. The MG is also subject to several uncertainties, arising from the variability of renewable generation output and the continuous fluctuation of customer demand. To overcome these challenges, a reinforcement learning algorithm is developed that allows generation resources, distributed storage units, and customers to learn optimal strategies for energy management and load scheduling without prior information about one another or the MG system. Case studies show how the overall performance of all entities converges, as an emergent behavior, to a Nash equilibrium that benefits all agents.
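The abstract does not disclose the paper's algorithmic details, but the idea of agents learning profit-maximizing strategies without information about each other can be illustrated with a toy independent Q-learning sketch. Everything here is an illustrative assumption, not the authors' method: one supplier agent picks a price level, one consumer agent picks a demand level, and each updates its own action values from its own reward only. The action sets `PRICES` and `DEMANDS` and the constants `COST` and `VALUE` are hypothetical.

```python
import random

random.seed(0)

# Hypothetical toy market (not from the paper): a supplier chooses a price,
# a consumer chooses a demand level; neither observes the other's choice.
PRICES = [1.0, 2.0, 3.0]    # supplier's action set ($/kWh), assumed
DEMANDS = [1.0, 2.0, 3.0]   # consumer's action set (kWh), assumed
COST, VALUE = 0.5, 2.5      # supplier unit cost, consumer unit utility

def eps_greedy(q, actions, eps):
    # Explore with probability eps, otherwise pick the highest-valued action.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[a])

def train(episodes=5000, alpha=0.1, eps=0.1):
    # Each agent keeps action values over its OWN actions only -- no shared
    # state and no model of the opponent (independent learners).
    q_s = {p: 0.0 for p in PRICES}   # supplier's action values
    q_c = {d: 0.0 for d in DEMANDS}  # consumer's action values
    for _ in range(episodes):
        p = eps_greedy(q_s, PRICES, eps)
        d = eps_greedy(q_c, DEMANDS, eps)
        r_s = (p - COST) * d         # supplier profit for this round
        r_c = (VALUE - p) * d        # consumer surplus for this round
        # Recency-weighted average update toward the observed reward.
        q_s[p] += alpha * (r_s - q_s[p])
        q_c[d] += alpha * (r_c - q_c[d])
    return q_s, q_c

q_s, q_c = train()
best_price = max(PRICES, key=lambda p: q_s[p])
best_demand = max(DEMANDS, key=lambda d: q_c[d])
```

In this toy game the greedy strategies settle into a mutual best response (the supplier's best price against the consumer's learned demand, and vice versa), mirroring the abstract's claim that the agents' local learning converges to a Nash equilibrium; the paper's actual MG setting is of course far richer, with many agents, storage dynamics, and renewable uncertainty.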
Original language | English (US) |
---|---|
Article number | 8331897 |
Pages (from-to) | 5749-5758 |
Number of pages | 10 |
Journal | IEEE Transactions on Power Systems |
Volume | 33 |
Issue number | 5 |
DOIs | |
State | Published - Sep 2018 |
Keywords
- Microgrid
- distributed control
- reinforcement learning
- renewable generation
ASJC Scopus subject areas
- Energy Engineering and Power Technology
- Electrical and Electronic Engineering