Abstract
In any multi-agent system (MAS) where agents must rely on each other to achieve individual or shared goals, trust is a central issue. How does one agent decide whether or not to rely on another particular agent to perform a task? This and related questions are at the forefront of current research into trust and reputation in MAS. One area not deeply explored is the effect of the MAS environment itself on trust decisions. For example, if an agent operates in a MAS where it is expensive (in whatever sense "expensive" applies in that MAS) to initiate a transaction with another agent, should that relatively high cost affect the agent's trust decisions, and if so, how? What about the level of competitiveness in the MAS? Are the agents working toward a set of common goals, or is it "every agent for itself"? How should each type of environment, or even an environment whose level of competitiveness changes over time, affect a participating agent's trust decisions? This work explores methods for incorporating such environmental factors into an agent's trust algorithm. The theory is that an agent capable of (a) detecting and (b) reacting to certain environmental factors will be more effective in accomplishing its goals, whether or not those goals are shared with other agents. Using the current state-of-the-art research testbed, an "environmentally aware" trust algorithm will be designed and implemented in a software agent. This agent will then be pitted against a "stock" (unmodified) agent in a simulated competitive MAS to see whether the modified agent outperforms its peers.
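The abstract does not specify the trust algorithm itself. As an illustration only, the following sketch shows one way the environmental factors it names (transaction cost and competitiveness) might be folded into a trust decision; all names, weights, and the thresholding scheme are hypothetical assumptions, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Environment:
    """Hypothetical environmental factors, each normalized to [0, 1]."""
    transaction_cost: float  # relative expense of initiating a transaction
    competitiveness: float   # 0 = fully cooperative, 1 = "every agent for itself"

def trust_threshold(base_threshold: float, env: Environment,
                    cost_weight: float = 0.3, comp_weight: float = 0.3) -> float:
    """Raise the trust threshold as cost and competitiveness rise (capped at 1.0).

    The weights are illustrative: a costlier or more competitive environment
    makes the agent demand more evidence of trustworthiness before relying
    on a peer.
    """
    raised = (base_threshold
              + cost_weight * env.transaction_cost
              + comp_weight * env.competitiveness)
    return min(raised, 1.0)

def should_rely(trust_score: float, base_threshold: float,
                env: Environment) -> bool:
    """Rely on a peer only if its trust score clears the adjusted threshold."""
    return trust_score >= trust_threshold(base_threshold, env)
```

Under this sketch, a peer with trust score 0.6 would be relied upon in a cheap, cooperative environment (threshold stays at the base 0.5) but not in an expensive, highly competitive one (threshold rises toward 1.0), matching the abstract's intuition that the same trust evidence should be weighed differently in different environments.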
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2263-2270 |
| Number of pages | 8 |
| Journal | Journal of Software |
| Volume | 6 |
| Issue number | 11 SPEC. ISSUE |
| State | Published - Nov 2011 |
Keywords
- Agent reputation
- Multi-agent systems
- Simulation
- Testbed
- Trust
ASJC Scopus subject areas
- Software
- Human-Computer Interaction
- Artificial Intelligence