TY - GEN
T1 - Adaptive locomotion learning in modular self-reconfigurable robots
T2 - 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017
AU - Dutta, Ayan
AU - Dasgupta, Prithviraj
AU - Nelson, Carl
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/13
Y1 - 2017/12/13
N2 - Modular self-reconfigurable robots (MSRs) are mostly used in environments that are otherwise difficult to navigate and explore. In particular, the shape-changing ability of MSRs makes them more dexterous in such situations than fixed-body robots. However, when an MSR forms a new configuration, the locomotion pattern for that configuration is usually not known to its constituent robotic modules. The main challenge for the modules is to learn how to move in that specific configuration within a reasonable amount of time. In this paper, we study the problem of an MSR learning its movement pattern on-the-fly. To solve this problem, we propose a game-theoretic solution based on multi-agent reinforcement learning, with which the constituent modules learn, in a distributed manner, the best actions to perform in order to travel more distance in less time. We have implemented this approach in simulation on both the ModRED and Yamor MSR platforms. Results show that our approach performs better (up to 7.86 times) in terms of average speed achieved for most of the tested configurations compared to an existing locomotion learning approach.
AB - Modular self-reconfigurable robots (MSRs) are mostly used in environments that are otherwise difficult to navigate and explore. In particular, the shape-changing ability of MSRs makes them more dexterous in such situations than fixed-body robots. However, when an MSR forms a new configuration, the locomotion pattern for that configuration is usually not known to its constituent robotic modules. The main challenge for the modules is to learn how to move in that specific configuration within a reasonable amount of time. In this paper, we study the problem of an MSR learning its movement pattern on-the-fly. To solve this problem, we propose a game-theoretic solution based on multi-agent reinforcement learning, with which the constituent modules learn, in a distributed manner, the best actions to perform in order to travel more distance in less time. We have implemented this approach in simulation on both the ModRED and Yamor MSR platforms. Results show that our approach performs better (up to 7.86 times) in terms of average speed achieved for most of the tested configurations compared to an existing locomotion learning approach.
UR - http://www.scopus.com/inward/record.url?scp=85041953909&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041953909&partnerID=8YFLogxK
U2 - 10.1109/IROS.2017.8206200
DO - 10.1109/IROS.2017.8206200
M3 - Conference contribution
AN - SCOPUS:85041953909
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 3556
EP - 3561
BT - IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 24 September 2017 through 28 September 2017
ER -