dc.contributor.author | Mannion, Patrick | |
dc.contributor.author | Rockefeller, Golden | |
dc.contributor.author | Tumer, Kagan | |
dc.date.accessioned | 2019-03-14T15:47:43Z | |
dc.date.available | 2019-03-14T15:47:43Z | |
dc.date.copyright | 2019 | |
dc.date.issued | 2019 | |
dc.identifier.uri | https://research.thea.ie/handle/20.500.12065/2516 | |
dc.description.abstract | In this paper, we leverage curriculum learning (CL) to improve the performance of multiagent systems (MAS) that are trained with the cooperative coevolution of artificial neural networks. We design curricula to progressively change two dimensions: scale (i.e. domain size) and coupling (i.e. the number of agents required to complete a subtask). We demonstrate that CL can successfully mitigate the challenge of learning on a sparse reward signal resulting from a high degree of coupling in complex MAS. We also show that, in most cases, the combination of difference reward shaping with CL can improve performance by up to 56%. We evaluate our CL methods on the tightly coupled multi-rover domain. CL increased converged system performance on all tasks presented. Furthermore, agents were only able to learn when trained with CL for most tasks. | en_US |
dc.format | PDF | en_US |
dc.language.iso | en | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 Ireland | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/ie/ | * |
dc.subject | Multiagent Coordination | en_US |
dc.subject | Curriculum Learning | en_US |
dc.subject | Difference Rewards | en_US |
dc.title | Curriculum Learning for Tightly Coupled Multiagent Systems | en_US |
dc.type | Presentation | en_US |
dc.description.peerreview | yes | en_US |
dc.rights.access | Copyright | en_US |
dc.subject.department | Department of Computer Science & Applied Physics | en_US |