dc.contributor.author | Roche, Jamie | |
dc.contributor.author | De-Silva, Varuna | |
dc.contributor.author | Kondoz, Ahmet | |
dc.date.accessioned | 2023-08-23T08:31:01Z | |
dc.date.available | 2023-08-23T08:31:01Z | |
dc.date.issued | 2021-10-08 | |
dc.identifier.citation | J. Roche, V. De-Silva and A. Kondoz, "A Multimodal Perception-Driven Self Evolving Autonomous Ground Vehicle," in IEEE Transactions on Cybernetics, vol. 52, no. 9, pp. 9279-9289, Sept. 2022, doi: 10.1109/TCYB.2021.3113804 | en_US |
dc.identifier.uri | https://research.thea.ie/handle/20.500.12065/4580 | |
dc.description | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.description.abstract | Increasingly complex automated driving functions, specifically those associated with free space detection (FSD), are delegated to convolutional neural networks (CNNs). If the dataset used to train the network lacks diversity, modality, or sufficient quantity, the driver policy that controls the vehicle may induce safety risks. Although most autonomous ground vehicles (AGVs) perform well in structured surroundings, the need for human intervention rises significantly when they are presented with unstructured niche environments. To this end, we developed an AGV for seamless indoor and outdoor navigation to collect realistic multimodal data streams. We demonstrate one application of the AGV when applied to a self-evolving FSD framework that leverages online active machine-learning (ML) paradigms and sensor data fusion. In essence, the self-evolving AGV queries image data against a reliable data stream, ultrasound, before fusing the sensor data to improve robustness. We compare the proposed framework to one of the most prominent free space segmentation methods, DeepLabV3+ [1], a state-of-the-art semantic segmentation model built on a CNN encoder-decoder architecture. The results show that the proposed framework outperforms DeepLabV3+ [1], a performance attributed to its ability to self-learn free space. This combination of online and active ML removes the need for the large datasets typically required by a CNN. Moreover, this technique provides case-specific free space classifications based on the information gathered from the scenario at hand. | en_US
dc.format | application/pdf | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | IEEE Transactions on Cybernetics | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Autonomous vehicles | en_US |
dc.subject | Neural networks (Computer science) | en_US |
dc.subject | Traffic safety | en_US |
dc.subject | Convolutional neural networks | en_US |
dc.subject | Optical sensors | en_US |
dc.title | A Multimodal Perception-Driven Self Evolving Autonomous Ground Vehicle | en_US
dc.type | info:eu-repo/semantics/article | en_US |
dc.description.peerreview | yes | en_US |
dc.identifier.doi | 10.1109/TCYB.2021.3113804 | en_US |
dc.identifier.endpage | 9289 | en_US |
dc.identifier.issue | 9 (September 2022) | en_US |
dc.identifier.orcid | 0000-0002-5449-3774 | en_US |
dc.identifier.startpage | 9279 | en_US |
dc.identifier.url | https://ieeexplore.ieee.org/document/9565853 | en_US |
dc.identifier.volume | 52 | en_US |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | en_US |
dc.subject.department | Dept of Mechanical & Electronic Engineering, ATU Sligo | en_US |
dc.type.version | info:eu-repo/semantics/acceptedVersion | en_US |