dc.contributor.author | Arshad, Iram | |
dc.contributor.author | Asghar, Mamoona Naveed | |
dc.contributor.author | Qiao, Yuansong | |
dc.contributor.author | Lee, Brian | |
dc.contributor.author | Ye, Yuhang | |
dc.date.accessioned | 2022-12-20T11:16:11Z | |
dc.date.available | 2022-12-20T11:16:11Z | |
dc.date.copyright | 2021 | |
dc.date.issued | 2021-08-23 | |
dc.identifier.citation | Arshad, I., Asghar, M.N., Qiao, Y., Lee, B., Ye, Y. (2021). Pixdoor: a pixel-space backdoor attack on deep learning models. Published in: 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021, pp. 681-685. doi: 10.23919/EUSIPCO54536.2021.9616118 | en_US |
dc.identifier.isbn | 978-9-0827-9706-0 | |
dc.identifier.uri | https://research.thea.ie/handle/20.500.12065/4344 | |
dc.description.abstract | Deep learning algorithms outperform classical machine
learning techniques in many fields and are widely deployed
for recognition and classification tasks. However, recent research
has focused on exploring the weaknesses of these deep learning
models, which can be vulnerable due to outsourced training data
and transfer learning. This paper proposes Pixdoor, a rudimentary,
stealthy pixel-space backdoor attack mounted during the training
phase of deep learning models. To generate the poisoned
dataset, a bit-inversion technique is used to inject errors into
the pixel bits of training images; 3% poisoned samples are
then mixed into the clean dataset to corrupt the complete
training set. The experimental results show that this minimal
percentage of data poisoning can effectively fool a deep learning
model with a high degree of success, while we observe only a
marginal degradation of model accuracy of 0.02%. | en_US |
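dc.description.sketch | The abstract describes a bit-inversion trigger applied to pixel bits of training images, with 3% of the training set poisoned. A minimal illustrative sketch of that idea follows; the patch location, bit position, function names, and relabeling to a target class are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def invert_pixel_bits(image, rows=slice(0, 3), cols=slice(0, 3), bit=0):
    """Invert one bit of the pixels in a small patch (the backdoor trigger)."""
    poisoned = image.copy()
    poisoned[rows, cols] ^= (1 << bit)  # bit-inversion via XOR
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.03, seed=0):
    """Apply the trigger to `rate` of the samples and relabel them."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = invert_pixel_bits(images[i])
        labels[i] = target_label
    return images, labels

# Toy example: 100 random 8-bit grayscale "images", all labeled 0
imgs = np.random.default_rng(1).integers(0, 256, size=(100, 28, 28), dtype=np.uint8)
labs = np.zeros(100, dtype=np.int64)
p_imgs, p_labs = poison_dataset(imgs, labs, target_label=7, rate=0.03)
print(int((p_labs == 7).sum()))  # 3 of 100 samples carry the trigger
```

Flipping a low-order bit keeps the perturbation visually imperceptible, which is what makes such a poisoned set stealthy at a 3% mixing rate. | en_US |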
dc.format | PDF | en_US |
dc.language.iso | eng | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | 29th European Signal Processing Conference (EUSIPCO). | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Backdoor attack | en_US |
dc.subject | Causative attack | en_US |
dc.subject | Pixel-space | en_US |
dc.subject | Poisoned dataset | en_US |
dc.subject | Training phase | en_US |
dc.title | Pixdoor: a pixel-space backdoor attack on deep learning models | en_US |
dc.conference.date | 2021-08-23 | |
dc.conference.host | EUSIPCO | en_US |
dc.conference.location | Dublin | en_US |
dc.contributor.affiliation | Technological University of the Shannon: Midlands Midwest | en_US |
dc.description.funding | President's Doctoral Scholarship (Athlone Institute of Technology - TUS Midlands) | |
dc.description.peerreview | yes | en_US |
dc.identifier.doi | 10.23919/EUSIPCO54536.2021.9616118 | en_US |
dc.identifier.orcid | https://orcid.org/0000-0003-0755-5896 | en_US |
dc.identifier.orcid | https://orcid.org/0000-0001-7460-266X | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-1543-1589 | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-8475-4074 | en_US |
dc.identifier.orcid | https://orcid.org/0000-0003-4608-1451 | en_US |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | en_US |
dc.subject.department | Department of Computer & Software Engineering: TUS Midlands | en_US |
dc.type.version | info:eu-repo/semantics/acceptedVersion | en_US |