Abstract: Deep Neural Network classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In...
| Authors | Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Nanning Zheng, Gang Hua |
| --- | --- |
| Affiliation | |
| Pages / Total pages | 5306-5324 / 19 |
| Language / CLC number | English / TP391 |
| Keywords | Robustness; Perturbation methods; Glass box; Training; Face recognition; Adaptation models; Task analysis |
| DOI | 10.1109/TPAMI.2024.3365699 |
| Accession number | IELEP0261 |