Bo Li
Title
Cited by
Year
Robust physical-world attacks on deep learning visual classification
K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, C Xiao, A Prakash, ...
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 3181*, 2018
Targeted backdoor attacks on deep learning systems using data poisoning
X Chen, C Liu, B Li, K Lu, D Song
arXiv preprint arXiv:1712.05526, 2017
Cited by 1899, 2017
Generating adversarial examples with adversarial networks
C Xiao, B Li, JY Zhu, W He, M Liu, D Song
arXiv preprint arXiv:1801.02610, 2018
Cited by 1028, 2018
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning
M Jagielski, A Oprea, B Biggio, C Liu, C Nita-Rotaru, B Li
2018 IEEE Symposium on Security and Privacy (SP), 19-35, 2018
Cited by 988, 2018
Characterizing adversarial subspaces using local intrinsic dimensionality
X Ma, B Li, Y Wang, SM Erfani, S Wijewickrema, G Schoenebeck, D Song, ...
arXiv preprint arXiv:1801.02613, 2018
Cited by 832, 2018
Textbugger: Generating adversarial text against real-world applications
J Li, S Ji, T Du, B Li, T Wang
arXiv preprint arXiv:1812.05271, 2018
Cited by 762, 2018
Deepgauge: Multi-granularity testing criteria for deep learning systems
L Ma, F Juefei-Xu, F Zhang, J Sun, M Xue, B Li, C Chen, T Su, L Li, Y Liu, ...
Proceedings of the 33rd ACM/IEEE International Conference on Automated …, 2018
Cited by 762, 2018
DBA: Distributed Backdoor Attacks against Federated Learning
C Xie, K Huang, PY Chen, B Li
International Conference on Learning Representations, 2019
Cited by 724, 2019
Spatially transformed adversarial examples
C Xiao, JY Zhu, B Li, W He, M Liu, D Song
arXiv preprint arXiv:1801.02612, 2018
Cited by 604, 2018
Physical adversarial examples for object detectors
D Song, K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, F Tramer, ...
12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018
Cited by 540, 2018
The secret revealer: generative model-inversion attacks against deep neural networks
Y Zhang, R Jia, H Pei, W Wang, B Li, D Song
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 492, 2020
Towards efficient data valuation based on the shapley value
R Jia, D Dao, B Wang, FA Hubis, N Hynes, NM Gürel, B Li, C Zhang, ...
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 470, 2019
Deephunter: A coverage-guided fuzz testing framework for deep neural networks
X Xie, L Ma, F Juefei-Xu, M Xue, H Chen, Y Liu, J Zhao, B Li, J Yin, S See
Proceedings of the 28th ACM SIGSOFT International Symposium on Software …, 2019
Cited by 450, 2019
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma
arXiv preprint arXiv:2101.05930, 2021
Cited by 427, 2021
Deepmutation: Mutation testing of deep learning systems
L Ma, F Zhang, J Sun, M Xue, B Li, F Juefei-Xu, C Xie, L Li, Y Liu, J Zhao, ...
2018 IEEE 29th International Symposium on Software Reliability Engineering …, 2018
Cited by 423, 2018
Data poisoning attacks on factorization-based collaborative filtering
B Li, Y Wang, A Singh, Y Vorobeychik
Advances in neural information processing systems 29, 2016
Cited by 411, 2016
Towards stable and efficient training of verifiably robust neural networks
H Zhang, H Chen, C Xiao, S Gowal, R Stanforth, B Li, D Boning, CJ Hsieh
arXiv preprint arXiv:1906.06316, 2019
Cited by 371, 2019
Adversarial attack and defense on graph data: A survey
L Sun, Y Dou, C Yang, K Zhang, J Wang, SY Philip, L He, B Li
IEEE Transactions on Knowledge and Data Engineering 35 (8), 7693-7711, 2022
Cited by 329, 2022