Jindong Gu
University of Oxford & Google DeepMind
Verified email at robots.ox.ac.uk - Homepage
Title
Cited by
Year
Understanding individual decisions of cnns via contrastive backpropagation
J Gu, Y Yang, V Tresp
14th Asian Conference on Computer Vision (ACCV), 119-134, 2019
130 · 2019
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
J Gu, Z Han, S Chen, A Beirami, B He, G Zhang, R Liao, Y Qin, V Tresp, ...
arXiv preprint arXiv:2307.12980, 2023
126 · 2023
Segpgd: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness
J Gu, H Zhao, V Tresp, PHS Torr
European Conference on Computer Vision (ECCV), 308-325, 2022
78 · 2022
Improving the robustness of capsule networks to image affine transformations
J Gu, V Tresp
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7285-7293, 2020
71 · 2020
Are vision transformers robust to patch perturbations?
J Gu, V Tresp, Y Qin
European Conference on Computer Vision (ECCV), 404-421, 2022
70 · 2022
Mm-safetybench: A benchmark for safety evaluation of multimodal large language models
X Liu, Y Zhu, J Gu, Y Lan, C Yang, Y Qiao
European Conference on Computer Vision (ECCV), 386-403, 2025
62* · 2025
Backdoor Defense via Adaptively Splitting Poisoned Dataset
K Gao, Y Bai, J Gu, Y Yang, ST Xia
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4005-4014, 2023
54 · 2023
Towards efficient adversarial training on vision transformers
B Wu*, J Gu*, Z Li, D Cai, X He, W Liu
European Conference on Computer Vision (ECCV), 307-325, 2022
47 · 2022
Can Large Language Model Agents Simulate Human Trust Behaviors?
GL Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu ...
arXiv preprint arXiv:2402.04559, 2024
44 · 2024
Interpretable graph capsule networks for object recognition
J Gu
Proceedings of the AAAI Conference on Artificial Intelligence 35 (2), 1469-1477, 2021
40 · 2021
An image is worth 1000 lies: Adversarial transferability across prompts on vision-language models
H Luo*, J Gu*, F Liu, P Torr
International Conference on Learning Representations (ICLR), 2024, 2024
39* · 2024
Capsule network is not more robust than convolutional network
J Gu, V Tresp, H Hu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 14309-14317, 2021
37 · 2021
Understanding bias in machine learning
J Gu, D Oelke
Workshop on Visualization for AI Explainability, IEEE Vis, 2019
37 · 2019
Saliency methods for explaining adversarial attacks
J Gu, V Tresp
Workshop on Human-Centric Machine Learning, NeurIPS 2019, 2019
36 · 2019
Attacking Adversarial Attacks as A Defense
B Wu, H Pan, L Shen, J Gu, S Zhao, Z Li, D Cai, X He, W Liu
arXiv preprint arXiv:2106.04938, 2021
35 · 2021
Effective and Efficient Vote Attack on Capsule Networks
J Gu, B Wu, V Tresp
International Conference on Learning Representations (ICLR), 2021, 2021
35 · 2021
Inducing high energy-latency of large vision-language models with verbose images
K Gao, Y Bai, J Gu, ST Xia, P Torr, Z Li, W Liu
International Conference on Learning Representations (ICLR), 2024, 2024
27 · 2024
Watermark vaccine: Adversarial attacks to prevent watermark removal
X Liu, J Liu, Y Bai, J Gu, T Chen, X Jia, X Cao
European Conference on Computer Vision (ECCV), 1-17, 2022
27 · 2022
Feddat: An approach for foundation model finetuning in multi-modal heterogeneous federated learning
H Chen, Y Zhang, D Krompass, J Gu, V Tresp
Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 11285 …, 2024
26 · 2024
A survey on transferability of adversarial examples across deep neural networks
J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma, Y Xun, A Hu, A Khakzar, Z Li, ...
Transactions on Machine Learning Research (TMLR), 2023
26 · 2023
Articles 1–20