Qiuling Xu
Verified email at purdue.edu
Title
Cited by
Year
Bridging machine learning and logical reasoning by abductive learning
WZ Dai, Q Xu, Y Yu, ZH Zhou
Advances in Neural Information Processing Systems 32, 2019
149 · 2019
Backdoor scanning for deep neural networks through k-arm optimization
G Shen, Y Liu, G Tao, S An, Q Xu, S Cheng, S Ma, X Zhang
International Conference on Machine Learning, 9525-9536, 2021
95 · 2021
Better trigger inversion optimization in backdoor scanning
G Tao, G Shen, Y Liu, S An, Q Xu, S Ma, P Li, X Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
54 · 2022
Model orthogonalization: Class distance hardening in neural networks for better security
G Tao, Y Liu, G Shen, Q Xu, S An, Z Zhang, X Zhang
2022 IEEE Symposium on Security and Privacy (SP), 1372-1389, 2022
41 · 2022
Tunneling neural perception and logic reasoning through abductive learning
WZ Dai, QL Xu, Y Yu, ZH Zhou
arXiv preprint arXiv:1802.01173, 2018
35 · 2018
Flip: A provable defense framework for backdoor mitigation in federated learning
K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu, S Feng, G Shen, PY Chen, ...
arXiv preprint arXiv:2210.12873, 2022
30 · 2022
Mirror: Model inversion for deep learning network with high fidelity
S An, G Tao, Q Xu, Y Liu, G Shen, Y Yao, J Xu, X Zhang
Proceedings of the 29th Network and Distributed System Security Symposium, 2022
30 · 2022
Towards feature space adversarial attack
Q Xu, G Tao, S Cheng, X Zhang
arXiv preprint arXiv:2004.12385, 2020
29 · 2020
Constrained optimization with dynamic bound-scaling for effective nlp backdoor defense
G Shen, Y Liu, G Tao, Q Xu, Z Zhang, S An, S Ma, X Zhang
International Conference on Machine Learning, 19879-19892, 2022
25 · 2022
Towards feature space adversarial attack by style perturbation
Q Xu, G Tao, S Cheng, X Zhang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (12), 10523 …, 2021
21 · 2021
Trader: Trace divergence analysis and embedding regulation for debugging recurrent neural networks
G Tao, S Ma, Y Liu, Q Xu, X Zhang
Proceedings of the ACM/IEEE 42nd International Conference on Software …, 2020
13 · 2020
Beagle: Forensics of deep learning backdoor attack for better defense
S Cheng, G Tao, Y Liu, S An, X Xu, S Feng, G Shen, K Zhang, Q Xu, S Ma, ...
arXiv preprint arXiv:2301.06241, 2023
7 · 2023
Medic: Remove model backdoors via importance driven cloning
Q Xu, G Tao, J Honorio, Y Liu, S An, G Shen, S Cheng, X Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
5 · 2023
Bounded adversarial attack on deep content features
Q Xu, G Tao, X Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
4 · 2022
Deck: Model hardening for defending pervasive backdoors
G Tao, Y Liu, S Cheng, S An, Z Zhang, Q Xu, G Shen, X Zhang
arXiv preprint arXiv:2206.09272, 2022
3 · 2022
A le cam type bound for adversarial learning and applications
Q Xu, K Bello, J Honorio
2021 IEEE International Symposium on Information Theory (ISIT), 1164-1169, 2021
3 · 2021
Elijah: Eliminating backdoors injected in diffusion models via distribution shift
S An, SY Chou, K Zhang, Q Xu, G Tao, G Shen, S Cheng, S Ma, PY Chen, ...
Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 10847 …, 2024
2 · 2024
Remove Model Backdoors via Importance Driven Cloning
Q Xu, G Tao, J Honorio, Y Liu, S An, G Shen, S Cheng, X Zhang
IEEE Conference on Computer Vision and Pattern Recognition, 2023
2 · 2023
D-square-b: Deep distribution bound for natural-looking adversarial attack
Q Xu, G Tao, X Zhang
arXiv preprint arXiv:2006.07258, 2020
2 · 2020
PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis
Z Zhang, G Tao, G Shen, S An, Q Xu, Y Liu, Y Ye, Y Wu, X Zhang
32nd USENIX Security Symposium (USENIX Security 23), 2365-2382, 2023
1 · 2023
Articles 1–20