Hongbin Liu
Ph.D. student, Duke University; Google DeepMind
Verified email at duke.edu - Homepage
Title
Cited by
Year
EncoderMI: Membership inference against pre-trained encoders in contrastive learning
H Liu, J Jia, W Qu, NZ Gong
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
101 · 2021
PointGuard: Provably robust 3D point cloud classification
H Liu, J Jia, NZ Gong
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
88 · 2021
StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Y Liu, J Jia, H Liu, NZ Gong
ACM Conference on Computer and Communications Security (CCS), 2022
51 · 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
H Liu, J Jia, NZ Gong
USENIX Security Symposium, 2022
39 · 2022
Visual Hallucinations of Multi-modal Large Language Models
W Huang, H Liu, M Guo, NZ Gong
Findings of the Association for Computational Linguistics (ACL), 2024
33 · 2024
Almost tight l0-norm certified robustness of top-k predictions against adversarial perturbations
J Jia, B Wang, X Cao, H Liu, NZ Gong
International Conference on Learning Representations (ICLR), 2022
28 · 2022
Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
X He, H Liu, NZ Gong, Y Zhang
European Conference on Computer Vision (ECCV), 2022
19 · 2022
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
J Zhang, H Liu, J Jia, NZ Gong
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
14 · 2024
On the Intrinsic Differential Privacy of Bagging
H Liu, J Jia, NZ Gong
International Joint Conference on Artificial Intelligence (IJCAI), 2021
14 · 2021
10 Security and Privacy Problems in Large Foundation Models
J Jia, H Liu, NZ Gong
AI Embedded Assurance for Cyber Systems, 2023
11* · 2023
PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees
J Zhang, J Jia, H Liu, NZ Gong
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
8 · 2023
AudioMarkBench: Benchmarking Robustness of Audio Watermarking
H Liu, M Guo, Z Jiang, L Wang, NZ Gong
NeurIPS Datasets and Benchmarks 2024, 2024
5 · 2024
Data Poisoning based Backdoor Attacks to Contrastive Learning
J Zhang, H Liu, J Jia, NZ Gong
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
5 · 2024
Pre-trained encoders in self-supervised learning improve secure and privacy-preserving supervised learning
H Liu, W Qu, J Jia, NZ Gong
IEEE Security and Privacy Workshops, 2024
4 · 2024
Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning
Y Jia, M Fang, H Liu, J Zhang, NZ Gong
arXiv preprint arXiv:2407.07221, 2024
2 · 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
H Liu, MK Reiter, NZ Gong
USENIX Security Symposium, 2024
2 · 2024
Generation-based fuzzing? Don’t build a new generator, reuse!
C Pang, H Liu, Y Wang, NZ Gong, B Mao, J Xu
Computers & Security 129, 103178, 2023
2 · 2023
Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment
Z Shao, H Liu, J Mu, NZ Gong
arXiv preprint arXiv:2410.14827, 2024
1 · 2024
Refusing Safe Prompts for Multi-modal Large Language Models
Z Shao, H Liu, Y Hu, NZ Gong
arXiv preprint arXiv:2407.09050, 2024
1 · 2024
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Z Liu, H Liu, Y Hu, Z Shao, NZ Gong
arXiv preprint arXiv:2410.11242, 2024
2024
Articles 1–20