Neil Zhenqiang Gong
Associate Professor, Duke University
Verified email at duke.edu
Title · Cited by · Year
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
M Fang, X Cao, J Jia, NZ Gong
USENIX Security Symposium, 2020
Cited by 1193 · 2020
Stealing Hyperparameters in Machine Learning
B Wang, NZ Gong
IEEE Symposium on Security and Privacy, 2018
Cited by 622 · 2018
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
X Cao, M Fang, J Liu, NZ Gong
ISOC Network and Distributed System Security Symposium (NDSS), 2021
Cited by 593 · 2021
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
J Jia, A Salem, M Backes, Y Zhang, NZ Gong
ACM Conference on Computer and Communications Security (CCS), 2019
Cited by 416 · 2019
On the Feasibility of Internet-Scale Author Identification
A Narayanan, H Paskov, NZ Gong, J Bethencourt, E Stefanov, ECR Shin, ...
IEEE Symposium on Security and Privacy, 2012
Cited by 401 · 2012
Joint Link Prediction and Attribute Inference Using a Social-Attribute Network
NZ Gong, A Talwalkar, L Mackey, L Huang, ECR Shin, E Stefanov, ER Shi, ...
ACM Transactions on Intelligent Systems and Technology (TIST) 5 (2), 27, 2014
Cited by 327* · 2014
Evolution of Social-Attribute Networks: Measurements, Modeling, and Implications using Google+
NZ Gong, W Xu, L Huang, P Mittal, E Stefanov, V Sekar, D Song
ACM Internet Measurement Conference (IMC), 2012
Cited by 271 · 2012
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
X Cao, NZ Gong
Annual Computer Security Applications Conference (ACSAC), 2017
Cited by 255 · 2017
Poisoning Attacks to Graph-Based Recommender Systems
M Fang, G Yang, NZ Gong, J Liu
Annual Computer Security Applications Conference (ACSAC), 2018
Cited by 242 · 2018
SybilBelief: A Semi-supervised Learning Approach for Structure-based Sybil Detection
NZ Gong, M Frank, P Mittal
IEEE Transactions on Information Forensics and Security 9 (6), 2014
Cited by 228 · 2014
Backdoor Attacks to Graph Neural Networks
Z Zhang, J Jia, B Wang, NZ Gong
ACM Symposium on Access Control Models and Technologies (SACMAT), 2021
Cited by 221 · 2021
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
K Zhu, J Wang, J Zhou, Z Wang, H Chen, Y Wang, L Yang, W Ye, ...
arXiv preprint arXiv:2306.04528, 2023
Cited by 205 · 2023
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
J Jia, NZ Gong
USENIX Security Symposium, 2018
Cited by 200 · 2018
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Z Zhang, X Cao, J Jia, NZ Gong
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022
Cited by 195 · 2022
FLCert: Provably Secure Federated Learning against Poisoning Attacks
X Cao, Z Zhang, J Jia, NZ Gong
IEEE Transactions on Information Forensics and Security, 2022
Cited by 181* · 2022
TrustLLM: Trustworthiness in Large Language Models
L Sun, Y Huang, H Wang, S Wu, Q Zhang, C Gao, Y Huang, W Lyu, ...
International Conference on Machine Learning (ICML), 2024
Cited by 180* · 2024
You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors
NZ Gong, B Liu
USENIX Security Symposium, 2016
Cited by 174 · 2016
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
J Jia, Y Liu, NZ Gong
IEEE Symposium on Security and Privacy, 2022
Cited by 171 · 2022
Influence Function Based Data Poisoning Attacks to Top-N Recommender Systems
M Fang, NZ Gong, J Liu
Proceedings of The Web Conference, 2020
Cited by 169 · 2020
Attacking Graph-based Classification via Manipulating the Graph Structure
B Wang, NZ Gong
ACM Conference on Computer and Communications Security (CCS), 2019
Cited by 165 · 2019
Articles 1–20