Zhaodong Chen
Verified email at ucsb.edu - Homepage
Cited by
Hardness-aware deep metric learning
W Zheng, Z Chen, J Lu, J Zhou
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Characterizing and understanding GCNs on GPU
M Yan, Z Chen, L Deng, X Ye, Z Zhang, D Fan, Y Xie
IEEE Computer Architecture Letters 19 (1), 22-25, 2020
Effective and efficient batch normalization using a few uncorrelated data for statistics estimation
Z Chen, L Deng, G Li, J Sun, X Hu, L Liang, Y Ding, Y Xie
IEEE Transactions on Neural Networks and Learning Systems 32 (1), 348-362, 2020
H2learn: High-efficiency learning accelerator for high-accuracy spiking neural networks
L Liang, Z Qu, Z Chen, F Tu, Y Wu, L Deng, G Li, P Li, Y Xie
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021
A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks
Z Chen, L Deng, B Wang, G Li, Y Xie
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
fuseGNN: Accelerating Graph Convolutional Neural Network Training on GPGPU
Z Chen, M Yan, M Zhu, L Deng, G Li, S Li, Y Xie
2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), 1-9, 2020
Boosting Deep Neural Network Efficiency with Dual-Module Inference
L Liu, L Deng, Z Chen, Y Wang, S Li, J Zhang, Y Yang, Z Gu, Y Ding, ...
ICML 2020 - Proceedings of the 37th International Conference on Machine Learning, 2020
Efficient Tensor Core-based GPU Kernels for Structured Sparsity under Reduced Precision
Z Chen, Z Qu, L Liu, Y Ding, Y Xie
2021 Proceedings of the International Conference for High Performance …, 2021
Dynamic N:M Fine-grained Structured Sparse Attention Mechanism
Z Chen, Y Quan, Z Qu, L Liu, Y Ding, Y Xie
arXiv preprint arXiv:2203.00091, 2022
DOTA: detect and omit weak attentions for scalable transformer acceleration
Z Qu, L Liu, F Tu, Z Chen, Y Ding, Y Xie
Proceedings of the 27th ACM International Conference on Architectural …, 2022
Transformer Acceleration with Dynamic Sparse Attention
L Liu, Z Qu, Z Chen, Y Ding, Y Xie
arXiv preprint arXiv:2110.11299, 2021
Dynamic Sparse Attention for Scalable Transformer Acceleration
L Liu, Z Qu, Z Chen, F Tu, Y Ding, Y Xie
IEEE Transactions on Computers, 2022
Accelerating spatiotemporal supervised training of large-scale spiking neural networks on GPU
L Liang, Z Chen, L Deng, F Tu, G Li, Y Xie
2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), 658-663, 2022
Faith: An Efficient Framework for Transformer Verification on GPUs
B Feng, T Tang, Y Wang, Z Chen, Z Wang, S Yang, Y Xie, Y Ding
2022 USENIX Annual Technical Conference (USENIX ATC 22), 167-182, 2022