Song Han
Title · Cited by · Year
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
S Han, H Mao, WJ Dally
International Conference on Learning Representations (ICLR'16 best paper award), 2015
Cited by 7156, 2015
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
FN Iandola, S Han, MW Moskewicz, K Ashraf, WJ Dally, K Keutzer
arXiv preprint arXiv:1602.07360, 2016
Cited by 6033, 2016
Learning both Weights and Connections for Efficient Neural Network
S Han, J Pool, J Tran, W Dally
Advances in Neural Information Processing Systems (NIPS), 1135-1143, 2015
Cited by 5013, 2015
EIE: Efficient Inference Engine on Compressed Deep Neural Network
S Han, X Liu, H Mao, J Pu, A Pedram, MA Horowitz, WJ Dally
International Symposium on Computer Architecture (ISCA 2016), 2016
Cited by 2333, 2016
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
H Cai, L Zhu, S Han
International Conference on Learning Representations (ICLR) 2019, 2018
Cited by 1260, 2018
AMC: Automl for model compression and acceleration on mobile devices
Y He, J Lin, Z Liu, H Wang, LJ Li, S Han
Proceedings of the European Conference on Computer Vision (ECCV), 784-800, 2018
Cited by 1035, 2018
Trained Ternary Quantization
C Zhu, S Han, H Mao, WJ Dally
International Conference on Learning Representations (ICLR) 2017, 2016
Cited by 981, 2016
Deep gradient compression: Reducing the communication bandwidth for distributed training
Y Lin, S Han, H Mao, Y Wang, WJ Dally
International Conference on Learning Representations (ICLR) 2018, 2017
Cited by 904, 2017
TSM: Temporal shift module for efficient video understanding
J Lin, C Gan, S Han
Proceedings of the IEEE International Conference on Computer Vision, 7083-7093, 2019
Cited by 847, 2019
Deep leakage from gradients
L Zhu, Z Liu, S Han
Advances in neural information processing systems 32, 2019
Cited by 629, 2019
ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA.
S Han, J Kang, H Mao, Y Hu, X Li, Y Li, D Xie, H Luo, S Yao, Y Wang, ...
International Symposium on Field-Programmable Gate Arrays (FPGA'17), 75-84, 2017
Cited by 625, 2017
Once-for-all: Train one network and specialize it for efficient deployment
H Cai, C Gan, T Wang, Z Zhang, S Han
International Conference on Learning Representations (ICLR) 2020, 2019
Cited by 559, 2019
HAQ: Hardware-aware automated quantization with mixed precision
K Wang, Z Liu, Y Lin, J Lin, S Han
Proceedings of the IEEE conference on computer vision and pattern …, 2019
Cited by 553, 2019
Angel-eye: A complete design flow for mapping CNN onto embedded FPGA
K Guo, L Sui, J Qiu, J Yu, J Wang, S Yao, S Han, Y Wang, H Yang
IEEE transactions on computer-aided design of integrated circuits and …, 2017
Cited by 381, 2017
Exploring the granularity of sparsity in convolutional neural networks
H Mao, S Han, J Pool, W Li, X Liu, Y Wang, WJ Dally
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2017
Cited by 337*, 2017
Model compression and hardware acceleration for neural networks: A comprehensive survey
L Deng, G Li, S Han, L Shi, Y Xie
Proceedings of the IEEE 108 (4), 485-532, 2020
Cited by 303, 2020
Point-voxel cnn for efficient 3d deep learning
Z Liu, H Tang, Y Lin, S Han
Advances in Neural Information Processing Systems 32, 2019
Cited by 267, 2019
Fast inference of deep neural networks in FPGAs for particle physics
J Duarte, S Han, P Harris, S Jindariani, E Kreinar, B Kreis, J Ngadiuba, ...
Journal of Instrumentation 13 (07), P07027, 2018
Cited by 250, 2018
DSD: Dense-Sparse-Dense Training for Deep Neural Networks
S Han, J Pool, S Narang, H Mao, S Tang, E Elsen, B Catanzaro, J Tran, ...
International Conference on Learning Representations (ICLR) 2017, 2016
Cited by 245*, 2016
Differentiable augmentation for data-efficient gan training
S Zhao, Z Liu, J Lin, JY Zhu, S Han
NeurIPS'20, 2020
Cited by 212, 2020
Articles 1–20