Liu Liu
Title
Cited by
Year
L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks
S Wu, G Li, L Deng, L Liu, D Wu, Y Xie, L Shi
IEEE Transactions on Neural Networks and Learning Systems 30 (7), 2043-2051, 2018
115 | 2018
Leveraging 3D technologies for hardware security: Opportunities and challenges
P Gu, S Li, D Stow, R Barnes, L Liu, Y Xie, E Kursun
2016 International Great Lakes Symposium on VLSI (GLSVLSI), 347-352, 2016
36 | 2016
Dynamic Sparse Graph for Efficient Deep Learning
L Liu, L Deng, X Hu, M Zhu, G Li, Y Ding, Y Xie
International Conference on Learning Representations (ICLR), 2019
32 | 2019
NVSim-CAM: a circuit-level simulator for emerging nonvolatile memory based content-addressable memory
S Li, L Liu, P Gu, C Xu, Y Xie
2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 1-7, 2016
25 | 2016
Building energy-efficient multi-level cell STT-RAM caches with data compression
L Liu, P Chi, S Li, Y Cheng, Y Xie
2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), 751-756, 2017
23 | 2017
CNNLab: a novel parallel framework for neural networks using GPU and FPGA - a practical study with trade-off analysis
M Zhu, L Liu, C Wang, Y Xie
arXiv preprint arXiv:1606.06234, 2016
22 | 2016
DUET: Boosting Deep Neural Network Efficiency on Dual-Module Architecture
L Liu, Z Qu, L Deng, F Tu, S Li, X Hu, Z Gu, Y Ding, Y Xie
53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020
16 | 2020
π-RT: A Runtime Framework to Enable Energy-Efficient Real-Time Robotic Vision Applications on Heterogeneous Architectures
L Liu, J Tang, S Liu, B Yu, Y Xie, JL Gaudiot
Computer 54 (4), 14-25, 2021
13* | 2021
SemiMap: A semi-folded convolution mapping for speed-overhead balance on crossbars
L Deng, L Liang, G Wang, L Chang, X Hu, X Ma, L Liu, J Pei, G Li, Y Xie
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2018
13 | 2018
Computation on sparse neural networks and its implications for future hardware
F Sun, M Qin, T Zhang, L Liu, YK Chen, Y Xie
2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6, 2020
11* | 2020
Boosting Deep Neural Network Efficiency with Dual-Module Inference
L Liu, L Deng, Z Chen, Y Wang, S Li, J Zhang, Y Yang, Z Gu, Y Ding, ...
Thirty-seventh International Conference on Machine Learning (ICML), 2020
7 | 2020
Fast object tracking on a many-core neural network chip
L Deng, Z Zou, X Ma, L Liang, G Wang, X Hu, L Liu, J Pei, G Li, Y Xie
Frontiers in Neuroscience 12, 841, 2018
7 | 2018
Efficient Tensor Core-Based GPU Kernels for Structured Sparsity under Reduced Precision
Z Chen, Z Qu, L Liu, Y Ding, Y Xie
SC'21 - International Conference for High Performance Computing, Networking …, 2021
3 | 2021
INSPIRE: in-storage private information retrieval via protocol and architecture co-design
J Lin, L Liang, Z Qu, I Ahmad, L Liu, F Tu, T Gupta, Y Ding, Y Xie
Proceedings of the 49th Annual International Symposium on Computer …, 2022
1 | 2022
ENMC: Extreme Near-Memory Classification via Approximate Screening
L Liu, J Lin, Z Qu, Y Ding, Y Xie
54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2021
1 | 2021
Dynamic N:M Fine-grained Structured Sparse Attention Mechanism
Z Chen, Y Quan, Z Qu, L Liu, Y Ding, Y Xie
arXiv preprint arXiv:2203.00091, 2022
2022
A one-for-all and O(V log(V))-cost solution for parallel merge style operations on sorted key-value arrays
B Wang, L Deng, F Sun, G Dai, L Liu, Y Wang, Y Xie
Proceedings of the 27th ACM International Conference on Architectural …, 2022
2022
DOTA: detect and omit weak attentions for scalable transformer acceleration
Z Qu, L Liu, F Tu, Z Chen, Y Ding, Y Xie
Proceedings of the 27th ACM International Conference on Architectural …, 2022
2022
A 28nm 15.59 µJ/Token Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline/Parallel Reconfigurable Modes
F Tu, Z Wu, Y Wang, L Liang, L Liu, Y Ding, L Liu, S Wei, Y Xie, S Yin
2022 IEEE International Solid-State Circuits Conference (ISSCC) 65, 466-468, 2022
2022
Transformer Acceleration with Dynamic Sparse Attention
L Liu, Z Qu, Z Chen, Y Ding, Y Xie
arXiv preprint arXiv:2110.11299, 2021
2021