Bingyi Zhang
Title · Cited by · Year
Hardware acceleration of large scale GCN inference
B Zhang, H Zeng, V Prasanna
2020 IEEE 31st International Conference on Application-specific Systems …, 2020
Cited by 71 · 2020
BoostGCN: A framework for optimizing GCN inference on FPGA
B Zhang, R Kannan, V Prasanna
2021 IEEE 29th Annual International Symposium on Field-Programmable Custom …, 2021
Cited by 58 · 2021
HP-GNN: Generating high throughput GNN training implementation on CPU-FPGA heterogeneous platform
YC Lin, B Zhang, V Prasanna
Proceedings of the 2022 ACM/SIGDA International Symposium on Field …, 2022
Cited by 33 · 2022
Model-architecture co-design for high performance temporal GNN inference on FPGA
H Zhou, B Zhang, R Kannan, V Prasanna, C Busart
2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS …, 2022
Cited by 18 · 2022
A skeleton-based action recognition system for medical condition detection
J Yin, J Han, C Wang, B Zhang, X Zeng
2019 IEEE Biomedical Circuits and Systems Conference (BioCAS), 1-4, 2019
Cited by 17 · 2019
A real-time and hardware-efficient processor for skeleton-based action recognition with lightweight convolutional neural network
B Zhang, J Han, Z Huang, J Yang, X Zeng
IEEE Transactions on Circuits and Systems II: Express Briefs 66 (12), 2052-2056, 2019
Cited by 14 · 2019
GCN inference acceleration using high-level synthesis
YC Lin, B Zhang, V Prasanna
2021 IEEE High Performance Extreme Computing Conference (HPEC), 1-6, 2021
Cited by 13 · 2021
Efficient neighbor-sampling-based GNN training on CPU-FPGA heterogeneous platform
B Zhang, SR Kuppannagari, R Kannan, V Prasanna
2021 IEEE High Performance Extreme Computing Conference (HPEC), 1-7, 2021
Cited by 12 · 2021
Low-latency mini-batch GNN inference on CPU-FPGA heterogeneous platform
B Zhang, H Zeng, V Prasanna
2022 IEEE 29th International Conference on High Performance Computing, Data …, 2022
Cited by 11 · 2022
Accelerating large scale GCN inference on FPGA
B Zhang, H Zeng, V Prasanna
2020 IEEE 28th Annual International Symposium on Field-Programmable Custom …, 2020
Cited by 10 · 2020
Accurate, low-latency, efficient SAR automatic target recognition on FPGA
B Zhang, R Kannan, V Prasanna, C Busart
2022 32nd International Conference on Field-Programmable Logic and …, 2022
Cited by 9 · 2022
MiniTracker: a lightweight CNN-based system for visual object tracking on embedded device
B Zhang, X Li, J Han, X Zeng
2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), 1-5, 2018
Cited by 8 · 2018
GraphAGILE: An FPGA-based Overlay Accelerator for Low-latency GNN Inference
B Zhang, H Zeng, V Prasanna
IEEE Transactions on Parallel and Distributed Systems, 2023
Cited by 6 · 2023
Accelerating GNN Training on CPU Multi-FPGA Heterogeneous Platform
YC Lin, B Zhang, V Prasanna
Latin American High Performance Computing Conference, 16-30, 2022
Cited by 5 · 2022
HitGNN: High-throughput GNN Training Framework on CPU+Multi-FPGA Heterogeneous Platform
YC Lin, B Zhang, V Prasanna
IEEE Transactions on Parallel and Distributed Systems, 2024
Cited by 4 · 2024
Exploiting on-chip heterogeneity of Versal architecture for GNN inference acceleration
P Chen, P Manjunath, S Wijeratne, B Zhang, V Prasanna
2023 33rd International Conference on Field-Programmable Logic and …, 2023
Cited by 3 · 2023
Graph Neural Network for Accurate and Low-complexity SAR ATR
B Zhang, S Wijeratne, R Kannan, V Prasanna, C Busart
The Fifteenth International Conference on Advanced Geographic Information …, 2023
Cited by 3 · 2023
Dynasparse: Accelerating GNN Inference through Dynamic Sparsity Exploitation
B Zhang, V Prasanna
2023 International Parallel and Distributed Processing Symposium (IPDPS 2023), 2023
Cited by 3 · 2023
Performance modeling sparse MTTKRP using optical static random access memory on FPGA
S Wijeratne, A Jaiswal, AP Jacob, B Zhang, V Prasanna
2022 IEEE High Performance Extreme Computing Conference (HPEC), 1-7, 2022
Cited by 2 · 2022
A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification
J Fein-Ashley, T Ye, S Wickramasinghe, B Zhang, R Kannan, V Prasanna
arXiv preprint arXiv:2402.00564, 2024
Cited by 1 · 2024
Articles 1–20