Hao Zhang
Title
Cited by
Year
HD-CNN: hierarchical deep convolutional neural networks for large scale visual recognition
Z Yan, H Zhang, R Piramuthu, V Jagadeesh, D DeCoste, W Di, Y Yu
Proceedings of the IEEE international conference on computer vision, 2740-2748, 2015
Cited by 521* · 2015
Poseidon: An efficient communication architecture for distributed deep learning on GPU clusters
H Zhang, Z Zheng, S Xu, W Dai, Q Ho, X Liang, Z Hu, J Wei, P Xie, ...
2017 USENIX Annual Technical Conference (USENIX ATC 17), 181-193, 2017
Cited by 428* · 2017
Geeps: Scalable deep learning on distributed gpus with a gpu-specialized parameter server
H Cui, H Zhang, GR Ganger, PB Gibbons, EP Xing
Proceedings of the eleventh european conference on computer systems, 1-16, 2016
Cited by 368 · 2016
Automatic Photo Adjustment Using Deep Neural Networks
Z Yan, H Zhang, B Wang, S Paris, Y Yu
ACM Transactions on Graphics (TOG) 35 (2), 2016
Cited by 287 · 2016
Scan: Structure correcting adversarial network for organ segmentation in chest x-rays
W Dai, N Dong, Z Wang, X Liang, H Zhang, EP Xing
International Workshop on Deep Learning in Medical Image Analysis, 263-273, 2018
Cited by 242 · 2018
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality
WL Chiang, Z Li, Z Lin, Y Sheng, Z Wu, H Zhang, L Zheng, S Zhuang, ...
See https://vicuna.lmsys.org (accessed 14 April 2023), 2023
Cited by 222 · 2023
Recurrent topic-transition gan for visual paragraph generation
X Liang, Z Hu, H Zhang, C Gan, EP Xing
Proceedings of the IEEE international conference on computer vision, 3362-3371, 2017
Cited by 222 · 2017
Symbolic graph reasoning meets convolutions
X Liang, Z Hu, H Zhang, L Lin, EP Xing
Advances in neural information processing systems 31, 2018
Cited by 154 · 2018
Generative semantic manipulation with mask-contrasting gan
X Liang, H Zhang, L Lin, E Xing
Proceedings of the European Conference on Computer Vision (ECCV), 558-573, 2018
Cited by 103* · 2018
Alpa: Automating inter- and intra-operator parallelism for distributed deep learning
L Zheng, Z Li, H Zhang, Y Zhuang, Z Chen, Y Huang, Y Wang, Y Xu, ...
16th USENIX Symposium on Operating Systems Design and Implementation (OSDI …, 2022
Cited by 98 · 2022
Pollux: Co-adaptive cluster scheduling for goodput-optimized deep learning
A Qiao, SK Choe, SJ Subramanya, W Neiswanger, Q Ho, H Zhang, ...
15th USENIX Symposium on Operating Systems Design and Implementation …, 2021
Cited by 89 · 2021
A tensor-based scheme for stroke patients’ motor imagery EEG analysis in BCI-FES rehabilitation training
Y Liu, M Li, H Zhang, H Wang, J Li, J Jia, Y Wu, L Zhang
Journal of neuroscience methods 222, 238-249, 2014
Cited by 73 · 2014
Judging LLM-as-a-judge with MT-Bench and Chatbot Arena
L Zheng, WL Chiang, Y Sheng, S Zhuang, Z Wu, Y Zhuang, Z Lin, Z Li, ...
arXiv preprint arXiv:2306.05685, 2023
Cited by 68 · 2023
Toward understanding the impact of staleness in distributed machine learning
W Dai, Y Zhou, N Dong, H Zhang, EP Xing
arXiv preprint arXiv:1810.03264, 2018
Cited by 65 · 2018
Dynamic topic modeling for monitoring market competition from online text and image data
H Zhang, G Kim, EP Xing
Proceedings of the 21st ACM SIGKDD international conference on knowledge …, 2015
Cited by 64 · 2015
Structured generative adversarial networks
Z Deng, H Zhang, X Liang, L Yang, S Xu, J Zhu, EP Xing
Advances in neural information processing systems 30, 2017
Cited by 62 · 2017
A Boosting-Based Spatial-Spectral Model for Stroke Patients' EEG Analysis in Rehabilitation Training
Y Liu, H Zhang, M Chen, L Zhang
Transactions on Neural Systems and Rehabilitation Engineering, 2015
Cited by 57 · 2015
Terapipe: Token-level pipeline parallelism for training large-scale language models
Z Li, S Zhuang, S Guo, D Zhuo, H Zhang, D Song, I Stoica
International Conference on Machine Learning, 6543-6552, 2021
Cited by 54 · 2021
Autoloss: Learning discrete schedules for alternate optimization
H Xu, H Zhang, Z Hu, X Liang, R Salakhutdinov, E Xing
arXiv preprint arXiv:1810.02442, 2018
Cited by 43 · 2018
Combining the best of convolutional layers and recurrent layers: A hybrid network for semantic segmentation
Z Yan, H Zhang, Y Jia, T Breuel, Y Yu
arXiv preprint arXiv:1603.04871, 2016
Cited by 43 · 2016
Works 1–20