Jonghyun Bae
Verified email at google.com - Homepage
Title
Cited by
Year
Layerweaver: Maximizing resource utilization of neural processing units via layer-wise scheduling
YH Oh, S Kim, Y Jin, S Son, J Bae, J Lee, Y Park, DU Kim, TJ Ham, ...
2021 IEEE International Symposium on High-Performance Computer Architecture …, 2021
Cited by 37, 2021
FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks
J Bae, J Lee, Y Jin, S Son, S Kim, H Jang, TJ Ham, JW Lee
19th USENIX Conference on File and Storage Technologies (FAST 21), 387-401, 2021
Cited by 36, 2021
A case for hardware-based demand paging
G Lee, W Jin, W Song, J Gong, J Bae, TJ Ham, JW Lee, J Jeong
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture …, 2020
Cited by 25, 2020
Practical erase suspension for modern low-latency SSDs
S Kim, J Bae, H Jang, W Jin, J Gong, S Lee, TJ Ham, JW Lee
2019 USENIX Annual Technical Conference (USENIX ATC 19), 813-820, 2019
Cited by 24, 2019
Behemoth: A flash-centric training accelerator for extreme-scale DNNs
S Kim, Y Jin, G Sohn, J Bae, TJ Ham, JW Lee
19th USENIX Conference on File and Storage Technologies (FAST 21), 371-385, 2021
Cited by 20, 2021
ASAP: Fast mobile application switch via adaptive prepaging
S Son, SY Lee, Y Jin, J Bae, J Jeong, TJ Ham, JW Lee, H Yoon
2021 USENIX Annual Technical Conference (USENIX ATC 21), 365-380, 2021
Cited by 16, 2021
Jointly optimizing task granularity and concurrency for in-memory MapReduce frameworks
J Bae, H Jang, W Jin, J Heo, J Jang, JY Hwang, S Cho, JW Lee
2017 IEEE International Conference on Big Data (Big Data), 130-140, 2017
Cited by 11, 2017
SSDStreamer: Specializing I/O stack for large-scale machine learning
J Bae, H Jang, J Gong, W Jin, S Kim, J Jang, TJ Ham, J Jeong, JW Lee
IEEE Micro 39 (5), 73-81, 2019
Cited by 4, 2019
DDStore: Distributed Data Store for Scalable Training of Graph Neural Networks on Large Atomistic Modeling Datasets
JY Choi, M Lupo Pasini, P Zhang, K Mehta, F Liu, J Bae, K Ibrahim
Proceedings of the SC'23 Workshops of The International Conference on High …, 2023
Cited by 1, 2023
L3: Accelerator-friendly lossless image format for high-resolution, high-throughput DNN training
J Bae, W Baek, TJ Ham, JW Lee
European Conference on Computer Vision, 171-188, 2022
Cited by 1, 2022
Liquid: Mix-and-Match Multiple Image Formats to Balance DNN Training Pipeline
W Baek, J Bae, D Lee, H Bae, Y Park, JW Lee
Proceedings of the 14th ACM SIGOPS Asia-Pacific Workshop on Systems, 50-57, 2023
2023
Accelerator system for training deep neural network model using NAND flash memory and operating method thereof
JW Lee, Y Jin, JH Bae, GA Sohn, TJ Ham
US Patent App. 18/089,141, 2023
2023
Eager Memory Management for In-Memory Data Analytics
H Jang, J Bae, TJ Ham, JW Lee
IEICE Transactions on Information and Systems 102 (3), 632-636, 2019
2019
Articles 1–13