Transformers: State-of-the-art natural language processing T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ... Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020 | 5356 | 2020 |
HuggingFace's Transformers: State-of-the-art natural language processing T Wolf arXiv preprint arXiv:1910.03771, 2019 | 3172 | 2019 |
OPT: Open pre-trained transformer language models S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ... arXiv preprint arXiv:2205.01068, 2022 | 2238 | 2022 |
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... arXiv preprint arXiv:2206.04615, 2022 | 1032 | 2022 |
8-bit optimizers via block-wise quantization T Dettmers, M Lewis, S Shleifer, L Zettlemoyer arXiv preprint arXiv:2110.02861, 2021 | 190 | 2021 |
PyTorch FSDP: Experiences on scaling fully sharded data parallel Y Zhao, A Gu, R Varma, L Luo, CC Huang, M Xu, L Wright, H Shojanazeri, ... arXiv preprint arXiv:2304.11277, 2023 | 152 | 2023 |
OPT: Open pre-trained transformer language models S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ... URL https://arxiv.org/abs/2205.01068, 2022 | 108 | 2023 |
Pre-trained summarization distillation S Shleifer, AM Rush arXiv preprint arXiv:2010.13002, 2020 | 99 | 2020 |
Efficient large scale language modeling with mixtures of experts M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ... arXiv preprint arXiv:2112.10684, 2021 | 93 | 2021 |
HuggingFace's Transformers: State-of-the-art natural language processing T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ... arXiv preprint arXiv:1910.03771, 2019 | 79 | 2019 |
Low resource text classification with ULMFiT and backtranslation S Shleifer arXiv preprint arXiv:1903.09244, 2019 | 72 | 2019 |
NormFormer: Improved transformer pretraining with extra normalization S Shleifer, J Weston, M Ott arXiv preprint arXiv:2110.09456, 2021 | 54 | 2021 |
Few-shot learning with multilingual generative language models XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ... Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022 | 49 | 2022 |
Few-shot learning with multilingual language models XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ... arXiv preprint arXiv:2112.10668, 2021 | 48 | 2021 |
Incrementally improving Graph WaveNet performance on traffic prediction S Shleifer, C McCreery, V Chitters arXiv preprint arXiv:1912.07390, 2019 | 23 | 2019 |
Using small proxy datasets to accelerate hyperparameter search S Shleifer, E Prokop arXiv preprint arXiv:1906.04887, 2019 | 23 | 2019 |
Efficient language modeling with sparse all-MLP P Yu, M Artetxe, M Ott, S Shleifer, H Gong, V Stoyanov, X Li arXiv preprint arXiv:2203.06850, 2022 | 12 | 2022 |
PyTorch FSDP: Experiences on scaling fully sharded data parallel Y Zhao, A Gu, R Varma, L Luo, CC Huang, M Xu, L Wright, H Shojanazeri, S Li, ... 2023 | 10 | 2023 |
Classification as Decoder: Trading Flexibility for Control in Neural Dialogue S Shleifer, M Chablani, N Katariya, A Kannan, X Amatriain arXiv preprint arXiv:1910.03476, 2019 | | 2019 |
Classification as Decoder: Trading Flexibility for Control in Multi Domain Dialogue S Shleifer, M Chablani, N Katariya, A Kannan, X Amatriain | | |