BERT loses patience: Fast and robust inference with early exit. W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei. Advances in Neural Information Processing Systems 33, 18330-18341, 2020. | 331 | 2020 |
BERT-of-Theseus: Compressing BERT by progressive module replacing. C Xu, W Zhou, T Ge, F Wei, M Zhou. arXiv preprint arXiv:2002.02925, 2020. | 219 | 2020 |
Max-margin tensor neural network for Chinese word segmentation. W Pei, T Ge, B Chang. Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014. | 205 | 2014 |
Towards time-aware knowledge graph completion. T Jiang, T Liu, T Ge, L Sha, B Chang, S Li, Z Sui. Proceedings of COLING 2016, the 26th International Conference on …, 2016. | 187 | 2016 |
Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. Z Wang, S Mao, W Wu, T Ge, F Wei, H Ji. arXiv preprint arXiv:2307.05300, 2023. | 166 | 2023 |
Fluency boost learning and inference for neural grammatical error correction. T Ge, F Wei, M Zhou. Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018. | 146 | 2018 |
Encoding temporal information for time-aware link prediction. T Jiang, T Liu, T Ge, L Sha, S Li, B Chang, Z Sui. Proceedings of the 2016 Conference on Empirical Methods in Natural Language …, 2016. | 140 | 2016 |
BERT-based lexical substitution. W Zhou, T Ge, K Xu, F Wei, M Zhou. Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019. | 127 | 2019 |
Parallel data augmentation for formality style transfer. Y Zhang, T Ge, X Sun. arXiv preprint arXiv:2005.07522, 2020. | 105* | 2020 |
Reaching human-level performance in automatic grammatical error correction: An empirical study. T Ge, F Wei, M Zhou. arXiv preprint arXiv:1807.01270, 2018. | 91 | 2018 |
Exploiting task-oriented resources to learn word embeddings for clinical abbreviation expansion. Y Liu, T Ge, KS Mathews, H Ji, DL McGuinness. arXiv preprint arXiv:1804.04225, 2018. | 86 | 2018 |
In-context autoencoder for context compression in a large language model. T Ge, J Hu, L Wang, X Wang, SQ Chen, F Wei. arXiv preprint arXiv:2307.06945, 2023. | 81 | 2023 |
An effective neural network model for graph-based dependency parsing. W Pei, T Ge, B Chang. Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015. | 80 | 2015 |
Instantaneous grammatical error correction with shallow aggressive decoding. X Sun, T Ge, F Wei, H Wang. arXiv preprint arXiv:2106.04970, 2021. | 57 | 2021 |
Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. H Xia, T Ge, P Wang, SQ Chen, F Wei, Z Sui. Findings of the Association for Computational Linguistics: EMNLP 2023, 3909-3925, 2023. | 56* | 2023 |
Inference with reference: Lossless acceleration of large language models. N Yang, T Ge, L Wang, B Jiao, D Jiang, L Yang, R Majumder, F Wei. arXiv preprint arXiv:2304.04487, 2023. | 54 | 2023 |
Low-code LLM: Visual programming over LLMs. Y Cai, S Mao, W Wu, Z Wang, Y Liang, T Ge, C Wu, W You, T Song, Y Xia, ... arXiv preprint arXiv:2304.08103, 2023. | 54* | 2023 |
Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. H Xia, Z Yang, Q Dong, P Wang, Y Li, T Ge, T Liu, W Li, Z Sui. arXiv preprint arXiv:2401.07851, 2024. | 53 | 2024 |
Improving the efficiency of grammatical error correction with erroneous span detection and correction. M Chen, T Ge, X Zhang, F Wei, M Zhou. arXiv preprint arXiv:2010.03260, 2020. | 51 | 2020 |
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression. C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei. arXiv preprint arXiv:2109.03228, 2021. | 47 | 2021 |