Tao Ge
Microsoft Research
Verified email at microsoft.com - Homepage
Title
Cited by
Year
BERT loses patience: Fast and robust inference with early exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
Advances in Neural Information Processing Systems 33, 18330-18341, 2020
331 · 2020
BERT-of-Theseus: Compressing BERT by progressive module replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
arXiv preprint arXiv:2002.02925, 2020
219 · 2020
Max-margin tensor neural network for Chinese word segmentation
W Pei, T Ge, B Chang
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
205 · 2014
Towards time-aware knowledge graph completion
T Jiang, T Liu, T Ge, L Sha, B Chang, S Li, Z Sui
Proceedings of COLING 2016, the 26th International Conference on …, 2016
187 · 2016
Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration
Z Wang, S Mao, W Wu, T Ge, F Wei, H Ji
arXiv preprint arXiv:2307.05300, 2023
166 · 2023
Fluency boost learning and inference for neural grammatical error correction
T Ge, F Wei, M Zhou
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
146 · 2018
Encoding temporal information for time-aware link prediction
T Jiang, T Liu, T Ge, L Sha, S Li, B Chang, Z Sui
Proceedings of the 2016 conference on empirical methods in natural language …, 2016
140 · 2016
BERT-based lexical substitution
W Zhou, T Ge, K Xu, F Wei, M Zhou
Proceedings of the 57th annual meeting of the association for computational …, 2019
127 · 2019
Parallel data augmentation for formality style transfer
Y Zhang, T Ge, X Sun
arXiv preprint arXiv:2005.07522, 2020
105* · 2020
Reaching human-level performance in automatic grammatical error correction: An empirical study
T Ge, F Wei, M Zhou
arXiv preprint arXiv:1807.01270, 2018
91 · 2018
Exploiting task-oriented resources to learn word embeddings for clinical abbreviation expansion
Y Liu, T Ge, KS Mathews, H Ji, DL McGuinness
arXiv preprint arXiv:1804.04225, 2018
86 · 2018
In-context autoencoder for context compression in a large language model
T Ge, J Hu, L Wang, X Wang, SQ Chen, F Wei
arXiv preprint arXiv:2307.06945, 2023
81 · 2023
An effective neural network model for graph-based dependency parsing
W Pei, T Ge, B Chang
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
80 · 2015
Instantaneous grammatical error correction with shallow aggressive decoding
X Sun, T Ge, F Wei, H Wang
arXiv preprint arXiv:2106.04970, 2021
57 · 2021
Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation
H Xia, T Ge, P Wang, SQ Chen, F Wei, Z Sui
Findings of the Association for Computational Linguistics: EMNLP 2023, 3909-3925, 2023
56* · 2023
Inference with reference: Lossless acceleration of large language models
N Yang, T Ge, L Wang, B Jiao, D Jiang, L Yang, R Majumder, F Wei
arXiv preprint arXiv:2304.04487, 2023
54 · 2023
Low-code LLM: Visual programming over LLMs
Y Cai, S Mao, W Wu, Z Wang, Y Liang, T Ge, C Wu, W You, T Song, Y Xia, ...
arXiv preprint arXiv:2304.08103, 2023
54* · 2023
Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding
H Xia, Z Yang, Q Dong, P Wang, Y Li, T Ge, T Liu, W Li, Z Sui
arXiv preprint arXiv:2401.07851, 2024
53 · 2024
Improving the efficiency of grammatical error correction with erroneous span detection and correction
M Chen, T Ge, X Zhang, F Wei, M Zhou
arXiv preprint arXiv:2010.03260, 2020
51 · 2020
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
arXiv preprint arXiv:2109.03228, 2021
47 · 2021
Articles 1–20