Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. P Wang, A Yang, R Men, J Lin, S Bai, Z Li, J Ma, C Zhou, J Zhou, H Yang. ICML 2022. Cited by 449.
Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. A Yang, Q Wang, J Liu, K Liu, Y Lyu, H Wu, Q She, S Li. ACL 2019 (Long Paper). Cited by 154.
M6: A Chinese multimodal pretrainer. A Yang, J Lin, R Men, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ... 2021. Cited by 107*.
InterBERT: Vision-and-language interaction for multi-modal pretraining. J Lin, A Yang, Y Zhang, J Liu, J Zhou, H Yang. arXiv preprint arXiv:2003.13198, 2020. Cited by 57.
SciDTB: Discourse dependency treebank for scientific abstracts. A Yang, S Li. ACL 2018 (Short Paper). Cited by 40.
Machine reading comprehension: a literature review. X Zhang, A Yang, S Li, Y Wang. arXiv preprint arXiv:1907.01686, 2019. Cited by 39.
M6-T: Exploring sparse expert models and beyond. A Yang, J Lin, R Men, C Zhou, L Jiang, X Jia, A Wang, J Zhang, J Wang, ... arXiv preprint arXiv:2105.15082, 2021. Cited by 37.
A Robust Adversarial Training Approach to Machine Reading Comprehension. K Liu, X Liu, A Yang, J Liu, J Su, S Li, Q She. AAAI 2020. Cited by 35.
Adaptations of ROUGE and BLEU to Better Evaluate Machine Reading Comprehension Task. A Yang, K Liu, J Liu, Y Lyu, S Li. MRQA Workshop @ ACL 2018. Cited by 27.
M6: Multi-Modality-to-Multi-Modality Multitask Mega-transformer for Unified Pretraining. A Yang, J Lin, R Men, C Zhou, Y Zhang, P Wang, J Zhou, J Tang, H Yang. KDD 2021. Cited by 24.
M6-10T: A sharing-delinking paradigm for efficient multi-trillion parameter pretraining. J Lin, A Yang, J Bai, C Zhou, L Jiang, X Jia, A Wang, J Zhang, Y Li, W Lin, ... arXiv preprint arXiv:2110.03888, 2021. Cited by 22.
Chinese CLIP: Contrastive vision-language pretraining in Chinese. A Yang, J Pan, J Lin, R Men, Y Zhang, J Zhou, C Zhou. arXiv preprint arXiv:2211.01335, 2022. Cited by 21.
Prompt Tuning for Generative Multimodal Pretrained Models. H Yang, J Lin, A Yang, P Wang, C Zhou, H Yang. ACL 2023 (Findings), 2022. Cited by 16.
Learning Relation Alignment for Calibrated Cross-modal Retrieval. S Ren, J Lin, G Zhao, R Men, A Yang, J Zhou, X Sun, H Yang. ACL 2021 (Long Paper). Cited by 16.
Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation. P Wang, J Lin, A Yang, C Zhou, Y Zhang, J Zhou, H Yang. ACL 2021 (Findings). Cited by 14.
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts. B Xu, A Yang, J Lin, Q Wang, C Zhou, Y Zhang, Z Mao. arXiv preprint arXiv:2305.14688, 2023. Cited by 7.
Domain ontology learning enhanced by optimized relation instance in DBpedia. L Xiao, C Ruan, A Yang, J Zhang, J Hu. LREC 2016, pp. 1452-1456. Cited by 6.
OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models. J Bai, R Men, H Yang, X Ren, K Dang, Y Zhang, X Zhou, P Wang, S Tan, ... arXiv preprint arXiv:2212.04408, 2022. Cited by 4.
Instance-wise prompt tuning for pretrained language models. Y Jiang, H Yang, J Lin, H Zhao, A Yang, C Zhou, H Yang, Z Yang, B Cui. arXiv preprint arXiv:2206.01958, 2022. Cited by 4.
Transferring General Multimodal Pretrained Models to Text Recognition. J Lin, X Ren, Y Zhang, G Liu, P Wang, A Yang, C Zhou. ACL 2023 (Findings), 2022. Cited by 2.