UniVL: A unified video and language pre-training model for multimodal understanding and generation H Luo, L Ji, B Shi, H Huang, N Duan, T Li, J Li, T Bharti, M Zhou arXiv preprint arXiv:2002.06353, 2020 | 399 | 2020 |
ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data D Qi, L Su, J Song, E Cui, T Bharti, A Sacheti arXiv preprint arXiv:2001.07966, 2020 | 262 | 2020 |
Unicoder: A universal language encoder by pre-training with multiple cross-lingual tasks H Huang, Y Liang, N Duan, M Gong, L Shou, D Jiang, M Zhou arXiv preprint arXiv:1909.00964, 2019 | 221 | 2019 |
M3P: Learning universal representations via multitask multilingual multimodal pre-training M Ni, H Huang, L Su, E Cui, T Bharti, L Wang, D Zhang, N Duan Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021 | 101 | 2021 |
XGPT: Cross-modal generative pre-training for image captioning Q Xia, H Huang, N Duan, D Zhang, L Ji, Z Sui, E Cui, T Bharti, M Zhou Natural Language Processing and Chinese Computing: 10th CCF International …, 2021 | 67 | 2021 |
DeltaLM: Encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders S Ma, L Dong, S Huang, D Zhang, A Muzio, S Singhal, HH Awadalla, ... arXiv preprint arXiv:2106.13736, 2021 | 63 | 2021 |
Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting H Huang, T Tang, D Zhang, WX Zhao, T Song, Y Xia, F Wei arXiv preprint arXiv:2305.07004, 2023 | 42 | 2023 |
Multilingual machine translation systems from Microsoft for WMT21 shared task J Yang, S Ma, H Huang, D Zhang, L Dong, S Huang, A Muzio, S Singhal, ... arXiv preprint arXiv:2111.02086, 2021 | 35 | 2021 |
BlonDe: An automatic evaluation metric for document-level machine translation YE Jiang, T Liu, S Ma, D Zhang, J Yang, H Huang, R Sennrich, R Cotterell, ... arXiv preprint arXiv:2103.11878, 2021 | 35 | 2021 |
XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders S Ma, J Yang, H Huang, Z Chi, L Dong, D Zhang, HH Awadalla, A Muzio, ... arXiv preprint arXiv:2012.15547, 2020 | 26 | 2020 |
Multilingual agreement for multilingual neural machine translation J Yang, Y Yin, S Ma, H Huang, D Zhang, Z Li, F Wei Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021 | 23 | 2021 |
Chain-of-dictionary prompting elicits translation in large language models H Lu, H Huang, D Zhang, H Yang, W Lam, F Wei arXiv preprint arXiv:2305.06575, 2023 | 19 | 2023 |
NUWA-LIP: language-guided image inpainting with defect-free VQGAN M Ni, X Li, W Zuo Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023 | 17 | 2023 |
GanLM: Encoder-decoder pre-training with an auxiliary discriminator J Yang, S Ma, L Dong, S Huang, H Huang, Y Yin, D Zhang, L Yang, F Wei, ... arXiv preprint arXiv:2212.10218, 2022 | 16 | 2022 |
GEM: A general evaluation benchmark for multimodal tasks L Su, N Duan, E Cui, L Ji, C Wu, H Luo, Y Liu, M Zhong, T Bharti, ... arXiv preprint arXiv:2106.09889, 2021 | 13 | 2021 |
Hierarchical context-aware network for dense video event captioning L Ji, X Guo, H Huang, X Chen Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021 | 12 | 2021 |
Not all metrics are guilty: Improving NLG evaluation with LLM paraphrasing T Tang, H Lu, YE Jiang, H Huang, D Zhang, WX Zhao, F Wei arXiv preprint arXiv:2305.15067, 2023 | 11 | 2023 |
LVP-M3: language-aware visual prompt for multilingual multimodal machine translation H Guo, J Liu, H Huang, J Yang, Z Li, D Zhang, Z Cui, F Wei arXiv preprint arXiv:2210.15461, 2022 | 11 | 2022 |
Improving the robustness of deep reading comprehension models by leveraging syntax prior B Wu, H Huang, Z Wang, Q Feng, J Yu, B Wang Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 53-57, 2019 | 9 | 2019 |