Chi-Liang Liu
Title · Cited by · Year
SpeechBERT: An audio-and-text jointly learned language model for end-to-end spoken question answering
YS Chuang, CL Liu, HY Lee, L Lee
arXiv preprint arXiv:1910.11559, 2019
Cited by: 128 · Year: 2019
Spoken SQuAD: A study of mitigating the impact of speech recognition errors on listening comprehension
CH Li, SL Wu, CL Liu, H Lee
arXiv preprint arXiv:1804.00320, 2018
Cited by: 87 · Year: 2018
Zero-shot reading comprehension by cross-lingual transfer learning with multi-lingual language representation model
TY Hsu, CL Liu, H Lee
arXiv preprint arXiv:1909.09587, 2019
Cited by: 71 · Year: 2019
A study of cross-lingual ability and language-specific information in multilingual BERT
CL Liu, TY Hsu, YS Chuang, HY Lee
arXiv preprint arXiv:2004.09205, 2020
Cited by: 21 · Year: 2020
SpeechNet: A universal modularized model for speech processing tasks
YC Chen, PH Chi, S Yang, KW Chang, J Lin, SF Huang, DR Liu, CL Liu, ...
arXiv preprint arXiv:2105.03070, 2021
Cited by: 14 · Year: 2021
Machine comprehension of spoken content: TOEFL listening test and spoken SQuAD
CH Lee, H Lee, SL Wu, CL Liu, W Fang, JY Hsu, BH Tseng
IEEE/ACM Transactions on Audio, Speech, and Language Processing 27 (9), 1469 …, 2019
Cited by: 11 · Year: 2019
What makes multilingual BERT multilingual?
CL Liu, TY Hsu, YS Chuang, H Lee
arXiv preprint arXiv:2010.10938, 2020
Cited by: 8 · Year: 2020
Structured prompt tuning
CL Liu, H Lee, W Yih
arXiv preprint arXiv:2205.12309, 2022
Cited by: 5 · Year: 2022
Unsupervised multiple choices question answering: Start learning from basic knowledge
CL Liu, H Lee
arXiv preprint arXiv:2010.11003, 2020
Cited by: 4 · Year: 2020
Looking for clues of language in multilingual BERT to improve cross-lingual generalization
CL Liu, TY Hsu, YS Chuang, CY Li, H Lee
arXiv preprint arXiv:2010.10041, 2020
Cited by: 3 · Year: 2020
Language representation in multilingual BERT and its applications to improve cross-lingual generalization
C Liu, T Hsu, Y Chuang, C Li, H Lee
CoRR, 2020
Cited by: 3 · Year: 2020
Articles 1–11