Yichong Xu
Member of Technical Staff, Character.AI
Verified email at microsoft.com · Homepage
Title · Cited by · Year
The application of two-level attention models in deep convolutional neural network for fine-grained image classification
T Xiao, Y Xu, K Yang, J Zhang, Y Peng, Z Zhang
Proceedings of the IEEE conference on computer vision and pattern …, 2015
1041 · 2015
G-Eval: NLG evaluation using GPT-4 with better human alignment
Y Liu, D Iter, Y Xu, S Wang, R Xu, C Zhu
arXiv preprint arXiv:2303.16634, 2023
624 · 2023
An empirical study of training end-to-end vision-and-language transformers
ZY Dou, Y Xu, Z Gan, J Wang, S Wang, L Wang, C Zhu, P Zhang, L Yuan, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
350 · 2022
Generate rather than retrieve: Large language models are strong context generators
W Yu, D Iter, S Wang, Y Xu, M Ju, S Sanyal, C Zhu, M Zeng, M Jiang
arXiv preprint arXiv:2209.10063, 2022
217 · 2022
Want to reduce labeling cost? GPT-3 can help
S Wang, Y Liu, Y Xu, C Zhu, M Zeng
arXiv preprint arXiv:2108.13487, 2021
213 · 2021
Scale-invariant convolutional neural networks
Y Xu, T Xiao, J Zhang, K Yang, Z Zhang
arXiv preprint arXiv:1411.6369, 2014
175 · 2014
DialogLM: Pre-trained model for long dialogue understanding and summarization
M Zhong, Y Liu, Y Xu, C Zhu, M Zeng
Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11765 …, 2022
117 · 2022
KG-FiD: Infusing knowledge graph in Fusion-in-Decoder for open-domain question answering
D Yu, C Zhu, Y Fang, W Yu, S Wang, Y Xu, X Ren, Y Yang, M Zeng
arXiv preprint arXiv:2110.04330, 2021
106 · 2021
Training data is more valuable than you think: A simple and effective method by retrieving from training data
S Wang, Y Xu, Y Fang, Y Liu, S Sun, R Xu, C Zhu, M Zeng
arXiv preprint arXiv:2203.08773, 2022
102 · 2022
REVIVE: Regional visual representation matters in knowledge-based visual question answering
Y Lin, Y Xie, D Chen, Y Xu, C Zhu, L Yuan
Advances in Neural Information Processing Systems 35, 10560-10571, 2022
84 · 2022
Multi-task learning with sample re-weighting for machine reading comprehension
Y Xu, X Liu, Y Shen, J Liu, J Gao
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
70* · 2019
Dict-BERT: Enhancing language model pre-training with dictionary
W Yu, C Zhu, Y Fang, D Yu, S Wang, Y Xu, M Zeng, M Jiang
arXiv preprint arXiv:2110.06490, 2021
66 · 2021
Active learning for graph neural networks via node feature propagation
Y Wu, Y Xu, A Singh, Y Yang, A Dubrawski
arXiv preprint arXiv:1910.07567, 2019
66 · 2019
Fusing context into knowledge graph for commonsense question answering
Y Xu, C Zhu, R Xu, Y Liu, M Zeng, X Huang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
61 · 2021
Human parity on CommonsenseQA: Augmenting self-attention with external attention
Y Xu, C Zhu, S Wang, S Sun, H Cheng, X Liu, J Gao, P He, M Zeng, ...
arXiv preprint arXiv:2112.03254, 2021
60 · 2021
Preference-based reinforcement learning with finite-time guarantees
Y Xu, R Wang, L Yang, A Singh, A Dubrawski
Advances in Neural Information Processing Systems 33, 18784-18794, 2020
60 · 2020
On Strategyproof Conference Peer Review
Y Xu, H Zhao, X Shi, NB Shah
Proceedings of the Twenty-Eighth International Joint Conference on …, 2019
48 · 2019
Noise-Tolerant Interactive Learning Using Pairwise Comparisons
Y Xu, H Zhang, K Miller, A Singh, A Dubrawski
Advances in Neural Information Processing Systems, 2431--2440, 2017
48* · 2017
Dynamic fusion networks for machine reading comprehension
Y Xu, J Liu, J Gao, Y Shen, X Liu
arXiv preprint arXiv:1711.04964, 2017
48* · 2017
Small models are valuable plug-ins for large language models
C Xu, Y Xu, S Wang, Y Liu, C Zhu, J McAuley
arXiv preprint arXiv:2305.08848, 2023
42 · 2023
Articles 1–20