Furu Wei
Partner Research Manager, Microsoft Research
Verified email at microsoft.com - Homepage
Title
Cited by
Year
BEiT: BERT pre-training of image transformers
H Bao, L Dong, S Piao, F Wei
arXiv preprint arXiv:2106.08254, 2021
2686 · 2021
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
X Li, X Yin, C Li, P Zhang, X Hu, L Zhang, L Wang, H Hu, L Dong, F Wei, ...
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
2014 · 2020
VL-BERT: Pre-training of generic visual-linguistic representations
W Su, X Zhu, Y Cao, B Li, L Lu, F Wei, J Dai
arXiv preprint arXiv:1908.08530, 2019
1814 · 2019
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019
1760 · 2019
Swin Transformer V2: Scaling up capacity and resolution
Z Liu, H Hu, Y Lin, Z Yao, Z Xie, Y Wei, J Ning, Y Cao, Z Zhang, L Dong, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
1708 · 2022
Learning sentiment-specific word embedding for twitter sentiment classification
D Tang, F Wei, N Yang, M Zhou, T Liu, B Qin
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
1613 · 2014
WavLM: Large-scale self-supervised pre-training for full stack speech processing
S Chen, C Wang, Z Chen, Y Wu, S Liu, Z Chen, J Li, N Kanda, T Yoshioka, ...
IEEE Journal of Selected Topics in Signal Processing 16 (6), 1505-1518, 2022
1496 · 2022
Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification
L Dong, F Wei, C Tan, D Tang, M Zhou, K Xu
ACL, 2014
1209 · 2014
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
1067 · 2020
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
843 · 2017
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
X Zhang, F Wei, M Zhou
ACL, 2019
808* · 2019
LayoutLM: Pre-training of text and layout for document image understanding
Y Xu, M Li, L Cui, S Huang, F Wei, M Zhou
Proceedings of the 26th ACM SIGKDD international conference on knowledge …, 2020
746 · 2020
Image as a foreign language: BEiT pretraining for all vision and vision-language tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
arXiv preprint arXiv:2208.10442, 2022
676* · 2022
Recognizing named entities in tweets
X Liu, S Zhang, F Wei, M Zhou
Proceedings of the 49th annual meeting of the association for computational …, 2011
638 · 2011
Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach
X Wang, F Wei, X Liu, M Zhou, M Zhang
Proceedings of the 20th ACM international conference on Information and …, 2011
633 · 2011
Question answering over freebase with multi-column convolutional neural networks
L Dong, F Wei, M Zhou, K Xu
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
585 · 2015
LayoutLMv2: Multi-modal pre-training for visually-rich document understanding
Y Xu, Y Xu, T Lv, L Cui, F Wei, G Wang, Y Lu, D Florencio, C Zhang, ...
arXiv preprint arXiv:2012.14740, 2020
485 · 2020
SuperAgent: A customer service chatbot for e-commerce websites
L Cui, S Huang, F Wei, C Tan, C Duan, M Zhou
Proceedings of ACL 2017, system demonstrations, 97-102, 2017
479 · 2017
Neural codec language models are zero-shot text to speech synthesizers
C Wang, S Chen, Y Wu, Z Zhang, L Zhou, S Liu, Z Chen, Y Liu, H Wang, ...
arXiv preprint arXiv:2301.02111, 2023
474 · 2023
Learning-Based Processing of Natural Language Questions
M Zhou, F Wei, X Liu, H Sun, Y Duan, C Sun, HY Shum
US Patent App. 13/539,674, 2014
466 · 2014
Articles 1–20