Furu Wei
Partner Research Manager, Microsoft Research
Verified email at microsoft.com · Homepage
Title
Cited by
Year
BEiT: BERT pre-training of image transformers
H Bao, L Dong, S Piao, F Wei
arXiv preprint arXiv:2106.08254, 2021
2134 · 2021
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
X Li, X Yin, C Li, P Zhang, X Hu, L Zhang, L Wang, H Hu, L Dong, F Wei, ...
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
1769 · 2020
VL-BERT: Pre-training of generic visual-linguistic representations
W Su, X Zhu, Y Cao, B Li, L Lu, F Wei, J Dai
arXiv preprint arXiv:1908.08530, 2019
1652 · 2019
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019
1577 · 2019
Learning sentiment-specific word embedding for twitter sentiment classification
D Tang, F Wei, N Yang, M Zhou, T Liu, B Qin
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
1562 · 2014
Swin transformer v2: Scaling up capacity and resolution
Z Liu, H Hu, Y Lin, Z Yao, Z Xie, Y Wei, J Ning, Y Cao, Z Zhang, L Dong, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
1239 · 2022
Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification
L Dong, F Wei, C Tan, D Tang, M Zhou, K Xu
ACL, 2014
1119 · 2014
WavLM: Large-scale self-supervised pre-training for full stack speech processing
S Chen, C Wang, Z Chen, Y Wu, S Liu, Z Chen, J Li, N Kanda, T Yoshioka, ...
IEEE Journal of Selected Topics in Signal Processing 16 (6), 1505-1518, 2022
966 · 2022
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
844 · 2020
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
817 · 2017
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
X Zhang, F Wei, M Zhou
ACL, 2019
742* · 2019
Recognizing named entities in tweets
X Liu, S Zhang, F Wei, M Zhou
Proceedings of the 49th annual meeting of the association for computational …, 2011
628 · 2011
LayoutLM: Pre-training of text and layout for document image understanding
Y Xu, M Li, L Cui, S Huang, F Wei, M Zhou
Proceedings of the 26th ACM SIGKDD international conference on knowledge …, 2020
627 · 2020
Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach
X Wang, F Wei, X Liu, M Zhou, M Zhang
Proceedings of the 20th ACM international conference on Information and …, 2011
621 · 2011
Question answering over freebase with multi-column convolutional neural networks
L Dong, F Wei, M Zhou, K Xu
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
558 · 2015
Learning-Based Processing of Natural Language Questions
M Zhou, F Wei, X Liu, H Sun, Y Duan, C Sun, HY Shum
US Patent App. 13/539,674, 2014
447 · 2014
SuperAgent: A customer service chatbot for e-commerce websites
L Cui, S Huang, F Wei, C Tan, C Duan, M Zhou
Proceedings of ACL 2017, system demonstrations, 97-102, 2017
431 · 2017
Context preserving dynamic word cloud visualization
W Cui, Y Wu, S Liu, F Wei, MX Zhou, H Qu
2010 IEEE Pacific Visualization Symposium (PacificVis), 121-128, 2010
422 · 2010
Neural document summarization by jointly learning to score and select sentences
Q Zhou, N Yang, F Wei, S Huang, M Zhou, T Zhao
arXiv preprint arXiv:1807.02305, 2018
413 · 2018
LayoutLMv2: Multi-modal pre-training for visually-rich document understanding
Y Xu, Y Xu, T Lv, L Cui, F Wei, G Wang, Y Lu, D Florencio, C Zhang, ...
arXiv preprint arXiv:2012.14740, 2020
409 · 2020
Articles 1–20