Junyang Lin
Qwen Team, Alibaba Group & Peking University
Verified email at alibaba-inc.com - Homepage
Title
Cited by
Year
OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework
P Wang, A Yang, R Men, J Lin, S Bai, Z Li, J Ma, C Zhou, J Zhou, H Yang
ICML 2022, 2022
Cited by 925 (2022)
Qwen technical report
J Bai, S Bai, Y Chu, Z Cui, K Dang, X Deng, Y Fan, W Ge, Y Han, F Huang, ...
arXiv preprint arXiv:2309.16609, 2023
Cited by 728 (2023)
Qwen-VL: A frontier large vision-language model with versatile abilities
J Bai, S Bai, S Yang, S Wang, S Tan, P Wang, J Lin, C Zhou, J Zhou
arXiv preprint arXiv:2308.12966, 2023
Cited by 625* (2023)
CogView: Mastering text-to-image generation via transformers
M Ding, Z Yang, W Hong, W Zheng, C Zhou, D Yin, J Lin, X Zou, Z Shao, ...
Advances in neural information processing systems 34, 19822-19835, 2021
Cited by 613 (2021)
Understanding and improving layer normalization
J Xu, X Sun, Z Zhang, G Zhao, J Lin
Advances in neural information processing systems 32, 2019
Cited by 321 (2019)
Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation
J Xu, X Ren, J Lin, X Sun
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
Cited by 244* (2018)
Towards knowledge-based recommender dialog system
Q Chen, J Lin, Y Zhang, M Ding, Y Cen, H Yang, J Tang
arXiv preprint arXiv:1908.05391, 2019
Cited by 239 (2019)
Global Encoding for Abstractive Summarization
J Lin, X Sun, S Ma, Q Su
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 189 (2018)
M6: A Chinese multimodal pretrainer
J Lin, R Men, A Yang, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
arXiv preprint arXiv:2103.00823, 2021
Cited by 160* (2021)
Explicit sparse transformer: Concentrated attention through explicit selection
G Zhao, J Lin, Z Zhang, X Ren, Q Su, X Sun
arXiv preprint arXiv:1912.11637, 2019
Cited by 114 (2019)
Towards knowledge-based personalized product description generation in e-commerce
Q Chen, J Lin, Y Zhang, H Yang, J Zhou, J Tang
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 95 (2019)
Imitation learning for non-autoregressive neural machine translation
B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun
arXiv preprint arXiv:1906.02041, 2019
Cited by 95 (2019)
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese
A Yang*, J Pan*, J Lin*, R Men, Y Zhang, J Zhou, C Zhou
arXiv preprint arXiv:2211.01335, 2022
Cited by 83 (2022)
InterBERT: Vision-and-language interaction for multi-modal pretraining
J Lin, A Yang, Y Zhang, J Liu, J Zhou, H Yang
arXiv preprint arXiv:2003.13198, 2020
Cited by 81 (2020)
A deep reinforced sequence-to-set model for multi-label classification
P Yang, F Luo, S Ma, J Lin, X Sun
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 77* (2019)
Bag-of-words as target for neural machine translation
S Ma, X Sun, Y Wang, J Lin
arXiv preprint arXiv:1805.04871, 2018
Cited by 73 (2018)
One-peace: Exploring one general representation model toward unlimited modalities
P Wang, S Wang, J Lin, S Bai, X Zhou, J Zhou, X Wang, C Zhou
arXiv preprint arXiv:2305.11172, 2023
Cited by 71 (2023)
Semantic-Unit-Based Dilated Convolution for Multi-Label Text Classification
J Lin, Q Su, P Yang, S Ma, X Sun
arXiv preprint arXiv:1808.08561, 2018
Cited by 70 (2018)
Modality competition: What makes joint training of multi-modal network fail in deep learning? (Provably)
Y Huang, J Lin, C Zhou, H Yang, L Huang
International conference on machine learning, 9226-9259, 2022
Cited by 63 (2022)
Autoencoder as assistant supervisor: Improving text representation for Chinese social media text summarization
S Ma, X Sun, J Lin, H Wang
arXiv preprint arXiv:1805.04869, 2018
Cited by 61 (2018)
Articles 1–20