Sebastian Ruder
Research Scientist, Google
Verified email at google.com - Homepage
Cited by
An overview of gradient descent optimization algorithms
S Ruder
arXiv preprint arXiv:1609.04747, 2016
Universal Language Model Fine-tuning for Text Classification
J Howard, S Ruder
Proceedings of ACL 2018, 2018
An overview of multi-task learning in deep neural networks
S Ruder
arXiv preprint arXiv:1706.05098, 2017
A Survey of Cross-lingual Word Embedding Models
S Ruder, I Vulić, A Søgaard
Journal of Artificial Intelligence Research 65, 569-631, 2019
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
J Hu, S Ruder, A Siddhant, G Neubig, O Firat, M Johnson
Proceedings of ICML 2020, 2020
On the cross-lingual transferability of monolingual representations
M Artetxe, S Ruder, D Yogatama
Proceedings of ACL 2020, 2020
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
ME Peters, S Ruder, NA Smith
Proceedings of the 4th Workshop on Representation Learning for NLP, 2019
Transfer learning in natural language processing
S Ruder, ME Peters, S Swayamdipta, T Wolf
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis
S Ruder, P Ghaffari, JG Breslin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language …, 2016
Neural Transfer Learning for Natural Language Processing
S Ruder
National University of Ireland, Galway, 2019
On the Limitations of Unsupervised Bilingual Dictionary Induction
A Søgaard, S Ruder, I Vulić
Proceedings of ACL 2018, 2018
A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
V Sanh, T Wolf, S Ruder
Proceedings of AAAI 2019, 2019
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay, M Dehghani, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, …
Proceedings of ICLR 2021, 2021
MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer
J Pfeiffer, I Vulić, I Gurevych, S Ruder
Proceedings of EMNLP 2020, 2020
Latent Multi-task Architecture Learning
S Ruder, J Bingel, I Augenstein, A Søgaard
Proceedings of AAAI 2019, 2019
Sluice networks: Learning what to share between loosely related tasks
S Ruder, J Bingel, I Augenstein, A Søgaard
arXiv preprint arXiv:1705.08142, 2017
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions
G Glavaš, R Litschko, S Ruder, I Vulić
Proceedings of ACL 2019, 2019
Strong Baselines for Neural Semi-supervised Learning under Domain Shift
S Ruder, B Plank
Proceedings of ACL 2018, 2018
AdapterHub: A Framework for Adapting Transformers
J Pfeiffer, A Rücklé, C Poth, A Kamath, I Vulić, S Ruder, K Cho, I Gurevych
Proceedings of EMNLP 2020: System Demonstrations, 2020