Mikel Artetxe
Reka AI
Verified email at reka.ai - Homepage
Title · Cited by · Year
OPT: Open pre-trained transformer language models
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
arXiv preprint arXiv:2205.01068, 2022
Cited by 3317* · Year 2022
Rethinking the role of demonstrations: What makes in-context learning work?
S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2202.12837, 2022
Cited by 1190 · Year 2022
Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond
M Artetxe, H Schwenk
Transactions of the Association for Computational Linguistics 7, 597-610, 2019
Cited by 1111 · Year 2019
Unsupervised neural machine translation
M Artetxe, G Labaka, E Agirre, K Cho
Proceedings of the Sixth International Conference on Learning Representations, 2018
Cited by 1035 · Year 2018
On the cross-lingual transferability of monolingual representations
M Artetxe, S Ruder, D Yogatama
arXiv preprint arXiv:1910.11856, 2019
Cited by 754 · Year 2019
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
M Artetxe, G Labaka, E Agirre
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 715 · Year 2018
Learning bilingual word embeddings with (almost) no bilingual data
M Artetxe, G Labaka, E Agirre
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
Cited by 615 · Year 2017
Learning principled bilingual mappings of word embeddings while preserving monolingual invariance
M Artetxe, G Labaka, E Agirre
Proceedings of the 2016 Conference on Empirical Methods in Natural Language …, 2016
Cited by 481 · Year 2016
Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations
M Artetxe, G Labaka, E Agirre
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence …, 2018
Cited by 274 · Year 2018
Unsupervised statistical machine translation
M Artetxe, G Labaka, E Agirre
arXiv preprint arXiv:1809.01272, 2018
Cited by 271 · Year 2018
Margin-based parallel corpus mining with multilingual sentence embeddings
M Artetxe, H Schwenk
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 223 · Year 2019
An effective approach to unsupervised machine translation
M Artetxe, G Labaka, E Agirre
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 183 · Year 2019
Multilingual autoregressive entity linking
N De Cao, L Wu, K Popat, M Artetxe, N Goyal, M Plekhanov, ...
Transactions of the Association for Computational Linguistics 10, 274-290, 2022
Cited by 135 · Year 2022
OPT: Open pre-trained transformer language models, 2022
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
URL https://arxiv.org/abs/2205.01068, 2023
Cited by 130 · Year 2023
Lifting the curse of multilinguality by pre-training modular transformers
J Pfeiffer, N Goyal, XV Lin, X Li, J Cross, S Riedel, M Artetxe
arXiv preprint arXiv:2205.06266, 2022
Cited by 112 · Year 2022
Translation artifacts in cross-lingual transfer learning
M Artetxe, G Labaka, E Agirre
arXiv preprint arXiv:2004.04721, 2020
Cited by 108 · Year 2020
Efficient large scale language modeling with mixtures of experts
M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ...
arXiv preprint arXiv:2112.10684, 2021
Cited by 104 · Year 2021
Few-shot learning with multilingual language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
arXiv preprint arXiv:2112.10668, 2021
Cited by 98 · Year 2021
Analyzing the Limitations of Cross-lingual Word Embedding Mappings
A Ormazabal, M Artetxe, G Labaka, A Soroa, E Agirre
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 82 · Year 2019
A call for more rigor in unsupervised cross-lingual learning
M Artetxe, S Ruder, D Yogatama, G Labaka, E Agirre
arXiv preprint arXiv:2004.14958, 2020
Cited by 72 · Year 2020
Articles 1–20