Marie-Anne Lachaux
Mistral AI
Verified email at mistral.ai
Title
Cited by
Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
6114 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
4786 · 2023
Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring
S Humeau, K Shuster, MA Lachaux, J Weston
arXiv preprint arXiv:1905.01969, 2019
533 · 2019
CCNet: Extracting high quality monolingual datasets from web crawl data
G Wenzek, MA Lachaux, A Conneau, V Chaudhary, F Guzmán, A Joulin, ...
arXiv preprint arXiv:1911.00359, 2019
501 · 2019
Unsupervised translation of programming languages
MA Lachaux, B Roziere, L Chanussot, G Lample
arXiv preprint arXiv:2006.03511, 2020
337* · 2020
Mistral 7B
AQ Jiang, A Sablayrolles, A Mensch, C Bamford, DS Chaplot, D Casas, ...
arXiv preprint arXiv:2310.06825, 2023
204 · 2023
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
MA Lachaux, B Roziere, M Szafraniec, G Lample
Advances in Neural Information Processing Systems 34, 2021
121* · 2021
Mixtral of experts
AQ Jiang, A Sablayrolles, A Roux, A Mensch, B Savary, C Bamford, ...
arXiv preprint arXiv:2401.04088, 2024
90 · 2024
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in neural information processing systems 35, 26337-26349, 2022
73 · 2022
LLaMA: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
58 · 2023
Target conditioning for one-to-many generation
MA Lachaux, A Joulin, G Lample
arXiv preprint arXiv:2009.09758, 2020
12 · 2020
Articles 1–11