Xavier Martinet
Meta
Verified email at meta.com
Title
Cited by
Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by: 12766 · Year: 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by: 12520 · Year: 2023
The llama 3 herd of models
A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ...
arXiv preprint arXiv:2407.21783, 2024
Cited by: 2493 · Year: 2024
LLaMA: open and efficient foundation language models. arXiv
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by: 310 · Year: 2023
Llama: Open and efficient foundation language models. arXiv 2023
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by: 246* · Year: 2023
Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux
arXiv preprint arXiv:2302.13971, 2023
Cited by: 196 · Year: 2023
Llama 2: open foundation and fine-tuned chat models. arXiv
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by: 153 · Year: 2023
Llama 2: Open foundation and fine-tuned chat models. arXiv 2023
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288
Cited by: 144
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in neural information processing systems 35, 26337-26349, 2022
Cited by: 135 · Year: 2022
The llama 3 herd of models
A Grattafiori, A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, ...
arXiv e-prints, arXiv: 2407.21783, 2024
Cited by: 79 · Year: 2024
Llama 2: open foundation and fine-tuned chat models. CoRR abs/2307.09288 (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by: 62 · Year: 2023
Polygames: Improved zero learning
T Cazenave, YC Chen, GW Chen, SY Chen, XD Chiu, J Dehos, M Elsa, ...
ICGA Journal 42 (4), 244-256, 2021
Cited by: 60 · Year: 2021
Llama 2: Open foundation and fine-tuned chat models, 2023b
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288, 2023
Cited by: 39 · Year: 2023
Llama: Open and efficient foundation language models, CoRR abs/2302.13971 (2023). URL: https://doi.org/10.48550/arXiv.2302.13971. doi: 10.48550/arXiv.2302.13971
H Touvron, T Lavril, G Izacard, X Martinet, M Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971
Cited by: 17
Llama 2: Open foundation and fine-tuned chat models. arXiv [Preprint] (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288
Cited by: 14
Llama 2: Open Foundation and Fine-Tuned Chat Models (Jul 2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by: 8 · Year: 2023
Worldsense: A synthetic benchmark for grounded reasoning in large language models
Y Benchekroun, M Dervishi, M Ibrahim, JB Gaya, X Martinet, G Mialon, ...
arXiv preprint arXiv:2311.15930, 2023
Cited by: 7 · Year: 2023
LLaMA: open and efficient foundation language models, 2023
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
URL https://arxiv.org/abs/2302.13971, 2023
Cited by: 2 · Year: 2023
Articles 1–18