Emanuele Troiani
Verified email at epfl.ch
Title
Cited by
Year
Rigorous dynamical mean-field theory for stochastic gradient descent methods
C Gerbelot, E Troiani, F Mignacco, F Krzakala, L Zdeborova
SIAM Journal on Mathematics of Data Science 6 (2), 400-427, 2024
22 · 2024
Optimal denoising of rotationally invariant rectangular matrices
E Troiani, V Erba, F Krzakala, A Maillard, L Zdeborová
Mathematical and Scientific Machine Learning 190, 97-112, 2022
16 · 2022
The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents
Y Dandi, E Troiani, L Arnaboldi, L Pesce, L Zdeborová, F Krzakala
arXiv preprint arXiv:2402.03220, 2024
11 · 2024
Gibbs sampling the posterior of neural networks
G Piccioli, E Troiani, L Zdeborová
Journal of Physics A: Mathematical and Theoretical 57 (12), 125002, 2024
2 · 2024
Stuttering Conway Sequences Are Still Conway Sequences
É Brier, R Géraud-Stewart, D Naccache, A Pacco, E Troiani
arXiv preprint arXiv:2006.06837, 2020
2 · 2020
Asymptotic Characterisation of the Performance of Robust Linear Regression in the Presence of Outliers
M Vilucchio, E Troiani, V Erba, F Krzakala
International Conference on Artificial Intelligence and Statistics, 811-819, 2024
1* · 2024
Sparse Representations, Inference and Learning
C Lauditi, E Troiani, M Mézard
arXiv preprint arXiv:2306.16097, 2023
1 · 2023
Fundamental limits of weak learnability in high-dimensional multi-index models
E Troiani, Y Dandi, L Defilippis, L Zdeborová, B Loureiro, F Krzakala
arXiv preprint arXiv:2405.15480, 2024
2024
The Look-and-Say The Biggest Sequence Eventually Cycles
É Brier, R Géraud-Stewart, D Naccache, A Pacco, E Troiani
arXiv preprint arXiv:2006.07246, 2020
2020