Theodor Misiakiewicz
Graduate student, Stanford University
Verified email at stanford.edu - Homepage
Title · Cited by · Year
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
S Mei, T Misiakiewicz, A Montanari
Conference on Learning Theory, 2388-2464, 2019
Cited by 215 · 2019
Linearized two-layers neural networks in high dimension
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Cited by 185 · 2021
When do neural networks outperform kernel methods?
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Advances in Neural Information Processing Systems 33, 14820-14830, 2020
Cited by 116 · 2020
Limitations of lazy training of two-layers neural network
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Advances in Neural Information Processing Systems 32, 2019
Cited by 109 · 2019
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
S Mei, T Misiakiewicz, A Montanari, RI Oliveira
Conference on Learning Theory, 1476-1515, 2017
Cited by 63 · 2017
Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
S Mei, T Misiakiewicz, A Montanari
Applied and Computational Harmonic Analysis 59, 3-84, 2022
Cited by 60 · 2022
Learning with invariances in random features and kernel models
S Mei, T Misiakiewicz, A Montanari
Conference on Learning Theory, 3351-3418, 2021
Cited by 45 · 2021
The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks
E Abbe, EB Adsera, T Misiakiewicz
Conference on Learning Theory, 4782-4887, 2022
Cited by 22 · 2022
Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression
T Misiakiewicz
arXiv preprint arXiv:2204.10425, 2022
Cited by 11 · 2022
Learning with convolution and pooling operations in kernel methods
T Misiakiewicz, S Mei
Advances in Neural Information Processing Systems 35, 29014-29025, 2022
Cited by 9 · 2022
Efficient reconstruction of transmission probabilities in a spreading process from partial observations
AY Lokhov, T Misiakiewicz
arXiv preprint arXiv:1509.06893, 2015
Cited by 9 · 2015
Discussion of: "Nonparametric regression using deep neural networks with ReLU activation function"
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Cited by 7 · 2020
Minimum complexity interpolation in random features models
M Celentano, T Misiakiewicz, A Montanari
arXiv preprint arXiv:2103.15996, 2021
Cited by 4 · 2021
SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
E Abbe, E Boix-Adsera, T Misiakiewicz
arXiv preprint arXiv:2302.11055, 2023
Cited by 1 · 2023
Precise Learning Curves and Higher-Order Scalings for Dot-product Kernel Regression
L Xiao, H Hu, T Misiakiewicz, Y Lu, J Pennington
Advances in Neural Information Processing Systems, 2022
Cited by 1 · 2022
Concentration to zero bit-error probability for regular LDPC codes on the binary symmetric channel: Proof by loop calculus
M Vuffray, T Misiakiewicz
2015 53rd Annual Allerton Conference on Communication, Control, and …, 2015
2015
Supplementary Material: When Do Neural Networks Outperform Kernel Methods?
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Estimation du bruit de fond réductible par la méthode SS dans le canal de désintégration du boson de Higgs en 4 leptons
T Misiakiewicz
Application of Graphical Models to Decoding and Machine Learning
T Misiakiewicz
Ondes gravitationnelles en théorie de la gravité massive
T Misiakiewicz