Optimal rates for regularization of statistical inverse learning problems. G Blanchard, N Mücke. Foundations of Computational Mathematics 18 (4), 971-1013, 2018. Cited by 130.
Parallelizing spectrally regularized kernel algorithms. N Mücke, G Blanchard. Journal of Machine Learning Research 19 (30), 1-29, 2018. Cited by 52.
Beating SGD saturation with tail-averaging and minibatching. N Mücke, G Neu, L Rosasco. Advances in Neural Information Processing Systems 32, 2019. Cited by 41.
Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces. E De Vito, N Mücke, L Rosasco. Analysis and Applications 19 (03), 363-396, 2021. Cited by 26.
Parallelizing spectral algorithms for kernel learning. G Blanchard, N Mücke. arXiv preprint arXiv:1610.07487, 2016. Cited by 18.
Optimal rates for regularization of statistical inverse learning problems. G Blanchard, N Mücke. arXiv preprint arXiv:1604.04054, 2016. Cited by 16.
Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem. M Mollenhauer, N Mücke, TJ Sullivan. arXiv preprint arXiv:2211.08875, 2022. Cited by 15.
Reducing training time by efficient localized kernel regression. N Mücke. The 22nd International Conference on Artificial Intelligence and Statistics …, 2019. Cited by 13.
Data-splitting improves statistical performance in overparameterized regimes. N Mücke, E Reiss, J Rungenhagen, M Klein. International Conference on Artificial Intelligence and Statistics, 10322-10350, 2022. Cited by 12.
Global minima of DNNs: The plenty pantry. N Mücke, I Steinwart. arXiv preprint arXiv:1905.10686, 169, 2019. Cited by 12.
Lepskii principle in supervised learning. G Blanchard, P Mathé, N Mücke. arXiv preprint arXiv:1905.10764, 2019. Cited by 11.
Stochastic gradient descent meets distribution regression. N Mücke. International Conference on Artificial Intelligence and Statistics, 2143-2151, 2021. Cited by 8.
From inexact optimization to learning via gradient concentration. B Stankewitz, N Mücke, L Rosasco. Computational Optimization and Applications 84 (1), 265-294, 2023. Cited by 7.
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case. G Blanchard, N Mücke. Analysis and Applications 18 (04), 683-696, 2020. Cited by 7.
Stochastic gradient descent in Hilbert scales: Smoothness, preconditioning and earlier stopping. N Mücke, E Reiss. arXiv preprint arXiv:2006.10840, 2020. Cited by 7.
Adaptivity for regularized kernel methods by Lepskii's principle. N Mücke. arXiv preprint arXiv:1804.05433, 2018. Cited by 3.
Empirical risk minimization in the interpolating regime with application to neural network learning. N Mücke, I Steinwart. arXiv preprint arXiv:1905.10686, 2019. Cited by 2.
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case. G Blanchard, N Mücke. arXiv preprint arXiv:1611.03979, 2016. Cited by 2.
Random feature approximation for general spectral methods. M Nguyen, N Mücke. arXiv preprint arXiv:2308.15434, 2023. Cited by 1.
Direct and inverse problems in machine learning: kernel methods and spectral regularization. N Mücke. Universität Potsdam, 2017. Cited by 1.