Reza Babanezhad
Samsung AI Lab
No verified email - Homepage
Cited by
Stop wasting my gradients: Practical SVRG
R Babanezhad Harikandeh, MO Ahmed, A Virani, M Schmidt, J Konečný, ...
Advances in Neural Information Processing Systems 28, 2015
Non-uniform stochastic average gradient method for training conditional random fields
M Schmidt, R Babanezhad, M Ahmed, A Defazio, A Clifton, A Sarkar
Artificial Intelligence and Statistics, 819-828, 2015
M-ADDA: Unsupervised domain adaptation with deep metric learning
IH Laradji, R Babanezhad
Domain adaptation for visual understanding, 17-31, 2020
Faster stochastic variational inference using proximal-gradient methods with general divergence functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
A generic top-n recommendation framework for trading-off accuracy, novelty, and coverage
Z Zolaktaf, R Babanezhad, R Pottinger
2018 IEEE 34th International Conference on Data Engineering (ICDE), 149-160, 2018
Reducing the variance in online optimization by transporting past gradients
S Arnold, PA Manzagol, R Babanezhad Harikandeh, I Mitliagkas, ...
Advances in Neural Information Processing Systems 32, 2019
An analysis of the adaptation speed of causal models
R Le Priol, R Babanezhad, Y Bengio, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 775-783, 2021
Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
J. Comp. Neurol 319, 359-386, 2015
SVRG meets AdaGrad: painless variance reduction
B Dubois-Taine, S Vaswani, R Babanezhad, M Schmidt, S Lacoste-Julien
Machine Learning, 1-51, 2022
Process patterns for web engineering
R Babanezhad, YM Bibalan, R Ramsin
2010 IEEE 34th Annual Computer Software and Applications Conference, 477-486, 2010
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego-Posada, A Mishkin, ...
arXiv preprint arXiv:2006.06821, 2020
MASAGA: A linearly-convergent stochastic first-order method for optimization on manifolds
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2019
Towards noise-adaptive, problem-adaptive stochastic gradient descent
S Vaswani, B Dubois-Taine, R Babanezhad
Semantics Preserving Adversarial Learning
OA Dia, E Barshan, R Babanezhad
arXiv preprint arXiv:1903.03905, 2019
Towards painless policy optimization for constrained MDPs
A Jain, S Vaswani, R Babanezhad, C Szepesvari, D Precup
Uncertainty in Artificial Intelligence, 895-905, 2022
Geometry-aware universal mirror-prox
R Babanezhad, S Lacoste-Julien
arXiv preprint arXiv:2011.11203, 2020
Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
S Vaswani, B Dubois-Taine, R Babanezhad
International Conference on Machine Learning, 22015-22059, 2022
Infinite-dimensional game optimization via variational transport
L Liu, Y Zhang, Z Yang, R Babanezhad, Z Wang
OPT, 2020
Infinite-Dimensional Optimization for Zero-Sum Games via Variational Transport
L Liu, Y Zhang, Z Yang, R Babanezhad, Z Wang
International Conference on Machine Learning, 7033-7044, 2021
Articles 1–20