Rohan Anil
Principal Engineer, Google Brain
Verified email at google.com
Cited by
Wide & deep learning for recommender systems
HT Cheng, L Koc, J Harmsen, T Shaked, T Chandra, H Aradhye, ...
Proceedings of the 1st workshop on deep learning for recommender systems, 7-10, 2016
Large scale distributed neural network training through online distillation
R Anil, G Pereyra, AT Passos, R Ormandi, G Dahl, G Hinton
Sixth International Conference on Learning Representations, 2018
PaLM 2 Technical Report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
Knowledge distillation: A good teacher is patient and consistent
L Beyer, X Zhai, A Royer, L Markeeva, R Anil, A Kolesnikov
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Efficiently Identifying Task Groupings for Multi-Task Learning
C Fifty, E Amid, Z Zhao, T Yu, R Anil, C Finn
2021 Conference on Neural Information Processing Systems, Spotlight, 2021
Tf-ranking: Scalable tensorflow library for learning-to-rank
RK Pasumarthi, S Bruch, X Wang, C Li, M Bendersky, M Najork, J Pfeifer, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Robust bi-tempered logistic loss based on bregman divergences
E Amid, MK Warmuth, R Anil, T Koren
2019 Conference on Neural Information Processing Systems, 2019
Scalable Second Order Optimization for Deep Learning
R Anil, V Gupta, T Koren, K Regan, Y Singer
arXiv preprint arXiv:2002.09018, 2020
Large-Scale Differentially Private BERT
R Anil, B Ghazi, V Gupta, R Kumar, P Manurangsi
Privacy Preserving Machine Learning, 2021
Memory-efficient adaptive optimization for large-scale learning
R Anil, V Gupta, T Koren, Y Singer
2019 Conference on Neural Information Processing Systems, 2019
A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes
Z Nado, JM Gilmer, CJ Shallue, R Anil, GE Dahl
arXiv preprint arXiv:2102.06356, 2021
Disentangling adaptive gradient methods from learning rates
N Agarwal, R Anil, E Hazan, T Koren, C Zhang
arXiv preprint arXiv:2002.11803, 2020
Wide and deep machine learning models
T Shaked, R Anil, HB Aradhye, G Anderson, W Chai, ML Koc, J Harmsen, ...
US Patent 10,762,422, 2020
On the factory floor: ML engineering for industrial-scale ads recommendation models
R Anil, S Gadanho, D Huang, N Jacob, Z Li, D Lin, T Phillips, C Pop, ...
arXiv preprint arXiv:2209.05310, 2022
Locoprop: Enhancing backprop via local loss optimization
E Amid, R Anil, MK Warmuth
The 25th International Conference on Artificial Intelligence and Statistics …, 2021
Memory-efficient adaptive optimization for large-scale learning
R Anil, V Gupta, T Koren, Y Singer
arXiv preprint arXiv:1901.11150, 2019
Learning from randomly initialized neural network features
E Amid, R Anil, W Kotłowski, MK Warmuth
arXiv preprint arXiv:2202.06438, 2022
Stochastic Optimization with Laggard Data Pipelines
N Agarwal, R Anil, T Koren, K Talwar, C Zhang
2020 Conference on Neural Information Processing Systems, 2020
Step-size Adaptation Using Exponentiated Gradient Updates
E Amid, R Anil, C Fifty, MK Warmuth
ICML’20 Workshop on “Beyond First Order Methods in ML", Spotlight, 2020
Articles 1–20