Dami Choi
Verified email at cs.toronto.edu
Title · Cited by · Year
Backpropagation through the void: Optimizing control variates for black-box gradient estimation
W Grathwohl, D Choi, Y Wu, G Roeder, D Duvenaud
arXiv preprint arXiv:1711.00123, 2017
Cited by 230 · 2017
On empirical comparisons of optimizers for deep learning
D Choi, CJ Shallue, Z Nado, J Lee, CJ Maddison, GE Dahl
arXiv preprint arXiv:1910.05446, 2019
Cited by 157 · 2019
Guided evolutionary strategies: Augmenting random search with surrogate gradients
N Maheswaranathan, L Metz, G Tucker, D Choi, J Sohl-Dickstein
International Conference on Machine Learning, 4264-4273, 2019
Cited by 53 · 2019
Gradient estimation with stochastic softmax tricks
M Paulus, D Choi, D Tarlow, A Krause, CJ Maddison
Advances in Neural Information Processing Systems 33, 5691-5704, 2020
Cited by 27 · 2020
Faster neural network training with data echoing
D Choi, A Passos, CJ Shallue, GE Dahl
arXiv preprint arXiv:1907.05550, 2019
Cited by 26 · 2019
Guided evolutionary strategies: escaping the curse of dimensionality in random search
N Maheswaranathan, L Metz, G Tucker, D Choi, J Sohl-Dickstein
Cited by 18 · 2018
Self-tuning stochastic optimization with curvature-aware gradient filtering
RTQ Chen, D Choi, L Balles, D Duvenaud, P Hennig
PMLR, 2020
Cited by 4 · 2020
Systems and Methods for Reducing Idleness in a Machine-Learning Training System Using Data Echoing
D Choi, AT Passos, CJ Shallue, GE Dahl
US Patent App. 16/871,527, 2020
2020