Aurick Zhou
Title · Cited by · Year
Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor
T Haarnoja, A Zhou, P Abbeel, S Levine
International Conference on Machine Learning, 1861-1870, 2018
Cited by: 3372 · 2018
Soft actor-critic algorithms and applications
T Haarnoja, A Zhou, K Hartikainen, G Tucker, S Ha, J Tan, V Kumar, ...
arXiv preprint arXiv:1812.05905, 2018
Cited by: 956 · 2018
Conservative Q-learning for offline reinforcement learning
A Kumar, A Zhou, G Tucker, S Levine
Advances in Neural Information Processing Systems 33, 1179-1191, 2020
Cited by: 363 · 2020
Efficient off-policy meta-reinforcement learning via probabilistic context variables
K Rakelly, A Zhou, C Finn, S Levine, D Quillen
International Conference on Machine Learning, 5331-5340, 2019
Cited by: 329 · 2019
Learning to walk via deep reinforcement learning
T Haarnoja, S Ha, A Zhou, J Tan, G Tucker, S Levine
arXiv preprint arXiv:1812.11103, 2018
Cited by: 258 · 2018
Composable deep reinforcement learning for robotic manipulation
T Haarnoja, V Pong, A Zhou, M Dalal, P Abbeel, S Levine
2018 IEEE International Conference on Robotics and Automation (ICRA), 6244-6251, 2018
Cited by: 166 · 2018
MURAL: Meta-learning uncertainty-aware rewards for outcome-driven reinforcement learning
K Li, A Gupta, A Reddy, VH Pong, A Zhou, J Yu, S Levine
International Conference on Machine Learning, 6346-6356, 2021
Cited by: 6* · 2021
Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation
A Zhou, S Levine
International Conference on Machine Learning, 12803-12812, 2021
Cited by: 4 · 2021
Amortized conditional normalized maximum likelihood
A Zhou, S Levine
Cited by: 4 · 2020
Bayesian Adaptation for Covariate Shift
A Zhou, S Levine
Advances in Neural Information Processing Systems 34, 914-927, 2021
Cited by: 3 · 2021
Training on test data with Bayesian adaptation for covariate shift
A Zhou, S Levine
arXiv preprint arXiv:2109.12746, 2021
Cited by: 2 · 2021
Articles 1–11