Ashvin Nair
Title · Cited by · Year
Overcoming exploration in reinforcement learning with demonstrations
A Nair, B McGrew, M Andrychowicz, W Zaremba, P Abbeel
IEEE International Conference on Robotics and Automation, 2017
Cited by 491 · 2017
Learning to poke by poking: Experiential learning of intuitive physics
P Agrawal, A Nair, P Abbeel, J Malik, S Levine
Advances in Neural Information Processing Systems, 2016
Cited by 468 · 2016
Visual reinforcement learning with imagined goals
A Nair, V Pong, M Dalal, S Bahl, S Lin, S Levine
Advances in Neural Information Processing Systems, 2018
Cited by 336 · 2018
Combining self-supervised learning and imitation for vision-based rope manipulation
A Nair, D Chen, P Agrawal, P Isola, P Abbeel, J Malik, S Levine
IEEE International Conference on Robotics and Automation, 2017
Cited by 208 · 2017
Residual reinforcement learning for robot control
T Johannink, S Bahl, A Nair, J Luo, A Kumar, M Loskyll, JA Ojea, ...
IEEE International Conference on Robotics and Automation, 2018
Cited by 188 · 2018
Skew-fit: State-covering self-supervised reinforcement learning
VH Pong, M Dalal, S Lin, A Nair, S Bahl, S Levine
International Conference on Machine Learning, 2019
Cited by 141 · 2019
AWAC: Accelerating Online Reinforcement Learning with Offline Datasets
A Nair, A Gupta, M Dalal, S Levine
Cited by 118 · 2020
Deep reinforcement learning for industrial insertion tasks with visual inputs and natural rewards
G Schoettler, A Nair, J Luo, S Bahl, JA Ojea, E Solowjow, S Levine
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
Cited by 87 · 2019
Contextual imagined goals for self-supervised robotic learning
A Nair, S Bahl, A Khazatsky, V Pong, G Berseth, S Levine
Conference on Robot Learning, 2019
Cited by 49 · 2019
Offline reinforcement learning with implicit q-learning
I Kostrikov, A Nair, S Levine
International Conference on Learning Representations, 2021
Cited by 26 · 2021
Meta-reinforcement learning for robotic industrial insertion tasks
G Schoettler, A Nair, JA Ojea, S Levine, E Solowjow
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2020
Cited by 24 · 2020
What Can I Do Here? Learning New Skills by Imagining Visual Affordances
A Khazatsky, A Nair, D Jing, S Levine
IEEE International Conference on Robotics and Automation, 2021
Cited by 9 · 2021
DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies
S Nasiriany, VH Pong, A Nair, A Khazatsky, G Berseth, S Levine
2021 IEEE International Conference on Robotics and Automation (ICRA), 6635-6641, 2021
Cited by 8 · 2021
Offline meta-reinforcement learning with online self-supervision
VH Pong, A Nair, L Smith, C Huang, S Levine
arXiv preprint arXiv:2107.03974, 2021
Cited by 6 · 2021
Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space
K Fang, P Yin, A Nair, S Levine
arXiv preprint arXiv:2205.08129, 2022
2022
Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning
P Hansen-Estruch, A Zhang, A Nair, P Yin, S Levine
arXiv preprint arXiv:2204.13060, 2022
2022
EE 126 Probability and Random Processes: Course Syllabus
K Chandrasekher, A Nair
University of California, Berkeley, 2016
2016
See What Your Robot Sees
M Li, A Nair
Global Conference on Educational Robotics, 2012
2012