Jesse Thomason
Cited by
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
M Shridhar, J Thomason, D Gordon, Y Bisk, W Han, R Mottaghi, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Experience Grounds Language
Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio, J Chai, M Lapata, ...
arXiv preprint arXiv:2004.10151, 2020
Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild
J Thomason, S Venugopalan, S Guadarrama, K Saenko, R Mooney
Proceedings of the Twenty Fifth International Conference on Computational …, 2014
Vision-and-Dialog Navigation
J Thomason, M Murray, M Cakmak, L Zettlemoyer
Conference on Robot Learning (CoRL), 2019
Learning to Interpret Natural Language Commands through Human-Robot Dialog
J Thomason, S Zhang, R Mooney, P Stone
Proceedings of the 24th International Joint Conference on Artificial …, 2015
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
P Khandelwal, S Zhang, J Sinapov, M Leonetti, J Thomason, F Yang, ...
The International Journal of Robotics Research, 2017
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
J Thomason, J Sinapov, M Svetlik, P Stone, RJ Mooney
Proceedings of the Twenty-Fifth International Joint Conference on Artificial …, 2016
Shifting the Baseline: Single Modality Performance on Visual Navigation & QA
J Thomason, D Gordon, Y Bisk
Conference of the North American Chapter of the Association for …, 2019
Improving Grounded Natural Language Understanding through Human-Robot Dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
International Conference on Robotics and Automation (ICRA), 2019
Opportunistic active learning for grounding natural language descriptions
J Thomason, A Padmakumar, J Sinapov, J Hart, P Stone, RJ Mooney
Conference on Robot Learning, 67-76, 2017
TEACh: Task-driven Embodied Agents that Chat
A Padmakumar, J Thomason, A Shrivastava, P Lange, A Narayan-Chen, ...
arXiv preprint arXiv:2110.00534, 2021
Prosodic entrainment and tutoring dialogue success
J Thomason, HV Nguyen, D Litman
Artificial Intelligence in Education: 16th International Conference, AIED …, 2013
Jointly improving parsing and perception for natural language commands through human-robot dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
Journal of Artificial Intelligence Research 67, 325-372, 2020
Interpreting Black Box Models via Hypothesis Testing
C Burns, J Thomason, W Tansey
Foundations of Data Science (FODS), 2020
Prospection: Interpretable Plans From Language By Predicting the Future
C Paxton, Y Bisk, J Thomason, A Byravan, D Fox
International Conference on Robotics and Automation (ICRA), 2019
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
I Singh, V Blukis, A Mousavian, A Goyal, D Xu, J Tremblay, D Fox, ...
arXiv preprint arXiv:2209.11302, 2022
RMM: A Recursive Mental Model for Dialog Navigation
HR Roman, Y Bisk, J Thomason, A Celikyilmaz, J Gao
arXiv preprint arXiv:2005.00728, 2020
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
J Gu, E Stefani, Q Wu, J Thomason, XE Wang
arXiv preprint arXiv:2203.12667, 2022
Guiding exploratory behaviors for multi-modal grounding of linguistic descriptions
J Thomason, J Sinapov, RJ Mooney, P Stone
Thirty-Second AAAI Conference on Artificial Intelligence, 2018
Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
A Suglia, Q Gao, J Thomason, G Thattai, G Sukhatme
arXiv preprint arXiv:2108.04927, 2021