Allen Nie
Title
Cited by
Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 2751 · 2021
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 727 · 2022
Data noising as smoothing in neural network language models
Z Xie, SI Wang, J Li, D Lévy, A Nie, D Jurafsky, AY Ng
arXiv preprint arXiv:1703.02573, 2017
Cited by 284 · 2017
Dissent: Sentence representation learning from explicit discourse relations
A Nie, ED Bennett, ND Goodman
arXiv preprint arXiv:1710.04334, 2017
Cited by 156* · 2017
VetTag: improving automated veterinary diagnosis coding via large-scale language modeling
Y Zhang, A Nie, A Zehnder, RL Page, J Zou
NPJ digital medicine 2 (1), 35, 2019
Cited by 24 · 2019
Pragmatic issue-sensitive image captioning
A Nie, R Cohn-Gordon, C Potts
arXiv preprint arXiv:2004.14451, 2020
Cited by 21 · 2020
DeepTag: inferring diagnoses from veterinary clinical notes
A Nie, A Zehnder, RL Page, Y Zhang, AL Pineda, MA Rivas, ...
NPJ digital medicine 1 (1), 60, 2018
Cited by 21 · 2018
Data-efficient pipeline for offline reinforcement learning with limited data
A Nie, Y Flet-Berliac, D Jordan, W Steenbergen, E Brunskill
Advances in Neural Information Processing Systems 35, 14810-14823, 2022
Cited by 12 · 2022
Play to grade: Testing coding games as classifying Markov decision process
A Nie, E Brunskill, C Piech
Advances in Neural Information Processing Systems 34, 1506-1518, 2021
Cited by 10 · 2021
Giving feedback on interactive student programs with meta-exploration
E Liu, M Stephan, A Nie, C Piech, E Brunskill, C Finn
Advances in Neural Information Processing Systems 35, 36282-36294, 2022
Cited by 8 · 2022
LitGen: Genetic literature recommendation guided by human explanations
A Nie, AL Pineda, MW Wright, H Wand, B Wulf, HA Costa, RY Patel, ...
Pacific Symposium on Biocomputing 2020, 67-78, 2019
Cited by 8 · 2019
Waypoint transformer: Reinforcement learning via supervised learning with intermediate targets
A Badrinath, Y Flet-Berliac, A Nie, E Brunskill
Advances in Neural Information Processing Systems 36, 2024
Cited by 7 · 2024
Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task
S Ruan, A Nie, W Steenbergen, J He, JQ Zhang, M Guo, Y Liu, ...
arXiv preprint arXiv:2304.04933, 2023
Cited by 7 · 2023
Learning to explain: Answering why-questions via rephrasing
A Nie, ED Bennett, ND Goodman
arXiv preprint arXiv:1906.01243, 2019
Cited by 5 · 2019
Computational exploration to linguistic structures of future: Classification and categorization
A Nie, JD Choi, J Shepard, P Wolff
Proceedings of the 2015 Conference of the North American Chapter of the …, 2015
Cited by 5 · 2015
MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks
A Nie, Y Zhang, AS Amdekar, C Piech, TB Hashimoto, T Gerstenberg
Advances in Neural Information Processing Systems 36, 2024
Cited by 4* · 2024
Exploring Adversarial Learning on Neural Network Models for Text Classification
I Caswell, A Nie, O Sen
Class Project for Stanford CS224N: Natural Language Processing, Fall semester, 2015
Cited by 4* · 2015
Representations of Time Affect Willingness to Wait for Future Rewards.
R Thorstad, A Nie, P Wolff
CogSci, 2015
Cited by 3 · 2015
LLF-Bench: Benchmark for interactive learning from language feedback
CA Cheng, A Kolobov, D Misra, A Nie, A Swaminathan
arXiv preprint arXiv:2312.06853, 2023
Cited by 2 · 2023
Importance of Directional Feedback for LLM-based Optimizers
A Nie, CA Cheng, A Kolobov, A Swaminathan
NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023
Cited by 2 · 2023
Articles 1–20