Ananya Kumar
Research Scientist, OpenAI
Verified email at cs.stanford.edu - Homepage
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by: 2579 (2021)
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
Transactions on Machine Learning Research (TMLR), 2023
Cited by: 567 (2023)
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
A Kumar, A Raghunathan, R Jones, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2022
Cited by: 426 (2022)
Verified Uncertainty Calibration
A Kumar, P Liang, T Ma
Neural Information Processing Systems (NeurIPS), 2019
Cited by: 324 (2019)
Understanding Self-Training for Gradual Domain Adaptation
A Kumar, T Ma, P Liang
International Conference on Machine Learning (ICML), 2020
Cited by: 215 (2020)
Surgical fine-tuning improves adaptation to distribution shifts
Y Lee, AS Chen, F Tajwar, A Kumar, H Yao, P Liang, C Finn
International Conference on Learning Representations (ICLR), 2023
Cited by: 110 (2023)
Extending the wilds benchmark for unsupervised adaptation
S Sagawa*, PW Koh*, T Lee*, I Gao*, SM Xie, K Shen, A Kumar, W Hu, ...
International Conference on Learning Representations (ICLR), 2022
Cited by: 92 (2022)
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
K Shen*, R Jones*, A Kumar*, SM Xie*, JZ HaoChen, T Ma, P Liang
International Conference on Machine Learning (ICML), 2022
Cited by: 79 (2022)
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures
J Uesato*, A Kumar*, C Szepesvari*, T Erez, A Ruderman, K Anderson, ...
International Conference on Learning Representations (ICLR), 2019
Cited by: 75 (2019)
Self-training avoids using spurious features under domain shift
Y Chen*, C Wei*, A Kumar, T Ma
Neural Information Processing Systems (NeurIPS), 2020
Cited by: 69 (2020)
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2021
Cited by: 51 (2021)
Finetune like you pretrain: Improved finetuning of zero-shot vision models
S Goyal, A Kumar, S Garg, Z Kolter, A Raghunathan
Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Cited by: 47 (2023)
Selective Classification Can Magnify Disparities Across Groups
E Jones*, S Sagawa*, PW Koh*, A Kumar, P Liang
International Conference on Learning Representations (ICLR), 2021
Cited by: 46 (2021)
Consistent generative query networks
A Kumar, SM Eslami, DJ Rezende, M Garnelo, F Viola, E Lockhart, ...
NeurIPS workshop on Bayesian Deep Learning, 2018
Cited by: 42* (2018)
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?
R Bommasani, K Creel, A Kumar, D Jurafsky, P Liang
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by: 41 (2022)
Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations
JZ HaoChen, C Wei, A Kumar, T Ma
Neural Information Processing Systems (NeurIPS), 2022
Cited by: 35 (2022)
Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
A Kumar, T Ma, P Liang, A Raghunathan
Conference on Uncertainty in Artificial Intelligence (UAI), 2022
Cited by: 31* (2022)
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
F Tajwar, A Kumar*, SM Xie*, P Liang
ICML UDL Workshop, 2021
Cited by: 16 (2021)
Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis
A Ruderman, R Everett, B Sikder, H Soyer, C Beattie, J Uesato, A Kumar, ...
ICLR SafeML Workshop, 2019
Cited by: 16* (2019)
How to fine-tune vision models with sgd
A Kumar, R Shen, S Bubeck, S Gunasekar
International Conference on Learning Representations (ICLR), 2024
Cited by: 12 (2024)
Articles 1–20