Erin Grant
Senior Research Fellow, University College London
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
Recasting gradient-based meta-learning as hierarchical Bayes
E Grant, C Finn, S Levine, T Darrell, TL Griffiths
International Conference on Learning Representations (ICLR), 2018
453 · 2018
Reconciling meta-learning and continual learning with online mixtures of tasks
G Jerfel*, E Grant*, TL Griffiths, K Heller
Advances in Neural Information Processing Systems (NeurIPS), 2019
97* · 2019
Doing more with less: Meta-reasoning and meta-learning in humans and machines
TL Griffiths, F Callaway, MB Chang, E Grant, PM Krueger, F Lieder
Current Opinion in Behavioral Sciences 29, 24-30, 2019
77 · 2019
Are convolutional neural networks or transformers more like human vision?
S Tuli, I Dasgupta, E Grant, TL Griffiths
Annual Meeting of the Cognitive Science Society (CogSci), 2021
62 · 2021
Evaluating theory of mind in question answering
A Nematzadeh, K Burns, E Grant, A Gopnik, TL Griffiths
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018
32 · 2018
Universal linguistic inductive biases via meta-learning
RT McCoy, E Grant, P Smolensky, TL Griffiths, T Linzen
Annual Meeting of the Cognitive Science Society (CogSci), 2020
15 · 2020
Exploiting attention to reveal shortcomings in memory models
K Burns, A Nematzadeh, E Grant, A Gopnik, TL Griffiths
EMNLP Workshop on BlackboxNLP: Analyzing and Interpreting Neural Networks …, 2018
8 · 2018
Passive attention in artificial neural networks predicts human visual selectivity
TA Langlois, HC Zhao, E Grant, I Dasgupta, TL Griffiths, N Jacoby
Advances in Neural Information Processing Systems (NeurIPS), 2021
7 · 2021
A computational cognitive model of novel word generalization
A Nematzadeh, E Grant, S Stevenson
Conference on Empirical Methods in Natural Language Processing (EMNLP), 1795 …, 2015
6 · 2015
How can memory-augmented neural networks pass a false-belief task?
E Grant, A Nematzadeh, TL Griffiths
Annual Meeting of the Cognitive Science Society (CogSci), 2017
5 · 2017
Learning deep taxonomic priors for concept learning from few positive examples
E Grant, JC Peterson, TL Griffiths
Annual Meeting of the Cognitive Science Society (CogSci), 2019
3 · 2019
Distinguishing rule- and exemplar-based generalization in learning systems
I Dasgupta*, E Grant*, TL Griffiths
International Conference on Machine Learning (ICML), 2022
2 · 2022
Concept acquisition through meta-learning
E Grant, C Finn, J Peterson, J Abbott, S Levine, T Darrell, TL Griffiths
NeurIPS Workshop on Cognitively Informed Artificial Intelligence, 2017
2 · 2017
The interaction of memory and attention in novel word generalization: A computational investigation
E Grant, A Nematzadeh, S Stevenson
Annual Meeting of the Cognitive Science Society (CogSci), 2016
2 · 2016
Predicting generalization with degrees of freedom in neural networks
E Grant, Y Wu
ICML 2022 2nd AI for Science Workshop, 2022
1 · 2022
Meta-learning inductive biases of learning systems with Gaussian processes
MY Li, E Grant, TL Griffiths
Fifth Workshop on Meta-Learning at the Conference on Neural Information …, 2021
1 · 2021
Connecting context-specific adaptation in humans to meta-learning
R Dubey, E Grant, M Luo, K Narasimhan, T Griffiths
arXiv preprint arXiv:2011.13782, 2020
1 · 2020
Tracing the emergence of gendered language in childhood
B Prystawski, E Grant, A Nematzadeh, SWS Lee, S Stevenson, Y Xu
Annual Meeting of the Cognitive Science Society (CogSci), 2020
1 · 2020
Gaussian process surrogate models for neural networks
MY Li, E Grant, TL Griffiths
arXiv preprint arXiv:2208.06028, 2022
2022
Cognitive analyses of machine learning systems
EM Grant
UC Berkeley, 2022
2022
Articles 1–20