Tianyi Zhang
Stanford University
Verified email at cornell.edu - Homepage
Cited by
BERTScore: Evaluating text generation with BERT
T Zhang, V Kishore, F Wu, KQ Weinberger, Y Artzi
arXiv preprint arXiv:1904.09675, 2019
Simplifying Graph Convolutional Networks
F Wu, T Zhang, AH Souza Jr, C Fifty, T Yu, KQ Weinberger
Proceedings of the 36th International Conference on Machine Learning, 2019
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Stanford alpaca: An instruction-following llama model
R Taori, I Gulrajani, T Zhang, Y Dubois, X Li, C Guestrin, P Liang, ...
Revisiting few-sample BERT fine-tuning
T Zhang, F Wu, A Katiyar, KQ Weinberger, Y Artzi
arXiv preprint arXiv:2006.05987, 2020
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Identifying mislabeled data using the area under the margin ranking
G Pleiss, T Zhang, E Elenberg, KQ Weinberger
Advances in Neural Information Processing Systems 33, 17044-17056, 2020
SWALP: Stochastic Weight Averaging in Low-Precision Training
G Yang, T Zhang, P Kirichenko, J Bai, AG Wilson, C De Sa
Proceedings of the 36th International Conference on Machine Learning, 2019
Benchmarking large language models for news summarization
T Zhang, F Ladhak, E Durmus, P Liang, K McKeown, TB Hashimoto
arXiv preprint arXiv:2301.13848, 2023
AlpacaFarm: A simulation framework for methods that learn from human feedback
Y Dubois, X Li, R Taori, T Zhang, I Gulrajani, J Ba, C Guestrin, P Liang, ...
arXiv preprint arXiv:2305.14387, 2023
Evaluating verifiability in generative search engines
NF Liu, T Zhang, P Liang
arXiv preprint arXiv:2304.09848, 2023
QPyTorch: A low-precision arithmetic simulation framework
T Zhang, Z Lin, G Yang, C De Sa
2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive …, 2019
DS-1000: A natural and reliable benchmark for data science code generation
Y Lai, C Li, Y Wang, T Zhang, R Zhong, L Zettlemoyer, W Yih, D Fried, ...
International Conference on Machine Learning, 18319-18345, 2023
On the inductive bias of masked language modeling: From statistical to syntactic dependencies
T Zhang, T Hashimoto
arXiv preprint arXiv:2104.05694, 2021
Decentralized training of foundation models in heterogeneous environments
B Yuan, Y He, J Davis, T Zhang, T Dao, B Chen, PS Liang, C Re, C Zhang
Advances in Neural Information Processing Systems 35, 25464-25477, 2022
Coder reviewer reranking for code generation
T Zhang, T Yu, T Hashimoto, M Lewis, W Yih, D Fried, S Wang
International Conference on Machine Learning, 41832-41846, 2023
LADIS: Language disentanglement for 3D shape editing
I Huang, P Achlioptas, T Zhang, S Tulyakov, M Sung, L Guibas
arXiv preprint arXiv:2212.05011, 2022
When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization
F Ladhak, E Durmus, M Suzgun, T Zhang, D Jurafsky, K Mckeown, ...
Proceedings of the 17th Conference of the European Chapter of the …, 2023
TempLM: Distilling Language Models into Template-Based Generators
T Zhang, M Lee, L Li, E Shen, TB Hashimoto
arXiv preprint arXiv:2205.11055, 2022
Mistral: a journey towards reproducible language model training
S Karamcheti, L Orr, J Bolton, T Zhang, K Goel, A Narayan, R Bommasani, ...