Yi Zhang
Senior Researcher at Microsoft Research Redmond
Verified email at microsoft.com - Homepage
Title | Cited by | Year
Sparks of artificial general intelligence: Early experiments with gpt-4
S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ...
arXiv preprint arXiv:2303.12712, 2023
984 | 2023
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
arXiv preprint arXiv:1703.00573, 2017
737 | 2017
Stronger generalization bounds for deep nets via a compression approach
S Arora, R Ge, B Neyshabur, Y Zhang
International Conference on Machine Learning, 254-263, 2018
612 | 2018
Convolutional neural networks with low-rank regularization
C Tai, T Xiao, Y Zhang, X Wang
arXiv preprint arXiv:1511.06067, 2015
492 | 2015
Deep visual analogy-making
SE Reed, Y Zhang, Y Zhang, H Lee
Advances in neural information processing systems 28, 2015
331 | 2015
Do GANs actually learn the distribution? An empirical study
S Arora, Y Zhang
arXiv:1706.08224, 2017
187 | 2017
Do GANs learn the distribution? some theory and empirics
S Arora, A Risteski, Y Zhang
International conference on learning representations, 2018
162 | 2018
Spectral filtering for general linear dynamical systems
E Hazan, H Lee, K Singh, C Zhang, Y Zhang
Advances in Neural Information Processing Systems 31, 2018
89 | 2018
Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
R Kuditipudi, X Wang, H Lee, Y Zhang, Z Li, W Hu, S Arora, R Ge
arXiv:1906.06247, 2019
77 | 2019
Towards Understanding the Invertibility of Convolutional Neural Networks
AC Gilbert, Y Zhang, K Lee, Y Zhang, H Lee
arXiv preprint arXiv:1705.08664, 2017
75 | 2017
Textbooks Are All You Need
S Gunasekar, Y Zhang, J Aneja, CCT Mendes, A Del Giorno, S Gopi, ...
arXiv preprint arXiv:2306.11644, 2023
64 | 2023
Efficient full-matrix adaptive regularization
N Agarwal, B Bullins, X Chen, E Hazan, K Singh, C Zhang, Y Zhang
International Conference on Machine Learning, 102-110, 2019
54 | 2019
Why are convolutional nets more sample-efficient than fully-connected nets?
Z Li, Y Zhang, S Arora
arXiv preprint arXiv:2010.08515, 2020
45 | 2020
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Y Zhang, O Plevrakis, SS Du, X Li, Z Song, S Arora
arXiv:2002.06668, 2020
45 | 2020
Unveiling transformers with lego: a synthetic reasoning task
Y Zhang, A Backurs, S Bubeck, R Eldan, S Gunasekar, T Wagner
arXiv preprint arXiv:2206.04301, 2022
35 | 2022
What Makes Convolutional Models Great on Long Sequence Modeling?
Y Li, T Cai, Y Zhang, D Chen, D Dey
arXiv preprint arXiv:2210.09298, 2022
29 | 2022
Towards provable control for unknown linear dynamical systems
S Arora, E Hazan, H Lee, K Singh, C Zhang, Y Zhang
27 | 2018
Calibration, Entropy Rates, and Memory in Language Models
M Braverman, X Chen, SM Kakade, K Narasimhan, C Zhang, Y Zhang
arXiv preprint arXiv:1906.05664, 2019
26 | 2019
Not-So-Random Features
B Bullins, C Zhang, Y Zhang
arXiv:1710.10230, 2017
24* | 2017
Theoretical limitations of Encoder-Decoder GAN architectures
S Arora, A Risteski, Y Zhang
arXiv:1711.02651, 2017
20 | 2017
Articles 1–20