Nishanth Dikkala
Verified email at google.com
Title
Cited by
Year
Testing Ising models
C Daskalakis, N Dikkala, G Kamath
IEEE Transactions on Information Theory 65 (11), 6829-6852, 2019
Cited by 93 · 2019
Minimax estimation of conditional moment models
N Dikkala, G Lewis, L Mackey, V Syrgkanis
Advances in Neural Information Processing Systems 33, 12248-12262, 2020
Cited by 59 · 2020
Tight hardness results for maximum weight rectangles
A Backurs, N Dikkala, C Tzamos
arXiv preprint arXiv:1602.05837, 2016
Cited by 41 · 2016
From soft classifiers to hard decisions: How fair can we be?
R Canetti, A Cohen, N Dikkala, G Ramnarayan, S Scheffler, A Smith
Proceedings of the conference on fairness, accountability, and transparency …, 2019
Cited by 36 · 2019
Concentration of multilinear functions of the Ising model with applications to network data
C Daskalakis, N Dikkala, G Kamath
Advances in Neural Information Processing Systems 30, 2017
Cited by 25 · 2017
Regression from dependent observations
C Daskalakis, N Dikkala, I Panageas
Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing …, 2019
Cited by 24 · 2019
Learning from weakly dependent data under Dobrushin’s condition
Y Dagan, C Daskalakis, N Dikkala, S Jayanti
Conference on Learning Theory, 914-928, 2019
Cited by 22 · 2019
Testing symmetric Markov chains from a single trajectory
C Daskalakis, N Dikkala, N Gravin
Conference On Learning Theory, 385-409, 2018
Cited by 20 · 2018
HOGWILD!-Gibbs can be PanAccurate
C Daskalakis, N Dikkala, S Jayanti
Advances in Neural Information Processing Systems 31, 2018
Cited by 14 · 2018
Estimating Ising models from one sample
Y Dagan, C Daskalakis, N Dikkala, AV Kandiros
Cited by 13 · 2020
Logistic regression with peer-group effects via inference in higher-order Ising models
C Daskalakis, N Dikkala, I Panageas
International Conference on Artificial Intelligence and Statistics, 3653-3663, 2020
Cited by 9 · 2020
Learning Ising models from one or multiple samples
Y Dagan, C Daskalakis, N Dikkala, AV Kandiros
Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing …, 2021
Cited by 7 · 2021
Do More Negative Samples Necessarily Hurt in Contrastive Learning?
P Awasthi, N Dikkala, P Kamath
arXiv preprint arXiv:2205.01789, 2022
Cited by 6 · 2022
Statistical estimation from dependent data
V Kandiros, Y Dagan, N Dikkala, S Goel, C Daskalakis
International Conference on Machine Learning, 5269-5278, 2021
Cited by 5 · 2021
Can Credit Increase Revenue?
N Dikkala, É Tardos
International Conference on Web and Internet Economics, 121-133, 2013
Cited by 5 · 2013
For manifold learning, deep neural networks can be locality sensitive hash functions
N Dikkala, G Kaplun, R Panigrahy
arXiv preprint arXiv:2103.06875, 2021
Cited by 4 · 2021
A theoretical view on sparsely activated networks
C Baykal, N Dikkala, R Panigrahy, C Rashtchian, X Wang
arXiv preprint arXiv:2208.04461, 2022
Cited by 1 · 2022
A Quantum Computing Approach to Part-of-Speech Tagging: A Quantum Viterbi decoding Algorithm
V Singh, N Dikkala, P Bhattacharyya
Semantic Scholar, 2015
Cited by 1 · 2015
Statistical Estimation from Dependent Data
Y Dagan, C Daskalakis, N Dikkala, S Goel, AV Kandiros
arXiv preprint arXiv:2107.09773, 2021
2021
Generalization and Learning Under Dobrushin's Condition
Y Dagan, C Daskalakis, N Dikkala, S Jayanti
32nd Annual Conference on Learning Theory, 2019
2019
Articles 1–20