Tengyu Ma
Title
Cited by
Year
A simple but tough-to-beat baseline for sentence embeddings
S Arora, Y Liang, T Ma
ICLR 2017, 2016
Cited by 1325 · 2016
Learning imbalanced datasets with label-distribution-aware margin loss
K Cao, C Wei, A Gaidon, N Arechiga, T Ma
NeurIPS 2019; arXiv preprint arXiv:1906.07413, 2019
Cited by 760 · 2019
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 671 · 2021
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
ICML 2017; arXiv preprint arXiv:1703.00573, 2017
Cited by 641 · 2017
Matrix Completion has No Spurious Local Minimum
R Ge, JD Lee, T Ma
NIPS 2016 (best student paper). arXiv preprint arXiv:1605.07272, 2016
Cited by 632 · 2016
A latent variable model approach to PMI-based word embeddings
S Arora, Y Li, Y Liang, T Ma, A Risteski
Transactions of the Association for Computational Linguistics 4, 385-399, 2016
Cited by 421* · 2016
Provable bounds for learning some deep representations
S Arora, A Bhaskara, R Ge, T Ma
International Conference on Machine Learning, 584-592, 2014
Cited by 413 · 2014
Identity Matters in Deep Learning
M Hardt, T Ma
ICLR 2017, 2016
Cited by 363 · 2016
MOPO: Model-based offline policy optimization
T Yu, G Thomas, L Yu, S Ermon, JY Zou, S Levine, C Finn, T Ma
Advances in Neural Information Processing Systems 33, 14129-14142, 2020
Cited by 361 · 2020
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time
N Agarwal, Z Allen-Zhu, B Bullins, E Hazan, T Ma
STOC 2017, 2016
Cited by 305* · 2016
Gradient descent learns linear dynamical systems
M Hardt, T Ma, B Recht
arXiv preprint arXiv:1609.05191, 2016
Cited by 279 · 2016
Fixup initialization: Residual learning without normalization
H Zhang, YN Dauphin, T Ma
arXiv preprint arXiv:1901.09321, 2019
Cited by 251 · 2019
Learning one-hidden-layer neural networks with landscape design
R Ge, JD Lee, T Ma
ICLR 2018; arXiv preprint arXiv:1711.00501, 2017
Cited by 251 · 2017
Algorithmic Regularization in Over-parameterized Matrix Recovery and Neural Networks with Quadratic Activations
Y Li, T Ma, H Zhang
COLT 2018 (best paper); arXiv preprint arXiv:1712.09203, 2017
Cited by 241* · 2017
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
C Wei, JD Lee, Q Liu, T Ma
arXiv preprint arXiv:1810.05369, 2019
Cited by 211* · 2019
Verified uncertainty calibration
A Kumar, PS Liang, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 207 · 2019
Linear algebraic structure of word senses, with applications to polysemy
S Arora, Y Li, Y Liang, T Ma, A Risteski
arXiv preprint arXiv:1601.03764, 2016
Cited by 203 · 2016
Towards explaining the regularization effect of initial large learning rate in training neural networks
Y Li, C Wei, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 200 · 2019
Simple, efficient, and neural algorithms for sparse coding
S Arora, R Ge, T Ma, A Moitra
Conference on Learning Theory (COLT) 2015. arXiv preprint arXiv:1503.00778, 2015
Cited by 199 · 2015
Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees
Y Luo, H Xu, Y Li, Y Tian, T Darrell, T Ma
arXiv preprint arXiv:1807.03858, 2018
Cited by 175 · 2018
Articles 1–20