GloVe: Global vectors for word representation J Pennington, R Socher, CD Manning Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543, 2014 | 43975 | 2014 |
Semi-supervised recursive autoencoders for predicting sentiment distributions R Socher, J Pennington, EH Huang, AY Ng, CD Manning Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, 2011 | 1771 | 2011 |
Deep neural networks as Gaussian processes J Lee, Y Bahri, R Novak, SS Schoenholz, J Pennington, J Sohl-Dickstein arXiv preprint arXiv:1711.00165, 2017 | 1327 | 2017 |
Dynamic pooling and unfolding recursive autoencoders for paraphrase detection R Socher, EH Huang, J Pennington, CD Manning, AY Ng Advances in Neural Information Processing Systems 24, 801-809, 2011 | 1173 | 2011 |
Wide neural networks of any depth evolve as linear models under gradient descent J Lee, L Xiao, S Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ... Advances in Neural Information Processing Systems 32, 2019 | 1166 | 2019 |
Sensitivity and generalization in neural networks: an empirical study R Novak, Y Bahri, DA Abolafia, J Pennington, J Sohl-Dickstein arXiv preprint arXiv:1802.08760, 2018 | 505 | 2018 |
Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks L Xiao, Y Bahri, J Sohl-Dickstein, S Schoenholz, J Pennington International Conference on Machine Learning, 5393-5402, 2018 | 385 | 2018 |
Bayesian deep convolutional networks with many channels are Gaussian processes R Novak, L Xiao, J Lee, Y Bahri, G Yang, J Hron, DA Abolafia, ... arXiv preprint arXiv:1810.05148, 2018 | 384 | 2018 |
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice J Pennington, S Schoenholz, S Ganguli Advances in Neural Information Processing Systems 30, 2017 | 315 | 2017 |
Statistical mechanics of deep learning Y Bahri, J Kadmon, J Pennington, SS Schoenholz, J Sohl-Dickstein, ... Annual Review of Condensed Matter Physics 11 (1), 501-528, 2020 | 279 | 2020 |
Nonlinear random matrix theory for deep learning J Pennington, P Worah Advances in Neural Information Processing Systems 30, 2017 | 243 | 2017 |
Finite versus infinite neural networks: an empirical study J Lee, S Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, ... Advances in Neural Information Processing Systems 33, 15156-15172, 2020 | 230 | 2020 |
A mean field theory of batch normalization G Yang, J Pennington, V Rao, J Sohl-Dickstein, SS Schoenholz arXiv preprint arXiv:1902.08129, 2019 | 210 | 2019 |
Hexagon functions and the three-loop remainder function LJ Dixon, JM Drummond, M von Hippel, J Pennington Journal of High Energy Physics 2013 (12), 1-95, 2013 | 208 | 2013 |
The emergence of spectral universality in deep networks J Pennington, S Schoenholz, S Ganguli International Conference on Artificial Intelligence and Statistics, 1924-1932, 2018 | 196 | 2018 |
The four-loop remainder function and multi-Regge behavior at NNLLA in planar N = 4 super-Yang-Mills theory LJ Dixon, JM Drummond, C Duhr, J Pennington Journal of High Energy Physics 2014 (6), 1-59, 2014 | 188 | 2014 |
Geometry of neural network loss surfaces via random matrix theory J Pennington, Y Bahri International Conference on Machine Learning, 2798-2806, 2017 | 173 | 2017 |
The neural tangent kernel in high dimensions: Triple descent and a multi-scale theory of generalization B Adlam, J Pennington International Conference on Machine Learning, 74-84, 2020 | 158 | 2020 |
Single-valued harmonic polylogarithms and the multi-Regge limit LJ Dixon, C Duhr, J Pennington Journal of High Energy Physics 2012 (10), 1-68, 2012 | 157 | 2012 |