| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Simplified state space layers for sequence modeling | JTH Smith, A Warrington, SW Linderman | arXiv preprint arXiv:2208.04933 | 551 | 2022 |
| Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems | J Smith, S Linderman, D Sussillo | Advances in Neural Information Processing Systems 34, 16700-16713 | 31 | 2021 |
| Convolutional state space models for long-range spatiotemporal modeling | J Smith, S De Mello, J Kautz, S Linderman, W Byeon | Advances in Neural Information Processing Systems 36, 80690-80729 | 23 | 2023 |
| All-action policy gradient methods: A numerical integration approach | B Petit, L Amdahl-Culleton, Y Liu, J Smith, PL Bacon | arXiv preprint arXiv:1910.09093 | 9 | 2019 |
| State-free inference of state-space models: The transfer function approach | RN Parnichkun, S Massaroli, A Moro, JTH Smith, R Hasani, M Lechner, ... | arXiv preprint arXiv:2405.06147 | 6 | 2024 |
| Towards scalable and stable parallelization of nonlinear RNNs | X Gonzalez, A Warrington, J Smith, S Linderman | Advances in Neural Information Processing Systems 37, 5817-5849 | 4 | 2024 |
| Towards a theory of learning dynamics in deep state space models | J Smékal, JTH Smith, M Kleinman, D Biderman, SW Linderman | arXiv preprint arXiv:2407.07279 | 3 | 2024 |
| Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula | S Blouir, JTH Smith, A Anastasopoulos, A Shehu | arXiv preprint arXiv:2411.01030 | 2 | 2024 |
| On the interplay between learning and memory in deep state space models | J Smékal, N Zucchet, D Biderman, EK Buchanan, JTH Smith, ... | | 1 | 2024 |
| Convolutional structured state space model | J Smith, W Byeon, S De Mello | US Patent App. 18/452,714 | | 2024 |
| Bayesian Inference in Augmented Bow Tie Networks | JTH Smith, D Lawson, SW Linderman | | | |