Will Dabney
DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
Thirty-second AAAI conference on artificial intelligence, 2018
1477 · 2018
A distributional perspective on reinforcement learning
MG Bellemare*, W Dabney*, R Munos
arXiv preprint arXiv:1707.06887, 2017
939 · 2017
Distributed distributional deterministic policy gradients
G Barth-Maron, MW Hoffman, D Budden, W Dabney, D Horgan, D Tb, ...
arXiv preprint arXiv:1804.08617, 2018
363 · 2018
Successor features for transfer in reinforcement learning
A Barreto, W Dabney, R Munos, JJ Hunt, T Schaul, HP van Hasselt, ...
Advances in neural information processing systems 30, 2017
359 · 2017
Distributional reinforcement learning with quantile regression
W Dabney, M Rowland, M Bellemare, R Munos
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
347 · 2018
The Cramer distance as a solution to biased Wasserstein gradients
MG Bellemare, I Danihelka, W Dabney, S Mohamed, ...
arXiv preprint arXiv:1705.10743, 2017
277 · 2017
Recurrent experience replay in distributed reinforcement learning
S Kapturowski, G Ostrovski, J Quan, R Munos, W Dabney
International conference on learning representations, 2018
260 · 2018
Implicit quantile networks for distributional reinforcement learning
W Dabney, G Ostrovski, D Silver, R Munos
International conference on machine learning, 1096-1105, 2018
256 · 2018
A distributional code for value in dopamine-based reinforcement learning
W Dabney, Z Kurth-Nelson, N Uchida, CK Starkweather, D Hassabis, ...
Nature 577 (7792), 671-675, 2020
216 · 2020
An analysis of categorical distributional reinforcement learning
M Rowland, M Bellemare, W Dabney, R Munos, YW Teh
International Conference on Artificial Intelligence and Statistics, 29-37, 2018
76 · 2018
Revisiting fundamentals of experience replay
W Fedus, P Ramachandran, R Agarwal, Y Bengio, H Larochelle, ...
International Conference on Machine Learning, 3061-3071, 2020
70 · 2020
The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
65 · 2017
Deep reinforcement learning and its neuroscientific implications
M Botvinick, JX Wang, W Dabney, KJ Miller, Z Kurth-Nelson
Neuron 107 (4), 603-616, 2020
63 · 2020
Autoregressive quantile networks for generative modeling
G Ostrovski, W Dabney, R Munos
International Conference on Machine Learning, 3936-3945, 2018
63 · 2018
Fast task inference with variational intrinsic successor features
S Hansen, W Dabney, A Barreto, T Van de Wiele, D Warde-Farley, V Mnih
arXiv preprint arXiv:1906.05030, 2019
62 · 2019
Adaptive step-size for online temporal difference learning
W Dabney, AG Barto
Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012
59 · 2012
RLPy: a value-function-based reinforcement learning framework for education and research
A Geramifard, C Dann, RH Klein, W Dabney, JP How
J. Mach. Learn. Res. 16 (1), 1573-1578, 2015
58 · 2015
A geometric perspective on optimal representations for reinforcement learning
M Bellemare, W Dabney, R Dadashi, A Ali Taiga, PS Castro, N Le Roux, ...
Advances in neural information processing systems 32, 2019
54 · 2019
Hindsight credit assignment
A Harutyunyan, W Dabney, T Mesnard, M Gheshlaghi Azar, B Piot, ...
Advances in neural information processing systems 32, 2019
49 · 2019
Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces
S Mahadevan, B Liu, P Thomas, W Dabney, S Giguere, N Jacek, I Gemp, ...
arXiv preprint arXiv:1405.6757, 2014
46 · 2014
Articles 1–20