Raquel G. Alhama
Institute for Logic, Language and Computation. University of Amsterdam.
Verified email at uva.nl - Homepage
Cited by
A review of computational models of basic rule learning: The neural-symbolic debate and beyond
RG Alhama, W Zuidema
Psychonomic bulletin & review 26, 1174-1194, 2019
Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules?
RG Alhama, W Zuidema
Journal of Artificial Intelligence Research 61, 927-946, 2018
When the “Tabula” is anything but “Rasa”: What determines performance in the auditory statistical learning task?
A Elazar, RG Alhama, L Bogaerts, N Siegelman, C Baus, R Frost
Cognitive Science 46 (2), e13102, 2022
Neural discontinuous constituency parsing
M Stanojevic, RG Alhama
2017 Conference on Empirical Methods in Natural Language Processing, 1666-1676, 2017
Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning
W Zuidema, RM French, RG Alhama, K Ellis, TJ O'Donnell, T Sainburg, ...
Topics in cognitive science 12 (3), 925-941, 2020
The role of information in visual word recognition: A perceptually-constrained connectionist account
RG Alhama, N Siegelman, R Frost, BC Armstrong
The 41st annual meeting of the cognitive science society (cogsci 2019), 83-89, 2019
How should we evaluate models of segmentation in artificial language learning?
RG Alhama, R Scha, W Zuidema
University of Groningen, 2015
Evaluating word embeddings for language acquisition
RG Alhama, CF Rowland, E Kidd
(Online) Workshop on Cognitive Modeling and Computational Linguistics (CMCL …, 2020
Segmentation as Retention and Recognition: the R&R model
RG Alhama, W Zuidema
Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017
Computational modelling of artificial language learning: Retention, recognition & recurrence
RG Alhama
University of Amsterdam, 2017
Generalization in Artificial Language Learning: Modelling the Propensity to Generalize
RG Alhama, W Zuidema
Proceedings of the 7th Workshop on Cognitive Aspects of Computational …, 2016
Los avances tecnológicos y la ciencia del lenguaje
M Martí, RG Alhama, M Recasens
Universidad de Santiago de Compostela, 2012
Word Segmentation as Unsupervised Constituency Parsing
RG Alhama
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Retrodiction as delayed recurrence: the case of adjectives in Italian and English
RG Alhama, F Zermiani, A Khaliq
Proceedings of the The 19th Annual Workshop of the Australasian Language …, 2021
Rule learning in humans and animals
RG Alhama, R Scha, W Zuidema
Proceedings of the international conference on the evolution of language …, 2014
Linguistic Productivity: the Case of Determiners in English
RG Alhama, R Foushee, D Byrne, A Ettinger, S Goldin-Meadow, ...
Proceedings of the 13th International Joint Conference on Natural Language …, 2023
How does linguistic context influence word learning?
RG Alhama, CF Rowland, E Kidd
Journal of Child Language 50 (6), 1374-1393, 2023
Preface: Computational Linguistics in the Netherlands Journal
E Vanmassenhove, M De Sisto, RG Alhama, TO Lentz, J Engelen, ...
Computational Linguistics in the Netherlands Journal 12, 3-5, 2022
How much context is helpful for noun and verb acquisition?
RG Alhama, C Rowland, E Kidd
International Conference on Cognitive Modelling, 2021
‘Long nose’ and ‘naso lungo’: Establishing the need for retrodiction in computational models of word learning
F Zermiani, A Khaliq, RG Alhama
Many Paths to Language, 2020