Lyan Verwimp
Character-Word LSTM Language Models
L Verwimp, J Pelemans, H Van hamme, P Wambacq
European Chapter of the Association for Computational Linguistics (EACL …, 2017
Latent relation language models
H Hayashi, Z Hu, C Xiong, G Neubig
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 7911-7918, 2020
Definite il y a-clefts in spoken French
L Verwimp, K Lahousse
Journal of French Language Studies, 2016
Analyzing the contribution of top-down lexical and bottom-up acoustic cues in the detection of sentence prominence
S Kakouros, J Pelemans, L Verwimp, P Wambacq, O Räsänen
Proceedings Interspeech 2016 8, 1074-1078, 2016
A comparison of different punctuation prediction approaches in a translation context
V Vandeghinste, L Verwimp, J Pelemans, P Wambacq
Proceedings of the 21st Annual Conference of the European Association for …, 2018
Improving the translation environment for professional translators
V Vandeghinste, T Vanallemeersch, L Augustinus, B Bulté, F Van Eynde, ...
Informatics 6 (2), 24, 2019
TF-LM: TensorFlow-based Language Modeling Toolkit
L Verwimp, H Van hamme, P Wambacq
Proceedings Language Resources and Evaluation Conference (LREC), 2018
Language model adaptation for ASR of spoken translations using phrase-based translation models and named entity models
J Pelemans, T Vanallemeersch, K Demuynck, L Verwimp, P Wambacq
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
State Gradients for RNN Memory Analysis
L Verwimp, H Van hamme, V Renkens, P Wambacq
Interspeech 2018, 1467-1471, 2018
Literary Machine Translation under the Magnifying Glass: Assessing the Quality of an NMT-Translated Detective Novel on Document Level
M Fonteyne, A Tezcan, L Macken
Proceedings of The 12th Language Resources and Evaluation Conference, 3790-3798, 2020
State Gradients for Analyzing Memory in LSTM Language Models
L Verwimp, H Van hamme, P Wambacq
Computer Speech & Language, 101034, 2019
Domain adaptation for LSTM language models
W Boes, R Van Rompaey, J Pelemans, L Verwimp, P Wambacq
Book of abstracts CLIN27, 57, 2017
STON: Efficient Subtitling in Dutch Using State-of-the-Art Tools
L Verwimp, B Desplanques, K Demuynck, J Pelemans, M Lycke, ...
Interspeech 2016, 780-781, 2016
Information-Weighted Neural Cache Language Models for ASR
L Verwimp, J Pelemans, H Van hamme, P Wambacq
IEEE Workshop on Spoken Language Technology (SLT), 2018
145 ways to insert punctuation for speech translation
V Vandeghinste, L Verwimp, J Pelemans, P Wambacq
Book of abstracts CLIN28, 2018
Pictograph-to-Text Translation for Augmented and Alternative Communication
L Sevens, V Vandeghinste, L Verwimp, I Schuurman, P Wambacq, ...
Book of abstracts CLIN28, 2018
Language Models of Spoken Dutch
L Verwimp, J Pelemans, M Lycke, H Van hamme, P Wambacq
arXiv preprint arXiv:1709.03759, 2017
SCALE: A scalable language engineering toolkit
J Pelemans, L Verwimp, K Demuynck, P Wambacq
LREC, 2016
Error-driven Pruning of Language Models for Virtual Assistants
S Gondala, L Verwimp, E Pusateri, M Tsagkias, C Van Gysel
arXiv preprint arXiv:2102.07219, 2021
Reverse Transfer Learning: Can Word Embeddings Trained for Different NLP Tasks Improve Neural Language Models?
L Verwimp, JR Bellegarda
Interspeech 2019, 2019