Sungbok Lee
Title · Cited by · Year
IEMOCAP: Interactive emotional dyadic motion capture database
C Busso, M Bulut, CC Lee, A Kazemzadeh, E Mower, S Kim, JN Chang, ...
Language resources and evaluation 42, 335-359, 2008
Cited by 3964 · 2008
Analysis of emotion recognition using facial expressions, speech and multimodal information
C Busso, Z Deng, S Yildirim, M Bulut, CM Lee, A Kazemzadeh, S Lee, ...
Proceedings of the 6th international conference on Multimodal interfaces …, 2004
Cited by 1213 · 2004
Acoustics of children’s speech: Developmental changes of temporal and spectral parameters
S Lee, A Potamianos, S Narayanan
The Journal of the Acoustical Society of America 105 (3), 1455-1468, 1999
Cited by 1176 · 1999
Emotion recognition using a hierarchical binary decision tree approach
CC Lee, E Mower, C Busso, S Lee, S Narayanan
Speech communication 53 (9-10), 1162-1171, 2011
Cited by 553 · 2011
An approach to real-time magnetic resonance imaging for speech production
S Narayanan, K Nayak, S Lee, A Sethy, D Byrd
The Journal of the Acoustical Society of America 115 (4), 1771-1776, 2004
Cited by 424 · 2004
Analysis of emotionally salient aspects of fundamental frequency for emotion detection
C Busso, S Lee, S Narayanan
IEEE transactions on audio, speech, and language processing 17 (4), 582-596, 2009
Cited by 409 · 2009
Emotion recognition based on phoneme classes
CM Lee, S Yildirim, M Bulut, A Kazemzadeh, C Busso, Z Deng, S Lee, ...
Interspeech, 889-892, 2004
Cited by 335 · 2004
How far, how long: On the temporal scope of prosodic boundary effects
D Byrd, J Krivokapić, S Lee
The Journal of the Acoustical Society of America 120 (3), 1589-1599, 2006
Cited by 256 · 2006
An acoustic study of emotions expressed in speech
S Yildirim, M Bulut, CM Lee, A Kazemzadeh, Z Deng, S Lee, ...
Interspeech, 2193-2196, 2004
Cited by 250 · 2004
Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC)
S Narayanan, A Toutios, V Ramanarayanan, A Lammert, J Kim, S Lee, ...
The Journal of the Acoustical Society of America 136 (3), 1307-1311, 2014
Cited by 218 · 2014
Interpreting ambiguous emotional expressions
E Mower, A Metallinou, CC Lee, A Kazemzadeh, C Busso, S Lee, ...
2009 3rd International Conference on Affective Computing and Intelligent …, 2009
Cited by 161 · 2009
Automatic speech recognition for children
A Potamianos, SS Narayanan, S Lee
Eurospeech 97, 2371-2374, 1997
Cited by 161 · 1997
An articulatory study of emotional speech production
S Lee, S Yildirim, A Kazemzadeh, SS Narayanan
Interspeech, 497-500, 2005
Cited by 149 · 2005
DARPA Communicator dialog travel planning systems: The June 2000 data collection
MA Walker, JS Aberdeen, JE Boland, EO Bratt, JS Garofolo, L Hirschman, ...
INTERSPEECH, 1371-1374, 2001
Cited by 145 · 2001
The AT&T-DARPA Communicator mixed-initiative spoken dialog system
E Levin, SS Narayanan, R Pieraccini, K Biatov, E Bocchieri, ...
INTERSPEECH, 122-125, 2000
Cited by 144 · 2000
The psychologist as an interlocutor in autism spectrum disorder assessment: Insights from a study of spontaneous prosody
D Bone, CC Lee, MP Black, ME Williams, S Lee, P Levitt, S Narayanan
Journal of Speech, Language, and Hearing Research 57 (4), 1162-1177, 2014
Cited by 143 · 2014
Real-time emotion detection system using speech: Multi-modal fusion of different timescale features
S Kim, PG Georgiou, S Lee, S Narayanan
2007 IEEE 9th Workshop on Multimedia Signal Processing, 48-51, 2007
Cited by 136 · 2007
Audio-visual emotion recognition using gaussian mixture models for face and voice
A Metallinou, S Lee, S Narayanan
2008 Tenth IEEE international symposium on multimedia, 250-257, 2008
Cited by 132 · 2008
Using neutral speech models for emotional speech analysis
C Busso, S Lee, SS Narayanan
Interspeech, 2225-2228, 2007
Cited by 118 · 2007
Decision level combination of multiple modalities for recognition and analysis of emotional expression
A Metallinou, S Lee, S Narayanan
2010 IEEE International Conference on Acoustics, Speech and Signal …, 2010
Cited by 117 · 2010
Articles 1–20