Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset. Z Zhang, L Li, Y Ding, C Fan. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021. Cited by 84.
FReeNet: Multi-identity face reenactment. J Zhang, X Zeng, M Wang, Y Pan, L Liu, Y Liu, Y Ding, C Fan. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020. Cited by 68.
Laughter animation synthesis. Y Ding, K Prepin, J Huang, C Pelachaud, T Artières. Proceedings of the 2014 international conference on Autonomous agents and …, 2014. Cited by 57.
Audio2Head: Audio-driven one-shot talking-head generation with natural head motion. S Wang, L Li, Y Ding, C Fan, X Yu. International Joint Conference on Artificial Intelligence (IJCAI-21), 2021. Cited by 51.
Learning a facial expression embedding disentangled from identity. W Zhang, X Ji, K Chen, Y Ding, C Fan. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021. Cited by 41.
Transformer-based multimodal information fusion for facial expression analysis. W Zhang, F Qiu, S Wang, H Zeng, Z Zhang, R An, B Ma, Y Ding. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 39.
Modeling multimodal behaviors from speech prosody. Y Ding, C Pelachaud, T Artières. International Conference on Intelligent Virtual Agents, 217-228, 2013. Cited by 37.
Write-a-speaker: Text-based emotional and rhythmic talking-head generation. L Li, S Wang, Z Zhang, Y Ding, Y Zheng, X Yu, C Fan. Proceedings of the AAAI Conference on Artificial Intelligence 35 (3), 1911-1920, 2021. Cited by 33.
One-shot talking face generation from single-speaker audio-visual correlation learning. S Wang, L Li, Y Ding, X Yu. Proceedings of the AAAI Conference on Artificial Intelligence 36 (3), 2531-2539, 2022. Cited by 29.
Rhythmic body movements of laughter. R Niewiadomski, M Mancini, Y Ding, C Pelachaud, G Volpe. Proceedings of the 16th international conference on multimodal interaction …, 2014. Cited by 26.
FaceSwapNet: Landmark guided many-to-many face reenactment. J Zhang, X Zeng, Y Pan, Y Liu, Y Ding, C Fan. arXiv preprint arXiv:1905.11805, 2019. Cited by 25.
Prior aided streaming network for multi-task affective recognition at the 2nd ABAW2 competition. W Zhang, Z Guo, K Chen, L Li, Z Zhang, Y Ding. arXiv preprint arXiv:2107.03708, 2021. Cited by 22.
Speech-driven eyebrow motion synthesis with contextual Markovian models. Y Ding, M Radenen, T Artières, C Pelachaud. 2013 IEEE International Conference on Acoustics, Speech and Signal …, 2013. Cited by 22.
Implementing and evaluating a laughing virtual character. M Mancini, B Biancardi, F Pecune, G Varni, Y Ding, C Pelachaud, G Volpe, ... ACM Transactions on Internet Technology (TOIT) 17 (1), 1-22, 2017. Cited by 21.
Laughing with a virtual agent. F Pecune, M Mancini, B Biancardi, G Varni, Y Ding, C Pelachaud, G Volpe, ... AAMAS, 1817-1818, 2015. Cited by 21.
One-shot voice conversion using Star-GAN. R Wang, Y Ding, L Li, C Fan. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and …, 2020. Cited by 16.
Laugh when you're winning. M Mancini, L Ach, E Bantegnie, T Baur, N Berthouze, D Datta, Y Ding, ... Innovative and Creative Developments in Multimodal Interaction Systems: 9th …, 2014. Cited by 16.
Real-time visual prosody for interactive virtual agents. H Van Welbergen, Y Ding, K Sattler, C Pelachaud, S Kopp. Intelligent Virtual Agents: 15th International Conference, IVA 2015, Delft …, 2015. Cited by 15.
Triterpenoid saponins from Clematis argentilucida. M Zhao, HF Tang, F Qiu, XR Tian, Y Ding, XY Wang, XM Zhou. Biochemical Systematics and Ecology 40, 49-52, 2012. Cited by 14.
A multifaceted study on eye contact based speaker identification in three-party conversations. Y Ding, Y Zhang, M Xiao, Z Deng. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017. Cited by 13.