- Meta-learning with differentiable convex optimization. K Lee, S Maji, A Ravichandran, S Soatto. CVPR 2019 (Oral). Cited by 1378.
- ViTGAN: Training GANs with Vision Transformers. K Lee, H Chang, L Jiang, H Zhang, Z Tu, C Liu. ICLR 2022 (Spotlight). Cited by 190.
- Learning instance occlusion for panoptic segmentation. J Lazarow, K Lee, K Shi, Z Tu. CVPR 2020. Cited by 88.
- Dual contradistinctive generative autoencoder. G Parmar, D Li, K Lee, Z Tu. CVPR 2021. Cited by 83.
- Wasserstein introspective neural networks. K Lee, W Xu, F Fan, Z Tu. CVPR 2018 (Oral). Cited by 57.
- AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? Q Zhao, C Zhang, S Wang, C Fu, N Agarwal, K Lee, C Sun. ICLR 2024. Cited by 11.
- Object-centric Video Representation for Long-term Action Anticipation. C Zhang, C Fu, S Wang, N Agarwal, K Lee, C Choi, C Sun. WACV 2024. Cited by 3.
- AdamsFormer for Spatial Action Localization in the Future. H Chi, K Lee, N Agarwal, Y Xu, K Ramani, C Choi. CVPR 2023. Cited by 3.
- Vamos: Versatile Action Models for Video Understanding. S Wang, Q Zhao, MQ Do, N Agarwal, K Lee, C Sun. arXiv preprint arXiv:2311.13627, 2023. Cited by 2.
- Controllable top-down feature transformer. Z Jia, H Hong, S Wang, K Lee, Z Tu. arXiv preprint arXiv:1712.02400, 2017. Cited by 2.
- ViCor: Bridging Visual Understanding and Commonsense Reasoning with Large Language Models. K Zhou, K Lee, T Misu, XE Wang. ACL (Findings) 2024. Cited by 1.
- Learning Generative Models with Energy-Based Models and Transformer GANs. K Lee. University of California San Diego, 2022. Cited by 1.
- Can't make an Omelette without Breaking some Eggs: Plausible Action Anticipation using Large Video-Language Models. H Mittal, N Agarwal, SY Lo, K Lee. CVPR 2024.
- Uncertainty-aware Action Decoupling Transformer for Action Anticipation. H Guo, N Agarwal, SY Lo, K Lee, Q Ji. CVPR 2024 (Highlight).
- Unaligned Image-to-Sequence Transformation with Loop Consistency. S Wang, J Lazarow, K Lee, Z Tu. arXiv preprint arXiv:1910.04149, 2019.