Doyoung Kim
Verified email at kaist.ac.kr
Title · Cited by · Year
The CoT Collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
arXiv preprint arXiv:2305.14045, 2023
Cited by 64 · 2023
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
International Conference on Machine Learning, 14702-14729, 2023
Cited by 55 · 2023
FLASK: Fine-grained language model evaluation based on alignment skill sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
arXiv preprint arXiv:2307.10928, 2023
Cited by 53 · 2023
SelFee: Iterative self-revising LLM empowered by self-feedback generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
Cited by 44 · 2023
Guess the instruction! flipped learning makes language models stronger zero-shot learners
S Ye, D Kim, J Jang, J Shin, M Seo
arXiv preprint arXiv:2210.02969, 2022
Cited by 18 · 2022
Guess the instruction! making language models stronger zero-shot learners
S Ye, D Kim, J Jang, J Shin, M Seo
arXiv preprint arXiv:2210.02969, 2022
Cited by 14 · 2022
Retrieval of soft prompt enhances zero-shot task generalization
S Ye, J Jang, D Kim, Y Jo, M Seo
ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation …, 2022
Cited by 12 · 2022
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
arXiv preprint arXiv:2404.10346, 2024
Cited by 5 · 2024
How Well Do Large Language Models Truly Ground?
H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo
arXiv preprint arXiv:2311.09069, 2023
Cited by 3 · 2023
Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model
D Kim, J Lee, J Park, M Seo
arXiv preprint arXiv:2406.15275, 2024
2024
Semiparametric Token-Sequence Co-Supervision
H Lee, D Kim, J Jun, S Joo, J Jang, KW On, M Seo
arXiv preprint arXiv:2403.09024, 2024
2024
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
Findings of the Association for Computational Linguistics: EMNLP 2023, 12288 …, 2023
2023
Articles 1–12