Seonghyeon Ye
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2022
135 · 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
86 · 2022
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
J Jang, S Ye, M Seo
Transfer Learning for NLP Workshop @ NeurIPS 2022, 2022
68 · 2022
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
67 · 2023
In-context instruction learning
S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo
AAAI 2024, 2024
64* · 2024
Flask: Fine-grained language model evaluation based on alignment skill sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
ICLR 2024, 2024
59 · 2024
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
59 · 2023
Selfee: Iterative self-revising llm empowered by self-feedback generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
51 · 2023
Dimensional Emotion Detection from Categorical Emotion
S Park, J Kim, S Ye, J Jeon, HY Park, A Oh
EMNLP 2021, 2021
51 · 2021
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
S Ye, D Kim, J Jang, J Shin, M Seo
ICLR 2023, 2023
34* · 2023
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning
S Ye, J Kim, A Oh
EMNLP 2021, 2021
20 · 2021
Consent in Crisis: The Rapid Decline of the AI Data Commons
S Longpre, R Mahari, A Lee, C Lund, H Oderinwale, W Brannon, ...
NeurIPS 2024, 2024
15 · 2024
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo
NeurIPS 2024, 2024
14 · 2024
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
EMNLP 2023 Findings, 2023
12* · 2023
Improving probability-based prompt selection through unified evaluation and analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
TACL 2024, 2024
9 · 2024
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
H Oh, H Lee, S Ye, H Shin, H Jang, C Jun, M Seo
arXiv preprint arXiv:2402.14334, 2024
6 · 2024
Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
EMNLP 2024 Findings, 2024
5* · 2024
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, SJ Hwang, S Yun
NAACL 2024, 2024
5 · 2024
Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
C Lee, J Han, S Ye, SJ Choi, H Lee, K Bae
EMNLP 2024, 2024
4 · 2024
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
S Kim, J Suk, JY Cho, S Longpre, C Kim, D Yoon, G Son, Y Cho, ...
arXiv preprint arXiv:2406.05761, 2024
1 · 2024
Articles 1–20