Sohee Yang
UCL / DeepMind
Verified email at deepmind.com - Homepage
Title
Cited by
Year
Efficient Dialogue State Tracking by Selectively Overwriting Memory
S Kim, S Yang, G Kim, SW Lee
ACL 2020, 2020
202, 2020
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
J Jang, D Yoon, S Yang, S Cha, M Lee, L Logeswaran, M Seo
ACL 2023, 2022
137, 2022
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2021
136, 2021
Spatial Dependency Parsing for Semi-Structured Document Information Extraction
W Hwang, J Yim, S Park, S Yang, M Seo
Findings of ACL 2021, 2020
112, 2020
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
87, 2022
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned
S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ...
Proceedings of Machine Learning Research (PMLR) 133, 86-111, 2021
74, 2021
Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following
S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo
AAAI 2024, 2023
65*, 2023
Is Retriever Merely an Approximator of Reader?
S Yang, M Seo
Spa-NLP Workshop at ACL 2022, 2020
38, 2020
Do Large Language Models Latently Perform Multi-Hop Reasoning?
S Yang, E Gribovskaya, N Kassner, M Geva, S Riedel
ACL 2024, 2024
33, 2024
ClovaCall: Korean Goal-Oriented Dialog Speech Corpus for Automatic Speech Recognition of Contact Centers
JW Ha, K Nam, JG Kang, SW Lee, S Yang, H Jung, E Kim, H Kim, S Kim, ...
INTERSPEECH 2020, 2020
32, 2020
Nonparametric Decoding for Generative Retrieval
H Lee, J Kim, H Chang, H Oh, S Yang, V Karpukhin, Y Lu, M Seo
Findings of ACL 2023, 2023
25*, 2023
Generative Multi-hop Retrieval
H Lee, S Yang, H Oh, M Seo
EMNLP 2022, 2022
25*, 2022
Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation
SW Lee, T Gao, S Yang, J Yoo, JW Ha
ICLR 2019, 2019
20, 2019
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo
NeurIPS 2024, 2024
14, 2024
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
Transactions of the Association for Computational Linguistics (TACL) 12, 758-774, 2024
9, 2024
Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering
S Yang, M Seo
NAACL 2021, 2021
7, 2021
T-commerce sale prediction using deep learning and statistical model
I Kim, K Na, S Yang, J Jang, Y Kim, W Shin, D Kim
Journal of KIISE 44 (8), 803-812, 2017
7*, 2017
Exploring the Practicality of Generative Retrieval on Dynamic Corpora
S Yoon, C Kim, H Lee, J Jang, S Yang, M Seo
EMNLP 2024, 2023
6*, 2023
Hopping Too Late: Exploring the Limitations of Large Language Models on Multi-Hop Queries
E Biran, D Gottesman, S Yang, M Geva, A Globerson
EMNLP 2024, 2024
2, 2024
Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?
S Yang, N Kassner, E Gribovskaya, S Riedel, M Geva
arXiv preprint arXiv:2411.16679, 2024
2024
Articles 1–20