Seonghyeon Ye
Title
Cited by
Year
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2022
Cited by 86 · 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
Cited by 50 · 2022
In-Context Instruction Learning
S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo
AAAI 2024, 2024
Cited by 44 · 2024
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
J Jang, S Ye, M Seo
Transfer Learning for NLP Workshop @ NeurIPS 2022, 2022
Cited by 43 · 2022
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
Cited by 33 · 2023
Dimensional Emotion Detection from Categorical Emotion
S Park, J Kim, S Ye, J Jeon, HY Park, A Oh
EMNLP 2021, 2021
Cited by 32 · 2021
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
S Ye, D Kim, J Jang, J Shin, M Seo
ICLR 2023, 2023
Cited by 27* · 2023
FLASK: Fine-grained Language Model Evaluation Based on Alignment Skill Sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
ICLR 2024, 2024
Cited by 25 · 2024
SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
Cited by 24 · 2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
Cited by 22 · 2023
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning
S Ye, J Kim, A Oh
EMNLP 2021, 2021
Cited by 15 · 2021
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
EMNLP 2023 Findings, 2023
Cited by 10* · 2023
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
TACL 2024, 2024
Cited by 2 · 2024
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
H Oh, H Lee, S Ye, H Shin, H Jang, C Jun, M Seo
arXiv preprint arXiv:2402.14334, 2024
Cited by 1 · 2024
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, SJ Hwang, S Yun
NAACL 2024, 2024
Cited by 1 · 2024
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
arXiv preprint arXiv:2404.10346, 2024
2024
Articles 1–16