Xisen Jin
Title · Cited by · Year
Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures
W Lei, X Jin, MY Kan, Z Ren, X He, D Yin
ACL 2018, 1437-1447, 2018
Cited by 396 · 2018
Recurrent event network: Autoregressive structure inference over temporal knowledge graphs
W Jin, M Qu, X Jin, X Ren
arXiv preprint arXiv:1904.05530, 2019
Cited by 369 · 2019
Contextualizing hate speech classifiers with post-hoc explanation
B Kennedy, X Jin, AM Davani, M Dehghani, X Ren
ACL 2020, 2020
Cited by 164 · 2020
Gradient-based editing of memory examples for online task-free continual learning
X Jin, A Sadhu, J Du, X Ren
NeurIPS 2021, 2021
Cited by 136* · 2021
Dataless knowledge fusion by merging weights of language models
X Jin, X Ren, D Preotiuc-Pietro, P Cheng
ICLR 2023, 2022
Cited by 129 · 2022
Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models
X Jin, Z Wei, J Du, X Xue, X Ren
ICLR 2020, 2019
Cited by 125 · 2019
Lifelong pretraining: Continually adapting language models to emerging corpora
X Jin, D Zhang, H Zhu, W Xiao, SW Li, X Wei, A Arnold, X Ren
NAACL 2022, 2021
Cited by 119 · 2021
On transferability of bias mitigation effects in language model fine-tuning
X Jin, F Barbieri, B Kennedy, AM Davani, L Neves, X Ren
NAACL 2021, 2020
Cited by 78* · 2020
Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation
X Jin, W Lei, Z Ren, H Chen, S Liang, Y Zhao, D Yin
CIKM 2018, 1403-1412, 2018
Cited by 58* · 2018
Refining language models with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
NeurIPS 2021, 8954-8967, 2021
Cited by 44* · 2021
Learn continually, generalize rapidly: Lifelong knowledge accumulation for few-shot learning
X Jin, BY Lin, M Rostami, X Ren
EMNLP 2021 Findings, 2021
Cited by 43 · 2021
Visually grounded continual learning of compositional phrases
X Jin, J Du, A Sadhu, R Nevatia, X Ren
EMNLP 2020, 2020
Cited by 20* · 2020
Overcoming catastrophic forgetting in massively multilingual continual learning
GI Winata, L Xie, K Radhakrishnan, S Wu, X Jin, P Cheng, M Kulkarni, ...
ACL 2023 Findings, 2023
Cited by 19 · 2023
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
X Jin, X Ren
ICML 2024 Spotlight, 2024
Cited by 4 · 2024
Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations
X Jin, X Ren
arXiv preprint arXiv:2406.14026, 2024
Cited by 1 · 2024
Demystifying Language Model Forgetting with Low-Rank Example Associations
X Jin, X Ren
NeurIPS 2024 Workshop on Scalable Continual Learning for Lifelong Foundation …
Articles 1–16