About Me
I am currently a Postdoctoral Researcher at the Department of Computer Science, University of California, Los Angeles (UCLA), hosted by Prof. Nanyun (Violet) Peng and Prof. Kai-Wei Chang. I received my Ph.D. from the University of Science and Technology of China (USTC) in 2022, under the supervision of Prof. Zhen-Hua Ling. I previously interned at Microsoft (2020-2021) and visited Queen’s University (2019-2020). I was fortunate to receive the Best Paper Honorable Mention Award at ACL 2023 and the Best Paper Award at DialDoc@ACL 2022.
My primary research interests lie in machine learning for natural language processing, particularly techniques that can help build Trustworthy and Efficient AI. Topics I am actively working on include:
- Retrieval-Augmented Language Models, which augment the capabilities of large language models (LLMs) and make it possible to scale down their size by retrieving from external knowledge indexes. Recent work: [BRIEF-Pro], [Self-Routing RAG], [CRAG], [TemMed-Bench], [MRAG-Bench, ICLR 2025], [Sparse-RAG, ICLR 2025], [BRIEF, Findings of NAACL 2025], [SyncCheck, EMNLP 2024], [TEGTOK, Findings of ACL 2022], [SPD, EMNLP 2021], [Partner Knowledge, SIGIR 2021], [FIRE, Findings of EMNLP 2020], [DIM, EMNLP 2019].
- Model Editing, which reduces the cost of upgrading LLMs and enables data-efficient alterations to their behavior while ensuring no adverse impact on unrelated inputs. Recent work: [SPHERE], [Reversal Editing], [KnowEdit], [UltraEdit], [EditAttack, AAAI 2026], [CaKE, EMNLP 2025], [PRUNE, ICLR 2025], [RECT, EMNLP 2024], [PEAK, ICML 2024].
- Representation Learning with Weak Supervision, in the context of dialogue systems. Recent work: [MADNet, EMNLP 2023], [GIFT, ACL 2023], [HeterMPC, ACL 2022], [MPC Survey, IJCAI 2022], [MPC-BERT, ACL 2021], [SA-BERT, CIKM 2020], [U2U-IMN, TASLP 2020], [IMN, CIKM 2019].
If you are interested in talking about research or life, please feel free to reach out.