💍📖💻🐶⛰️🤿.

Photo: with Jolly at the Alan Turing Memorial, Manchester

I am currently a Computer Science PhD candidate funded by the University of Manchester, supervised by Prof. Chenghua Lin. I am also a co-founder of the Multimodal Art Projection (M-A-P) research community, which aims to push open-source, academia-driven research to the same cutting-edge level as industry. I’ve collaborated with Dr. Jie Fu and had a lot of fun.


Research

My current research focuses mainly on the post-training of LLMs with reinforcement learning. Specifically, now that we have stepped into a new era in which LLMs are trained with one shared (or similar) recipe by setting customized goals, how can we train a truly generalized model under such a paradigm?

Other research questions I work on involve the post-training of LLMs and multi-modal alignment:

  • How can we build an effective and robust self-evolving framework for LLMs with data synthesis (mainly during post-training)?
  • How can we unify the understanding and generation capabilities of vision-language models?
  • What is the right paradigm for aligning models across the text, vision, and audio modalities?

Before the LLM era, my research interests could be summarized under these topics: language model evaluation, information retrieval, fairness in NLP, music modelling, and natural language modelling in general. More recent and detailed topics can be found on my publications page.


Past Experience

Academic Service: reviewer for ACL, EACL, EMNLP, INLG, ISMIR, ICLR, ICASSP, NeurIPS.

News

Aug 07, 2025 Delivered a tutorial on domain-specific LLMs at NLPCC 2025 and presented an overview of the shared task we organized on gender bias mitigation.
May 15, 2025 Two papers accepted at ACL’25.
Sep 23, 2024 Released OmniBench, a tri-modal benchmark spanning text, image, and audio.
Aug 26, 2024 Released the comprehensive review paper Foundation Models for Music: A Survey.
May 30, 2024 Four papers accepted at ACL’24.