💍📖💻🐶⛰️🤿.

turing_bench.jpg

w/ Jolly at the Alan Turing Memorial, Manchester

I am a final-year PhD candidate funded by the University of Manchester, supervised by Prof. Chenghua Lin, and currently an intern at IQuest Research. I am a co-founder of the Multimodal Art Projection (M-A-P) research community, which aims to drive open-source, academia-led research to a level on par with industry. I’ve collaborated with Dr. Jie Fu and had a lot of fun.

I am on the job market now for research scientist/engineer positions!


Research

My current research focus involves:

  • Now that we have entered an era where LLMs are trained with one broadly shared recipe, customized mainly by goal-setting, how do we train a truly generalized model under such a paradigm?
  • What is the role of synthetic data and data engineering across the training life cycle?
  • Unified representation as a path to world modelling.

Before that, my research interests can be summarized as: vision-language models, language model evaluation, information retrieval, fairness in NLP, music modelling, and natural language modelling in general. More recent and detailed topics can be found on my publications page.


Past Experience

Academic Service: workshop organizer at Open Science for Foundation Models, ICLR’25 and LLM4MA, ISMIR’25; reviewer at ACL, EMNLP, CVPR, ICLR, NeurIPS and more.

news

Jan 01, 2026 Released IQuest-Coder-V1, a family of strong open-source (looped) code LLMs.
Nov 01, 2025 Two papers accepted at NeurIPS’25.
Aug 07, 2025 Delivered a tutorial on domain-specific LLMs at NLPCC 2025 and presented an overview of the organized shared task on gender bias mitigation.
May 15, 2025 Two papers accepted at ACL’25.
Sep 23, 2024 Released OmniBench, a tri-modal (text, image, and audio) benchmark.