w/ Jolly at the Alan Turing Memorial, Manchester
I am a final-year PhD candidate funded by the University of Manchester, supervised by Prof. Chenghua Lin. I am currently an intern at IQuest Research, UbiQuant, building a coding LLM and agent from scratch. I initiated the Multimodal Art Projection (M-A-P) research community, which aims to drive open-source, academia-led research to the same cutting-edge level as industry. I have collaborated with Dr. Jie Fu and had a lot of fun.
I am now on the job market for research scientist/engineer positions!
Research
My current research focus involves:
- Building generalizable and scalable agents (IQuest-Coder-V1).
- Reinforcement learning for LLMs (TreePO).
- Latent & looped models.
Before that, my research interests included:
- Synthetic data for vision-language models (MAmmoTH-VL).
- LM evaluation (Encyclo-K, SuperGPQA).
- Music modelling: built a universal music understanding model (MERT) and a music generation model (YuE).
More recent and detailed topics can be found on my publications page.
Past Experience
- Interned at J.P. Morgan Artificial Intelligence Research in summer 2024.
- Previously worked as a research assistant at the Tsinghua NLP Lab with Prof. Zhiyuan Liu (2021–2022).
Academic Service: workshop organizer for Open Science for Foundation Models (ICLR’25) and LLM4MA (ISMIR’25); reviewer for ACL, EMNLP, CVPR, ICLR, NeurIPS, and more.
news
| Jan 01, 2026 | Released IQuest-Coder-V1, a family of strong open-source (looped) code LLMs. |
|---|---|
| Nov 01, 2025 | Two papers accepted at NeurIPS’25. |
| Aug 07, 2025 | Delivered a tutorial on domain-specific LLMs at NLPCC 2025 and presented an overview of the shared task we organized on gender bias mitigation. |
| May 15, 2025 | Two papers accepted at ACL’25. |
| Sep 23, 2024 | Released OmniBench, a tri-modal (text, image, and audio) benchmark. |