Yizhi Li

💍📖💻🐶⛰️🤿.


I am currently a Computer Science PhD student funded by the University of Manchester, supervised by Prof. Chenghua Lin. I am also a co-founder of the Multimodal Art Projection (M-A-P) research community, which aims to push open-source, academia-driven research to a cutting-edge level comparable with industry. I have collaborated with Dr. Jie Fu and had a lot of fun.


Research

My current research focuses mainly on the post-training of LLMs with reinforcement learning. Specifically, now that we have entered an era where LLMs are trained with one (or a similar) recipe by setting customized goals, how can we train a truly generalized model under such a paradigm?

Other research questions I work on involve the post-training of LLMs and multi-modal alignment:

  • How to build an effective and robust self-evolving framework for LLMs with data synthesis (mainly during post-training)?
  • How to unify understanding and generation in vision-language models?
  • The paradigm of aligning models across the text, vision, and audio modalities.

Before the LLM era, my research interests could be summarized as the following topics: language model evaluation, information retrieval, fairness in NLP, music modelling, and general natural language modelling. More recent and detailed topics can be found on my publications page.


Past Experience

Academic Service: reviewer at ACL, EACL, EMNLP, INLG, ISMIR, ICLR, ICASSP, NeurIPS.

news

Dec 22, 2024 The workshop Open Science for Foundation Models (SCI-FM) is accepted at ICLR 2025, Singapore!
Sep 23, 2024 We release OmniBench, a tri-modal (text, image, and audio) benchmark.
Aug 26, 2024 We release the comprehensive review paper Foundation Models for Music: A Survey.
May 30, 2024 Four papers are accepted at ACL’24.
May 29, 2024 We release the fully transparent pre-trained LLM MAP-Neo and its corpus Matrix.

latest posts