Efficient ML Systems and LLM Infrastructure
Building scalable, practical, and trustworthy machine learning systems.
I am an Assistant Professor in the Department of Computer Science at Rutgers University. My research focuses on scalable and efficient machine learning algorithms and systems, with a current emphasis on large language models (LLMs).
Background
I was a Senior Project Scientist in the Machine Learning Department at CMU, working with Eric Xing. I obtained my PhD in Computer Science from UW-Madison, advised by Dimitris Papailiopoulos.
Current Focus
I study efficient training and serving of large-scale machine learning models, especially large language models under real system constraints.
Selected Publications
- SHED: Shapley-Based Automated Dataset Refinement, NeurIPS 2024
- Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild, NeurIPS 2024 Datasets and Benchmarks
- FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Architectures, NeurIPS 2024
- TrustLLM: Trustworthiness in Large Language Models, ICML 2024
For the complete list, see the Publications page.
News
- Our lab received an AMD University Program AI and HPC Cluster Allocation Award.
- Our work RedCoast won the NAACL 2024 Best Demo Paper Runner-Up.
- I received the Rising Star Award at CPAL 2024.
Research Group
- Daize Dong (PhD at Rutgers CS)
- Haolong Jia (PhD at Rutgers CS)
- Jiawei Wu (PhD at Rutgers CS)
- Junlin Chen (Undergraduate at Rutgers)
- Ryan Cheng (Undergraduate at Rutgers)
Teaching
- Spring 2026: CS 439, Intro to Data Science
- Fall 2025: CS 671, Recent Advances in Large Language Models
Service
Area Chair: MLSys 2025, CPAL 2026
PC Member: DAC 2024, EuroSys 2024, SOSP 2023 (light PC), MLSys 2023-2026, SIGKDD 2022, AAAI 2021-2022
Reviewer (Journals): JMLR, TMLR, IEEE TNNLS, IEEE IoT-J, IEEE/ACM Transactions on Networking
Reviewer (Conferences): COLM 2026, ICML 2019-2026 (top reviewer 2020), NeurIPS 2019-2025 (top reviewer 2019), CVPR 2021-2023, ICCV 2021-2022, ICLR 2021-2025, AAAI 2021-2024, SIGKDD 2022-2023
