My Google Scholar page can be found here.
SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning
Y. He, Z. Wang, Z. Shen, G. Sun, Y. Dai, Y. Wu, H. Wang, A. Li, NeurIPS 2024 [arXiv]
Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild
X. Zhao, G. Sun, R. Cai, Y. Zhou, P. Li, P. Wang, B. Tan, Y. He, L. Chen, Y. Liang, B. Chen, B. Yuan, H. Wang, A. Li, Z. Wang, T. Chen, NeurIPS 2024 Datasets and Benchmarks [link]
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
Z. Wang, Z. Shen, Y. He, G. Sun, H. Wang, L. Lyu, A. Li, NeurIPS 2024 [arXiv]
TrustLLM: Trustworthiness in Large Language Models
H. Wang with many colleagues (Position Paper), ICML 2024 [link] [arXiv]
Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
S. Horváth, S. Laskaridis, S. Rajput, H. Wang, ICML 2024 [link] [arXiv]
LLM360: Towards Fully Transparent Open-Source LLMs
Z. Liu, A. Qiao, W. Neiswanger, H. Wang, B. Tan, T. Tao, J. Li, Y. Wang, S. Sun, O. Pangarkar, R. Fan, Y. Gu, V. Miller, Y. Zhuang, G. He, H. Li, F. Koto, L. Tang, N. Ranjan, Z. Shen, R. Iriondo, C. Mu, Z. Hu, M. Schulze, P. Nakov, T. Baldwin, E. P. Xing, COLM 2024 [arXiv]
Crystal: Illuminating LLM Abilities on Language and Code
T. Tao, J. Li, B. Tan, H. Wang, W. Marshall, B. M. Kanakiya, J. Hestness, N. Vassilieva, Z. Shen, E. P. Xing, Z. Liu, COLM 2024 [arXiv]
RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
B. Tan, Y. Zhu, L. Liu, H. Wang, Y. Zhuang, J. Chen, E. P. Xing, Z. Hu, NAACL Demo 2024 ($\color{red}{\text{Best Demo Runner-Up}}$) [link] [arXiv]
Does compressing activations help model parallel training?
S. Bian, D. Li, H. Wang, E. P. Xing, S. Venkataraman, MLSys 2024 [arXiv]
Fusing Models with Complementary Expertise
H. Wang, F. M. Polo, Y. Sun, S. Kundu, E. P. Xing, M. Yurochkin, ICLR 2024 [link] [arXiv]
FedNAR: Federated Optimization with Normalized Annealing Regularization
J. Li, A. Li, C. Tian, Q. Ho, E. Xing, H. Wang, NeurIPS 2023 [link] [arXiv]
Cuttlefish: Low-rank Model Training without All The Tuning
H. Wang, S. Agarwal, P. U-chupala, Y. Tanaka, E. P. Xing, D. Papailiopoulos, MLSys 2023 [link] [arXiv]
MPCFormer: fast, performant and private Transformer inference with MPC
D. Li*, R. Shao*, H. Wang*, H. Guo, E. P. Xing, H. Zhang, ICLR 2023, ($\color{red}{\text{Spotlight}}$) [link]
Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach
H. Guo, P. Greengard, H. Wang, A. Gelman, E. P. Xing, Y. Kim, ICLR 2023 [link]
Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation
K. Zhang, Y. Wang, H. Wang, L. Huang, C. Yang, X. Chen, L. Sun, Findings of EMNLP 2022
Rare Gems: Finding Lottery Tickets at Initialization
K. Sreenivasan, J. Sohn, L. Yang, M. Grinde, A. Nagle, H. Wang, E. P. Xing, K. Lee, D. Papailiopoulos, NeurIPS 2022 [arXiv]
AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness
D. Li, H. Wang, E. P. Xing, H. Zhang, NeurIPS 2022 [arXiv]
On the Utility of Gradient Compression in Distributed Training Systems
S. Agarwal, H. Wang, S. Venkataraman, D. Papailiopoulos, MLSys 2022 [link] [arXiv]
Pufferfish: Communication-efficient Models At No Extra Cost
H. Wang, S. Agarwal, D. Papailiopoulos, MLSys 2021 [arXiv] [link] [talk]
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
S. Agarwal, H. Wang, K. Lee, S. Venkataraman, D. Papailiopoulos, MLSys 2021 [arXiv] [link] [talk]
FedML: A Research Library and Benchmark for Federated Machine Learning
C. He, S. Li, J. So, M. Zhang, H. Wang, X. Wang, P. Vepakomma, A. Singh, H. Qiu, L. Shen, P. Zhao, Y. Kang, Y. Liu, R. Raskar, Q. Yang, M. Annavaram, S. Avestimehr, NeurIPS 2020 SpicyFL workshop, ($\color{red}{\text{Baidu Best Paper Award}}$) [arXiv]
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Sohn, K. Lee, D. Papailiopoulos, NeurIPS 2020, [link]
Federated Learning with Matched Averaging
H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, Y. Khazaeni, ICLR 2020, ($\color{red}{\text{Oral}}$) [link] [blog] [talk]
DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation
S. Rajput*, H. Wang*, Z. Charles, D. Papailiopoulos, NeurIPS 2019, [link]
Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace
L. Chen, H. Wang, L. Chen, P. Koutris, A. Kumar, ACM SIGMOD 2019 demo track, [link]
ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding
H. Wang, Z. Charles, D. Papailiopoulos [arXiv]
The Effect of Network Width on the Performance of Large-batch Training
L. Chen, H. Wang, J. Zhao, D. Papailiopoulos, P. Koutris, NeurIPS 2018, [link]
ATOMO: Communication-efficient Learning via Atomic Sparsification
H. Wang*, S. Sievert*, Z. Charles, S. Wright, D. Papailiopoulos, NeurIPS 2018, [link]
DRACO: Robust Distributed Training via Redundant Gradients
L. Chen, H. Wang, Z. Charles, D. Papailiopoulos, ICML 2018, [link]
Draco: Robust Distributed Training against Adversaries
L. Chen, H. Wang, D. Papailiopoulos, SysML 2018, [link]
Recognizing Actions during Tactile Manipulations through Force Sensing
G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn, M. Gleicher, IROS 2017, [link]