Publications

My Google Scholar page can be found here.

2024

RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs

B. Tan, Y. Zhu, L. Liu, H. Wang, Y. Zhuang, J. Chen, E. P. Xing, Z. Hu, NAACL Demo 2024 [link] [arXiv]

Does compressing activations help model parallel training?

S. Bian, D. Li, H. Wang, E. P. Xing, S. Venkataraman, MLSys 2024 [arXiv]

Fusing Models with Complementary Expertise

H. Wang, F. M. Polo, Y. Sun, S. Kundu, E. P. Xing, M. Yurochkin, ICLR 2024 [link] [arXiv]

2023

FedNAR: Federated Optimization with Normalized Annealing Regularization

J. Li, A. Li, C. Tian, Q. Ho, E. P. Xing, H. Wang, NeurIPS 2023 [link] [arXiv]

Cuttlefish: Low-rank Model Training without All the Tuning

H. Wang, S. Agarwal, P. U-chupala, Y. Tanaka, E. P. Xing, D. Papailiopoulos, MLSys 2023 [link] [arXiv]

MPCFormer: fast, performant and private Transformer inference with MPC

D. Li*, R. Shao*, H. Wang*, H. Guo, E. P. Xing, H. Zhang, ICLR 2023, ($\color{red}{\text{Spotlight}}$) [link]

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach

H. Guo, P. Greengard, H. Wang, A. Gelman, E. P. Xing, Y. Kim, ICLR 2023 [link]

2022

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation

K. Zhang, Y. Wang, H. Wang, L. Huang, C. Yang, X. Chen, L. Sun, Findings of EMNLP 2022

Rare Gems: Finding Lottery Tickets at Initialization

K. Sreenivasan, J. Sohn, L. Yang, M. Grinde, A. Nagle, H. Wang, E. P. Xing, K. Lee, D. Papailiopoulos, NeurIPS 2022 [arXiv]

AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness

D. Li, H. Wang, E. P. Xing, H. Zhang, NeurIPS 2022 [arXiv]

On the Utility of Gradient Compression in Distributed Training Systems

S. Agarwal, H. Wang, S. Venkataraman, D. Papailiopoulos, MLSys 2022 [link] [arXiv]

2021

Pufferfish: Communication-efficient Models At No Extra Cost

H. Wang, S. Agarwal, D. Papailiopoulos, MLSys 2021 [arXiv] [link] [talk]

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification

S. Agarwal, H. Wang, K. Lee, S. Venkataraman, D. Papailiopoulos, MLSys 2021 [arXiv] [link] [talk]

2020

FedML: A Research Library and Benchmark for Federated Machine Learning

C. He, S. Li, J. So, M. Zhang, H. Wang, X. Wang, P. Vepakomma, A. Singh, H. Qiu, L. Shen, P. Zhao, Y. Kang, Y. Liu, R. Raskar, Q. Yang, M. Annavaram, S. Avestimehr, NeurIPS 2020 SpicyFL Workshop, ($\color{red}{\text{Baidu Best Paper Award}}$) [arXiv]

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Sohn, K. Lee, D. Papailiopoulos, NeurIPS 2020 [link]

Federated Learning with Matched Averaging

H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, Y. Khazaeni, ICLR 2020, ($\color{red}{\text{Oral}}$) [link] [blog] [talk]

2019

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation

S. Rajput*, H. Wang*, Z. Charles, D. Papailiopoulos, NeurIPS 2019 [link]

Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace

L. Chen, H. Wang, L. Chen, P. Koutris, A. Kumar, ACM SIGMOD 2019 demo track [link]

ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding

H. Wang, Z. Charles, D. Papailiopoulos, arXiv preprint [arXiv]

2018

The Effect of Network Width on the Performance of Large-batch Training

L. Chen, H. Wang, J. Zhao, D. Papailiopoulos, P. Koutris, NeurIPS 2018 [link]

ATOMO: Communication-efficient Learning via Atomic Sparsification

H. Wang*, S. Sievert*, Z. Charles, S. Wright, D. Papailiopoulos, NeurIPS 2018 [link]

DRACO: Robust Distributed Training via Redundant Gradients

L. Chen, H. Wang, Z. Charles, D. Papailiopoulos, ICML 2018 [link]

Draco: Robust Distributed Training against Adversaries

L. Chen, H. Wang, D. Papailiopoulos, SysML 2018 [link]

2017

Recognizing Actions during Tactile Manipulations through Force Sensing

G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn, M. Gleicher, IROS 2017 [link]