Posts by Collection

publications

Recognizing Actions during Tactile Manipulations through Force Sensing

G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn, M. Gleicher, IROS 2017, [link]

Draco: Robust Distributed Training against Adversaries

L. Chen, H. Wang, D. Papailiopoulos, SysML 2018, [link]

DRACO: Robust Distributed Training via Redundant Gradients

L. Chen, H. Wang, Z. Charles, D. Papailiopoulos, ICML 2018, [link]

ATOMO: Communication-efficient Learning via Atomic Sparsification

H. Wang*, S. Sievert*, Z. Charles, S. Wright, D. Papailiopoulos, NeurIPS 2018, [link]
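
ATOMO's core idea is unbiased gradient sparsification over an atomic decomposition. Below is a minimal sketch of that idea for the simplest case, where the atoms are individual coordinates: each entry is kept with some probability and rescaled so the compressed gradient is unbiased in expectation. The keep-probability rule (proportional to magnitude, capped at 1) is an illustrative simplification rather than the paper's variance-optimal allocation, and the function and parameter names are invented here.

```python
import numpy as np

def sparsify_unbiased(grad, budget, rng=None):
    """Randomly sparsify a gradient while keeping it unbiased in expectation.

    Each coordinate i is kept with probability p_i (proportional to its
    magnitude, capped at 1) and rescaled by 1/p_i, so E[output] == grad.
    `budget` is the expected number of nonzeros to keep.
    """
    rng = rng or np.random.default_rng()
    mags = np.abs(grad)
    total = mags.sum()
    if total == 0:
        return np.zeros_like(grad)
    # Keep-probabilities proportional to magnitude, capped at 1.
    probs = np.minimum(budget * mags / total, 1.0)
    keep = rng.random(grad.shape) < probs
    out = np.zeros_like(grad)
    out[keep] = grad[keep] / probs[keep]
    return out

# Example: compress a gradient to ~10 expected nonzeros out of 1000.
g = np.random.default_rng(0).standard_normal(1000)
g_sparse = sparsify_unbiased(g, budget=10)
print((g_sparse != 0).sum())  # roughly 10 nonzeros
```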

The Effect of Network Width on the Performance of Large-batch Training

L. Chen, H. Wang, J. Zhao, D. Papailiopoulos, P. Koutris, NeurIPS 2018, [link]

ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding

H. Wang, Z. Charles, D. Papailiopoulos, [arXiv]

Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace

L. Chen, H. Wang, L. Chen, P. Koutris, A. Kumar, ACM SIGMOD 2019 demo track, [link]

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation

S. Rajput*, H. Wang*, Z. Charles, D. Papailiopoulos, NeurIPS 2019, [link]
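
DRACO and DETOX (above) both tolerate adversarial (Byzantine) workers by computing each gradient redundantly and filtering before aggregation. The sketch below shows only the shared intuition, a per-group majority vote over replicated data shards, assuming exact gradient replication within a group; it is not either paper's actual coding or filtering scheme, and all names are hypothetical.

```python
import numpy as np

def majority_vote_aggregate(group_grads):
    """Aggregate redundantly computed gradients with a per-group majority vote.

    `group_grads` is a list of groups; each group is a list of gradient
    vectors that *should* be identical, because every worker in the group
    was assigned the same data shard. A group tolerates its adversarial
    workers as long as the honest copies form a strict majority.
    """
    voted = []
    for grads in group_grads:
        # Count exact duplicates and keep the most common gradient.
        counts = {}
        for g in grads:
            key = g.tobytes()
            counts[key] = (counts.get(key, (0, g))[0] + 1, g)
        winner = max(counts.values(), key=lambda cg: cg[0])[1]
        voted.append(winner)
    # Average the per-group winners, as in plain distributed SGD.
    return np.mean(voted, axis=0)

# Example: 3 groups of 3 workers; one worker in group 0 is adversarial.
honest = [np.full(4, float(i)) for i in range(3)]
groups = [[honest[0], honest[0], np.full(4, 99.0)],  # one bad copy, outvoted
          [honest[1]] * 3,
          [honest[2]] * 3]
print(majority_vote_aggregate(groups))  # mean of honest gradients -> [1. 1. 1. 1.]
```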

Federated Learning with Matched Averaging

H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, Y. Khazaeni, ICLR 2020, ($\color{red}{\text{Oral}}$) [link] [blog] [talk]
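
FedMA's contribution is to match hidden units across clients (by solving a permutation/assignment problem) before averaging, rather than averaging coordinate-wise. As a point of reference, the sketch below implements only the plain weighted federated averaging baseline that FedMA improves on; the matching step is deliberately omitted, and the function and argument names are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Plain federated averaging: a data-size-weighted mean of client models.

    `client_weights` is a list of per-client parameter vectors. FedMA would
    first permute hidden units so that functionally similar neurons line up
    across clients; here we simply average coordinate-wise as-is.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    stacked = np.stack(client_weights)  # (num_clients, num_params)
    return coeffs @ stacked             # weighted average

# Example: three clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(fedavg(clients, client_sizes=[10, 10, 20]))  # -> [3.5 4.5]
```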

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Sohn, K. Lee, D. Papailiopoulos, NeurIPS 2020, [link]

FedML: A Research Library and Benchmark for Federated Machine Learning

C. He, S. Li, J. So, M. Zhang, H. Wang, X. Wang, P. Vepakomma, A. Singh, H. Qiu, L. Shen, P. Zhao, Y. Kang, Y. Liu, R. Raskar, Q. Yang, M. Annavaram, S. Avestimehr, NeurIPS 2020 SpicyFL workshop, ($\color{red}{\text{Baidu Best Paper Award}}$) [arXiv]

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification

S. Agarwal, H. Wang, K. Lee, S. Venkataraman, D. Papailiopoulos, MLSys 2021, [arXiv] [link] [talk]

Pufferfish: Communication-efficient Models At No Extra Cost

H. Wang, S. Agarwal, D. Papailiopoulos, MLSys 2021, [arXiv] [link] [talk]

On the Utility of Gradient Compression in Distributed Training Systems

S. Agarwal, H. Wang, S. Venkataraman, D. Papailiopoulos, MLSys 2022, [link] [arXiv]

AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness

D. Li, H. Wang, E. P. Xing, H. Zhang, NeurIPS 2022, [arXiv]

Rare Gems: Finding Lottery Tickets at Initialization

K. Sreenivasan, J. Sohn, L. Yang, M. Grinde, A. Nagle, H. Wang, E. P. Xing, K. Lee, D. Papailiopoulos, NeurIPS 2022, [arXiv]

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation

K. Zhang, Y. Wang, H. Wang, L. Huang, C. Yang, X. Chen, L. Sun, Findings of EMNLP 2022

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach

H. Guo, P. Greengard, H. Wang, A. Gelman, E. P. Xing, Y. Kim, ICLR 2023, [link]

MPCFormer: fast, performant and private Transformer inference with MPC

D. Li*, R. Shao*, H. Wang*, H. Guo, E. P. Xing, H. Zhang, ICLR 2023, ($\color{red}{\text{Spotlight}}$) [link]

Cuttlefish: Low-rank Model Training without All The Tuning

H. Wang, S. Agarwal, P. U-chupala, Y. Tanaka, E. P. Xing, D. Papailiopoulos, MLSys 2023, [link] [arXiv]
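
Pufferfish and Cuttlefish (above) both train models whose dense layers are replaced by low-rank factors, with Cuttlefish additionally automating rank selection. The sketch below shows just the basic factorization step via a plain truncated SVD, a standard technique rather than either paper's full recipe; which layers to factorize, when during training, and how to pick the rank are design choices the papers handle and this sketch omits.

```python
import numpy as np

def factorize_low_rank(W, rank):
    """Replace a dense weight matrix W (m x n) with factors U (m x r), V (r x n).

    Uses a truncated SVD, so U @ V is the best rank-r approximation of W.
    Training then updates U and V directly, cutting parameter count from
    m*n down to r*(m + n) when r is small.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]  # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

# Example: a 512x512 layer at rank 32 keeps ~12.5% of the parameters.
W = np.random.default_rng(0).standard_normal((512, 512))
U_r, V_r = factorize_low_rank(W, rank=32)
compression = (U_r.size + V_r.size) / W.size
print(f"{compression:.3f}")  # -> 0.125
```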

FedNAR: Federated Optimization with Normalized Annealing Regularization

J. Li, A. Li, C. Tian, Q. Ho, E. P. Xing, H. Wang, NeurIPS 2023, [link] [arXiv]

Fusing Models with Complementary Expertise

H. Wang, F. M. Polo, Y. Sun, S. Kundu, E. P. Xing, M. Yurochkin, ICLR 2024, [link] [arXiv]

Does compressing activations help model parallel training?

S. Bian, D. Li, H. Wang, E. P. Xing, S. Venkataraman, MLSys 2024, [arXiv]

RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs

B. Tan, Y. Zhu, L. Liu, H. Wang, Y. Zhuang, J. Chen, E. P. Xing, Z. Hu, NAACL Demo 2024, [link] [arXiv]

talks

Draco: Robust Distributed Training against Adversaries

DRACO: Robust Distributed Training via Redundant Gradients

DRACO: Byzantine-resilient Distributed Training via Redundant Gradients
