Sitemap

A list of all the posts and pages found on the site. For robots, there is also an XML version available for digesting.

Pages

Page Not Found

Page not found. Your pixels are in another canvas.

About me

Archive Layout with Content

Posts by Category

Posts by Collection

CV

Markdown

Page not in menu

This is a page not in the main menu.

Page Archive

Portfolio

Publications

Sitemap

Posts by Tags

Talk map

Talks and presentations

Teaching

Terms and Privacy Policy

Blog posts

Jupyter notebook markdown generator

Posts

Blog Post number 4

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Portfolio item number 1

Short description of portfolio item number 1

Portfolio item number 2

Short description of portfolio item number 2

Publications

Recognizing Actions during Tactile Manipulations through Force Sensing

G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn, M. Gleicher, IROS 2017, [link]

Draco: Robust Distributed Training against Adversaries

L. Chen, H. Wang, D. Papailiopoulos, SysML 2018, [link]

DRACO: Robust Distributed Training via Redundant Gradients

L. Chen, H. Wang, Z. Charles, D. Papailiopoulos, ICML 2018, [link]

ATOMO: Communication-efficient Learning via Atomic Sparsification

H. Wang*, S. Sievert*, Z. Charles, S. Wright, D. Papailiopoulos, NeurIPS 2018, [link]

The Effect of Network Width on the Performance of Large-batch Training

L. Chen, H. Wang, J. Zhao, D. Papailiopoulos, P. Koutris, NeurIPS 2018, [link]

ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding

H. Wang, Z. Charles, D. Papailiopoulos [arXiv]

Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace

L. Chen, H. Wang, L. Chen, P. Koutris, A. Kumar, ACM SIGMOD 2019 demo track, [link]

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation

S. Rajput*, H. Wang*, Z. Charles, D. Papailiopoulos, NeurIPS 2019, [link]

Federated Learning with Matched Averaging

H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, Y. Khazaeni, ICLR 2020, ($\color{red}{\text{Oral}}$) [link][blog][talk]

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Sohn, K. Lee, D. Papailiopoulos, NeurIPS 2020, [link]

FedML: A Research Library and Benchmark for Federated Machine Learning

C. He, S. Li, J. So, M. Zhang, H. Wang, X. Wang, P. Vepakomma, A. Singh, H. Qiu, L. Shen, P. Zhao, Y. Kang, Y. Liu, R. Raskar, Q. Yang, M. Annavaram, S. Avestimehr, NeurIPS 2020 SpicyFL workshop, ($\color{red}{\text{the Baidu Best Paper Award}}$) [arXiv]

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification

S. Agarwal, H. Wang, K. Lee, S. Venkataraman, D. Papailiopoulos, MLSys 2021, [arXiv] [link] [talk]

Pufferfish: Communication-efficient Models At No Extra Cost

H. Wang, S. Agarwal, D. Papailiopoulos, MLSys 2021 [arXiv] [link] [talk]

On the Utility of Gradient Compression in Distributed Training Systems

S. Agarwal, H. Wang, S. Venkataraman, D. Papailiopoulos, MLSys 2022 [link] [arXiv]

AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness

D. Li, H. Wang, E. P. Xing, H. Zhang, NeurIPS 2022 [arXiv]

Rare Gems: Finding Lottery Tickets at Initialization

K. Sreenivasan, J. Sohn, L. Yang, M. Grinde, A. Nagle, H. Wang, E. P. Xing, K. Lee, D. Papailiopoulos, NeurIPS 2022 [arXiv]

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation

K. Zhang, Y. Wang, H. Wang, L. Huang, C. Yang, X. Chen, L. Sun, Findings of EMNLP 2022

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach

H. Guo, P. Greengard, H. Wang, A. Gelman, E. P. Xing, Y. Kim, ICLR 2023 [link]

MPCFormer: fast, performant and private Transformer inference with MPC

D. Li*, R. Shao*, H. Wang*, H. Guo, E. P. Xing, H. Zhang, ICLR 2023, ($\color{red}{\text{Spotlight}}$) [link]

Cuttlefish: Low-rank Model Training without All The Tuning

H. Wang, S. Agarwal, P. U-chupala, Y. Tanaka, E. P. Xing, D. Papailiopoulos, MLSys 2023 [link] [arXiv]

FedNAR: Federated Optimization with Normalized Annealing Regularization

J. Li, A. Li, C. Tian, Q. Ho, E. Xing, H. Wang, NeurIPS 2023 [link] [arXiv]

Fusing Models with Complementary Expertise

H. Wang, F. M. Polo, Y. Sun, S. Kundu, E. P. Xing, M. Yurochkin, ICLR 2024 [link] [arXiv]

Does compressing activations help model parallel training?

S. Bian, D. Li, H. Wang, E. P. Xing, S. Venkataraman, MLSys 2024 [arXiv]

RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs

B. Tan, Y. Zhu, L. Liu, H. Wang, Y. Zhuang, J. Chen, E. P. Xing, Z. Hu, NAACL Demo 2024 ($\color{red}{\text{the Best Demo Runner Up}}$) [link] [arXiv]

Crystal: Illuminating LLM Abilities on Language and Code

T. Tao, J. Li, B. Tan, H. Wang, W. Marshall, B. M Kanakiya, J. Hestness, N. Vassilieva, Z. Shen, E. P. Xing, Z. Liu, COLM 2024 [arXiv]

LLM360: Towards Fully Transparent Open-Source LLMs

Z. Liu, A. Qiao, W. Neiswanger, H. Wang, B. Tan, T. Tao, J. Li, Y. Wang, S. Sun, O. Pangarkar, R. Fan, Y. Gu, V. Miller, Y. Zhuang, G. He, H. Li, F. Koto, L. Tang, N. Ranjan, Z. Shen, R. Iriondo, C. Mu, Z. Hu, M. Schulze, P. Nakov, T. Baldwin, E. P. Xing, COLM 2024 [arXiv]

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition

S. Horváth, S. Laskaridis, S. Rajput, H. Wang, ICML 2024 [link] [arXiv]

TrustLLM: Trustworthiness in Large Language Models

H. Wang with many colleagues (Position Paper), ICML 2024 [link] [arXiv]

FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations

Z. Wang, Z. Shen, Y. He, G. Sun, H. Wang, L. Lyu, A. Li, NeurIPS 2024 [arXiv]

Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild

X. Zhao, G. Sun, R. Cai, Y. Zhou, P. Li, P. Wang, B. Tan, Y. He, L. Chen, Y. Liang, B. Chen, B. Yuan, H. Wang, A. Li, Z. Wang, T. Chen, NeurIPS 2024 Datasets and Benchmarks [link]

SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning

Y. He, Z. Wang, Z. Shen, G. Sun, Y. Dai, Y. Wu, H. Wang, A. Li, NeurIPS 2024 [arXiv]

Talks

Teaching

Teaching experience 1

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

This is a description of a teaching experience. You can use markdown like any other post.