Hanlin Tang
Title · Cited by · Year
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
International Conference on Machine Learning, 4848-4856, 2018
121 · 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1803.06443, 2018
120 · 2018
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
65 · 2019
Central server free federated learning over single-sided trust social networks
C He, C Tan, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1910.04956, 2019
20 · 2019
Distributed learning over unreliable networks
C Yu, H Tang, C Renggli, S Kassing, A Singla, D Alistarh, C Zhang, J Liu
International Conference on Machine Learning, 7202-7212, 2019
20 · 2019
DeepSqueeze: Decentralization Meets Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
12 · 2019
Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend
Y Zhao, C Yu, P Zhao, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1901.10593, 2019
10 · 2019
1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
arXiv preprint arXiv:2102.02888, 2021
2021
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
H Tang, S Gan, S Rajbhandari, X Lian, J Liu, Y He, C Zhang
arXiv preprint arXiv:2008.11343, 2020
2020
Systems/Subsystems
S Rajbhandari, AVN Jalajakumari, H Chun, G Faulkner, K Cameron, ...