Abstract: Distributed implementations are crucial in speeding up large scale machine learning applications. Distributed gradient descent (GD) is widely employed to parallelize the learning task by distributing the dataset across multiple wo...
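As a point of reference for the setup the abstract describes, below is a minimal sketch of distributed GD: a master partitions the dataset row-wise across workers, each worker computes a partial gradient on its shard, and the master aggregates the results into one update. The least-squares loss, the function names, and all parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def partial_gradient(X_shard, y_shard, w):
    """Gradient of 0.5 * ||X w - y||^2 on one worker's data shard."""
    return X_shard.T @ (X_shard @ w - y_shard)

def distributed_gd(X, y, num_workers=4, lr=1e-3, steps=500):
    # Master partitions the dataset row-wise across the workers.
    X_shards = np.array_split(X, num_workers)
    y_shards = np.array_split(y, num_workers)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Each worker computes its partial gradient (in parallel in a real
        # deployment; sequentially here for illustration).
        grads = [partial_gradient(Xs, ys, w)
                 for Xs, ys in zip(X_shards, y_shards)]
        # Master sums the partial gradients and takes a GD step.
        w -= lr * sum(grads)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))
    w_true = rng.normal(size=8)
    y = X @ w_true
    w_hat = distributed_gd(X, y)
    print(np.allclose(w_hat, w_true, atol=1e-2))  # True: recovers w_true
```

In this naive scheme the master must wait for every worker, so a single slow worker (straggler) delays each iteration; straggler-tolerant techniques such as gradient coding address exactly this bottleneck.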
| Authors | Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gündüz |
| --- | --- |
| Affiliation | |
| Pages / total pages | 3317–3332 / 16 |
| Language / CLC number | English / TN91 |
| Keywords | Encoding; Computational modeling; Redundancy; Servers; Codes; Machine learning; Task analysis |
| DOI | 10.1109/TCOMM.2022.3166902 |
| Accession number | IELEP0052 |