[Journal]
  • 《》 2023, Vol. 71, Issue 6

Abstract: Distributed implementations are crucial in speeding up large-scale machine learning applications. Distributed gradient descent (GD) is widely employed to parallelize the learning task by distributing the dataset across multiple workers...
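The abstract describes the standard setup in which each worker holds a shard of the dataset, computes a partial gradient on its shard, and a central server aggregates the partial gradients into a full GD step. Below is a minimal sketch of that scheme for least-squares regression; it is not the paper's method, and the worker loop, function name `distributed_gd`, and all parameters are illustrative assumptions, with workers simulated in a single process.

```python
import numpy as np

def distributed_gd(X, y, num_workers=4, lr=0.1, steps=100):
    """Synchronous distributed GD sketch: workers are simulated by
    iterating over row-wise shards of the dataset."""
    n, d = X.shape
    w = np.zeros(d)
    # Partition the dataset row-wise across the (simulated) workers.
    shards = np.array_split(np.arange(n), num_workers)
    for _ in range(steps):
        partial_grads = []
        for idx in shards:
            Xi, yi = X[idx], y[idx]
            # Each worker computes the gradient of its local squared loss.
            partial_grads.append(Xi.T @ (Xi @ w - yi))
        # The server sums the partial gradients, which equals the full
        # gradient over the whole dataset, and takes one GD step.
        grad = sum(partial_grads) / n
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    w_est = distributed_gd(X, y)
    print("recovery error:", np.linalg.norm(w_est - w_true))
```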
