
Convergence of continuous-time stochastic gradient descent with applications to linear deep neural networks

2024-09-12
We study a continuous-time approximation of the stochastic gradient descent process for minimizing the expected loss in learning problems. The main results establish general sufficient conditions for convergence, extending the results of Chatterjee (2022) for (nonstochastic) gradient descent. We show how the main result applies to the training of overparametrized linear neural networks.
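As an informal illustration of the setting (not code from the paper), the sketch below trains an overparametrized two-layer linear network f(x) = W2 W1 x on a least-squares problem, comparing single-sample SGD with an Euler discretization of the gradient-flow dynamics that continuous-time analyses use to approximate SGD at small step sizes. All dimensions, step sizes, and the synthetic data are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 3, 5, 200                       # input dim, hidden width, sample count (arbitrary)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                            # noiseless linear targets

W1 = rng.normal(size=(h, d)) * 0.1        # SGD iterates
W2 = rng.normal(size=(1, h)) * 0.1
V1, V2 = W1.copy(), W2.copy()             # gradient-flow iterates, same initialization

eta, steps = 0.02, 8000                   # small step size, long horizon

def grads(A2, A1, xb, yb):
    """Gradients of the squared loss 0.5*(A2 A1 x - y)^2 averaged over a batch."""
    r = xb @ A1.T @ A2.T - yb[:, None]    # residuals, shape (batch, 1)
    g2 = r.T @ (xb @ A1.T) / len(xb)      # dL/dW2, shape (1, h)
    g1 = A2.T @ r.T @ xb / len(xb)        # dL/dW1, shape (h, d)
    return g2, g1

loss0 = 0.5 * np.mean((X @ W1.T @ W2.T - y[:, None]) ** 2)

for t in range(steps):
    i = rng.integers(n)                   # SGD: gradient on one random sample
    g2, g1 = grads(W2, W1, X[i:i+1], y[i:i+1])
    W2 -= eta * g2
    W1 -= eta * g1
    g2, g1 = grads(V2, V1, X, y)          # gradient flow: Euler step on the full loss
    V2 -= eta * g2
    V1 -= eta * g1

sgd_loss = 0.5 * np.mean((X @ W1.T @ W2.T - y[:, None]) ** 2)
flow_loss = 0.5 * np.mean((X @ V1.T @ V2.T - y[:, None]) ** 2)
print(loss0, sgd_loss, flow_loss)
```

On this interpolating (noiseless) problem both trajectories drive the loss toward zero, and for small eta the SGD path stays close to the deterministic flow, which is the kind of behavior the paper's convergence conditions are designed to capture.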