Fast convergence for stochastic and distributed gradient descent in the interpolation limit

Mitra, P. P. (November 2018) Fast convergence for stochastic and distributed gradient descent in the interpolation limit. European Signal Processing Conference (EUSIPCO), pp. 1890-1894. ISSN 2219-5491; ISBN 9789082797015

URL: https://www.scopus.com/inward/record.uri?eid=2-s2....
DOI: 10.23919/EUSIPCO.2018.8553369

Abstract

Modern supervised learning techniques, particularly those using deep nets, involve fitting high-dimensional labelled data sets with functions containing very large numbers of parameters. Much of this work is empirical. Interesting phenomena have been observed that require theoretical explanations; however, the non-convexity of the loss functions complicates the analysis. Recently it has been proposed that the success of these techniques rests partly on the effectiveness of the simple stochastic gradient descent (SGD) algorithm in the so-called interpolation limit, in which all labels are fit perfectly. This analysis is made possible because the SGD algorithm reduces to a stochastic linear system near the interpolating minimum of the loss function. Here we exploit this insight by presenting and analyzing a new distributed algorithm for gradient descent, also in the interpolating limit. The distributed SGD algorithm presented in the paper corresponds to gradient descent applied to a simple penalized distributed loss function, L(w_1, ..., w_n) = Σ_i l_i(w_i) + µ Σ_{<i,j>} |w_i − w_j|². Here each node holds only one sample and its own parameter vector. The notation <i, j> denotes edges of a connected graph defining the communication links between nodes. It is shown that this distributed algorithm converges linearly (i.e., the error reduces exponentially with iteration number), with a rate 1 − (η/n) λ_min(H) < R < 1, where λ_min(H) is the smallest nonzero eigenvalue of the sample covariance or the Hessian H. In contrast with previous uses of similar penalty functions to enforce consensus between nodes, in the interpolating limit it is not required to take the penalty parameter to infinity for consensus to occur. The analysis further reinforces the utility of the interpolation limit in the theoretical treatment of modern machine learning algorithms. © EURASIP 2018.
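
The following is a minimal sketch (not the paper's reference code) of the distributed scheme described in the abstract: each node i holds one sample and its own parameter vector w_i, and all nodes run gradient descent on the penalized loss L(w_1, ..., w_n) = Σ_i l_i(w_i) + µ Σ_{<i,j>} |w_i − w_j|². The squared-error local loss, the ring communication graph, and the values of n, d, η, and µ are illustrative assumptions, not values from the paper; the sketch only demonstrates that, in the interpolation limit, the nodes reach consensus without taking µ to infinity.

```python
# Sketch of distributed gradient descent on the penalized loss
#   L(w_1, ..., w_n) = sum_i l_i(w_i) + mu * sum_{<i,j>} |w_i - w_j|^2
# with l_i(w) = (x_i . w - y_i)^2 / 2 (illustrative choice) and a ring graph
# as the communication topology. Hyperparameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5                      # n nodes (one sample each), d parameters
X = rng.standard_normal((n, d))   # sample x_i held by node i
w_true = rng.standard_normal(d)
y = X @ w_true                    # noiseless labels -> an interpolating minimum exists

edges = [(i, (i + 1) % n) for i in range(n)]   # ring graph defining <i,j>
W = np.zeros((n, d))              # row i is node i's parameter vector w_i
eta, mu = 0.05, 1.0               # learning rate and penalty (illustrative)

for it in range(5000):
    # local loss gradients: (x_i . w_i - y_i) * x_i for each node
    residual = (W * X).sum(axis=1) - y
    grad = residual[:, None] * X
    # penalty gradients: 2*mu*(w_i - w_j) on node i, the opposite on node j
    for i, j in edges:
        diff = W[i] - W[j]
        grad[i] += 2 * mu * diff
        grad[j] -= 2 * mu * diff
    W -= eta * grad

# At the interpolating minimum every node fits its own sample and the nodes
# agree with each other (consensus), even with a finite penalty mu.
print("max per-node training error:", np.abs((W * X).sum(axis=1) - y).max())
print("max disagreement between nodes:", np.abs(W - W.mean(axis=0)).max())
```

Because the labels are noiseless and generated by a single w_true, the configuration w_i = w_true for all i makes both the local losses and the penalty vanish, so the coupled descent converges linearly to consensus at the interpolating solution, as the abstract states.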

Item Type: Conference Paper
Subjects: bioinformatics > computational biology > algorithms
bioinformatics > computational biology
bioinformatics > computational biology > algorithms > machine learning
CSHL Authors: Mitra, P. P.
Communities: CSHL labs > Mitra lab
Depositing User: Matthew Dunn
Date: 29 November 2018
Date Deposited: 28 Jan 2019 15:07
Last Modified: 28 Jan 2019 15:07
URI: https://repository.cshl.edu/id/eprint/37665
