After one year, my PostDoc in Paris is now over, and tomorrow I’m starting at SFB 1114 at FU Berlin. As I told one of my new colleagues, it’s been quite a thing for me no longer being Xian’s office mate. One reason, obviously, is that with respect to work it’s a great luxury to be able to ask a senior researcher questions at almost any time. My second major reason is that the last time an office mate was so pleasant on a personal level was about six years ago, when both Felix (my office mate then) and I were new fathers and just generally got along very well. Christian told me last year how thankful he was to Jim Berger for taking him on as a PostDoc “and basically for no reason” (his words). Christian’s publication record wasn’t great, but Jim Berger didn’t care. My response was that that was exactly how I felt, very thankful – and for the same reasons.
It was rather easy to feel at home scientifically in Paris in general. Especially in domains with a strong math component, it’s hard to find a better place than the union of all Paris universities. Dauphine had a nice atmosphere with many young researchers, especially probabilists. This also made me reflect more on Germany, and I can’t help feeling that the situation in the most prosperous country in the EU is much worse than in France. Rather, especially with respect to young researchers, it is comparable to some EU countries that are currently struggling with low tax revenue. But there is hope: if the EU dissolves, there will be fewer comparisons…
The hard part of this year was family life and commuting back to Berlin almost every weekend. It gave my wife the life of a single mother on weekdays – another reason to be very thankful for being supported like that. I’m happy to have been here when my son entered elementary school, and I hope that was it in terms of long-term family separation.
During the super nice International Conference on Monte Carlo techniques at the beginning of July in Paris at Université Descartes (photo), which featured many outstanding talks, one by Tong Zhang particularly caught my interest. He talked about several variants of Stochastic Gradient Descent (SGD) that basically use variance reduction techniques from Monte Carlo algorithms in order to improve the convergence rate over vanilla SGD – even though some of the papers mentioned in the talk do not point out the connection to Monte Carlo variance reduction techniques.
One of the first works in this line, Accelerating Stochastic Gradient Descent using Predictive Variance Reduction by Johnson and Zhang, suggests using control variates to lower the variance of the gradient estimate. Let $\psi_j(w)$ be the loss for parameter $w$ at the $j$th of $n$ data points; then the usual batch gradient descent update is $w^{(t)} = w^{(t-1)} - \eta \frac{1}{n}\sum_{j=1}^{n} \nabla\psi_j(w^{(t-1)})$, with $\eta$ as step size.
In naive SGD one instead picks a data point index $j$ uniformly at random and uses the update $w^{(t)} = w^{(t-1)} - \eta_t \nabla\psi_j(w^{(t-1)})$, usually with a decreasing step size $\eta_t$ to guarantee convergence. The expected update resulting from this Monte Carlo estimate of the batch gradient is exactly the batch procedure update. However, the variance of the estimate is very high, resulting in slow convergence of SGD after the first steps (even in minibatch variants).
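To make the contrast concrete, here is a minimal sketch (my own toy example, not from the talk or either paper) comparing the batch update with vanilla SGD on a least-squares loss $\psi_j(w) = \frac{1}{2}(x_j^\top w - y_j)^2$:

```python
import numpy as np

# Toy least-squares problem: psi_j(w) = 0.5 * (x_j . w - y_j)^2.
# Data, step sizes and iteration counts are my own illustration.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad_j(w, j):
    # Gradient of the loss at the j-th data point only
    return (X[j] @ w - y[j]) * X[j]

def full_grad(w):
    # Batch gradient: average of the per-data-point gradients
    return X.T @ (X @ w - y) / n

# Batch gradient descent with constant step size
w_batch = np.zeros(d)
for t in range(500):
    w_batch -= 0.05 * full_grad(w_batch)

# Vanilla SGD: a uniformly drawn grad_j is an unbiased but
# high-variance estimate of full_grad, so the step size has to
# decrease over time to guarantee convergence.
w_sgd = np.zeros(d)
for t in range(1, 5001):
    j = rng.integers(n)
    w_sgd -= 0.5 / (10 + t) * grad_j(w_sgd, j)
```

Each SGD step costs a single gradient evaluation instead of $n$, which is the whole point; the price is the noisy trajectory and the shrinking step size.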
The authors choose a well-known solution to this, namely the introduction of a control variate. Keeping a version of the parameter that is close to the optimum, say $\tilde{w}$, observe that $\nabla\psi_j(\tilde{w}) - \frac{1}{n}\sum_{i=1}^{n}\nabla\psi_i(\tilde{w})$ has an expected value of 0 over the random index $j$ and is thus a possible control variate, yielding the update $w^{(t)} = w^{(t-1)} - \eta_t \left( \nabla\psi_j(w^{(t-1)}) - \nabla\psi_j(\tilde{w}) + \frac{1}{n}\sum_{i=1}^{n}\nabla\psi_i(\tilde{w}) \right)$. The possible downside is that whenever $\tilde{w}$ is updated, one has to go over the complete dataset.
The contribution, apart from the novel combination of known ideas, is the proof that this improves convergence. The proof assumes smoothness and strong convexity of the overall loss function and convexity of the $\psi_j$ for the individual data points, and then shows that the proposed procedure (termed stochastic variance reduced gradient, or SVRG) enjoys geometric convergence. The proof does, however, use a slightly odd version of the algorithm, namely one where $\tilde{w}$ is picked uniformly at random from the iterates of the previous epoch. Simply setting $\tilde{w}$ to the last iterate should intuitively improve convergence, but the authors could not report a result on that. Overall a very nice idea, and one that has been discussed in more papers quite a bit, among others by Simon Lacoste-Julien and Francis Bach.
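The resulting algorithm can be sketched as follows (my own illustration of SVRG on a least-squares toy problem; variable names, step size and epoch length are my choices, not Johnson and Zhang’s):

```python
import numpy as np

# SVRG sketch on a least-squares toy: psi_j(w) = 0.5 * (x_j . w - y_j)^2.
rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad_j(w, j):
    return (X[j] @ w - y[j]) * X[j]

def full_grad(w):
    return X.T @ (X @ w - y) / n

w_tilde = np.zeros(d)        # anchor parameter, updated once per epoch
eta, m = 0.02, 2 * n         # constant step size, inner-loop length
for epoch in range(15):
    mu = full_grad(w_tilde)  # one full pass over the data per epoch
    w = w_tilde.copy()
    for t in range(m):
        j = rng.integers(n)
        # Control-variate gradient: still unbiased, but its variance
        # vanishes as both w and w_tilde approach the optimum.
        g = grad_j(w, j) - grad_j(w_tilde, j) + mu
        w -= eta * g
    w_tilde = w              # use the last inner iterate as the new anchor
```

Note that the reduced variance is exactly what permits the constant step size here, which in turn is what buys the geometric rate.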
This arXival by Feng and Dicker deals with the problem of fitting multivariate mixture models by maximum likelihood. One of the advantages put forward is that nonparametric maximum likelihood estimators (NPMLEs) put virtually no constraints on the base measure $G$ of the mixture. While the abstract claims their approach works for arbitrary distributions as mixture components, they really make the assumption that the components are well approximated by a Gaussian (which of course includes distributions arising from sums of RVs, because of the CLT). And while theoretically NPMLEs might put no constraints on the base measure, in practice in the paper $G$ is first constrained to measures supported on at most as many points as there are data points. To make the optimization problem convex, the support is further constrained to a finite grid on some compact space that the data lies in.
The main result of the paper is in Proposition 1, which basically says that the finite grid constraint indeed makes the problem convex. After that the paper directly continues with empirical evaluation. I think the method proposed is not all that general. While the elliptical unimodal (Gaussian approximation) assumption would not be that problematic, the claimed theoretical flexibility of NPMLE is not really bearing fruit in this approach, as the finite grid constraint is very strong and gets rid of most of the flexibility left after the Gaussian assumption. For instance, the Gaussian location model fitted is merely a very constrained KDE, without even allowing for the ability of general Gaussian mixture models to fit the covariance matrix of individual components. While a finite grid support for location and covariance matrix is possible, to be practical the grid would have to be extremely dense in order to gain flexibility in the fit. While it is correct that the optimization problem becomes convex, this is bought at the price of a rigid model. However, Ewan Cameron assured me that the model is very useful for astrostatistics, and I realized that it might be so in other contexts, e.g. adaptive Monte Carlo techniques.
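To give a feel for what the grid-constrained problem looks like in the simplest case, here is a sketch (the setup and all names are mine, not Feng and Dicker’s code) of a one-dimensional Gaussian location mixture with weights on a fixed grid, fitted by the EM fixed-point iteration. Since the log-likelihood is concave in the weights, EM converges to the global NPMLE over that grid:

```python
import numpy as np

# Grid-constrained NPMLE sketch: Gaussian location mixture with
# known scale sigma and mixing weights pi_k on fixed grid points g_k.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 50)])
grid = np.linspace(x.min(), x.max(), 50)   # support points g_k
sigma = 1.0

# Component densities phi(x_i | g_k, sigma^2), shape (n, K)
dens = np.exp(-0.5 * ((x[:, None] - grid[None, :]) / sigma) ** 2)
dens /= np.sqrt(2 * np.pi) * sigma

pi = np.full(grid.size, 1.0 / grid.size)   # uniform starting weights
for _ in range(500):
    mix = dens @ pi                        # mixture density at each x_i
    resp = dens * pi / mix[:, None]        # E-step: responsibilities
    pi = resp.mean(axis=0)                 # M-step: reweight grid points

loglik = np.log(dens @ pi).sum()
```

Only the weights are optimized; the component locations and the scale stay fixed, which is exactly why the fit behaves like a constrained KDE rather than a full mixture model.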
A minor comment regarding the allegation in the paper that nonparametric models lack interpretability: while this is certainly true for the model proposed in the paper and for mainstream Bayesian nonparametric models, it is not a given. One notable interpretable class is mixture models with a prior on the number of components, as studied by Miller and Harrison (2015).