A very exciting new preprint by Chatterjee and Diaconis takes a fresh look at good old importance sampling. While it presents itself as a constructive paper (maybe because of publication bias), its main result to me is actually the deconstruction of an importance sampling myth. Let me elaborate.
The paper shows that the variance estimate for an importance sampling estimator is not a good convergence diagnostic. One implication is that aiming for low variance of the importance weights, or equivalently a high effective sample size (ESS), might be better than nothing, but is not actually that great. In more mathy speak, a high ESS is a necessary condition for convergence of the estimate, but not a sufficient one.
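For readers who haven't met it, the ESS referred to here is the usual Kish-style estimate computed from the importance weights. A minimal sketch (my own helper function, not code from the paper):

```python
import numpy as np

def ess(weights):
    """Kish effective sample size: (sum w)^2 / sum w^2.
    Equals n when all weights are equal, approaches 1 when a
    single weight dominates."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

print(ess(np.ones(100)))                          # → 100.0
print(ess(np.array([1000.0, 1.0, 1.0, 1.0])))     # close to 1
```

A high ESS only tells you that the weights you happened to observe are balanced; it says nothing about the weight mass you never sampled, which is exactly the loophole the paper exploits.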
Especially in your face is their result that for any $\epsilon, \delta > 0$, one can find a number of samples $n$ such that $\mathbb{P}(\hat{\sigma}^2_n \le \epsilon) \ge 1 - \delta$, where $\hat{\sigma}^2_n$ is the empirical variance of the $n$ importance samples, independent of your target density and your proposal distribution. Think about this: I don't care about your actual integration problem or your proposal distribution. Just tell me what estimator variance you are willing to accept and I will tell you a number of samples that will exhibit lower variance with high probability. This is, in their words, absurd as a diagnostic.
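To make this concrete, here is a toy simulation of my own (not an example from the paper): the proposal is too narrow, so the true variance of the importance weights is infinite, yet the empirical weight variance you compute from any finite sample is of course finite and will typically look unremarkable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal N(0, 1).
# Proposal: N(0, sigma^2) with sigma^2 < 1/2, which makes the true
# variance of the importance weights infinite.
sigma = 0.5
n = 10_000
x = rng.normal(0.0, sigma, size=n)

# log w(x) = log phi(x; 0, 1) - log phi(x; 0, sigma^2)
log_w = -0.5 * x**2 + 0.5 * (x / sigma) ** 2 + np.log(sigma)
w = np.exp(log_w)

# The empirical variance is finite and looks harmless, even though
# the quantity it is supposed to estimate is infinite.
emp_var = w.var()
print(f"empirical weight variance: {emp_var:.3f}")
print(f"mean weight (unbiased for 1): {w.mean():.3f}")
```

The samples almost never land in the region where the weights explode, so the variance estimate is systematically blind to exactly the failure mode it is meant to detect.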
However, they propose a new convergence diagnostic which is basically plagued by a similar problem. They even state themselves that their alternative criterion $\max_i w_i / \sum_{j=1}^n w_j$ (where the $w_i$ are the importance weights) being close to 0 can be proved to be necessary but not sufficient. Just stating that ESS isn't great is not a great sell for some reviewers, so they have to come up with an alternative – and conjecture that it is sufficient, while not being able to prove it. Anyway, it might still be a better diagnostic than ESS, and all things considered this is a wonderful piece of research.
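For completeness, the alternative criterion is just as easy to compute as ESS. A minimal sketch, assuming the diagnostic is the largest normalized weight as described above (the function name is mine):

```python
import numpy as np

def max_weight_share(weights):
    """Largest normalized importance weight, max_i w_i / sum_j w_j.
    Values close to 0 suggest no single sample dominates the
    estimate; values close to 1 signal weight degeneracy."""
    w = np.asarray(weights, dtype=float)
    return w.max() / w.sum()

print(max_weight_share([1.0, 1.0, 1.0, 1.0]))       # → 0.25
print(max_weight_share([1000.0, 1.0, 1.0, 1.0]))    # ~0.997
```

Like ESS, this statistic is a function of the weights you actually drew, which is why a small value can only ever be a necessary sign of health, never a guarantee.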