Over this past year at the IMA, attention has been devoted to a number of interesting aspects of bringing persistent homology closer to statistical methods.
One core step in this process has been to figure out what we mean by an average persistence diagram, a question for which answers have been proposed by Munch, Bendich, Turner, Mukherjee, Mattingly, Harer and Mileyko in various constellations.
The details of Fréchet means for persistent homology are not what this post is about, however. Instead, I want to bring up something I just saw presented today at ICML: Marco Cuturi and Arnaud Doucet had a paper entitled Fast Computation of Wasserstein Barycenters accepted to this large machine learning conference.
In their paper, Cuturi and Doucet present work on computing the barycenter (or average, or mean) of N probability measures under the Wasserstein distance: they formulate the transport optimization problem of finding a measure minimizing the Wasserstein distance to each of the N given measures, and present a smoothing approach that recasts the problem as a smooth convex optimization problem. In the talk (though not in the paper), the authors argued that these approximation steps let them achieve quadratic running times. Not only that, but their approach turns out to be amenable to GPGPU computation, with significant parallelization and vectorization benefits.
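To give a flavor of what entropic smoothing buys you here, below is a minimal sketch of one Sinkhorn-style fixed-point scheme for the smoothed barycenter problem on discrete measures sharing a common support. This is not the authors' actual algorithm from the paper, just an illustration of the general technique; the function name, the toy grid, and the regularization strength reg are my own illustrative choices.

```python
import numpy as np

def wasserstein_barycenter(distributions, cost, reg=0.05, n_iter=200):
    """Entropically smoothed Wasserstein barycenter of N discrete measures
    on a shared support, via Sinkhorn-style scaling updates (a sketch, not
    the Cuturi-Doucet implementation).

    distributions : (N, d) array, each row a probability vector
    cost          : (d, d) ground cost between the d support points
    reg           : entropic smoothing strength (illustrative choice)
    """
    K = np.exp(-cost / reg)                # Gibbs kernel of the smoothed problem
    u = np.ones_like(distributions)        # one scaling vector per input measure
    for _ in range(n_iter):
        v = distributions / (K.T @ u.T).T  # match each input measure's marginal
        Kv = (K @ v.T).T
        # a geometric mean ties the N transport problems to a common barycenter
        bary = np.exp(np.log(u * Kv).mean(axis=0))
        u = bary[np.newaxis, :] / Kv       # match the shared barycenter marginal
    return bary

# Toy usage: average three bumps on a 1-D grid under squared-distance cost.
x = np.linspace(0, 1, 100)
cost = (x[:, None] - x[None, :]) ** 2
bumps = np.exp(-((x[None, :] - np.array([0.2, 0.5, 0.8])[:, None]) ** 2) / 0.005)
bumps /= bumps.sum(axis=1, keepdims=True)
print(wasserstein_barycenter(bumps, cost).round(4))
```

The point of the toy example is that the output should concentrate near the middle bump (up to some entropic blur) rather than being the pointwise average of the three histograms, which is the whole appeal of averaging in Wasserstein space. The heavy lifting is all matrix-vector products against the kernel K, which is what makes this style of computation so friendly to GPUs.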
Their approach applies wherever the Wasserstein metric is defined, so in particular it should work (and most likely give the same results) in the persistence diagram setting studied by the persistent statistician constellations mentioned above.
I, for one, would be excited to hear more about the connections between these (mostly) parallel developments.