AIM Workshop: Generalized persistence and applications

This workshop will be devoted to generalizations of persistent homology, with a particular emphasis on finding calculable algebraic invariants useful for applications. Applications of persistence -- for example, signal processing, drug design, tumor identification, shape classification, and geometric inference -- rely on the classification of persistence via barcodes, on the geometrization of the space of barcodes via metrics or as an algebraic variety, and on efficient algorithms. Accordingly, this workshop will bring together theoreticians, computer scientists, and the users of computational topology.

The main topics for the workshop are:

  • Generalizations of persistence: multidimensional persistence, well groups, (co)sheaves
  • Algorithms
  • Geometrization
  • Applications

The workshop will differ from typical conferences in some regards. Participants will be invited to suggest open problems and questions before the workshop begins, and these will be posted on the workshop website. These include specific problems on which there is hope of making some progress during the workshop, as well as more ambitious problems which may influence the future activity of the field. Lectures at the workshop will be focused on familiarizing the participants with the background material leading up to specific problems, and the schedule will include discussion and parallel working sessions.

Space and funding are available for a few more participants. If you would like to participate, please apply by filling out the on-line form no later than May 15, 2014. Applications are open to all, and we especially encourage women, underrepresented minorities, junior mathematicians, and researchers from primarily undergraduate institutions to apply.

Open Question: Lions and Contamination

I'd like to point you to an open problem that I find interesting. A good reference is the paper "How many lions are needed to clear a grid?" by Florian Berger, Alexander Gilbers, Ansgar Grüne, and Rolf Klein [1].

Disclaimer:  I would classify this problem as more combinatorial than topological.

Suppose we have a graph which is an n \times n grid. This graph contains n^2 vertices, and the case n=5 is drawn below.


We have k lions moving on this grid. At each time step a lion occupies a vertex, and between adjacent time steps a lion either stays put or travels across one edge to an adjacent vertex.

We also need to define the subset of vertices W(t) which are "contaminated" at time t. A lion can clean a contaminated vertex, but in the absence of lions, the contamination spreads. At starting time t=0 every vertex not occupied by a lion is contaminated; this gives W(0). How does the contaminated set update as the lions move? A vertex v is in W(t+1) if v is not covered by a lion at time t+1 and either

  • v belongs to W(t), or
  • v has a neighbor u in W(t) such that no lion travels from vertex v to u between times t and t+1.

Suppose you are given k lions. You get to choose the lions' starting vertices at time zero in the grid - all other vertices begin contaminated. You get to pick how each lion moves at each time step. Can you design a way to clear all contaminated vertices from the grid?

If k \geq n then this problem is easy. At time zero simply line up the lions along the left-hand side of the grid, from top to bottom. At each time step, sweep each lion one step to the right. At time t=n-1 the lions will be on the right-hand side of the grid, and there will be no contaminated vertices.
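To make the dynamics concrete, here is a small simulation of the contamination rule and the sweep strategy. This is a minimal sketch in Python; all function and variable names are my own, not from [1].

```python
# Simulate the contamination dynamics on the n x n grid and verify
# that k = n lions sweeping left to right clear it by time t = n-1.

def neighbors(v, n):
    r, c = v
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < n and 0 <= c + dc < n]

def update(W, lions_next, moves, n):
    """One time step of the contaminated set W.

    lions_next -- vertices occupied by lions at time t+1
    moves      -- set of pairs (v, u): some lion travels from v to u
    """
    W_next = set()
    for r in range(n):
        for c in range(n):
            v = (r, c)
            if v in lions_next:
                continue  # a lion cleans v
            if v in W:
                W_next.add(v)  # stays contaminated
            elif any(u in W and (v, u) not in moves for u in neighbors(v, n)):
                W_next.add(v)  # contamination spreads from a neighbor
    return W_next

def sweep(n):
    """Line n lions up on the left edge and sweep right; return final W."""
    W = {(r, c) for r in range(n) for c in range(1, n)}  # all but column 0
    for t in range(n - 1):
        lions_next = {(r, t + 1) for r in range(n)}
        moves = {((r, t), (r, t + 1)) for r in range(n)}
        W = update(W, lions_next, moves, n)
    return W

assert sweep(5) == set()  # the 5 x 5 grid is cleared
```

Note that the sweep clears the column behind the lions precisely because of the second bullet above: contamination cannot flow backwards across an edge a lion has just traversed.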

It is unknown whether k=n-1 lions are sufficient to clear the grid or not. I would guess that most people think k=n-1 lions are insufficient, but nobody has a proof!

An equivalent way to phrase this problem is to use a mobile evader instead of the set of contaminated vertices. Suppose our evader moves at the same speed as the lions: at each time step the evader occupies a vertex, and between adjacent time steps the evader either stays put or crosses one edge. The evader is caught if it occupies the same vertex or crosses the same edge as any lion. It is known that k\geq n lions can catch any such evader (say by sweeping from left to right), and it is unknown whether k=n-1 lions are sufficient or not. To see the equivalence between the formulations using a mobile evader or contaminated vertices, note that W(t) is the set of all possible locations of a mobile evader at time t.

This is one of those problems that is harder than it sounds. Upon first hearing it your reaction is that you will have a proof after one evening of hard work. A week later you still haven't made much progress, and you're a week behind on your normal research agenda. Consider yourself warned!

One reason why I classify this problem as more combinatorial than topological is that the details of the discretization matter. For example, see Figure 1 of [1]. (Note: in this figure, a vertex of the n \times n grid is drawn as a square, so the picture represents a 4 \times 4 grid with 16 vertices, not a 5 \times 5 grid with 25 vertices.) For a second example, see Figure 5 of [1]. In the 3D version of this problem, you might expect that n^2 lions are necessary to clear an n \times n \times n grid. Figure 5 of [1] shows that this is false: 8 lions (which is fewer than 9) suffice to clear the 3 \times 3 \times 3 grid.


[1] Florian Berger, Alexander Gilbers, Ansgar Grüne, and Rolf Klein. How many lions are needed to clear a grid? Algorithms 2009, 2, 1069-1086.

ICMS 2014: Session on "Software for Computational Topology"

The 4th International Congress on Mathematical Software (ICMS 2014) takes place in Seoul, Korea on Aug 5-9. This year, it will host a workshop session dedicated to computational topology. Contributions on state-of-the-art software for topological problems, as well as applications of such software to other domains, are welcome. See the dedicated webpage for more information.

How to contribute: Submit a short abstract of 200-500 words to the session organizer by March 31. You will be notified about acceptance within one week, and upon positive evaluation you will give a talk at ICMS. An extended abstract (due at the end of April) will appear in the conference proceedings. A special issue of the Journal of Symbolic Computation will be organized immediately after the workshop.

ATMCS 6 open for registration


Algebraic Topology: Methods, Computation and Science 6 (ATMCS6) is now open for registration. The conference takes place at PIMS, University of British Columbia, May 26-30.

Confirmed speakers include:

Amit Patel
Donald Sheehy
Jeff Erickson
Jose Perea
Liz Munch
Michael Robinson
Omer Bobrowski
Peter Bubenik
Radmila Sazdanovic
Sayan Mukherjee
Vanessa Robins
Yuliy Baryshnikov
Tamal Dey
Shmuel Weinberger
Raul Rabadan
Chris Hoffman
Vin de Silva

For more details, see:

Applied and computational topology refers to the adaptation of topological ideas and techniques to study problems in science and engineering. A particular focus is on using invariants and methods of algebraic topology to understand large high-dimensional data sets. The further development of topological techniques for use in applications and the creation of new areas of application in the subject are amongst the goals of this workshop.

The workshop will bring together leading researchers in this emerging discipline as well as providing an opportunity for young mathematicians to get involved in it. In past years, the ATMCS conference has been very successful in providing a forum for cutting-edge research to be disseminated; attendance tends to represent a broad swath of the diverse research community which works in this area.

Workshop Format:
The workshop will feature lectures and discussion in the morning and afternoon. Mid-morning and afternoon refreshments will be provided during the conference.

On behalf of the organizers
Andrew Blumberg, U Texas
Matthew Kahle, Ohio State
Mikael Vejdemo-Johansson, KTH / IMA / IJS

Source material for Topological Data Analysis

To start off the feature articles at, I figured it might be worthwhile collecting good entry points to the field. One of the most common questions I get about persistent homology and topological data analysis is how to get started with our techniques and ideas.

Overview articles and books

First off in the list of entry points is the written word. There are survey articles, overview articles, and books written about topological data analysis as a whole, as well as ones focusing on specific parts.

Topology and Data by Gunnar Carlsson. This survey article came soon after Ghrist's survey, and covers persistent homology, as well as Mapper for topological simplification and modeling. It also comes with a good discussion of the underlying philosophy of the field.
Start here.

Barcodes: the persistent topology of data by Robert Ghrist. This is the first major survey article to come out, and covers persistent homology and some of its applications.

Topology for Computing by Afra Zomorodian. This is the first book-format exposition of persistent homology for applied and computational topology. It is a good and self-contained introduction to the field, if ever so slightly dated: in particular, it does not cover anything about zigzag persistence or multidimensional persistence.

Computational Topology: an introduction by Herbert Edelsbrunner and John Harer. This book covers the state of the art as of 2010 of computational topology, with some focus on persistent homology: one third of the book is devoted to persistence and its applications. Throughout, the book discusses the underlying theory, the most obvious algorithm, and the fastest known algorithm.

Software packages

So you understand what the underlying ideas of the field are. Next up, you'll want to try them out on your own data. There are several ways to do this, and they all have their specific strengths and weaknesses.

Plex, jPlex, javaPlex: this sequence of libraries was developed in the Stanford group, with the explicit aim of always interoperating smoothly and easily with Matlab. Of the three, we currently recommend javaPlex, unless it does not cover your exact use case, in which case some methods may exist in jPlex. Plex is written in C++ and connects to Matlab through a MEX interface, while jPlex and javaPlex are both Java libraries.

Dionysus: this library, written and maintained by Dmitriy Morozov, provides a platform for developing and experimenting with computational topology algorithms in C++ or in Python. It interfaces with CGAL for low dimensional geometric constructions, and has example applications provided for persistent homology, cohomology, vineyards, alphashapes and numerous other common techniques.

Perseus: this package, developed by Vidit Nanda, provides a platform for computing persistent homology of cubical and simplicial complexes generated in a number of different ways. It specifically uses methods based on discrete Morse theory to speed up computations.

PHAT: this package, created by Ulrich Bauer, Michael Kerber and Jan Reininghaus, builds on results by the authors that speed up persistence computation through specific tricks exploiting structure in the persistence boundary matrix. It currently supports only Z/2-coefficients and does not construct the complex for you, but it seems to be the fastest publicly available package.

CHomP: this software package came out of the CHomP research project and consists of a rich collection of tools for working, persistently or statically, with cubical complex data. For homology on image or voxel data, CHomP is the fastest and most complete analysis system available right now.

We warmly appreciate suggestions for more papers, software, or other resources if you have anything to add to this list.

Move of the site

After over a year of mostly inactivity, we have moved platform: away from and to a WebFaction-based server. Kudos to Ryan Lewis for taking over hosting.

In addition to Mikael and Ryan, we would like to invite more members of the community to help build up to a good gathering point for the community of applied and computational topology. If you want to help out, or have opinions on what should be done, drop us an email at and we'll work it all out.

Speeding up Homology via Duality?

In the last post I constructed, for any (locally finite) simplicial complex K, a dual complex. Today I thought of using this to gain a potential speed-up in computational homology.

Note that simplicial homology of K is the same as Cech homology of the cover given by open stars of the vertices of K. This is because the nerve of this cover is precisely K and Cech homology is not-so-secretly the simplicial homology of the nerve.

However, we know that the dual complex has the same homotopy type as K, while the number of vertices in the dual complex equals the number of maximal simplices of K. The simplicial homology of this dual complex is the same as the Cech homology of the cover given by the closures of the maximal simplices of K.

Here is a case where this observation results in a most dramatic speed-up. The standard n-simplex \Delta^n has 2^{n+1}-1 simplices, and naively simplicial homology requires storing all of these. However, the dual complex is a single point, and Cech homology reveals that. Thus we have gone from a chain complex with n+1 chain groups, the j-th of dimension \binom{n+1}{j+1}, to a chain complex concentrated in a single degree, all by using this duality.
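The face counts are easy to sanity-check; a quick sketch in Python (my framing, not from the post):

```python
# Compare the size of the chain complex of the standard n-simplex
# with that of its dual complex (a single point).
from math import comb

n = 10
chain_dims = [comb(n + 1, j + 1) for j in range(n + 1)]  # dim of C_j
assert sum(chain_dims) == 2 ** (n + 1) - 1  # all nonempty faces of Delta^n
dual_dims = [1]  # the dual complex is one vertex: one chain group, rank 1
assert sum(dual_dims) == 1
```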

Of course, the case of graphs shows where this duality fails miserably. The dual complex creates simplices whose dimensions correspond to the degrees of vertices in the original graph, so we can go from a two-term chain complex with a single boundary matrix to arbitrarily large chain complexes.
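To see how badly graphs can blow up, consider the star graph K_{1,m} (my example, not from the post): its m edges are the maximal simplices, and they all share the center vertex, so the dual complex is the full (m-1)-simplex.

```python
# Size of the dual complex of the star graph K_{1,m}: every nonempty
# subset of the m maximal edges meets at the center vertex, so the
# nerve is a full (m-1)-simplex.
m = 10
star_size = (m + 1) + m       # m+1 vertices plus m edges
dual_size = 2 ** m - 1        # all nonempty subsets of the edges
assert dual_size > star_size  # the dual is exponentially larger
```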

This presents a question of possible interest: for which simplicial complexes do the dual complex and the original complex have the same number of simplices? Triangulated manifolds clearly do, but what about other simplicial complexes? Can we say something about the expected discrepancy between these numbers?

Dual Simplicial Complexes

In the process of designing homework problems for Applied Algebraic Topology (ESE 680-003) last night, I stumbled upon a most beautiful application of the nerve theorem, as well as a construction of a dual simplicial complex that is defined for any (locally finite) simplicial complex K. This dual complex has the property that it is always homotopy equivalent to the original simplicial complex.

Let K be a simplicial complex with vertex set V. The open star U_v of a vertex v is defined to be the set of simplices \sigma containing v.  Visibly, the nerve of the cover \mathcal{U}=\{U_v | v\in V\} is the same as the simplicial complex K.

On the other hand, one can consider a different cover by closed sets, or dually Alexandrov opens when one reverses the partial order. Define a simplex to be maximal if it is not the face of any other simplex. Define the closure \overline{\sigma} of a simplex \sigma to be the set of faces of \sigma, written \tau\leq\sigma, so that \sigma\leq\sigma, i.e. \sigma is a face of itself.

Now define the dual simplicial complex of K to be the nerve of the cover \mathcal{V}=\{\overline{\sigma} | \sigma \,\mathrm{maximal}\,\}.
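Since the closure of a simplex consists of all subsets of its vertex set, the closures of a collection of simplices intersect exactly when the simplices share a vertex. So the dual complex can be computed directly from the list of maximal simplices; a minimal sketch in Python (the function name and representation are my own):

```python
from itertools import combinations

def dual_complex(maximal):
    """Nerve of the cover by closures of the maximal simplices.

    maximal -- the maximal simplices of K, each a frozenset of vertices.
    A set of closures has nonempty intersection iff the corresponding
    simplices share a vertex, so we list every such subset.
    """
    nerve = []
    for k in range(1, len(maximal) + 1):
        for S in combinations(range(len(maximal)), k):
            if frozenset.intersection(*(maximal[i] for i in S)):
                nerve.append(frozenset(S))
    return nerve

# The solid 3-simplex has one maximal simplex, so its dual is a point.
assert dual_complex([frozenset({0, 1, 2, 3})]) == [frozenset({0})]

# A path a-b-c has two maximal edges sharing b; its dual is an edge
# (two vertices and one 1-simplex), again contractible like the path.
path = [frozenset({'a', 'b'}), frozenset({'b', 'c'})]
assert len(dual_complex(path)) == 3
```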

The nerve theorem holds for covers by closed convex sets, or by open sets, all of whose nonempty intersections are contractible, so we know this dual simplicial complex must be homotopy equivalent to the simplicial complex K.

If we use Graeme Segal's construction of the classifying space of the cover category X_U (sometimes called the Mayer-Vietoris blowup complex) in the paper "Classifying Spaces and Spectral Sequences", then we should be able to construct a similar dual space for cell complexes by looking at the open stars of vertices and the closures of maximal cells, respectively.

Exact sequence of a pair: Computing the connecting map

Given a pair of spaces (X,A), with A \subset X, the short exact sequence of the pair is a relationship between the chains C_*(A), C_*(X/A), and C_*(X). The following sequence:

(1)   \begin{equation*} 0 \longrightarrow C_*(A) \xrightarrow{i} C_*(X) \xrightarrow{j} C_*(X/A) \longrightarrow 0 \end{equation*}

is short exact; that is, \ker j = \operatorname{im} i, the map i is injective, and j is surjective. If you want to know more about exact sequences I recommend reading this.

It turns out that on the level of homology this short exact sequence turns into a long exact sequence:

(2)   \begin{equation*} \ldots \longrightarrow H_*(A) \xrightarrow{i_*} H_*(X) \xrightarrow{j_*} H_*(X/A) \xrightarrow{\delta} H_{*-1}(A) \longrightarrow \ldots \end{equation*}

and the map \delta is called the connecting homomorphism.

The natural question to ask is: what is \delta([x])? The answer is that \delta([x]) = [\partial x]. However, if you read the aforementioned authoritative text on the subject, you will see that \delta is not explicitly defined; its existence is merely proved.

However, given a particular X and A, one is able to come up with an idea of what the map does. I’ll go over the basic diagram chase and then show how this works by example.

Since we are given a representation of H_*(X/A), we can begin by taking a homology class \hat{c}, which is represented by a relative cycle. Any chain c in X with the property that [j(c)] = \hat{c} in H_*(X/A) is called a lift.

Assuming we have found a lift, we can compute z = \partial c, and it turns out that z \in Z_{*-1}(A) (because j\partial = \partial j and \partial(j(c)) = 0, the boundary \partial c lies in C_{*-1}(A) and is itself a cycle), which means that [z] \in H_{*-1}(A). So our mapping sends \hat{c} \in H_*(X/A) to [z] \in H_{*-1}(A). You now just need to prove that this process is well defined, and this ends your diagram chase as well as your interest in the details of this sequence. Unless, of course, you are Mr. Cooperman, or a computer scientist.

If you are a computer scientist, you might be dissatisfied because it is not entirely clear how [z] is presented. Indeed, z itself is not unique, as it depends on our choice of lift; z is likely not even a member of our given set of representative cycles for H_{*-1}(A).

The problem we have is that so far our cycle z is written down as a linear combination of cells. We want an equivalent representation in terms of cycles. Noether tells us we can write Z_{*}(A) \simeq H_{*}(A) \oplus B_{*}(A).
This means we can use our homology basis to help form a basis for the cycle group, and it's not too hard to see that we can extend a basis of the cycle group to a basis for all chains. However, in our case we want to take a cycle expressed in the chain basis and write it as a linear combination of given cycles. This amounts to being able to write down a rectangular "change of basis" matrix between the cell basis and the given cycle basis on Z_{*-1}(A).

The rows of this matrix are indexed by our given cycle basis, and the columns by the canonical cell basis for C_{*-1}(A). We then look to "solve Ax = z" over \mathbb{Z}_p (in this blog p = 2), in the sense that we want to write down a linear combination of the cycles in Z_{*-1}(A), apply our matrix, and end up with the z we computed.

Consider this space A:

[Figure: the simplicial complex A, drawn on vertices a, b, c, d]

which is homotopic to an annulus.

We can imagine that A \subset X, and that X is obtained by gluing a disc onto the boundary of A, so that X is itself homotopic to a disc. X/A is then homotopic to a sphere, with exactly one non-trivial 2nd homology class. The boundary of this class would be the outermost cycle of A.

Let's say that we are provided a basis for H_1(A) = \langle \alpha = bc + cd + db \rangle and for B_1(A) = \langle \beta = ab + bc + ca \rangle. If I want to express the outer bounding cycle ab + bd + dc + ca in this basis, I begin by writing down the change of basis matrix; in order to solve Ax = z, I augment the matrix with the identity matrix on the right and z on the bottom.

    \[ A' = \left( \begin{array}{c|ccccc|cc} & ab & ac & bc & bd & cd & \alpha & \beta \\ \hline \alpha & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ \beta & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ \hline & 1 & 1 & 0 & 1 & 1 & 0 & 0 \end{array} \right) \]

Now you can see that, modulo our extra row for z, the matrix is in row echelon form, and it suffices to perform Gaussian elimination (over \mathbb{Z}_2) on this last row to achieve our result:

    \[ \left( \begin{array}{c|ccccc|cc} & ab & ac & bc & bd & cd & \alpha & \beta \\ \hline \alpha & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ \beta & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ \hline z & 1 & 1 & 0 & 1 & 1 & 0 & 0 \end{array} \right) \]

    \[ z \rightarrow z + \beta \]

    \[ \left( \begin{array}{c|ccccc|cc} & ab & ac & bc & bd & cd & \alpha & \beta \\ \hline \alpha & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ \beta & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ \hline z=\beta & 0 & 0 & 1 & 1 & 1 & 0 & 1 \end{array} \right) \]

    \[ z \rightarrow z + \alpha \]

    \[ \left( \begin{array}{c|ccccc|cc} & ab & ac & bc & bd & cd & \alpha & \beta \\ \hline \alpha & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ \beta & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ \hline z=\alpha+\beta & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{array} \right) \]

Notice how the right-hand side now contains our vector x, and indeed z = \alpha + \beta = \partial c. Since \beta is a boundary, [z] = [\alpha], so \delta(\hat{c}) = [\alpha] in H_1(A).
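The elimination above can be replayed mechanically. Here is a minimal sketch in Python over \mathbb{Z}_2; the function and variable names are mine, and the coordinates are the edges ab, ac, bc, bd, cd in that order.

```python
# Express the boundary cycle z in the given cycle basis over Z/2.
# Coordinates: the edges ab, ac, bc, bd, cd.

alpha = [0, 0, 1, 1, 1]  # bc + cd + db
beta  = [1, 1, 1, 0, 0]  # ab + bc + ca
z     = [1, 1, 0, 1, 1]  # ab + bd + dc + ca, the outer cycle

def solve_gf2(basis, z):
    """Write z as a sum of basis rows; assumes the rows have distinct
    leading 1s with zeros before them (row echelon form)."""
    pivot = lambda row: row.index(1)
    x = [0] * len(basis)
    r = list(z)
    for i in sorted(range(len(basis)), key=lambda i: pivot(basis[i])):
        if r[pivot(basis[i])]:
            x[i] = 1
            r = [(a + b) % 2 for a, b in zip(r, basis[i])]
    assert not any(r), "z is not in the span of the given cycles"
    return x

assert solve_gf2([alpha, beta], z) == [1, 1]  # z = alpha + beta
```

Reading off the result: x says z = alpha + beta, and since beta is a boundary, the connecting map sends the 2-dimensional class of X/A to [alpha].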