
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

Cochain complexes and cohomology

1 Cochain complexes and cohomology

• $\Omega^1({\bf R})$ is the set of $1$-forms in ${\bf R}$, and
• $\Omega^0({\bf R})$ is the set of $0$-forms in ${\bf R}$.

Both are vector spaces, very familiar objects.

However, as vector spaces they aren't the simplest ones, because they are infinite dimensional: $$\dim \Omega^0({\bf R}) = \dim C({\bf R}) = \infty.$$ Indeed, observe that the set $P = \{1, x, x^2, \ldots\}$ is linearly independent in the space of functions $C({\bf R}) = \Omega^0({\bf R})$.

Further, does $P$ span the whole space? Exercise. Hint: consider Taylor series.

Theorem. $$\dim C({\bf R}) = \infty.$$

Proof. Suppose there are $a_0, \ldots, a_n$, not all equal to $0$, with: $$a_0+a_1x+a_2x^2+\ldots+a_nx^n = 0.$$ The equation here holds for every $x$. In particular, if we pick $x=0$, then, immediately, $a_0=0$. This implies: $$a_1x+a_2x^2+\ldots+a_nx^n = 0,$$ hence $$x(a_1+a_2x+\ldots+a_nx^{n-1}) = 0.$$ But if we pick $x \neq 0$ now, then $$a_1 + a_2x + \ldots+a_n x^{n-1}=0,$$ and, by continuity, this also holds for $x=0$, so $a_1=0$. We continue and, by induction, prove that all $a_k$ are zero, a contradiction. $\blacksquare$
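The theorem can also be sanity-checked numerically: evaluating $1, x, \ldots, x^n$ at $n+1$ distinct points produces a Vandermonde matrix, which has full rank, so no non-trivial linear combination can vanish at all of those points. A minimal sketch using numpy (an illustration, not part of the proof):

```python
import numpy as np

# If a_0 + a_1 x + ... + a_n x^n vanished for all x, it would vanish at the
# n+1 sample points below, making the Vandermonde matrix singular.
n = 5
xs = np.arange(n + 1, dtype=float)           # n+1 distinct sample points
V = np.vander(xs, N=n + 1, increasing=True)  # V[i, k] = xs[i] ** k

# Full rank: {1, x, ..., x^n} is linearly independent for every n.
print(np.linalg.matrix_rank(V))  # 6, i.e., n + 1
```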

The difference now, with discrete forms, is that these spaces are finite dimensional: $$\dim C^0[0,k] = k+1 < \infty.$$ Therefore, everything is computable!

Generally, if $K$ is a cubical complex, $\dim C^k(K) =$ number of $k$-cells in $K$.

The result is plausible if you think about what a discrete $k$-form is. It's a correspondence:

$\varphi \colon k$-cell $\mapsto$ a number.

So, the number of cells is the number of degrees of freedom as given by the dual basis.
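In code, this is almost tautological. Here is a minimal sketch (the data layout and names are mine, chosen for illustration): a complex is stored as lists of $k$-cells, and $\dim C^k(K)$ is just the length of a list.

```python
# A cubical complex stored as lists of k-cells (names are hypothetical);
# this one is the hollow unit square: 4 vertices, 4 edges, no 2-cells.
K = {
    0: ["A", "B", "C", "D"],  # vertices
    1: ["a", "b", "c", "d"],  # edges
    2: [],                    # no squares
}

def dim_cochains(K, k):
    """dim C^k(K) = the number of k-cells in K."""
    return len(K.get(k, []))

# A discrete 1-form is a correspondence: 1-cell -> number.
psi = {"a": 1.0, "b": 0.5, "c": -2.0, "d": 0.0}

print(dim_cochains(K, 0), dim_cochains(K, 1), dim_cochains(K, 2))  # 4 4 0
```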

Consider the cochain complex: $$\ldots \stackrel{d_{k-1}}{\rightarrow} C^k \stackrel{d_k}{\rightarrow} C^{k+1} \stackrel{d_{k+1}}{\rightarrow} C^{k+2} \stackrel{d_{k+2}}{\rightarrow} \ldots$$

Just as with continuous forms we define two new vector spaces:

• ${\rm ker \hspace{3pt}} d_k$ is the set of closed $k$-forms in $K$, also known as cocycles and
• ${\rm im \hspace{3pt}} d_{k-1}$ is the set of exact $k$-forms in $K$, also known as coboundaries.

Since $dd=0$, as we know, then

• "every exact form is closed" or
• "every coboundary is a cocycle".

Algebraically, $${\rm im \hspace{3pt}} d_{k-1} \subset {\rm ker \hspace{3pt}} d_k.$$
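This inclusion can be verified mechanically once the coboundary operators are written as matrices: $d_1 d_0 = 0$ means every column of $d_0$, i.e., every coboundary, is killed by $d_1$, i.e., is a cocycle. A hedged sketch for the solid unit square, with my own choice of cell names and orientations:

```python
import numpy as np

# The solid unit square: vertices A, B, C, D; edges AB, BC, DC, AD;
# one 2-cell. The orientation conventions here are an assumption.
d0 = np.array([[-1, 1, 0, 0],    # edge AB: phi(B) - phi(A)
               [0, -1, 1, 0],    # edge BC: phi(C) - phi(B)
               [0, 0, 1, -1],    # edge DC: phi(C) - phi(D)
               [-1, 0, 0, 1]])   # edge AD: phi(D) - phi(A)
d1 = np.array([[1, 1, -1, -1]])  # the single square

# dd = 0: the matrix of d_1 d_0 is zero, hence im d_0 lies inside ker d_1.
print(d1 @ d0)  # [[0 0 0 0]]
```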

The end result is identical to that for continuous forms: each $\Omega$ is replaced with $C$.

Define the cubical cohomology of $K$ as this quotient vector space: $$H^k(K)=\ker d_k / {\rm im \hspace{3pt}} d_{k-1}.$$
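Since everything is finite dimensional, this quotient has a computable dimension: $\dim H^k = \dim \ker d_k - \dim {\rm im \hspace{3pt}} d_{k-1}$. A minimal sketch using numpy ranks (the function name is mine):

```python
import numpy as np

def betti(d_k, d_km1=None):
    """dim H^k = dim ker d_k - dim im d_{k-1}, from the coboundary
    matrices; pass d_km1=None when C^{k-1} = 0, so im d_{k-1} = 0."""
    dim_ker = d_k.shape[1] - np.linalg.matrix_rank(d_k)
    dim_im = 0 if d_km1 is None else np.linalg.matrix_rank(d_km1)
    return dim_ker - dim_im

# Example: the interval [0,2] with vertices 0, 1, 2 and edges [0,1], [1,2].
d0 = np.array([[-1, 1, 0],
               [0, -1, 1]])  # d_0 : C^0 -> C^1
d1 = np.zeros((1, 2))        # C^2 = 0, so d_1 is the zero map

print(betti(d0), betti(d1, d0))  # 1 0: one component, no holes
```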

The quotient is, of course, very similar to the one that defined cubical homology. The difference is that the former isn't just a vector space, it also has a graded ring structure provided by the wedge product. To see the difference this makes, consider these two spaces: the sphere with two bows added and the torus:

The homology is the same in all dimensions. The cohomology is also the same, but only in the sense of vector spaces! The basis elements in dimension $1$ behave differently under wedge product. In the sphere with bows: $$[\alpha ^*]\wedge [\beta ^*]=0,$$ because there is nowhere for this $2$-form to "reside". Meanwhile in the torus: $$[a ^*]\wedge [b ^*]\ne0.$$

2 Connectedness and simple connectedness

The following is very similar to the continuous case (why? because the linear algebra is the same).

Proposition. Constant functions are closed $0$-forms.

Proof. They are defined on vertices. If $\varphi \in C^0({\bf R})$ is constant, then $d \varphi([a,a+1]) = [\varphi(a+1)-\varphi(a)]dx = 0dx = 0$. A similar argument applies to $C^0({\bf R}^n)$. $\blacksquare$
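The computation in the proof is a one-liner. A sketch for the standard cubical complex of ${\bf R}$, representing a $0$-form as a function on the integer vertices:

```python
# The coboundary of a 0-form phi on the standard complex of R:
#     (d phi)([a, a+1]) = phi(a + 1) - phi(a).
def d(phi, a):
    """Value of d(phi) on the edge [a, a+1]."""
    return phi(a + 1) - phi(a)

constant = lambda a: 7.0
print(d(constant, 3))          # 0.0: constant 0-forms are closed
print(d(lambda a: a * a, 3))   # 16 - 9 = 7: this 0-form is not closed
```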

Proposition. Closed $0$-forms are constant on a path-connected cubical complex (i.e., its realization $|K|$ is path-connected).

Proof. For ${\bf R}^1$: $d \varphi([a,a+1]) = 0$, so $(\varphi(a+1) - \varphi(a))dx = 0$, so $\varphi(a+1) = \varphi(a)$; thus $\varphi$ takes the same value on adjacent vertices. The general case follows from the lemma below.


Lemma: A cubical complex is path-connected iff any two vertices can be connected by a sequence of adjacent edges.

Exercise. Prove the lemma.

Exercise. Use it to finish the proof.

Corollary. $\dim \ker d_0 =$ number of path components of $|K|$.

What about the exact $0$-forms? Just $0$: since there are no $(-1)$-forms, ${\rm im \hspace{3pt}} d_{-1} = 0$.

To summarize:

Theorem: $H^0(K) = {\bf R}^m$, where $m$ is the number of path components of $|K|$.
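For a quick numerical check of the theorem (an illustration with my own toy complex): a complex made of two disjoint edges has two path components, so the kernel of $d_0$ should be two-dimensional.

```python
import numpy as np

# Two disjoint edges: vertices A, B, C, D; edges AB and CD.
d0 = np.array([[-1, 1, 0, 0],   # edge AB: phi(B) - phi(A)
               [0, 0, -1, 1]])  # edge CD: phi(D) - phi(C)

dim_ker = d0.shape[1] - np.linalg.matrix_rank(d0)
print(dim_ker)  # 2 = number of path components: H^0 = R^2
```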

Now, simple connectedness. Recall this fact for continuous forms: if $R$ is simply connected, then all closed forms are exact. Hence $H_{dR}^1(R) = 0$. The same holds for discrete forms:

Theorem. If $|K|$ is simply connected, $H^1(K)=0$.

The issue is more involved for continuous differential forms. In order to detect -- cohomologically -- the hole in ${\bf R}^2-\{(0,0)\}$, we needed to prove that the form $$\theta = \frac{1}{x^2+y^2} (-ydx + xdy)$$

• is closed (easy), and
• isn't exact (hard)

(see Closedness and exactness of 1-forms). Then we know that $$H^1_{dR}\ne 0.$$

With cubical forms, we get the corresponding result, and more, a lot cheaper.


First, which of these forms are closed? They should have "horizontal difference - vertical difference" equal to $0$: $$(r-p)-(q-s)=0.$$ For example, we can choose them all equal to $1$.

However, all $1$-forms are closed, even those that don't satisfy this equation! Why? Exercise.


So, $H^1\ne 0$.

3 Computing cohomology

We have used discrete forms to detect the hole in the circle. Now, let's compute the whole thing, avoiding shortcuts (theorems), just as a computer would.

Back to our cubical complex $K$:

Let's compute $H^1(K)$.

Consider the cochain complex: $$C^0 \stackrel{d_0}{\rightarrow} C^1 \stackrel{d_1}{\rightarrow} C^2 = 0.$$ Observe that, since $d_1=0$ we have $\ker d_1 = C^1$.

Let's list the bases of these vector spaces. Of course, we start with the bases of the chains $C_k$, i.e., the cells, and consider the dual bases of the cochains $C^k$. After all, $C^k=(C_k)^*$.

Below, the leftmost column contains the cells, the top row contains the forms, and the rest are the values of these forms on these cells.

$$\begin{array}{c|cccc} C^0: & \varphi_A & \varphi_B & \varphi_C & \varphi_D \\ \hline A & 1 & 0 & 0 & 0 \\ B & 0 & 1 & 0 & 0 \\ C & 0 & 0 & 1 & 0 \\ D & 0 & 0 & 0 & 1 \end{array}$$

$$\begin{array}{c|cccc} C^1: & \psi_a & \psi_b & \psi_c & \psi_d \\ \hline a & 1 & 0 & 0 & 0 \\ b & 0 & 1 & 0 & 0 \\ c & 0 & 0 & 1 & 0 \\ d & 0 & 0 & 0 & 1 \end{array}$$

We find the formula for $d_0$, a linear operator, i.e., its $4 \times 4$ matrix.

For that, we just look at what happens to the basis elements:

$\varphi_A = [1, 0, 0, 0]^T= \begin{array}{ccc} 1 & - & 0 \\ | & & | \\ 0 & - & 0 \end{array} \Longrightarrow d_0 \varphi_A = \begin{array}{ccc} \bullet & -1 & \bullet \\ 1 & & 0 \\ \bullet & 0 & \bullet \end{array} = \psi_a-\psi_b = [1,-1,0,0]^T;$

$\varphi_B = [0, 1, 0, 0]^T= \begin{array}{ccc} 0 & - & 1 \\ | & & | \\ 0 & - & 0 \end{array} \Longrightarrow d_0 \varphi_B = \begin{array}{ccc} \bullet & 1 & \bullet \\ 0 & & 1 \\ \bullet & 0 & \bullet \end{array} = \psi_b+\psi_c = [0,1,1,0]^T;$

$\varphi_C = [0, 0, 1, 0]^T= \begin{array}{ccc} 0 & - & 0 \\ | & & | \\ 0 & - & 1 \end{array} \Longrightarrow d_0 \varphi_C = \begin{array}{ccc} \bullet & 0 & \bullet \\ 0 & & -1 \\ \bullet & 1 & \bullet \end{array} = -\psi_c+\psi_d = [0,0,-1,1]^T;$

$\varphi_D = [0, 0, 0, 1]^T= \begin{array}{ccc} 0 & - & 0 \\ | & & | \\ 1 & - & 0 \end{array} \Longrightarrow d_0 \varphi_D = \begin{array}{ccc} \bullet & 0 & \bullet \\ -1 & & 0 \\ \bullet & -1 & \bullet \end{array} = -\psi_a-\psi_d = [-1,0,0,-1]^T.$

Now, the matrix of $d_0$ is formed by the column vectors above: $$d_0 = \left( \begin{array}{cccc} 1 & 0 & 0 & -1 \\ -1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{array} \right).$$

Now, using this data, we find the kernel and the image by means of standard linear algebra.

The kernel is the solution set of the equation $d_0v=0$. It may be found by solving the corresponding system of linear equations with the coefficient matrix $d_0$. The brute-force approach is Gaussian elimination, or something more sophisticated for a computer implementation. We simply notice that the rank of the matrix is $3$, so the dimension of the kernel is $1$. Therefore, $$\dim H^0=1.$$ We have a single component!

The image is the set of all $u$ with $d_0v=u$ for some $v$. Once again, Gaussian elimination is a simple but effective approach. We simply notice that the dimension of the image is the rank of the matrix, $3$. In fact, $${\rm span} \{ d_0(\varphi_A), d_0(\varphi_B), d_0(\varphi_C), d_0(\varphi_D) \} = {\rm im \hspace{3pt}} d_0 .$$ Therefore $$\dim H^1=\dim \left( C^1 / {\rm im \hspace{3pt}} d_0 \right) = 4-3=1.$$ We have a single hole!

Note: Another way to see that the columns aren't linearly independent is to add them up (the sum is $0$), or to take the determinant of $d_0$ (it is $0$).
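The whole computation above can be replayed mechanically; a minimal sketch using numpy:

```python
import numpy as np

# The matrix of d_0 computed above, columns d_0(phi_A), ..., d_0(phi_D).
d0 = np.array([[1, 0, 0, -1],
               [-1, 1, 0, 0],
               [0, 1, -1, 0],
               [0, 0, 1, -1]])

rank = np.linalg.matrix_rank(d0)
dim_H0 = 4 - rank  # dim ker d_0, since im d_{-1} = 0
dim_H1 = 4 - rank  # dim C^1 - dim im d_0, since ker d_1 = C^1

print(rank, dim_H0, dim_H1)  # 3 1 1: one component, one hole
print(d0.sum(axis=1))        # the sum of the columns: [0 0 0 0]
```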

Exercise: Compute the cohomology of the figure eight:

4 There is more to homology...

This is just the tip of the iceberg. There is a lot more to homology and cohomology.

Let's take a quick peek, without diving in...

First, the result that holds everything together: $$|K|=|L| \Longrightarrow H_k(K) \cong H_k(L).$$ In other words, homology is independent of the cubical complex representation. This is known as Invariance of homology.

Further in this direction, homology is independent of the method of discretization. Indeed, a topological space can be triangulated, or even built from scratch using only gluing. The result is a simplicial complex or a cell complex, but the homology is the same! There is also a more direct way of dealing with the homology of topological spaces, without discretization: singular homology deals with collections of maps from cells to the space. The chain complexes are infinite dimensional, but the homology is still the same, under mild conditions.

One can also introduce homology and cohomology axiomatically, via the Eilenberg–Steenrod axioms. Dropping some of the axioms leads to "extraordinary homology theories", such as bordism.

The most immediate and the most profound algebraic extension of these concepts is to consider chains as "linear combinations" of cells with coefficients in an arbitrary ring $R$. As a result, the chains and cochains don't form vector spaces anymore but modules. These modules, $C_k(K;R)$ and $C^k(K;R)$, have the same bases and, especially in the discrete case, behave very much like those vector spaces. However, when the homology and cohomology are computed, new effects appear.

There is essentially one way for a Euclidean space to fit into another. That's why quotient vector spaces are always quite simple: $${\bf R}^n / {\bf R}^m ={\bf R}^{n-m}, \quad n\ge m.$$ However, a copy of ${\bf Z}$ can fit "non-trivially" in another copy of ${\bf Z}$ as the set of even numbers $2{\bf Z}$, and we have: $${\bf Z} / 2{\bf Z} = {\bf Z}_2 .$$ This is called "torsion" (see Classification of free abelian groups). This is the reason why we call them homology and cohomology groups.

The free parts look the same in the sense that they have the same number of generators; hence, the Betti numbers over ${\bf Z}$ and over ${\bf R}$ are the same. What's the difference then? As an example, the integer $1$-homology detects the twist of the Klein bottle ${\bf K}^2$ while the real homology does not: $$H_1({\bf K}^2;{\bf Z})={\bf Z}\oplus {\bf Z}_2, \quad H_1({\bf K}^2;{\bf R})={\bf R}.$$ Compare to the circle: $$H_1({\bf S}^1;{\bf Z})= {\bf Z}, \quad H_1({\bf S}^1;{\bf R})= {\bf R}.$$ In fact, the Universal Coefficient Theorem states that the homology and cohomology groups over any ring can be deduced from the integer homology via an algebraic procedure.

However, the converse isn't true: homology over other rings does not determine the integer homology. Therefore, the integer homology is the best! And it is the default: $H_k(X)=H_k(X;{\bf Z})$.

Note: what about calculus vs integer-valued calculus?


There is one more duality between homology and cohomology: $$H^k(M^n) \cong H_{n-k}(M^n)$$ for any compact orientable path-connected $n$-manifold $M$ without boundary.

It is called Poincaré duality. It is realized by means of a new product operation, the cap product: $$\frown : H_n \times H^m \rightarrow H_{n-m}.$$ It is in addition to the familiar wedge product, a.k.a. the cup product: $$\smile : H^n \times H^m \rightarrow H^{n+m}.$$