This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Metric complexes

### From Mathematics Is A Science

## 1 Geometry

We will learn how to introduce an arbitrary geometry into the (so far) purely topological setting of cell complexes.

It is much easier to separate topology from geometry in the framework of cell complexes (and discrete calculus) because we have built it from scratch! To begin with, it is obvious that only the way the cells are attached to each other affects the matrix of the boundary operator (and the exterior derivative):

It is then clear that the sizes or the shapes of the cells are topologically irrelevant. Note that, even when those are given, the geometry of the domain will remain unspecified unless the angles are also provided:

These two grids can be thought of as different (but homeomorphic) realizations of the same cell complex.

In calculus, we rely only on the topological and algebraic structures to develop the mathematics. There are two main exceptions. One needs the dot product and the norm for the following related concepts:

- the concavity,
- the arc-length, and
- the curvature.

We'll need to introduce more structure -- geometric in nature -- to the complex.

Of course, the norm is explicitly present in the formulas for the last two.

## 2 Inner products

In order to develop a complete discrete calculus, we need to be able to compute:

- the lengths of vectors and
- the angles between vectors.

In linear algebra, we learn how an inner product adds geometry to a vector space. We choose a more general setting.

Suppose $R$ is our ring of coefficients and suppose $V$ is a finitely generated free module over $R$.

**Definition.** An *inner product* on $V$ is a function that associates a number to each pair of vectors in $V$:
$$\langle \cdot,\cdot \rangle :\begin{cases}
V \times V \to R,\\
(u,v) \mapsto \langle u,v \rangle ,
\end{cases}$$
that satisfies these properties:

1. - $\langle v,v \rangle \geq 0$ for any $v \in V$ -- *non-degeneracy*;
   - $\langle v,v \rangle =0$ if and only if $v=0$ -- *positive definiteness*;
2. $\langle u , v \rangle = \langle v,u \rangle$ -- *symmetry* (commutativity);
3. $\langle ru,v \rangle =r \langle u ,v \rangle$ -- *homogeneity*;
4. $\langle u + u' ,v \rangle = \langle u ,v \rangle + \langle u' ,v \rangle$ -- *distributivity*.

Items 3 and 4 together make up *bilinearity*. Indeed, consider $p: V \times V \to R$ given by $p ( u , v)= \langle u , v \rangle $. Then $p$ is linear with respect to the first variable, and the second variable, separately:

- fix $v=b$, then $p(\cdot,b): V \to R$ is linear;
- fix $u=a$, then $p(a,\cdot): V \to R$ is linear.

**Exercise.** Show that this isn't the same as linearity.

It is easy to verify these axioms for the *dot product* defined on $V={\bf R}^n$ or any other module with a fixed basis. For
$$u=(u_1, ...,u_n),\ v=(v_1, ...,v_n) \in {\bf R}^n,$$
define
$$ \langle u , v \rangle :=u_1v_1 + u_2v_2 + ... + u_nv_n .$$
Moreover, the *weighted dot product* on ${\bf R}^n$, given by
$$ \langle u , v \rangle :=w_1u_1v_1 + w_2u_2v_2 + ... + w_nu_nv_n ,$$
where $w_i \in {\bf R},\ i=1, ...,n$, are positive “weights”, is also an inner product.

**Exercise.** Prove the last statement.
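A minimal numerical sketch (the weights and vectors are chosen arbitrarily) that checks the symmetry, homogeneity, and positive-definiteness of the weighted dot product on ${\bf R}^3$:

```python
# A numerical sketch (weights and vectors chosen arbitrarily) checking
# the inner-product axioms for the weighted dot product on R^3.

def weighted_dot(u, v, w):
    """Weighted dot product <u,v> = w_1 u_1 v_1 + ... + w_n u_n v_n."""
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

w = [2.0, 1.0, 0.5]                      # positive "weights"
u, v = [1.0, 2.0, 3.0], [4.0, -1.0, 2.0]

# symmetry: <u,v> = <v,u>
assert weighted_dot(u, v, w) == weighted_dot(v, u, w)

# homogeneity: <3u,v> = 3<u,v>
assert weighted_dot([3 * x for x in u], v, w) == 3 * weighted_dot(u, v, w)

# positive definiteness: <u,u> > 0 for u != 0, and <0,0> = 0
assert weighted_dot(u, u, w) > 0
assert weighted_dot([0, 0, 0], [0, 0, 0], w) == 0
```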

A module equipped with an inner product is called an *inner product space*.

When the ring $R$ is a subring of the reals ${\bf R}$, such as the integers ${\bf Z}$, we can carry out some useful computations with familiar formulas.

**Definition.** The *norm of a vector* $v$ in an inner product space $V$ is defined as
$$\lVert v \rVert:=\sqrt {\langle v,v \rangle}\in {\bf R}.$$

This number measures the length of the vector.

**Definition.** The *angle between vectors* $u,v \ne 0$ in $V$ is defined as
$$\cos \widehat{uv} := \frac{\langle u, v \rangle}{\lVert u \rVert \cdot \lVert v \rVert}.$$

The norm satisfies certain properties that we can also use as *axioms of a normed space*.

**Definition.** Given a vector space $V$, a *norm* on $V$ is a function
$$\lVert\cdot \rVert:V \to R$$
that satisfies

1. - $\lVert v \rVert \geq 0$ for all $v \in V$;
   - $\lVert v \rVert = 0$ if and only if $v=0$;
2. $\lVert rv \rVert = |r| \lVert v \rVert$ for all $v \in V,\ r\in R$;
3. $\lVert u + v \rVert \leq \lVert u \rVert + \lVert v \rVert$ for all $u, v \in V$.

**Exercise.** Prove the propositions below.

**Proposition.** A normed space is a metric space with the metric $d(x,y):=\lVert x-y \rVert$.

**Proposition.** A normed space is a topological vector space.

**Exercise.** Prove that the inner product is continuous in this metric space.

This is what we know from linear algebra.

**Theorem.** Any inner product $ \langle \cdot, \cdot \rangle $ on an $n$-dimensional module $V$ can be computed via matrix multiplication
$$ \langle u, v \rangle =u^T Q v,$$
where $Q$ is some positive definite, symmetric $n \times n$ matrix.

In particular, the dot product is represented by the identity matrix $I_n$, while the weighted dot product is represented by the diagonal matrix: $$Q=\left( \begin{array}{ccc} w_1 &... &0\\ ...&...&...\\ 0 &... &w_n \end{array} \right).$$
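A sketch (the matrix $Q$ and vectors are illustrative): the formula $\langle u,v \rangle = u^TQv$ with a diagonal $Q$ reproduces the weighted dot product, and with it the norm and the angle:

```python
import math

# A sketch (Q and the vectors are illustrative): the inner product as
# u^T Q v with a diagonal Q, together with the norm and angle formulas.

def quad(u, Q, v):
    """u^T Q v, with Q given as a list of rows."""
    return sum(u[i] * Q[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def norm(v, Q):
    return math.sqrt(quad(v, Q, v))

def angle(u, v, Q):
    return math.acos(quad(u, Q, v) / (norm(u, Q) * norm(v, Q)))

Q = [[2.0, 0.0], [0.0, 3.0]]   # a weighted dot product with weights 2, 3
u, v = [1.0, 0.0], [0.0, 1.0]

print(norm(u, Q))      # sqrt(2)
print(angle(u, v, Q))  # pi/2: the basis vectors are orthogonal here too
```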

Below we assume that $R={\bf R}$.

To learn about the eigenvalues and eigenvectors of the matrix $Q$ of the inner product, suppose $v\ne 0$ is an eigenvector of $Q$ with eigenvalue $\lambda$. Then $$0< \langle v,v \rangle =v^T Q v = v^T \lambda v = \lambda\, v^Tv,$$ and, since $v^Tv>0$, we conclude that $\lambda >0$. We have proven the following:

**Theorem.** The eigenvalues of a positive-definite matrix are real and positive.

We know from linear algebra that if the eigenvalues are real and *distinct*, the matrix is diagonalizable. As it turns out, having distinct eigenvalues isn't required for a positive-definite matrix. We state the following without proof.

**Theorem.** Any inner product $ \langle \cdot, \cdot \rangle $ on an $n$-dimensional vector space $V$ can be represented, via a choice of basis, by a diagonal $n \times n$ matrix $Q$ with positive numbers on the diagonal. In other words, every inner product is a weighted dot product, in some basis.

Also, since the determinant $\det Q$ of $Q$ is the product of the eigenvalues of $Q$, it is also positive and we have the following:

**Theorem.** Matrix $Q$ that represents an inner product is invertible.

## 3 The metric tensor

What if the geometry -- as described above -- varies from point to point in some space $X$? Then the inner product of two vectors will depend on their location.

In fact, we'll need to look at a different set of vectors at each location as the angle between two vectors is meaningless unless they have the same origin.

We already have such a construction! For each vertex $A$ in a cell complex $K$, the *tangent space* at $A$ of $K$ is a submodule of $C_1(K)$ generated by the $1$-dimensional star of $A$:
$$T_A(K):=\langle \{AB \in K\} \rangle \subset C_1(K).$$

However, the tangent bundle $T(K)$ is inadequate because this time we need to consider *two directions* at a time!

Since $V=T_A$ is a module, an inner product can be defined (so far locally). Then we have a collection of bilinear functions for each of the vertices,
$$\psi_A: T_A(K)^2\to R,\ A\in K^{(0)}.$$
Thus the setup is as follows. We have, first, the *space of locations* $X=K^{(0)}$, the set of all vertices of a cell complex $K$, and, second, to each location $A\in X$, we associate the *space of pairs of directions* $T_A(K)^2$.

We now collect these “double” tangent spaces into one.

**Definition.** The *tangent bundle of degree* $2$ of $K$ is defined to be
$$T^2(K):=\bigsqcup _{A\in K^{(0)}} \Big( \{A\} \times T_A(K)^2 \Big).$$

Then a locally defined inner product is seen as a function on this space, $$\psi =\{\psi_A\}:T^2(K) \to R,$$ bilinear on each tangent space, defined by $$\psi (A,AB,AC):=\psi_A(AB,AC).$$ In other words, we have inner products “parametrized” by location.

Furthermore, we have a function that associates to each location, i.e., a vertex $A$ in $K$, and each pair of directions at that location, i.e., edges $x=AB,y=AC$ adjacent to $A$ with $B,C\ne A$, a number $ \langle AB, AC \rangle $: $$(A,AB,AC) \mapsto \langle AB, AC \rangle (A) \in R.$$

Note: we have used this notation for $f(a)=\langle f,a \rangle$, where $f$ is a form.

**Definition.** A *metric tensor* on a cell complex $K$ is a collection of functions,
$$\psi=\{\psi_A:\ A\in X\},$$
such that each of them,
$$\psi_A: T_A(K)^2 \to R,$$
satisfies the axioms of inner product, and also:
$$ \psi_A (AB,AB) = \psi_B (BA,BA).$$

Using our notation, a metric tensor is such a function: $$ \langle \cdot,\cdot \rangle (\cdot): T^2(K) \to R,$$ that, when restricted to any tangent space $T_A(K),\ A\in K^{(0)}$, it produces a function $$ \langle \cdot,\cdot \rangle (A): T_A(K)^2 \to R,$$ which is an inner product on $T_A(K)$. The last condition, $$ \langle AB,AB \rangle (A) = \langle BA,BA \rangle (B),$$ ensures that these inner products match!

**Exercise.** State the axioms of inner product (non-degeneracy, positive-definiteness, symmetry, homogeneity, and distributivity) for the metric tensor.

The end result is similar to a combination of $1$-forms, because we have a number assigned to each edge and to each pair of adjacent edges:

- $a \mapsto \langle a, a \rangle $;
- $(a,b) \mapsto \langle a, b \rangle $.

Then the metric tensor is given by the table of its values for each vertex $A$: $$\begin{array}{c|ccccccccc} A & AB & AC & AD &...\\ \hline AB & \langle AB,AB \rangle & \langle AB,AC \rangle & \langle AB,AD \rangle &...\\ AC & \langle AC,AB \rangle & \langle AC,AC \rangle & \langle AC,AD \rangle &...\\ AD & \langle AD,AB \rangle & \langle AD,AC \rangle & \langle AD,AD \rangle &...\\ ...& ... & ... & ... &... \end{array} $$
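A metric tensor on a small complex can be stored exactly as such tables, one per vertex. Below is a hypothetical sketch for the path complex $A$-$B$-$C$ with $|AB|=1$, $|BC|=2$, and a right angle at $B$; the matching condition $\langle AB,AB \rangle (A) = \langle BA,BA \rangle (B)$ is checked explicitly:

```python
import math

# A hypothetical sketch: a metric tensor on the path complex A-B-C with
# |AB| = 1, |BC| = 2, and a right angle at B, stored as one table of
# inner products per vertex.

g = {
    "A": {("AB", "AB"): 1.0},
    "B": {("BA", "BA"): 1.0, ("BC", "BC"): 4.0,
          ("BA", "BC"): 0.0, ("BC", "BA"): 0.0},
    "C": {("CB", "CB"): 4.0},
}

def reverse(e):
    return e[::-1]          # "AB" -> "BA"

# the matching condition: <AB,AB>(A) = <BA,BA>(B) for every edge
for A, table in g.items():
    for (x, y), val in table.items():
        if x == y:
            B = x[1]        # the other endpoint of the edge x = AB
            assert g[B][(reverse(x), reverse(x))] == val

# extract the measurable data: a length and an angle
length_AB = math.sqrt(g["A"][("AB", "AB")])
angle_ABC = math.acos(g["B"][("BA", "BC")]
                      / (math.sqrt(g["B"][("BA", "BA")])
                         * math.sqrt(g["B"][("BC", "BC")])))
print(length_AB, angle_ABC)   # 1.0 and pi/2
```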

Whenever our ring of coefficients happens to be a subring of the reals ${\bf R}$, such as the integers ${\bf Z}$, this data can be used to extract more usable information:

- $a \mapsto \lVert a \rVert$, and
- $(a,b) \mapsto \widehat{ab}$,

where $$\cos\widehat{ab} = \frac{\langle a, b \rangle}{\lVert a \rVert \lVert b\rVert}.$$

All the information we need for measuring is contained in this symmetric matrix: $$\begin{array}{c|ccccccccc} A & AB & AC & AD &...\\ \hline AB & \lVert AB \rVert & \widehat{BAC} & \widehat{BAD} &...\\ AC & \widehat{BAC} & \lVert AC \rVert & \widehat{CAD} &...\\ AD & \widehat{BAD} & \widehat{CAD} & \lVert AD \rVert &...\\ ...& ... & ... & ... &... \end{array} $$

**Exercise.** In the $1$-dimensional cubical complex ${\mathbb R}$, are $01$ and $10$ parallel?

**Proposition.** From any such symmetric matrix with positive entries on the diagonal, we can (re)construct a metric tensor by the formula:
$$ \langle AB , AC \rangle =\lVert AB \rVert\lVert AC \rVert\cos\widehat{BAC}.$$

With a metric tensor, we can define the two main geometric quantities of calculus in a very easy fashion. Suppose we have a *discrete curve* $C$ in the complex $K$, i.e., a sequence of adjacent vertices and edges
$$C=\{ A_0A_1,A_1A_2, ...,A_{N-1}A_N\}\subset K.$$

**Definition.** The *arc-length* of curve $C$ is the sum of the lengths of its edges:
$$l_C:=\lVert A_0A_1 \rVert + \lVert A_1A_2 \rVert +... + \lVert A_{N-1}A_N \rVert.$$

The arc-length is an example of a *line integral* of a $1$-form $\rho$ over a $1$-chain $a$ in complex $K$ equipped with a metric tensor:
$$\begin{array}{ll}
a=&A_0A_1+A_1A_2+...+A_{N-1}A_N \Longrightarrow \\
&\displaystyle\int_a \rho:= \rho (A_0,A_0A_1)\lVert A_0A_1 \rVert+... +\rho (A_{N-1},A_{N-1}A_N)\lVert A_{N-1}A_N \rVert.
\end{array}$$
In particular, if we have a curve made of rods represented by edges in our complex while $\rho$ represents the linear density of each rod, the integral gives us the *weight* of the curve.
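A minimal sketch of these two sums, with illustrative edge lengths and density values:

```python
# A sketch with illustrative lengths and densities: the arc-length of a
# discrete curve and the line integral (the "weight" of the curve).

edges  = [("A0", "A1"), ("A1", "A2"), ("A2", "A3")]
length = {("A0", "A1"): 1.0, ("A1", "A2"): 2.0, ("A2", "A3"): 1.0}
rho    = {("A0", "A1"): 3.0, ("A1", "A2"): 1.0, ("A2", "A3"): 2.0}

# arc-length: the sum of the edge lengths
arc_length = sum(length[e] for e in edges)

# line integral of the 1-form rho over the curve
weight = sum(rho[e] * length[e] for e in edges)

print(arc_length)   # 4.0
print(weight)       # 3*1 + 1*2 + 2*1 = 7.0
```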

For the curvature, the angle $\alpha:=\widehat{A_{s-1}A_sA_{s+1}}$ might appear a natural choice to represent it, but, since we are to capture the change of the direction, it should be the “outer” angle $\beta$:

**Definition.** The *curvature* of curve $C$ at a given vertex $A_s,\ 0<s<N,$ is the value of the angle
$$\kappa _C (A_s):=\pi-\widehat{A_{s-1}A_sA_{s+1}}.$$

As before, the result depends on our choice of a metric tensor.

The following is a simple but important observation.

**Theorem.** The set of vertices $K^{(0)}$ of a cell complex $K$ equipped with a metric tensor is a metric space with its metric given by:
$$d(A,B)=\min \{l_C: C \text{ a curve from }A\text{ to } B\},$$
for any two vertices $A,B$.

**Exercise.** (a) Prove the theorem. (b) What is the relation between the topology of $|K|$ and this metric space?
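The metric $d(A,B)$ of the theorem is the shortest-path distance in the weighted graph of edges, so it can be computed, for instance, with Dijkstra's algorithm. A sketch for the cycle complex of a unit square (vertex names are illustrative):

```python
import heapq

# A sketch: the metric d(A,B) = min over curves of the arc-length,
# computed by Dijkstra's algorithm on a small weighted graph.

def d(graph, source, target):
    """Shortest arc-length from source to target; graph[v] = [(w, length), ...]."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        du, u = heapq.heappop(queue)
        if u == target:
            return du
        if du > dist.get(u, float("inf")):
            continue
        for v, l in graph[u]:
            if du + l < dist.get(v, float("inf")):
                dist[v] = du + l
                heapq.heappush(queue, (du + l, v))
    return float("inf")

# the square ABCD with unit sides
graph = {
    "A": [("B", 1.0), ("D", 1.0)],
    "B": [("A", 1.0), ("C", 1.0)],
    "C": [("B", 1.0), ("D", 1.0)],
    "D": [("A", 1.0), ("C", 1.0)],
}
print(d(graph, "A", "C"))   # 2.0: two sides of the square
```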

## 4 Metric tensors in dimension $1$

Let's consider examples of how adding a metric tensor turns a cell complex, which is a discrete representation of a topological space, into something more *rigid*.

**Example.** In the illustration below you see the following:

- a cubical complex $K$, a *topological* entity: cells and their boundaries also made of cells;
- the complex $K$ combined with a choice of metric tensor $\langle \cdot,\cdot \rangle$, a *geometric* entity: lengths and angles;
- a realization of $K$ in ${\bf R}$ as a segment;
- a realization of $K$ in ${\bf R}^2$;
- another realization of $K$ in ${\bf R}^2$.

The three realizations are homeomorphic but they differ in the way they geometrically fit into the plane, even though the last two have the same lengths and angles. $\square$

**Definition.** A *geometric realization of dimension* $n$ of a cell complex $K$ equipped with a metric tensor is a realization $|K|\subset {\bf R}^n$ of $K$ such that the Euclidean metric tensor of ${\bf R}^n$ matches this metric tensor; i.e.,
$$\langle a,b \rangle_K =\langle r(a),r(b) \rangle_{{\bf R}^n} ,$$
where $r$ is the realization.

In the last example, we see a $1$-dimensional cell complex with two different geometric realizations of dimension $2$.

There is even more room for variability if we consider realizations in ${\bf R}^3$. Indeed, even with fixed angles, one can rotate the edges freely with respect to each other. To illustrate this idea, we consider a *mechanical interpretation* of a realization. We think of $|K|$ as if it is constructed of pieces in such a way that

- each edge is a tube with one opening and a rod rigidly attached to the other end, and
- at each vertex, a rod is inserted into the next tube.

The rods are free to rotate -- independent of each other -- while still preserving the lengths and angles:

Just as we have studied intrinsic topological properties from the very beginning, we are now after only the *intrinsic geometric properties* of these objects, i.e., the ones that one can detect while staying inside the object.

**Example.** In a $1$-dimensional complex, one can think of a worm moving through a tube. The worm can only feel the distance that it has passed and the angle (but not the direction) at which its body bends:

Meanwhile, we can see the whole thing by stepping *outside*, to the second dimension.

But can the worm tell a left turn from a right one? $\square$

**Exercise.** Show that the ability to rotate these rods allows us to make the curve flat, i.e., realized in ${\bf R}^2$.

These observations underscore the fact that a metric tensor may not represent unambiguously the geometry of the realization.

**Exercise.** (a) Provide metric tensors for a triangle, a square, and a rectangle. (b) Approximate a circle.

A metric tensor can also be thought of as a pair of maps:

- $AB \mapsto \lVert AB \rVert$;
- $(AB,AC) \mapsto \widehat{BAC}$.

This means that for ${\mathbb R}^2$ there will be one (positive) number per edge and six angles per vertex. To simplify this a bit, we assume below that the opposite angles are equal. This type of local symmetry lets us work with fewer parameters and yet allows for a sufficient variety of examples.

We can illustrate metric tensors by specifying the lengths and the angles:

In the examples below, we put the metric data on the left over the complete cubical complex for the whole grid and then use it as a *blueprint* to construct its realization as a rigid structure, on the right. The $2$-cells are ignored for now.

**Example (square grid).** Here is the *standard metric cubical complex* ${\mathbb R}^2$:

$\square$

**Example (rectangular grid).** Here is ${\mathbb R}^2$ horizontally stretched:

$\square$

**Example (rhomboidal grid).** Here is ${\mathbb R}^2$ skewed horizontally:

$\square$

**Exercise.** Provide a realization of ${\mathbb R}^2$ with lengths: $1,1,1, ...$, and angles: $\pi/2,\pi/4,\pi/2, ...$.

**Example (cubical grid).** We want a hollow cube built from the same squares:

Here we don't assume anymore that the opposite angles are equal, nor do we use the whole grid. $\square$

**Exercise.** Provide the rest of the realization for the last example.

Suppose we have a discrete curve $C$ in the complex $K$:
$$C:=\{ A_0A_1,A_1A_2, ...,A_{N-1}A_N\}\subset K,$$
that is closed: $A_N=A_0$. The *total curvature* of curve $C$ is the sum of the curvatures at its vertices (with the indices taken modulo $N$, so that $A_{N+1}=A_1$):
$$\kappa _C := \displaystyle\sum_{s=1}^N\kappa _C (A_s)= \displaystyle\sum_{s=1}^N(\pi-\widehat{A_{s-1}A_sA_{s+1}}).$$

**Theorem.** Suppose a cubical or simplicial complex is realized in the plane and suppose a polygon is bounded by a closed discrete curve in this complex. Then the total curvature is equal to $2\pi$.

**Exercise.** Prove the theorem.
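The theorem can be checked numerically for the unit square realized in the plane, where each outer angle is $\pi/2$ (a sketch, with the vertices given by coordinates):

```python
import math

# A sketch: the total curvature of a closed discrete curve realized in
# the plane, computed from vertex coordinates; for the unit square each
# outer angle is pi/2, so the total is 2*pi.

def angle_at(P, Q, R):
    """The angle PQR between the edges QP and QR."""
    ux, uy = P[0] - Q[0], P[1] - Q[1]
    vx, vy = R[0] - Q[0], R[1] - Q[1]
    dot = ux * vx + uy * vy
    return math.acos(dot / (math.hypot(ux, uy) * math.hypot(vx, vy)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
n = len(square)
total = sum(math.pi - angle_at(square[s - 1], square[s], square[(s + 1) % n])
            for s in range(n))
print(total)   # ~6.2832, i.e., 2*pi
```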

In the case of a triangle, the sum of the *inner* angles is $3\pi$ minus the total curvature, i.e., $3\pi-2\pi=\pi$. We have proved the following theorem familiar from middle school.

**Corollary (Sum of Angles of Triangle).** The sum of the angles of a triangle is equal to $180$ degrees.

## 5 Metric complexes in dimension $1$

Let's consider again the $1$-dimensional complex $K$ from the last subsection. With all of those rotations possible, how do we make this construction *more* rigid?

We start by considering another *mechanical interpretation* of a geometric realization $|K|$:

- we think of edges as *rods* and vertices as *hinges*.

Then there is a way to make the angles of the hinges fixed: we can connect the centers of the adjacent rods using an extra set of rods. These new rods (connected by a new set of hinges) form a new complex, denoted by $K^{\star}$.

**Exercise.** Assume that the vertices of the new complex are placed at the centers of the edges. (a) Find the rest of the lengths. (b) Find the angles too. (c) Find a metric tensor for $K^\star$.

The cells of the two complexes are matched:

- an edge $a$ in $K$ corresponds to a vertex $a^\star$ in $K^\star$; and
- a vertex $A$ in $K$ corresponds to an edge $A^\star$ in $K^\star$.

Furthermore, the correspondence (except for the endpoints) can be reversed!

Then we have a one-to-one correspondence between the original, called *primal*, cells and the new, called *dual*, cells:
$$\begin{array}{llll}
1\text{-cell (primal)} &\longleftrightarrow 0\text{-cell (dual)} \\
0\text{-cell (primal)} &\longleftrightarrow 1\text{-cell (dual)}
\end{array}$$
The combined correspondence is stated for $k=0,1$:

*each primal $k$-cell corresponds to a dual $(1-k)$-cell, and vice versa*.

Next, we assemble these new cells into a new complex:

Given a complex $K$, the set of all of the duals of the cells of $K$ is the new complex $K^{\star}$.

**Definition.** Two $1$-dimensional cell complexes $K$ and $K^{\star}$ are called *Hodge-dual* of each other if there is a one-to-one correspondence between the $k$-cells of $K$ and $(1-k)$-cells of $K^\star$, for $k=0,1$.

**Notation:** In order to avoid confusion between the familiar duality between vectors and covectors (and chains and cochains), known as the “Hom-duality”, and the new, Hodge-duality, we will use the star $\star$ instead of the usual asterisk $*$. Both are contravariant functors.

**Exercise.** Define the boundary operator of the dual grid in terms of the boundary operator of the primal grid, in dimension $1$.

What kinds of complexes are subject to this construction?

Suppose $K$ is a graph and its vertex $A$ has three adjacent edges $AB,AC,AD$. Then, in the dual complex $K^\star$, the edge $A^\star$ is supposed to join the vertices $AB^\star,AC^\star,AD^\star$. This is impossible, and therefore, such a complex has no dual.

Then,

- both $K$ and $K^\star$ have to be *curves*.

Furthermore, once constructed, $K^{\star}$ doesn't have to be a cell complex as some of the boundary cells may be missing. The simplest example is that the dual of a single vertex complex $K=\{A\}$ is a single edge “complex” $K^{\star}=\{A^{\star}\}$. The endpoints of this edge aren't present in $K^\star$. This is a typical example of the dual of a “closed” complex being an “open” complex. Such a complex won't have the boundary operator well-defined.

What's left? The complex $K$ must be a complex representation of the infinite line or the circle: $${\mathbb R}^1 \ \text{or} \ {\mathbb S}^1 ,$$ or the disjoint union of their copies.

In other words, they are $1$-dimensional manifolds without boundary.

**Proposition.**
$$\big( {\mathbb R}^1 \big)^\star ={\mathbb R}^1,\quad \big( {\mathbb S}^1 \big)^\star ={\mathbb S}^1 .$$

**Exercise.** Prove the proposition.
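For the circle, the proposition can be checked at the level of cells. A sketch, with illustrative cell names, where each of the $n$ primal vertices produces a dual edge and each of the $n$ primal edges a dual vertex:

```python
# A sketch with illustrative cell names: the dual of a cell-complex
# circle with n vertices and n edges; k-cells of K match (1-k)-cells
# of the dual, which is again a circle.

n = 5
primal_vertices = [f"A{i}" for i in range(n)]
primal_edges    = [(f"A{i}", f"A{(i + 1) % n}") for i in range(n)]

# each primal edge a = (A_i, A_{i+1}) yields a dual vertex a*
dual_vertices = [f"{a}{b}*" for a, b in primal_edges]

# each primal vertex A_i yields a dual edge A_i* joining the duals of
# the two primal edges adjacent to A_i
dual_edges = [(dual_vertices[i - 1], dual_vertices[i]) for i in range(n)]

# the dual is again a circle with n vertices and n edges
assert len(dual_vertices) == len(primal_edges)
assert len(dual_edges) == len(primal_vertices)
```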

Up to this point, the construction of the dual $K^\star$ has been purely topological. The geometry comes into play when we add a metric tensor to the picture.

**Definition.** A *metric (cell) complex of dimension* $1$ is a pair of a $1$-dimensional cell complex $K$ and its dual $K^{\star}$, either one equipped with a metric tensor.

Note that “dimension $1$” also refers to the “ambient dimension” of the pair of complexes: $$1=\dim a+\dim a^\star.$$

**Notation:** Instead of $||AB||$, we use $|AB|$ for the length of $1$-cell $AB$. The latter notation will also be used for the “volumes” of cells of all dimensions, including vertices.

Hodge duality is a correspondence between the *cells* of the two complexes; what about the *chains*?

As we have done many times, we can extend this correspondence, $a\mapsto a^\star$, by linearity from cells to chains. Because the correspondence is a bijection, its matrix is diagonal with non-zero entries on the diagonal. The geometry of the complex is incorporated into these entries.

**Definition.** The (chain) *Hodge star operator* of a metric complex $K$ is the pair of homomorphisms on chains of complementary dimensions:
$$\begin{array}{lll}
\star:&C_k(K)\to C_{1-k}(K^\star ),& k=0,1,\\
\star:&C_k(K^\star)\to C_{1-k}(K),&k=0,1,
\end{array}$$
defined by
$$\star (a):=\frac{|a|}{|a^\star|}a^\star$$
for any cell $a$, dual or primal, under the **convention**:
$$|A|=1, \text{ for every vertex } A.$$

The choice of the coefficient of this operator is justified by the following crucial property.

**Theorem (Isometry).** The Hodge star operator preserves lengths:
$$|\star (a)|=|a|.$$

**Proposition.** The two star operators are the inverses of each other:
$$\star\star=\operatorname{Id}.$$

**Proposition.** The matrix of the Hodge star operator $\star$ is diagonal with:
$$\star_{ii}=\frac{|a_i|}{|a_i^\star|},$$
where $a_i$ is the $i$th cell of $K$.
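A sketch of these coefficients for a single primal edge $AB$ with $|AB|=2$ (the dual edge lengths are illustrative; vertices and dual vertices have volume $1$ by the convention), verifying the isometry property and $\star\star=\operatorname{Id}$ on the coefficients:

```python
# A sketch of the Hodge star coefficients |a| / |a*| for one primal edge
# AB with |AB| = 2; the dual edge lengths are illustrative, and vertices
# (primal or dual) have volume 1 by the convention.

primal_len = {"AB": 2.0, "A": 1.0, "B": 1.0}
dual_len   = {"AB*": 1.0, "A*": 0.5, "B*": 0.5}

def star_coeff(a, a_star):
    """The coefficient of the star operator on the cell a: |a| / |a*|."""
    return primal_len[a] / dual_len[a_star]

pairs = [("AB", "AB*"), ("A", "A*"), ("B", "B*")]

# isometry: |star(a)| = (|a| / |a*|) * |a*| = |a|
for a, a_star in pairs:
    assert star_coeff(a, a_star) * dual_len[a_star] == primal_len[a]

# the star on the dual has coefficient |a*| / |a|, so star(star(a)) = a
for a, a_star in pairs:
    assert star_coeff(a, a_star) * (dual_len[a_star] / primal_len[a]) == 1.0
```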

## 6 The first and second derivatives

The geometry of the complex allows us to define the first derivative of a discrete function (i.e., a $0$-form) as “the rate of change” instead of just “change” (the exterior derivative).

This is the setting of $K={\mathbb R}$:

This is how we defined the derivative $$\frac{g(B)-g(A)}{|AB|}.$$ For the general case, every edge of $K$ looks exactly like that. Then, we just recast this definition in the language of Hodge duality.

**Definition.** The *first derivative* of a primal $0$-form $g$ on a metric cell complex $K$ is a dual $0$-form given by its values at the vertices of $K^\star$:
$$g'(P):=\frac{dg(P^\star)}{|P^\star|},\ \forall P\in K^\star.$$
The form $g$ is called *differentiable* if this fraction exists in ring $R$.

**Proposition.** All forms are differentiable when we have
$$\frac{1}{|a|}\in R$$
for every cell $a$ in $K$. In particular, this is the case under either of the two conditions below:

- the ring $R$ of coefficients is a field, or
- the geometry of the complex $K$ is
*standard*: $|AB|=1$.

The formula is simply the exterior derivative -- with the extra coefficient equal to the reciprocal of the length of this edge. We recognize this coefficient from the definition of the Hodge star operator $\star :C^k(K)\to C^{1-k}(K^\star),\ k=0,1$. The operator's matrix is diagonal with $$\star _{ii}=\frac{1}{|a_i|},$$ where $a_i$ is the $i$th $k$-cell of $K$ ($|A|=1$ for $0$-cells). Therefore, we have an alternative formula for the derivative.

**Proposition.**
$$g'=\star d g.$$

Thus, *differentiation is a linear operator*:
$$\tfrac{d}{dx}=\star d:C^0(K) \to C^0(K^\star).$$
It is seen as the diagonal of the following *Hodge star diagram*:
$$
\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
\newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{rrccccccccc}
g\in &C^0(K)\ & \ra{d} & C^1(K) &\\
&_{\tfrac{d}{dx}}& \searrow & \da{\star} & \\
&& & C^0(K^\star)& \ni g'\\
\end{array}
$$
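A sketch of the diagonal $\tfrac{d}{dx}=\star d$ on a non-uniform grid (the vertex positions are illustrative): the exterior derivative takes the difference along each edge and the Hodge star divides by the edge length:

```python
# A sketch on a non-uniform 1-dimensional grid (vertex positions are
# illustrative): the first derivative g' = star(dg), where d takes the
# difference along each edge and star divides by the edge length.

x = [0.0, 1.0, 3.0, 4.0]            # vertex positions
g = [xi ** 2 for xi in x]           # a primal 0-form, g(x) = x^2

# exterior derivative: a primal 1-form, one value per edge
dg = [g[i + 1] - g[i] for i in range(len(g) - 1)]

# Hodge star: divide by |AB| to land on the dual vertices
edge_len = [x[i + 1] - x[i] for i in range(len(x) - 1)]
g_prime  = [dg[i] / edge_len[i] for i in range(len(dg))]

print(g_prime)   # [1.0, 4.0, 7.0]: the difference quotients of x^2
```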

This is how the formula works in the case of edges with equal lengths:

**Exercise.** Prove: $f'=0\Longrightarrow f=const$.

**Exercise.** Create a spreadsheet for computing the first derivative as the “composition” of the spreadsheets for the exterior derivative and the Hodge duality.

**Exercise.** Define the first derivative of a $1$-form $h$ and find its explicit formula. Hint:
$$
\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
\newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{rrccccccccc}
&& & C^1(K) \ &\ni h\\
&^{\tfrac{d}{dx}}& \swarrow & \da{\star} & \\
h'\in &C^1(K^\star)\ & \la{d}& C^0(K^\star)& \\
\end{array}
$$

The concavity of a real-valued function is determined by the sign of its second derivative, which is *the rate of change of the rate of change*.

This is the setting of $K={\mathbb R}$:

This is how we defined the second derivative $$\frac{\frac{g(C)-g(B)}{|BC|} - \frac{g(B)-g(A)}{|AB|}}{|AC|/2}.$$ For the general case, every pair of adjacent edges of $K$ looks exactly like that. Then, we just recast this definition in the language of Hodge duality.

**Definition.** The *second derivative* of a primal $0$-form $g$ on a metric cell complex $K$ is a primal $0$-form given by its values at the vertices of $K$:
$$g''(B):=\frac{\frac{dg(b)}{|b|} - \frac{dg(a)}{|a|}}{|B^\star|},\ \forall B\in K,$$
where $a,b$ are the $1$-cells adjacent to $B$.

We take this one step further. What is the meaning of the difference in the numerator? Combined with the denominator, it is the derivative of $g'$ as a dual $0$-form. The second derivative is indeed *the derivative of the derivative*:
$$g''=(g')'.$$

Let's find this formula in the Hodge star diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccc} g,g''\in &C^0(K)& \ra{d} & C^1(K) &\\ &\ua{\star} & \ne & \da{\star} & \\ &C^1(K^\star)\ & \la{d} & C^0(K^\star)& \ni g' \\ \end{array} $$ If, starting with $g$ in the left upper corner, we go around this non-commutative square, we get: $$\star d \star dg.$$

Just as $g$ itself, its second derivative $$g'':=\star d \star d g=(\star d)^2g$$ is another primal $0$-form.
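On the standard grid (all lengths equal to $1$), $g''=(\star d)^2g$ reduces to the central difference $g(A)-2g(B)+g(C)$; a sketch with $g(x)=x^2$, for which the second derivative is $2$ at every interior vertex:

```python
# A sketch on the standard grid (all lengths 1): g'' = (star d)^2 g
# reduces to the central difference g(A) - 2 g(B) + g(C); for g(x) = x^2
# the result is 2 at every interior vertex.

g = [x ** 2 for x in range(6)]                # a 0-form on vertices 0..5

# first derivative: a dual 0-form (unit lengths, so no division needed)
g1 = [g[i + 1] - g[i] for i in range(len(g) - 1)]

# the derivative of the derivative: back to a primal 0-form
g2 = [g1[i + 1] - g1[i] for i in range(len(g1) - 1)]

print(g2)   # [2, 2, 2, 2]
```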

**Definition.** A $0$-form $g$ is called *twice differentiable* if it is differentiable and so is its derivative.

**Proposition.** All forms are twice differentiable when we have $\tfrac{1}{|a|}\in R$ for every cell $a$ in $K$ or $K^\star$.

**Exercise.** Instead of a direct computation as above, implement the second derivative via the spreadsheet for the first derivative.

**Proposition.** If the complex $K$ is a directed graph with unit edges, the matrix $D$ of the second derivative is given by its elements indexed by ordered pairs of vertices $(u,v)$ in $K$,
$$D_{uv}=
\begin{cases}
-1 & \text{ if } u,v \text{ are adjacent;}\\
\deg u & \text{ if } v=u;\\
0 & \text{ otherwise,}
\end{cases}$$
with $\deg u$ the degree of node $u$.

**Exercise.** (a) Prove the formula. (b) Generalize the formula to the case of arbitrary lengths.
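A sketch of the matrix $D$ from the proposition for a path graph on four vertices with unit edges, built directly from the formula (note that each row sums to zero):

```python
# A sketch: the matrix D of the proposition for a path graph on four
# vertices with unit edges, built directly from the formula.

n = 4
edges = [(0, 1), (1, 2), (2, 3)]

deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

D = [[0] * n for _ in range(n)]
for u in range(n):
    D[u][u] = deg[u]                 # deg u on the diagonal
for u, v in edges:
    D[u][v] = D[v][u] = -1           # -1 for adjacent vertices

for row in D:
    print(row)
# [1, -1, 0, 0]
# [-1, 2, -1, 0]
# [0, -1, 2, -1]
# [0, 0, -1, 1]
```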

**Exercise.** Define the second derivative of a $1$-form $h$ and find its explicit formula.

## 7 Hodge duality of forms

What happens to the *forms* under the Hodge duality?

Since $i$-forms are defined on $i$-cells, $i=0,1$, the matching of the cells that we saw,

- primal $i$-cell $\Longleftrightarrow $ dual $(1-i)$-cell,

produces a matching of the forms,

- primal $i$-form $\Longleftrightarrow $ dual $(1-i)$-form.

For instance,

- If $f(A)=2$ for some $0$-cell $A$ in $K$, there must be somewhere a $1$-cell, say, $A^{\star}$ so we can have $f^\star(A^\star)=2$ for a new form $f^\star$ in $K^\star$.
- If $\phi(a)=3$ for some $1$-cell $a$, there must be somewhere a $0$-cell, say, $a^\star$ so we can have $\phi^\star(a^\star)=3$.

To summarize,

- if $f$ is a $0$-form, then $f^{\star}$ is a $1$-form *with the same values* on the dual cells;
- if $\phi$ is a $1$-form, then $\phi^{\star}$ is a $0$-form *with the same values* on the dual cells.

This is what is happening in dimension $1$, illustrated with a spreadsheet:


Algebraically, if $f$ is a $0$-form over the primal complex, we define a new form by $$f^{\star}(a):=f(a^{\star})$$ for any $1$-cell $a$ in the dual complex. And if $\phi$ is a $1$-form over the primal complex, we define a new form by $$\phi^{\star}(A):=\phi(A^{\star})$$ for any $0$-cell $A$ in the dual complex.

We put this together below.

**Definition.** For a metric complex $K$, the *Hodge-dual form* $\psi^\star$ of a $k$-form $\psi$ over $K$ is a $(1-k)$-form on $K^\star$ given by
$$\psi^{\star}(\sigma):=\psi(\sigma ^{\star}),$$
where $\sigma$ is a $(1-k)$-cell.

This formula will be used as the definition of the duality of forms of all dimensions.

Naturally, the forms are extended from cells to chains by linearity making these diagrams commutative: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccc} &C_{k}(K)\ & \ra{\star} & C_{1-k}(K^\star) &&&C_{k}(K)\ & \la{\star} & C_{1-k}(K^\star) &&\\ &\ \ \da{\varphi} & &\ \ \da{\varphi^\star} && &\ \ \da{\psi^\star}& &\ \ \da{\psi} && \\ &R&= & R& &&R& =& R& & \end{array} $$ Then the formula can be rewritten: $$(\star \psi)(a):=\psi(\star a ),$$ where $\star$ in the right-hand side is the chain Hodge star operator, for any primal/dual cochain $\psi$ and any dual/primal chain $a$. We will use the same notation when there is no confusion.

**Definition.** The (cochain) *Hodge star operator* of a metric complex $K$ is the pair of homomorphisms,
$$\begin{array}{lll}
\star:&C^k(K)\to C^{1-k}(K^\star ),& k=0,1,\\
\star:&C^k(K^\star)\to C^{1-k}(K),&k=0,1,
\end{array}$$
defined by
$$\star (\psi) :=\psi^{\star}.$$

In other words, the new, *cochain* Hodge star operator is the dual (Hom-dual, of course) of the old, *chain* Hodge star operator; i.e.,
$$\star=\star^*.$$

The diagonal entries of the matrix of the new operator are the reciprocals of those of the old.

**Proposition.** The matrix of the Hodge star operator is diagonal with
$$\star_{ii}=\frac{|a^\star_i|}{|a_i|},$$
where $a_i$ is the $i$th cell of $K$.

**Proof.** From the last subsection, we know that the matrix of the chain Hodge star operator,
$$\star :C_k(K)\to C_{1-k}(K^\star),$$
is diagonal with
$$\star_{ii}=\frac{|a_i|}{|a^\star_i|},$$
where $a_i$ is the $i$th cell of $K$. We also know that the matrix of the dual is the transpose of the original. Therefore, the matrix of the dual of the above operator, which is the cochain Hodge star operator,
$$\star^* :C^{1-k}(K^\star)\to C^{k}(K),$$
is given by the same formula. We now restate this conclusion for the other cochain Hodge star operator,
$$\star^* :C^k(K)\to C^{1-k}(K^\star).$$
We simply replace in the formula: $a$ with $a^\star$ and $k$ with $(1-k)$. $\blacksquare$

**Exercise.** Provide the formula for the exterior derivative of $K^\star$ in terms of that of $K$.