This site contains an online textbook as well as other mathematical content. Created and run by Peter Saveliev.

# Compact spaces

### From Intelligent Perception

## 1 Open covers and accumulation points

Previously, we proved one of the two main topological theorems of elementary calculus, the Intermediate Value Theorem: a continuous function $f:[a,b]\to {\bf R}$ attains all the values between $f(a)$ and $f(b)$. It follows from this theorem that the image of a path-connected space under a continuous map is path-connected.

The second main topological theorem from Calculus 1 is:

**Theorem (Extreme Value Theorem).** A continuous function attains its extreme values (the maximum and the minimum) on a closed bounded interval. In other words, if $f$ is continuous on $[a,b]$, then there are $c,d$ in $[a,b]$ such that
$$f(c) = M = \max _{x\in [a,b]} f(x),$$
$$f(d) = L = \min _{x\in [a,b]} f(x).$$
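Though the theorem is purely existential, it can be explored numerically: on a closed bounded interval, dense sampling approximates the extreme values whose existence the theorem guarantees. A minimal computational sketch, outside the text's own argument (the sample function and grid size are illustrative assumptions):

```python
# Approximate the extreme values of a continuous f on [a, b] by dense sampling.
# The Extreme Value Theorem guarantees the true max and min exist;
# sampling on a fine grid estimates them.
def extremes(f, a, b, n=100000):
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    values = [f(x) for x in xs]
    return max(values), min(values)

# Example: f(x) = x*(1 - x) on [0, 1] has max 1/4 (at x = 1/2) and min 0 (endpoints).
M, L = extremes(lambda x: x * (1 - x), 0, 1)
```

On an open interval such as $(0,1)$, no such guarantee exists: for $f(x)=1/x$ the sampled "maximum" grows without bound as the grid is refined.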

**Exercise.** Generalize the theorem to the case of $f:[a,b]\cup [c,d] \to {\bf R}$.

Our immediate goal is to prove the theorem while developing some profound topological tools and ideas.

In a calculus course, one is supplied with examples of how the above theorem fails if we replace the domain interval $[a,b]$ with other kinds of intervals: $(a,b), [a,b), (a,b], [a,\infty)$, etc. Here, on the surface, two separate conditions about the interval are violated: these intervals are either not closed or not bounded. We will seek a single condition on the domain of the function that would guarantee that the theorem holds. As the counterexamples show, it will have something to do with the convergence of a sequence $f(x_n)$ while $x_n$ is approaching the complement of the domain:

To understand what's going on, we will rely on the following familiar concept. A point $x$ in a topological space $X$ is called an accumulation point of a subset $A$ of $X$ if for any neighborhood $U$ of $x$, $$U \cap (A \setminus \{x\}) \neq \emptyset.$$ Then, what makes a difference is that every sequence in $[a,b]$ appears to have an accumulation point -- the condition that fails for all the other intervals.
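For instance, $0$ is an accumulation point of $A=\{1,1/2,1/3,...\}$ in ${\bf R}$: every $\epsilon$-neighborhood of $0$ meets $A\setminus\{0\}$. A quick numerical check of this, with the finite cutoff `N` an assumption forced by computation:

```python
# Check that every eps-neighborhood of x = 0 contains a point of A \ {0},
# where A = {1, 1/2, 1/3, ...}: the point 1/n works as soon as 1/n < eps.
def meets_neighborhood(eps, N=10**6):
    A = (1 / n for n in range(1, N + 1))
    return any(abs(a - 0) < eps for a in A)

checks = [meets_neighborhood(eps) for eps in (1.0, 0.1, 0.001)]
```

By contrast, no point of $(0,\infty)$ is an accumulation point of this $A$ other than the points of $A$'s closure, which is why the tail of the sequence matters.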

Since, as we know, sequences aren't an adequate tool for studying topological spaces in general, we will look at all infinite subsets.

Suppose $A$ is a subset of a topological space $X$, and let's assume that $A$ has no accumulation points in $X$.

Then, for any point $x \in X$, there is an open neighborhood $U_x$ of $x$ such that
$$U_x \cap (A \setminus \{x\}) = \emptyset.$$
What we have here is an *open cover*
$$\alpha = \{U_x:x\in X\}$$
of $X$. This cover satisfies a special property, which can be stated as a combination of two cases:

- 1. if $x\in A$ then $U_x \cap A = \{x\},$
- 2. if $x\not\in A$ then $U_x \cap A = \emptyset.$

Note: The former case means that $x$ is an isolated point of $A$.

The two cases indicate that each element of $\alpha$ contains at most one element of $A$. Therefore, their cardinalities are related by this inequality: $$\#A \le \# \alpha .$$ Let's consider a specific example that illustrates this construction.

**Example.** Let's choose
$$X:=\{0,1,1/2,1/3,...\},$$
$$A:=\{1,1/2,1/3,...\},$$
with the topology acquired from ${\bf R}$:

We keep the assumption that $A$ has no accumulation points in $X$. Then, based on the discussion above, there is an open cover $\alpha$ satisfying the above properties. Since the topology of $X$ is inherited from the Euclidean topology of ${\bf R}$, we can assume that the elements of $\alpha$ are the intersections of bounded open intervals, such as $(a-\epsilon, a+\epsilon)$, with $X$. Then, for each $n=1,2,3,...$, there is an open interval $U_n=(1/n-\epsilon _n, 1/n+\epsilon _n)$ such that
$$U_n \cap A = \{1/n\}.$$
And, in addition, there is one more interval $U=(-\epsilon , \epsilon )$ such that
$$U \cap A = \emptyset.$$
Of course, we realize that this is impossible:
$$n > [1/\epsilon]+1 \Rightarrow 1/n \in U,$$
and, therefore, the assumption was wrong. However, we will ignore this obvious conclusion and explore instead what this means for the open cover $\alpha$. We just discovered that this *infinite* cover
$$\alpha :=\{U,U_1,U_2,U_3,...\}$$
has a *finite* subcover
$$\alpha' :=\{U,U_1,U_2,U_3,...,U_n\}.$$
Since there is at most one point of $A$ per element of $\alpha$, this new collection can't cover the whole $A$ because $A$ is infinite. A contradiction.

$\square$
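In this example the geometry can be checked directly: once $U=(-\epsilon,\epsilon)$ is fixed, only finitely many of the $U_n$ are actually needed, because every point $1/k$ with $k>1/\epsilon$ already falls into $U$. A computational sketch, where the specific choices $\epsilon=0.1$ and $\epsilon_k=1/(4k^2)$ are assumptions made for illustration:

```python
# The finite subcover {U, U_1, ..., U_N} of X = {0, 1, 1/2, 1/3, ...}:
# U = (-eps, eps) catches 0 and the tail 1/k for k > 1/eps;
# finitely many intervals U_k catch the remaining points.
eps = 0.1
N = int(1 / eps) + 1                       # indices k <= N get their own interval U_k

def covered(x):
    if -eps < x < eps:                     # x lies in U
        return True
    return any(abs(x - 1/k) < 1/(4*k*k)    # x lies in U_k = (1/k - eps_k, 1/k + eps_k);
               for k in range(1, N + 1))   # eps_k = 1/(4k^2) ensures U_k meets A only at 1/k

X = [0.0] + [1/n for n in range(1, 1000)]
all_covered = all(covered(x) for x in X)
```

The choice $\epsilon_k=1/(4k^2)$ works because the gap between $1/k$ and $1/(k+1)$ is $1/(k(k+1))>1/(4k^2)$, so each $U_k$ indeed contains exactly one point of $A$.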

The crucial step is the ability to choose a finite *subcover*. This motivates the following definition.

**Definition.** A topological space $X$ is called *compact* if every open cover contains a finite subcover.

We can now continue with the general argument and prove the following important theorem.

**Theorem (Bolzano-Weierstrass Theorem).** In a compact space, every infinite subset has an accumulation point.

**Proof.** As before, we have a compact topological space $X$ and a subset $A$. We have already constructed an open cover
$$\alpha = \{U_x:\ x\in X\}$$
such that

- there is at most one element of $A$ in an element of $\alpha$.

If we now choose a finite subcover $$\alpha '= \{U_{x_i}:\ i=1,2,...,n, x_i \in X\},$$ it will satisfy the same property. But this property implies that $$A\subset \{x_i:\ i=1,2,...,n\}.$$ This set is finite while $A$ is infinite, a contradiction. $\blacksquare$

The following version of the Bolzano-Weierstrass Theorem is also very important.

**Corollary.** In a compact space, every sequence has a convergent subsequence.

**Exercise.** Prove the corollary for (a) metric spaces, and (b) all topological spaces. Hint: think of the sequence as a subset $A$ and consider two cases: $A$ is infinite and $A$ is finite.

## 2 Definitions

There are two approaches to compactness.

**Definition.** For any topological space $X$, a collection of open sets $\alpha$ is called an *open cover* if $\cup \alpha = X$. Then, $X$ is called a *compact topological space* if every open cover has a finite subcover.

**Exercise.** Provide a definition of compactness in terms of closed sets.

Alternatively, we define compactness of subsets.

**Definition.** For any subset $X$ of a topological space $Y$, a collection $\alpha$ of open (in $Y$) sets is called an *open cover* if $X \subset \cup \alpha$. Then, $X$ is called a *compact subset* if every open cover has a finite subcover.

These definitions are equivalent in the following sense.

**Proposition.** If $X$ is a compact subset of space $Y$, then $X$, equipped with the relative topology, is a compact space.

**Exercise.** Prove the proposition.

The nature of the first definition -- given entirely in terms of open sets -- tells us that compactness is a topological invariant: it is preserved by homeomorphisms. Moreover, the image of a compact space under a continuous function is compact.

**Theorem.** Compactness is preserved by continuous functions: if $f:X\to Y$ is a map and $X$ is compact then $f(X)$ is also compact.

**Proof.** Suppose $\alpha$ is an open cover of $f(X)$. We need to find a finite subcover. The idea is to use $f$ to pull the elements of $\alpha$ back to $X$, extract a finite subcollection there, and then push what's left forward to $Y$. The elements of $\alpha$ are open sets in $Y$. Since $f$ is continuous, for each $U\in \alpha$, its preimage $f^{-1}(U)$ is open in $X$. Therefore,
$$\beta := \{f^{-1}(U):\ U\in \alpha\}$$
is an open cover of $X$. Since $X$ is compact, $\beta$ has a finite subcover $\beta '$. Then
$$\alpha ' := \{U\in \alpha:\ f^{-1}(U)\in \beta '\}$$
is a finite subcover of $\alpha$. $\blacksquare$
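The pull-back/push-forward step of this proof can be carried out mechanically when the spaces are finite; here is a brute-force sketch in which the toy set $X$, the map `f`, and the cover `alpha` are all illustrative assumptions:

```python
# Toy version of the proof: pull a cover of f(X) back to X, pick a finite
# subcover there (X is finite, hence compact), and push the chosen sets forward.
X = {1, 2, 3, 4}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}                          # a map X -> Y
alpha = [frozenset({'a'}), frozenset({'a', 'b'}), frozenset({'b', 'c'})]  # covers f(X)

def preimage(U):
    return frozenset(x for x in X if f[x] in U)

beta = {preimage(U): U for U in alpha}        # the pulled-back cover of X
# extract a subcover of beta greedily (any extraction rule would do)
uncovered, alpha_prime = set(X), []
for V, U in beta.items():
    if uncovered & V:
        alpha_prime.append(U)                 # push the chosen set back to Y
        uncovered -= V

image = set(f.values())
covers_image = image <= set().union(*alpha_prime)
```

The finiteness of `alpha_prime` comes for free here; in the actual proof it is exactly what compactness of $X$ supplies.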

The proof is illustrated below:

**Proposition.** (1) All finite topological spaces are compact. (2) All anti-discrete spaces are compact.

**Exercise.** Prove the proposition.

The following result significantly simplifies the task of proving compactness.

**Theorem.** Suppose $X$ is a topological space. Then $X$ is compact if and only if there is a basis $\beta$ of its topology so that every cover of $X$ by members of $\beta$ has a finite subcover.

**Proof.** [$\Rightarrow$] A cover of $X$ by members of $\beta$ is, in particular, an open cover of $X$ and, therefore, has a finite subcover. [$\Leftarrow$] Suppose $\gamma$ is an open cover of $X$. Then, we represent every element of $\gamma$ as the union of elements of $\beta$. This gives us a cover of $X$ by elements of $\beta$. Take its finite subcover, $\alpha$. Finally, for every element of $\alpha$ choose a single element of $\gamma$ that contains it. These sets form a finite subcover of $\gamma$. $\blacksquare$

We would like to apply the idea of compactness, and the results we have proven, to calculus and, especially, the Extreme Value Theorem. For that, we need to take a good look at *intervals*.

## 3 Compactness of intervals

First, the real line ${\bf R}=(-\infty,\infty)$ is not compact: it suffices to consider the cover by all open intervals of a fixed length $\epsilon$; no finite collection of them can cover the unbounded line.

A similar argument proves that no ray, $(a,\infty)$ or $[a,\infty)$, is compact.

Next, let's take a *bounded* open interval $I=(a,b)$ in ${\bf R}$; for concreteness, take $I=(0,1)$. Let's choose the cover that consists of all open $\epsilon$-intervals:
$$\alpha := \{(p,q) \cap I :\ q-p= \epsilon \}.$$
Then, finding a finite subcover is easy:
$$\alpha ' := \left\{(0,\epsilon),\left(\tfrac{1}{2}\epsilon,\tfrac{3}{2}\epsilon\right), (\epsilon,2\epsilon),... \right\}.$$
There will be $[2/ \epsilon]+1$ intervals.
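This count can be verified numerically for $(a,b)=(0,1)$; in the sketch below, the value $\epsilon=1/8$ is an assumption chosen so that the arithmetic is exact in floating point:

```python
# Overlapping eps-intervals with step eps/2 cover (0, 1); count them.
eps = 0.125                                          # 1/8, exact in binary
starts = [k * eps / 2 for k in range(int(2 / eps) + 1)]
intervals = [(s, s + eps) for s in starts]           # each of length eps

# sample points of (0, 1) and check each lies in some open interval
points = [k / 10000 for k in range(1, 10000)]
covered = all(any(p < x < q for (p, q) in intervals) for x in points)
count = len(intervals)                               # should equal [2/eps] + 1
```
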

However, the following cover of $(a,b)$ (an expanding sequence of intervals) does not have a finite subcover:

We choose: $$\alpha := \{(p_n,q_n) :\ n=1,2,3,... \},$$ with $$p_n =a+1/n,\ q_n =b-1/n.$$
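One can check directly that any finite subfamily of this cover fails near the endpoints; a sketch for the case $(a,b)=(0,1)$, where the particular finite subfamily tested is an arbitrary assumption:

```python
# The expanding cover (1/n, 1 - 1/n), n = 1, 2, ..., of (0, 1):
# any finite subfamily lies inside (1/N, 1 - 1/N) for its largest index N,
# so the point 1/(N+1) of (0, 1) is left uncovered.
def uncovered_witness(indices):
    N = max(indices)
    x = 1 / (N + 1)            # below every left endpoint used
    in_some = any(1/n < x < 1 - 1/n for n in indices)
    return x, in_some

x, in_some = uncovered_witness([2, 5, 17, 100])
```

The same witness construction works for any finite list of indices, which is exactly the statement that no finite subcover exists.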

The same argument applies to $[a,b)$ and $(a,b]$.

Thus, *open and half-open intervals aren't compact*!

But closed intervals are! Why is this important?

We already know that the image of an interval under a continuous function is an interval.

**Exercise.** Prove this statement. Hint: use path-connectedness.

Now we also discover that the image of a *closed bounded* interval is also a closed bounded interval:
$$f \left( [a,b] \right)=[c,d].$$
The Extreme Value Theorem will follow from this simple formula.

**Exercise.** Prove this statement.

In turn, it implies that the sup-norm is well-defined.

**Theorem (Heine-Borel Theorem).** Every interval $[a,b]$ is compact.

**Proof.** The proof is by contradiction. Suppose we are given an open cover $\alpha$ of $[a,b]$ that doesn't have a finite subcover. The idea of the proof is that since $[a,b]$ is “bad”, then so is one of its halves, and a half of that half too, etc., until only a single point is left, which can't be “bad”:

The plan is to build a “nested” sequence of intervals $$I=[a,b] \supset I_1=[a_1,b_1] \supset I_2=[a_2,b_2] \supset ...,$$ so that they have only one point in common.

We start with the following two observations:

- 1. $\alpha$ is an open cover of $I=[a,b]$ as well as of each of its subsets, in particular, of $I_1,I_2,...$;
- 2. if $\alpha$ has no finite subcover for an interval $I_n$, then $\alpha$ has no finite subcover for at least one of its halves.

These two facts allow us to construct the sequence inductively.

Consider the halves of the interval $I=[a,b]$:
$$\left[ a,\frac{a+b}{2} \right],\ \left[ \frac{a+b}{2},b \right].$$
For at least one of them, there is no finite subcover for $\alpha$. Call this interval $I_1:=[a_1,b_1]$. Next, we consider the halves of this new interval:
$$\left[ a_1,\frac{a_1+b_1}{2} \right],\ \left[ \frac{a_1+b_1}{2},b_1 \right].$$
For at least one of them, there is no finite subcover for $\alpha$. Call this interval $I_2:=[a_2,b_2]$.
We continue on with this process and the result is a sequence of intervals
$$I=[a,b] \supset I_1=[a_1,b_1] \supset I_2=[a_2,b_2] \supset ...,$$
that satisfies these two properties:
$$ a\le a_1 \le ... \le a_n \le ... \le b_n \le ... \le b_1 \le b,$$
and
$$|b_{n+1}-a_{n+1}|=\frac{1}{2} |b_n-a_n|,\ \text{ so that }\ |b_n-a_n| = \frac{1}{2^n} |b-a|.$$
Now we invoke the *Completeness Property of Real Numbers*: every bounded increasing (decreasing) sequence converges to its least upper bound (the greatest lower bound). Therefore, the two sequences converge:
$$\lim _{n\to \infty} a_n=A, \lim _{n\to \infty} b_n=B.$$
The second property above implies that
$$\lim _{n\to \infty} |a_n-b_n|=0.$$
We then conclude that $A=B$.

Now, this new single-point set isn't “bad”: $\alpha$ has a finite subcover for $\{A\}$. In fact, the subcover needs only one element: $A\in U\in\alpha$. But since $a_n \to A$ and $b_n\to A$, we have $a_n\in U$ and $b_n\in U$ for large enough $n$. Then, $I_n=[a_n,b_n] \subset U$; it's covered by this, finite, subcover $\{U\}$ of $\alpha$! This contradicts our assumption.

$\blacksquare$
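The nested-interval mechanism is easy to simulate: repeatedly keeping one half shrinks the length geometrically, and the endpoints squeeze onto a common limit. In the sketch below, the rule for choosing a half ("the one containing a fixed target point $t$") is an assumption standing in for "the half with no finite subcover":

```python
# Simulate the bisection in the Heine-Borel proof: halve [a, b] repeatedly,
# keeping the half that contains a fixed target t (a stand-in for the "bad" half).
a, b, t = 0.0, 1.0, 0.3
lengths = []
for n in range(20):
    m = (a + b) / 2
    a, b = (a, m) if t <= m else (m, b)    # keep the half containing t
    lengths.append(b - a)

# after n steps the length is |b - a| / 2^n, and both endpoints approach t
final_length = b - a
```
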

**Exercise.** What tacit assumption about $U$ did we make? Fix the proof.

## 4 Compactness in various dimensions

A generalization of the Heine-Borel Theorem is as follows.

**Theorem.** A closed bounded subset of a Euclidean space is compact.

It also follows that *all cells are compact*!

**Exercise.** Show that even though the combination of closedness and boundedness (of a subset of a Euclidean space) is a topological property, neither of the two alone is.

Before we address the general case, let's consider *rectangles* in the plane. Can we generalize the proof in the last subsection? The picture below suggests that we can:

**Exercise.** Prove that a closed rectangle in the plane is compact. Hint: Instead of nested intervals consider nested rectangles.

Later we will show that compactness is preserved under *products*. That will prove compactness of the closed squares, cubes,... , $n$-cubes. Then the above theorem easily follows if we can transition to subsets. Of course, not all subsets of a compact space are compact, but the closed ones are.

**Theorem.** A closed subset of a compact space is compact.

**Proof.** Suppose $Y$ is compact and $X \subset Y$ is closed. Suppose $\alpha$ is an open cover of $X$. We want to find a subcover of $\alpha$ using the compactness of $Y$. The only problem is that $\alpha$ doesn't cover the whole $Y$!

Then, the idea is to follow these steps:

- 1. augment $\alpha$ with extra open sets to build an open cover $\beta$ of the whole $Y$, then
- 2. find its finite subcover $\beta '$, and finally
- 3. extract from $\beta '$ a subcover of $X$ by dropping the extra elements we added: $\alpha '= \beta ' \cap \alpha$.

The first step can be carried out in a number of ways; however, we need to make sure that step 3 gives us a collection that still covers $X$. This outcome is impossible to ensure unless we choose to add only the elements that don't intersect $X$. With that idea, the choice is easy: we need only one set and it's the complement of $X$. This set is open because $X$ is closed. So $$\beta := \alpha \cup \{Y \setminus X \}.$$ Since $Y$ is compact, $\beta$ has a finite subcover, $\beta'$. Finally, we choose: $$\alpha' := \beta' \setminus \{Y \setminus X \}.$$ $\blacksquare$

The proof is illustrated below:

Then, any closed bounded subset is a subset of a closed cube in ${\bf R}^n$ and, therefore, is compact.

This isn't true for *infinite dimensional spaces*. As a counterexample, we consider a closed ball in the space of functions with sup-norm:

Here $$\bar{B}(0,\epsilon)=\{f\in C[a,b]:\ \max_{x\in [a,b]}|f(x)| \le \epsilon\}.$$ Now, consider the following two sequences of continuous functions in $\bar{B}(0,1)$:

**Exercise.** Provide formulas for the functions. Show that the first sequence diverges and the second converges, point-wise. However, the distance to the limit, the zero function, remains the same. Conclude that there is no uniform convergence and that $\bar{B}(0,\epsilon)$ isn't compact in $C[a,b]$. What about $C_p[a,b]$?
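One standard family with the features described, offered as an assumption since the text's pictured sequences are not reproduced here: the tent functions $f_n(x)=\max(0,\,1-n|x-1/n|)$ on $[0,1]$. Each lies in $\bar{B}(0,1)$ and $f_n(x)\to 0$ for every fixed $x$, yet the sup-norm distance to the zero function stays equal to $1$:

```python
# Tent functions f_n(x) = max(0, 1 - n|x - 1/n|): pointwise limit 0 on [0, 1],
# but the sup-norm distance to the zero function stays 1 (peak f_n(1/n) = 1).
def f(n, x):
    return max(0.0, 1 - n * abs(x - 1/n))

grid = [k / 1000 for k in range(1001)]
sup_norms = [max(f(n, x) for x in grid) for n in (10, 100, 1000)]
pointwise_at_half = [f(n, 0.5) for n in (10, 100, 1000)]
```

So no subsequence can converge in the sup-norm, even though the pointwise limit exists, which is the failure of compactness in $C[a,b]$.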

**Exercise.** Provide a similar argument to show that the sequence:
$$(1,0,0,0, ...), (0,1,0,0, ...), (0,0,1,0, ... ), ...,$$
has no convergent subsequence. Hint: What's the topology?

**Exercise.** Prove the Arzela-Ascoli Theorem: a closed bounded subset $X$ of $C[a,b]$ is compact if, in addition, it is *equicontinuous*: for each $c\in [a,b]$ and every $\epsilon >0$, there is a $\delta >0$ such that
$$|x-c| < \delta \Rightarrow |f(x)-f(c)| < \epsilon,\ \forall f\in X.$$
Hint: To understand the concept, limit the set to differentiable functions with the derivatives between, say, $-1$ and $1$.

**Exercise.** Suppose $X$ is a compact space. Define the analogs $C(X)$ and $C_p(X)$ of the spaces of real-valued continuous functions $C[a,b]$ and $C_p[a,b]$ and examine the issue of compactness.

**Definition.** A space is called *locally compact* if every point of it has a neighborhood whose closure is compact.

Of course, ${\bf R}^n$ is locally compact.

**Exercise.** Is either discrete or anti-discrete topology locally compact?

**Proposition.** $C[a,b]$ is not locally compact.

**Exercise.** Prove the proposition.