
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Sequences and their limits


## 1 Limits of sequences: long-term trends

Example (falling ball). We watch a ping-pong ball fall and record -- at equal intervals -- how high it is. The result is an ever-expanding string of numbers: a sequence. If the frames of the video are combined into one image, it will look something like this:

We have a list: $$36,\ 35,\ 32, \ 27,\ 20,\ 11,\ 0,\ ...$$ We bring them back together in one rectangular plot so that the location varies vertically while the time progresses horizontally:

The plot is called the graph of the sequence.

As far as the data is concerned, we have a list of pairs, time and location, arranged in a table: $$\begin{array}{r|ll} \text{moment}&\text{height}\\ \hline 1&36\\ 2&35\\ 3&32\\ 4&27\\ 5&20\\ 6&11\\ 7&0\\ ...&... \end{array}\ \text{ or }\ \begin{array}{l|ll} \text{moment:}&1&2&3&4&5&6&7&...\\ \hline \text{height:}&36&35&32&27&20&11&0&... \end{array}.$$

To represent a sequence algebraically, we first give it a name, say, $a$, and then assign a special variation of this name to each term of the sequence: $$\begin{array}{ll|ll} \text{index:}&n&1&2&3&4&5&6&7&...\\ \hline \text{term:}&a_n&a_1&a_2&a_3&a_4&a_5&a_6&a_7&... \end{array}$$ The subscript is called the index; it indicates the place of the term within the sequence. We say “$a$ sub $1$”, “$a$ sub $2$,” etc.

In our example, we name the sequence $h$ for “height”. Then the above table takes this form: $$\begin{array}{l|ll} \text{moment:}&1&2&3&4&5&6&7&...\\ \hline \text{height:}&h_1&h_2&h_3&h_4&h_5&h_6&h_7&...\\ &||&||&||&||&||&||&||&...\\ \text{height:}&36&35&32&27&20&11&0&... \end{array}$$ When abbreviated, it takes the form of this list: $$h_1=36,\ h_2=35,\ h_3=32, \ h_4=27,\ h_5=20,\ h_6=11,\ h_7=0,\ ....$$ $\square$

So, we use the following notation: $$a_1=1,\ a_2=1/2,\ a_3=1/3,\ a_4=1/4,\ ...,$$ where $a$ is the name of the sequence and adding a subscript indicates which term of the sequence we are facing.

We will study infinite sequences of numbers and especially their trends. The idea is simple: $$\begin{array}{llllllll} \text{sequence }&1&1/2&1/3&1/4&1/5&...&\text{ trends toward } 0;\\ \text{sequence }&.9&.99&.999&.9999&.99999&...&\text{ trends toward } 1;\\ \text{sequence }&1&2&3&4&5&...&\text{ trends toward } \infty;\\ \text{sequence }&0&1&0&1&0&...&\text{ has no trend}. \end{array}$$ In other words, an infinite sequence of numbers will sometimes be “accumulating” around a single number. The gap between the bouncing ball and the ground becomes invisible!

Even though every function $y=f(x)$ with an appropriate domain creates a sequence, $a_n=f(n)$, the converse isn't true. This discrepancy serves our purpose: the primary, if not the only, reason for studying sequences is to understand (their) trends, called limits.

A function defined on a ray in the set of integers, $\{p,p+1,...\}$, is called an infinite sequence, or simply sequence, typically given by its formula: $$a_n=1/n:\ n=1,2,3,...$$ For example, these are the possibilities: $$a_n=1/n,\ \{1/n\},\ \{a_n=1/n:\ n=1,2,3,...\}.$$ The last option is used when we treat the sequence as a set.

We could visualize sequences as the graphs of functions:

However, we take a different approach; we will apply, at a later time, what we have learned about sequences to our study of functions. This is why our visualizations of graphs of sequences will use the re-named Cartesian coordinate system:

• the horizontal axis is the $n$-axis, and
• the vertical axis is the $x$-axis.

This approach allows us a more compact way to visualize sequences: as sequences of locations on the $x$-axis visited over an infinite period of time. The long-term trend becomes clear when the points stop visibly “moving”.

Example (reciprocals). The go-to example is the sequence of the reciprocals: $$x_n=\frac{1}{n}.$$ It tends to $0$.

This fact is easy to confirm numerically: $$x_n=1.000,\ 0.500,\ 0.333,\ 0.250,\ 0.200,\ 0.167,\ 0.143,\ 0.125,\ 0.111,\ ...$$ $\square$
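Such a numerical check is easy to script; here is a quick sketch in Python (the cutoff $n>1/\varepsilon$ is our own choice, anticipating the definition of limit below):

```python
# First few terms of x_n = 1/n, rounded as in the list above.
terms = [round(1 / n, 3) for n in range(1, 10)]
print(terms)   # [1.0, 0.5, 0.333, 0.25, 0.2, 0.167, 0.143, 0.125, 0.111]

# Past n = 1/eps, every term is within eps of the limit 0.
eps = 0.01
print(all(1 / n < eps for n in range(101, 1000)))   # True
```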

Example (plotting). However, numerical analysis alone can't be used for discovering the value of the limit. Plotting the first $1000$ terms of the sequence $x_n=n^{-.01}$ fails to suggest the true value of the limit:

In fact, it is zero. $\square$
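To see how misleading the plot is, one can evaluate the terms directly (a sketch; the sample indices are our own choice):

```python
# x_n = n**(-0.01) tends to 0, but does so extremely slowly.
x = lambda n: n ** (-0.01)
print(x(1000))        # still above 0.93 after 1000 terms
print(x(10 ** 300))   # about 0.001 -- only astronomically far out
```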

Example (decimals). Sequences are ubiquitous. For example, given a real number, we can easily construct a sequence that tends to that number -- via its decimal approximations. For example, $$x_n=0.3,\ 0.33,\ 0.333,\ 0.3333,\ ... \text{ tends to } 1 / 3 .$$

$\square$

Example (alternating). The values can also approach the ultimate destination from both sides, such as $$x_n=(-1)^n\frac{1}{n}.$$

$\square$

The notation for limits is the following: $$a_n \to a\ \text{ as }\ n\to \infty,$$ as well as $$\lim_{n\to\infty} a_n=a.$$ We will include the possibility of infinite limits: $$a_n \to \infty\ \text{ as }\ n\to \infty,$$ and $$\lim_{n\to\infty} a_n=\infty.$$

Exercise. What can you say about the limit of an integer-valued sequence?

Example (Zeno's paradox). Consider a simple scenario: as you walk toward a wall, you can never reach it because once you've covered half the distance, there is still distance left, etc.

We mark these steps and do observe that there are infinitely many of them to be taken... $\square$

## 2 The definition of limit

Calculus, for a large part, is the study of how to properly handle infinity.

Example. Let's examine this seemingly legitimate computation: $$\begin{array}{cccccc} 0& \overset{\text{?}}{=\! =\! =} &0&&+0&&+0&&+0&&+...\\ & \overset{\text{?}}{=\! =\! =} &(1&-1)&+(1&-1)&+(1&-1)&+(1&-1)&+...\\ & \overset{\text{?}}{=\! =\! =} &1&-1&+1&-1&+1&-1&+1&-1&+...\\ & \overset{\text{?}}{=\! =\! =} &1&+(-1&+1)&+(-1&+1)&+(-1&+1)&+(-1&+1)&...\\ & \overset{\text{?}}{=\! =\! =} &1&+0&&+0&&+0&&+0&&+...\\ & \overset{\text{?}}{=\! =\! =} &1. \end{array}$$ That's impossible! How did this happen? One can say that we got something from nothing (the numbers refer to the amount of soil taken out):

The problem is that we casually carried out infinitely many algebraic operations. $\square$

Exercise. Which of the “$=$” signs above is incorrect?

Thus, when facing infinity, algebra may fail. But it doesn't have to... when the sequence has a limit! A limit is a number and the sequence approximates this number: $$\begin{array}{llllllll} \text{ sequence }&1&1/2&1/3&1/4&1/5&...&\text{ approximates }0;\\ \text{ sequence }&.9&.99&.999&.9999&.99999&...&\text{ approximates }1;\\ \text{ sequence }&1.&1.1&1.01&1.001&1.0001&...&\text{ approximates }1;\\ \text{ sequence }&3.&3.1&3.14&3.141&3.1415&...&\text{ approximates }\pi;\\ \text{ sequence }&1&2&3&4&5&...&\text{ approaches } \infty;\\ \text{ sequence }&0&1&0&1&0&...&\text{ doesn't approximate any number}. \end{array}$$ In other words, we can substitute the sequence for the number it approximates and do it with any degree of accuracy!

Now, let's find the exact meaning of limit.

Geometrically, we see how the sequence accumulates toward a particular horizontal line:

At the end, the dots can't be distinguished from the $x$-axis.

Example. We have a more concise illustration if we concentrate on the $y$-axis only:

We can see that after sufficiently many steps, the terms of the sequence, $a_n$, become indistinguishable from the limit, $a$. It seems that, say, the $10$th dot has merged with $a$. $\square$

Example. Let's now look at this “process” numerically. What does it mean that $a_n=1/n^2$ approaches $a=0$?

First, how long does it take to get within $.1$ from $a$? Look up in the table of values: it takes $4$ steps.

Second, how long does it take to get within $.01$ from $a$? It takes $11$ steps.

Third, how long does it take to get within $.001$ from $a$? It takes $32$ steps.

And so on. No matter how small a number I pick, eventually $a_n$ will be that close to its limit. $\square$
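These step counts can be found by brute force (a sketch in Python; the search loop is our own, standing in for the text's table of values):

```python
# For a_n = 1/n**2 and limit a = 0: first n with |a_n - a| < eps.
def first_within(eps):
    n = 1
    while 1 / n ** 2 >= eps:
        n += 1
    return n

for eps in (0.1, 0.01, 0.001):
    print(eps, first_within(eps))   # 4, 11, 32 -- the counts above
```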

Example. Another interpretation of this analysis is in terms of accuracy. We understand the idea that $a_n=1/n^2$ approaches $a=0$ as: “the sequence approximates $0$”.

First, what if we need the accuracy to be $.1$? Look up in the table of values: we need to compute $4$ terms of the sequence or more.

Second, what if we need the accuracy to be $.01$? At least $11$ terms.

Third, what if we need the accuracy to be $.001$? At least $32$.

And so on. No matter how much accuracy I need, there is a way to accommodate this requirement by getting farther and farther into the sequence $a_n$. $\square$

Unfortunately, not all sequences are as simple as that. They may approach their respective limits in a number of ways, as we have seen. They don't have to be monotone:

They might approach the limit from above and below at the same time:

And so on... And then there are sequences with no limits. We need a more general approach.

We re-write what we want to say about the meaning of the limits in progressively more and more precise terms. $$\begin{array}{l|ll} n&y=a_n\\ \hline \text{As } n\to \infty, & \text{we have } y\to a.\\ \text{As } n\text{ approaches } \infty, & y\text{ approaches } a. \\ \text{As } n \text{ is getting larger and larger}, & \text{the distance from }y \text{ to } a \text{ approaches } 0. \\ \text{By making } n \text{ larger and larger},& \text{we make } |y-a| \text{ as small as needed}.\\ \text{By making } n \text{ larger than some } N>0 ,& \text{we make } |y-a| \text{ smaller than any given } \varepsilon>0. \end{array}$$

The absolute values above are the distances from $a_n$ to $a$, as shown below:

Algebraically, we see that for every measure of “closeness”, call it $\varepsilon$, the terms of the sequence eventually become that close to the limit. In other words, $\varepsilon$ is the degree of required accuracy.

Example. Let's prove this statement for the sequence from the last example, $a_n=1/n^2$. Let's imagine that any degree of accuracy $\varepsilon>0$ that needs to be accommodated is supplied ahead of time. Let's find such an $n$ that $a_n$ is within $\varepsilon$ from $a=0$. In other words, we need this inequality to be satisfied: $$|a_n-a|=\left| \frac{1}{n^2}-0 \right|=\frac{1}{n^2}<\varepsilon.$$ We solve it: $$n>\frac{1}{\sqrt{\varepsilon}}=N.$$ This proves that the requirement can be satisfied. Then, for any such $n$ we have $|a_n-a|<\varepsilon$, as required.

The result gives us the same answers for the three particular choices of $\varepsilon =.1,\ .01,\ .001$ from the last example, as well as for any other... For example, let's pick $\varepsilon=.0001$, what is $N$? By the formula, it is $$N= \frac{1}{\sqrt{.0001}} =\frac{1}{\left( 10^{-4}\right)^{1/2}} = \frac{1}{ 10^{-2} } =10^2=100.$$ $\square$
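The formula $N=1/\sqrt{\varepsilon}$ can be verified numerically (a sketch; the range of tested $n$ is our own choice):

```python
import math

# N = 1/sqrt(eps): past this index, |1/n**2 - 0| < eps.
def cutoff(eps):
    return 1 / math.sqrt(eps)

eps = 0.0001
print(cutoff(eps))   # approximately 100, as computed above
print(all(1 / n ** 2 < eps for n in range(101, 1100)))   # True for every tested n > 100
```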

Exercise. Carry out such an analysis for $a_n=1/\sqrt{n}$.

Definition. We call number $a$ the limit of the sequence $a_n$ if the following condition holds:

• for each real number $\varepsilon > 0$, there exists a number $N$ such that, for every natural number $n > N$, we have

$$|a_n - a| < \varepsilon .$$ We also say that the limit is finite. If a sequence has a limit, then we call the sequence convergent and say that it converges; otherwise it is divergent and we say it diverges.

Example. Let's apply the definition to $$a_n=1+\frac{(-1)^n}{n}.$$ Suppose an $\varepsilon>0$ is given. Looking at the numbers, we discover that they accumulate toward $1$. Is this the limit? We apply the definition. Let's find such an $n$ that $a_n$ is within $\varepsilon$ from $a=1$: $$|a_n-a|=\left| 1+\frac{(-1)^n}{n}-1 \right|=\left| \frac{(-1)^n}{n} \right|=\frac{1}{n}<\varepsilon.$$ We solve it: $$n>\frac{1}{\varepsilon}.$$ That gives us the $N$ required by the definition; we let $$N= \frac{1}{\varepsilon} .$$ Then, for any $n>N$ we have $|a_n-a|<\varepsilon$, as required by the definition. $\square$
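A numerical check of this choice of $N$ (a sketch; the particular $\varepsilon$ and the finite test range are our own):

```python
# a_n = 1 + (-1)**n / n with limit a = 1; the proof gives N = 1/eps.
a = lambda n: 1 + (-1) ** n / n
eps = 0.05          # then N = 1/eps = 20
print(all(abs(a(n) - 1) < eps for n in range(21, 500)))   # True: within eps past N
```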

Another way to visualize a trend in a convergent sequence is to enclose the end of the tail of the sequence in a band:

It should be, in fact, a narrower and narrower band; its width is $2\varepsilon$. Meanwhile, the starting point of the band moves to the right; that's $N$.

Examples of divergence are below.

Example. A sequence may tend to infinity, such as $a_n=n$:

Then no band -- no matter how wide -- will contain the sequence's tail. $\square$

This behavior however has a meaningful pattern.

Definition. We say that a sequence $a_n$ tends to positive infinity if the following condition holds:

• for each real number $R$, there exists a natural number $N$ such that, for every natural number $n > N$, we have

$$a_n >R.$$ We say that a sequence $a_n$ tends to negative infinity if:

• for each real number $R$, there exists a natural number $N$ such that, for every natural number $n > N$, we have

$$a_n <R.$$ In either case, we also say that the limit is infinite.

We describe such a behavior with the following notation: $$a_n\to \pm\infty \text{ as } n\to \infty ,$$ or $$\lim_{n\to \infty}a_n=\pm\infty.$$

Example. Some sequences seem to have no pattern at all, such as $a_n=\sin n$:

Here, no band -- if narrow enough -- will contain the sequence's tail.

If, however, we also divide this expression by $n$, the swings start to diminish:

The limit is $0$! $\square$
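The squeeze is easy to see numerically (a sketch; the sampled range is our own choice):

```python
import math

# sin(n) keeps swinging between -1 and 1, but sin(n)/n is squeezed:
# |sin(n)/n| <= 1/n -> 0.
swing = max(abs(math.sin(n)) for n in range(1000, 1100))
damped = max(abs(math.sin(n) / n) for n in range(1000, 1100))
print(swing)    # still close to 1
print(damped)   # below 0.001
```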

Example. The next example is $a_n=1+(-1)^n+\frac{1}{n}$. It seems to approach two limits at the same time:

Indeed, no matter how narrow, we can find two bands to contain the sequence's two tails. However, no single band -- if narrow enough -- will contain them! $\square$

Example. Let's pick a simpler sequence and do this analytically. Let $$a_n=(-1)^n=\begin{cases} 1&\text{ if } n \text{ is even,}\\ -1&\text{ if } n \text{ is odd.} \end{cases}$$ Is the limit $a=1$? If it is, then this is what needs to be “small”: $$|a_n-a|=\left| (-1)^n-1 \right|=\begin{cases} 0&\text{ if } n \text{ is even,}\\ 2&\text{ if } n \text{ is odd.} \end{cases}$$ It's not! Indeed, this expression won't be less than $\varepsilon$ if we choose it to be, say, $1$, no matter what $N$ is. So, $a=1$ is not the limit. Is $a=-1$ the limit? Same story. In order to prove the negative, we need to try every possible value of $a$. $\square$
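The failure can be seen directly (a sketch; only the candidate $a=1$ is shown):

```python
# a_n = (-1)**n; candidate limit a = 1.  The distance |a_n - 1|
# equals 2 for every odd n, so no N works once eps <= 2.
dist = [abs((-1) ** n - 1) for n in range(1, 9)]
print(dist)   # [2, 0, 2, 0, 2, 0, 2, 0]
```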

Exercise. Finish the proof in the last example.

Example. For a given real number, we can construct a sequence that approximates that number -- via truncations of its decimal approximations. For example, we have already seen this: $$x_n=0.9 , 0.99 , 0.999 , 0.9999 , . . . \text{ tends to } 1 .$$ Furthermore, we have: $$x_n=0.3 , 0.33 , 0.333 , 0.3333 , . . . \text{ tends to } 1 / 3 .$$ The idea of limit then helps us understand infinite decimals.

• What is the meaning of $.9999...$? It is the limit of the sequence $0.9 , 0.99 , 0.999, ...$; i.e., $1$.
• What is the meaning of $.3333...$? It is the limit of the sequence $0.3 , 0.33 , 0.333, ...$; i.e., $1/3$.

$\square$

Exercise. Find the formulas for the two sequences above and confirm the limits.

We need to justify “the” in “the limit”.

Theorem (Uniqueness). A sequence can have only one limit (finite or infinite); i.e., if $a$ and $b$ are limits of the same sequence, then $a=b$.

Proof. The geometry of the proof is clear: we want to separate the two horizontal lines representing two potential limits by two non-overlapping bands, as shown above. Then the tail of the sequence would have to fit one or the other, but not both. These bands correspond to two intervals around those two “limits”. In order for them to be disjoint, their half-width (that's $\varepsilon$!) should be at most half the distance between the two numbers.

The proof is by contradiction. Suppose $a$ and $b$ are two limits, i.e., either satisfies the definition, and suppose also $a\ne b$. In fact, without loss of generality we can assume that $a<b$. Let $$\varepsilon = \frac{b-a}{2}.$$ Then, what we are going to use at the end is $$a+\varepsilon=b-\varepsilon.$$

Now, we rewrite the definition for $a$ and $b$ specifically:

• there exists a number $L$ such that, for every natural number $n > L$, we have

$$|a_n - a| < \varepsilon .$$ Now, we rewrite the definition for $b$ as the limit:

• there exists a number $M$ such that, for every natural number $n > M$, we have

$$|a_n - b| < \varepsilon .$$ In order to combine the two statements, we need them to be satisfied for the same values of $n$. Let $$N=\max\{ L,M\}.$$ Then,

• for every number $n > N$, we have

$$|a_n - a| < \varepsilon ,$$

• for every number $n > N$, we have

$$|a_n - b| < \varepsilon .$$ In particular, for every $n > N$, we have: $$a_n < a+\varepsilon=b-\varepsilon<a_n.$$ A contradiction. $\blacksquare$

Exercise. Follow the proof and demonstrate that it is impossible for a sequence to have as limits: (a) a real number and $\pm\infty$, or (b) $-\infty$ and $+\infty$.

The theorem indicates that the correspondence:

• a convergent sequence $\longrightarrow$ its limit (a real number),

makes sense. Can we reverse this correspondence? No, because there are many sequences converging to the same number. However, we can say that a real number “is” its approximations, i.e., all sequences that converge to it.

Thus, there can be no two limits and we are justified to speak of the limit.

The limits of some specific sequences can be easily found.

Theorem (Constant). For any real $c$, we have $$\lim_{n \to \infty}c = c.$$

Theorem (Arithmetic progression). For any real numbers $m$ and $b$, we have $$\lim_{n \to \infty}(b+nm) = \begin{cases} -\infty &\text{ if } m<0,\\ b &\text{ if } m=0,\\ +\infty &\text{ if } m>0. \end{cases}$$

Exercise. Prove the theorem.

Theorem (Powers). For any integer $k$, we have $$\lim_{n \to \infty}n^k = \begin{cases} 0&\text{ if } k<0,\\ 1&\text{ if } k=0,\\ +\infty&\text{ if } k>0. \end{cases}$$

Proof. First, the case of $k<0$. Suppose $\varepsilon >0$ is given. We need to find such an $N$ that $|n^k-0|=n^k<\varepsilon$ whenever $n>N$. We can express such an $N$ in terms of this $\varepsilon$; since $k$ is negative, we just choose: $$N= \varepsilon^{1/k}.$$

Second, the case of $k>0$. Suppose $R>0$ is given. We need to find such an $N$ that $n^k>R$ whenever $n>N$. We can express such an $N$ in terms of this $R$; similarly to the above we choose: $$N= R^{1/k}.$$ $\blacksquare$

Theorem (Geometric progression). For any real number $r$, we have $$\lim_{n \to \infty}r^n = \begin{cases} \text{diverges } &\text{ if } r \le -1,\\ 0 &\text{ if } |r|<1,\\ 1 &\text{ if } r=1,\\ +\infty &\text{ if } r>1. \end{cases}$$

Exercise. Prove the theorem.
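A numerical sketch of the theorem's cases (the sample values of $r$ are our own choice):

```python
# One sample r from each case of the Geometric Progression theorem.
for r in (-1.5, -0.5, 1.0, 1.5):
    print(r, [round(r ** n, 6) for n in (5, 10, 15)])
# r = -1.5: alternates in sign with growing magnitude -- diverges;
# r = -0.5: shrinks to 0;  r = 1.0: constant;  r = 1.5: grows to +infinity.
```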

Example. Geometric progressions are used to model population growth and decline. $\square$

Exercise. Find the limits of each of these sequences or show that it doesn't exist:

• (a) $1,\ 3,\ 5,\ 7,\ 9,\ 11,\ 13,\ 15,\ ...$;
• (b) $.9,\ .99,\ .999,\ .9999,\ ...$;
• (c) $1,\ -1,\ 1,\ -1,\ ...$;
• (d) $1,\ 1/2,\ 1/3,\ 1/4,\ ...$;
• (e) $1,\ 1/2,\ 1/4\ ,1/8,\ ...$;
• (f) $2,\ 3,\ 5,\ 7,\ 11,\ 13,\ 17,\ ...$;
• (g) $1,\ -4,\ 9,\ -16,\ 25,\ ...$;
• (h) $3,\ 1,\ 4,\ 1,\ 5,\ 9,\ ...$.

Example. In either of the two tables below, we have a sequence given in the first two columns. Its $n$th term formula is known. The third column shows the sequence of sums (Chapter 1) of the first: $$\begin{array}{c|c|lll} n&a_n&s_n\\ \hline 1&\frac{1}{1}&\frac{1}{1}\\ 2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\ 3&\frac{1}{3}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}\\ \vdots&\vdots&\vdots\\ n&\frac{1}{n}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}\\ \end{array}\quad\quad \begin{array}{c|c|lll} n&a_n&s_n\\ \hline 1&\frac{1}{1}&\frac{1}{1}\\ 2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\ 3&\frac{1}{4}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}\\ \vdots&\vdots&\vdots\\ n&\frac{1}{2^{n-1}}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2^{n-1}}\\ \end{array}$$ No closed $n$th term formula for $s_n$ is known to us: we don't know how to represent these quantities without “...”. In contrast to the last example, finding the limit of such a sequence is a challenge... $\square$

## 3 Algebra of sequences and limits

If every real number is the sequence of its approximations, does algebra with these numbers still make sense? Fortunately, limits behave well with respect to the usual arithmetic operations. Below we assume that the sequences are defined on the same set of integers.

We will study convergence of sequences with the help of other, simpler, sequences. The theorem below shows why.

Theorem. $$a_n\to a \ \Longleftrightarrow\ |a_n-a|\to 0.$$

Then to understand limits of sequences in general, we need first to understand those of a smaller class:

• positive sequences that converge to $0$.

The definition of convergence becomes simpler:

• $0<a_n\to 0$ when for any $\varepsilon >0$ there is $N$ such that $a_n<\varepsilon$ for all $n>N$.

To graphically add two sequences, we flip the second upside down and then connect each pair of dots with a bar. Then, the lengths of these bars form the new sequence. Now, if both sequences converge to $0$, then so do the lengths of these bars.

Theorem (Sum Rule). $$0<a_n\to 0,\ 0<b_n\to 0 \ \Longrightarrow\ a_n+ b_n\to 0.$$

Proof. Suppose $\varepsilon >0$ is given. From the definition,

• $a_n\to 0\ \Longrightarrow$ there is $N$ such that $a_n<\varepsilon /2$ for all $n>N$, and
• $b_n\to 0\ \Longrightarrow$ there is $M$ such that $b_n<\varepsilon /2$ for all $n>M$.

Then for all $n>\max\{N,M\}$, we have $$a_n+b_n<\varepsilon /2+\varepsilon /2 =\varepsilon .$$ Therefore, by definition $a_n+ b_n\to 0$. $\blacksquare$
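The $\varepsilon/2$ bookkeeping can be checked on a concrete pair (a sketch; the sequences $1/n$ and $1/n^2$ are our own choice):

```python
# a_n = 1/n < eps/2 once n > N = 2/eps;  b_n = 1/n**2 < eps/2 once
# n > M = sqrt(2/eps).  Past max(N, M), the sum stays below eps.
eps = 0.1
N = 2 / eps            # 20.0
M = (2 / eps) ** 0.5   # about 4.5
start = int(max(N, M)) + 1
print(all(1 / n + 1 / n ** 2 < eps for n in range(start, start + 500)))   # True
```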

Exercise. Prove the version of the above theorem for $m$ sequences: (a) by applying the theorem repeatedly, and (b) by generalizing the proof.

Multiplying a sequence by a constant number simply stretches the whole picture in the vertical direction -- in both directions, away from the $n$-axis.

Then, zero remains zero!

Theorem (Constant Multiple Rule). $$0<a_n\to 0 \ \Longrightarrow\ ca_n\to 0 \text{ for any real }c>0.$$

Proof. Suppose $\varepsilon >0$ is given. From the definition,

• $0 < a_n\to 0\ \Longrightarrow$ there is $N$ such that $a_n <\varepsilon /c$ for all $n>N$.

Then for all $n>N$, we have $$c\cdot a_n < c\cdot \varepsilon /c=\varepsilon .$$ Therefore, by definition $ca_n\to 0$. $\blacksquare$

For more complex situations we need to use the fact that convergent sequences are bounded; i.e., the sequence fits into a (not necessarily narrow) band.

Theorem (Boundedness). $$a_n\to a \ \Longrightarrow\ |a_n| < Q \text{ for some real } Q.$$

Proof. The idea is that the tail of the sequence does fit into a (narrow) band; meanwhile, there are only finitely many terms left... Choose $\varepsilon =1$. Then by definition, there is such $N$ that for all $n>N$ we have: $$|a_n-a| < 1.$$ Then, we have $$\begin{array}{lll} |a_n|&=|(a_n-a)+a|&\text{ ...then by the Triangle Inequality...}\\ &\le |a_n-a|+|a|&\text{ ...then by the inequality above...}\\ &<1+|a|. \end{array}$$ To finish the proof, we choose: $$Q=1+\max\{|a_1|,...,|a_N|,|a|\}.$$ $\blacksquare$

The proof is illustrated below:

The converse isn't true: not every bounded sequence is convergent. Just try $a_n=\sin n$. We will show later that, with an extra condition, bounded sequences do have to converge...

We are now ready for the general results on the algebra of limits.

Theorem (Sum Rule). If sequences $a_n ,b_n$ converge then so does $a_n + b_n$, and $$\lim_{n\to\infty} (a_n + b_n) = \lim_{n\to\infty} a_n + \lim_{n\to\infty} b_n.$$

Proof. Suppose $$a_n\to a,\ b_n\to b.$$ Then $$|a_n - a|\to 0, \ |b_n-b|\to 0.$$ We compute $$\begin{array}{lll} |(a_n + b_n)-(a+b)|&= |(a_n-a)+( b_n-b)|& \text{ ...then by the Triangle Inequality...}\\ &\le |a_n-a|+| b_n-b|&\\ &\to 0+0 & \text{ ...by SR...}\\ &=0. \end{array}$$ Then, by the last theorem, we have $$|(a_n + b_n)-(a+b)|\to 0.$$ Then, by the first theorem, we have: $$a_n + b_n\to a+b.$$ $\blacksquare$

When two sequences are multiplied, it is as if we use each pair of their values to build a rectangle:

Then the areas of these rectangles form a new sequence and these areas converge if the widths and the heights converge.

Theorem (Product Rule). If sequences $a_n ,b_n$ converge then so does $a_n \cdot b_n$, and $$\lim_{n\to\infty} (a_n \cdot b_n) = (\lim_{n\to\infty} a_n)\cdot( \lim_{n\to\infty} b_n).$$

Proof. Suppose $a_n\to a,\ b_n\to b$. Then, $$|a_n-a|\to 0,\ |b_n-b|\to 0.$$ Consider, $$\begin{array}{lll} |a_n\cdot b_n-a\cdot b| &= |a_n\cdot b_n+(-a\cdot b_n+a\cdot b_n) -a\cdot b|&\text{ ...adding extra terms then factoring...}\\ &= |(a_n-a)\cdot b_n+a\cdot( b_n - b)|&\text{ ...then by the Triangle Inequality...}\\ &\le |(a_n-a)\cdot b_n|+|a\cdot ( b_n - b)|&\\ &= |a_n-a|\cdot |b_n|+|a|\cdot | b_n - b|&\text{ ...then by Boundedness...}\\ &\le |a_n-a|\cdot Q+|a|\cdot | b_n - b|&\\ &\to 0\cdot Q+|a|\cdot 0&\text{ ...by SR and CMR...}\\ &=0. \end{array}$$ Therefore, $$a_n\cdot b_n \to a\cdot b.$$ $\blacksquare$

CMR follows.

Theorem (Constant Multiple Rule). If sequence $a_n$ converges then so does $c a_n$ for any real $c$, and $$\lim_{n\to\infty} c\, a_n = c \cdot \lim_{n\to\infty} a_n.$$

Example. These laws help us justify the following trick of finding fraction representations of infinite decimals. This is how we deal with $x=.3333...$: $$\begin{array}{llll} x&=0.3333...\\ -\\ 10x&=3.3333...\\ \hline -9x&=-3.0000...\\ &&&&&&&&\Longrightarrow\ x=1/3 \end{array}$$ Instead we use the Constant Multiple Rule and the Difference Rule to carry out the following algebra of sequences: $$\begin{array}{llll} a_n:&0&0.3&0.33&0.333&0.3333&...&\to&x\\ -\\ 10a_n:&3&3.3&3.33&3.333&3.3333&...&\to&10x\\ \hline -9a_n:&-3&-3&-3&-3&-3&...&\to&-9x\\ &&&&&&&&\Longrightarrow\ x=1/3 \end{array}$$ Note that we have shifted the values of the second sequence. $\square$
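The same conclusion can be reached with exact arithmetic (a sketch using Python's `fractions`; the truncations are built directly rather than via the shifted-sequence trick):

```python
from fractions import Fraction

# The truncations 0.3, 0.33, 0.333, ... as exact fractions.
a = [Fraction(10 ** n // 3, 10 ** n) for n in range(1, 8)]
print(a[:3])      # Fraction(3, 10), Fraction(33, 100), Fraction(333, 1000)

# The gap to 1/3 shrinks by a factor of 10 at each step:
gaps = [Fraction(1, 3) - x for x in a]
print(gaps[:3])   # Fraction(1, 30), Fraction(1, 300), Fraction(1, 3000)
```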

One can understand division of sequences as multiplication in reverse: if the areas of the rectangles converge and so do their widths, then so do their heights.

Also, when two sequences are divided, it is as if we use each pair of their values to build a triangle:

Then the tangents of the base angles of these triangles form a new sequence and they converge if the widths and the heights converge.

Theorem (Quotient Rule). If sequences $a_n ,b_n$ converge then so does $a_n / b_n$ whenever defined, and $$\lim_{n\to\infty} \left(\frac{a_n}{b_n}\right) = \frac{\lim\limits_{n\to\infty} a_n}{\lim\limits_{n\to\infty} b_n},$$ provided $\lim_{n\to\infty} b_n \ne 0.$

Proof. We will only prove the case of $a_n=1$. Suppose $b_n\to b\ne 0$. First, choose $\varepsilon =|b|/2$ in the definition of convergence. Then there is $N$ such that for all $n>N$ we have $$|b_n-b|<|b|/2.$$ Therefore, $$|b_n|>|b|/2.$$ Next, $$\begin{array}{lll} \left| \frac{1}{b_n}-\frac{1}{b} \right| &= \left|\frac{b-b_n}{b_nb} \right|&\\ &= \frac{|b-b_n|}{|b_n|\cdot|b|}&\text{ ...then by above inequality...}\\ &< \frac{|b-b_n|}{|b/2|\cdot|b|}&\\ &\to \frac{0}{|b/2|\cdot|b|}&\text{ ...by the CMR...}\\ &=0. \end{array}$$ Therefore, $$\frac{1}{b_n} \to \frac{1}{b}.$$ Finally, the general case of QR follows from PR: $$\frac{a_n}{b_n}=a_n\cdot \frac{1}{b_n}\to a\cdot \frac{1}{b}=\frac{a}{b}.$$ $\blacksquare$

Exercise. What are the rules of the algebra of infinities for products?

Warning: it is considered a serious error to use the conclusion (the formula) of one of these rules without verifying the conditions (the convergence of the sequences involved).

The summary result below shows that when we replace every real number with a sequence converging to it, it is still possible to do algebraic operations with them.

Theorem (Algebra of Limits of Sequences). Suppose $a_n\to a$ and $b_n\to b$. Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& a_n + b_n\to a + b& \text{CMR: }& c\cdot a_n\to ca& \text{ for any real }c\\ \text{PR: }& a_n \cdot b_n\to ab& \text{QR: }& a_n/b_n\to a/b &\text{ provided }b\ne 0\\ \hline \end{array}$$

Example. Let $$a_n=7n^{-2}+\frac{2}{3^n}+8.$$ What is its limit as $n\to \infty$? The computation is straightforward, but every step has to be justified with the rules above.

To understand which rules to apply first, observe that the last operation is addition. We use SR first, subject to justification: $$\begin{array}{lll} \lim_{n\to \infty}a_n&=\lim_{n\to \infty} (7n^{-2}+\frac{2}{3^n}+8) &\text{ ...use SR}\\ &=\lim_{n\to \infty} (7\cdot n^{-2})+\lim_{n\to \infty}(2\cdot \frac{1}{3^n})+\lim_{n\to \infty}8 &\text{ ...use CMR }\\ &=7\cdot \lim_{n\to \infty} n^{-2} +2\cdot \lim_{n\to \infty}3^{-n}+8 \quad&\\ &=7\cdot 0 +2 \cdot 0 +8\\ &=8. \end{array}$$ As all the limits exist, our use of SR (and then CMR) was justified. $\square$
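A numerical check of the computation (a sketch; the sample indices are our own choice):

```python
# Numerical check of lim (7*n**-2 + 2/3**n + 8) = 8.
a = lambda n: 7 * n ** -2 + 2 / 3 ** n + 8
for n in (1, 10, 1000):
    print(n, a(n))
# a(1000) differs from the limit 8 by about 7e-6.
```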

Example. Prove the limit: $$\lim_{n \to \infty}(n^2-n) = +\infty .$$ Plotting the graph does suggest that the limit is infinite. Note that SR doesn't apply here: it would produce the indeterminate $\infty-\infty$. Instead, we estimate: for $n\ge 2$, we have $n^2-n=n(n-1)\ge n$. So, given $R$, we have $n^2-n>R$ for all $n>N=\max\{2,R\}$, as required by the definition. $\square$


Presented verbally, these rules have these abbreviated versions:

• the limit of the sum is the sum of the limits;
• the limit of the difference is the difference of the limits;
• the limit of the product is the product of the limits;
• the limit of the quotient is the quotient of the limits (as long as the limit of the denominator isn't zero).

Warning: never forget to confirm the preconditions before using these rules.


For the Sum Rule, we can proceed in two ways:

• right: take the limit of either, then down: add the results; or
• down: add them, then right: take the limit of the result.

The result is the same! For the Product Rule and the Quotient Rule, we just replace “$+$” with “$\cdot$” and “$\div$” respectively.

These rules show why approximations work. Indeed, we can think of a sequence that converges to a number as a sequence of better and better approximations. Then carrying out all the algebra with these sequences will produce the same result as the original computation is meant to produce! For example, here is such a substitution: $$\begin{array}{ccccl} (1)&+&(2)&=&3\\ \left(1+\frac{1}{n}\right)&+&\left(2-\frac{5}{n}\right)&=&3-\frac{4}{n}\to 3 \end{array}$$

What about infinite limits? If we replace an infinity with a sequence that approaches it, will the algebra make sense?

## 4 Can we add infinities? Subtract? Divide? Multiply?

We have demonstrated that in our computations of limits we can replace any sequence with its limit and continue doing the algebra. This conclusion doesn't apply to divergent sequences!

Sequences that approach infinity diverge, technically, but they provide useful information about the pattern exhibited by the sequence. Such a sequence can also be a part of another, convergent sequence...

Theorem (Limits of Polynomials). Suppose we have a polynomial of degree $p$ with the leading coefficient $a_p\ne 0$. Then the limit of the sequence defined by this function is: $$\lim_{n\to\infty}(a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0)=\begin{cases} +\infty&\text{ if } a_p>0;\\ -\infty&\text{ if } a_p<0. \end{cases}$$

Proof. The idea is to factor out the highest power: $$a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0=n^p(a_p+a_{p-1}n^{-1}+...+ a_1n^{1-p}+a_0n^{-p})\to +\infty(a_p+0).$$ $\blacksquare$

So, as far as the behavior at $\infty$ is concerned, for a polynomial,

• only the leading term matters.

Example. Evaluate the limit: $$\lim_{n \to \infty}\frac{4n^2-n+2}{2n^2-1}.$$

Plotting the graph does suggest that the limit is $a=2$:


A direct application of QR isn't justified: both the numerator and the denominator tend to $\infty$. Once again, we can't conclude that the limit doesn't exist; we've just failed to find the answer. The path out of this conundrum lies through algebra.

We divide the numerator and denominator by $n^2$ : $$\begin{array}{lll} \frac{4n^2-n+2}{2n^2-1}&=\frac{(4n^2-n+2)/n^2}{(2n^2-1)/n^2}\\ &=\frac{4-\tfrac{1}{n}+\tfrac{2}{n^2}}{2-\tfrac{1}{n^2}}\\ &\to\frac{4-0+0}{2-0}\\ &=\frac{4}{2}\\ &=2. \end{array}$$ We only used QR at the very end, after the indeterminacy has been resolved. $\square$
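The algebra can be double-checked numerically (a sketch; the sample indices are our own choice):

```python
# (4n^2 - n + 2)/(2n^2 - 1) settles toward 4/2 = 2.
f = lambda n: (4 * n ** 2 - n + 2) / (2 * n ** 2 - 1)
for n in (1, 10, 100, 10000):
    print(n, f(n))   # 5.0, then values settling toward 2
```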

The general method for finding such limits is given by the theorem below.

Theorem (Limits of rational functions). Suppose we have a rational function $f$ represented as a quotient of two polynomials of degrees $p$ and $q$, with the leading coefficients $a_p\ne 0,\ b_q\ne 0$. Then the limit of the sequence defined by this function is: $$\lim_{n\to\infty}\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\begin{cases} \pm\infty&\text{ if } p>q;\\ \frac{a_p}{b_q}&\text{ if } p=q;\\ 0&\text{ if } p<q. \end{cases}$$

Proof. The idea is to divide by the highest power. If $p>q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_p+a_{p-1}n^{-1}+...+ a_1n^{-p+1}+a_0n^{-p}}{b_qn^{q-p}+b_{q-1}n^{q-p-1}+...+ b_1n^{1-p}+b_0n^{-p}}\to\frac{a_p+0}{0}=\pm\infty.$$ If $p=q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_p+a_{p-1}n^{-1}+...+ a_1n^{-p+1}+a_0n^{-p}}{b_q+b_{q-1}n^{-1}+...+ b_1n^{1-p}+b_0n^{-p}}\to\frac{a_p+0}{b_q+0}=\frac{a_p}{b_q}.$$ If $p<q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_pn^{p-q}+a_{p-1}n^{p-q-1}+...+ a_1n^{1-q}+a_0n^{-q}}{b_q+b_{q-1}n^{-1}+...+ b_1n^{-q+1}+b_0n^{-q}}\to\frac{0}{b_q+0}=0.$$ $\blacksquare$

This is the lesson we have re-learned:

• the long-term behavior of polynomials is determined by their leading terms.

Indeed: $$\lim_{n\to\infty}\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\lim_{n\to\infty}\frac{a_pn^p}{b_qn^q}=\frac{a_p}{b_q}\lim_{n\to\infty}n^{p-q}.$$

Example. Find the limit of the sequence: $$\begin{array}{lll} y_n&=\frac{1+(-3)^n}{5^n} & \leadsto\text{ QR? } \text{ But the numerator diverges -- DEAD END! }\\ &=\frac{1}{5^n}+\frac{(-3)^n}{5^n}\\ &=\left( \frac{1}{5} \right)^n + \left( \frac{-3}{5} \right)^n \\ &\quad\quad \downarrow \quad\quad\quad\quad \downarrow\\ &\quad\quad 0 \quad\quad\quad\quad 0 \\ &\to 0 &\text{ by SR }. \end{array}$$ These are two geometric progressions with the ratios: $r=1/5,\ -3/5$, that satisfy $|r| <1$. Meanwhile, our application of SR was justified by the fact that the two limits exist. $\square$
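The decay of the two geometric progressions can be seen numerically; this Python sketch (not from the text) samples the sequence:

```python
# Terms of y_n = (1/5)^n + (-3/5)^n: two geometric progressions, |r| < 1.
def y(n):
    return (1 / 5) ** n + (-3 / 5) ** n
```

The magnitude of $y_n$ shrinks rapidly toward $0$.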

Exercise. Find the limit of the composition of $f(x)=\operatorname{sign}(x)$ and the sequence $x_n$ given by (a) $1/n$, (b) $-1/n$, (c) $(-1)^n/n$.

What is infinity?

The plus (or minus) infinity is identified with the collection of all sequences approaching this infinity. In other words, the following identity is read in both directions: $$\lim _{n\to +\infty}a_n=+\infty.$$

Now, does it make sense to do any algebra with the infinities? Yes, as long as the algebra with these limits makes sense.

The Algebraic Rules of Limits above have exceptions: one or both of the sequences may approach infinity, or the limit in the denominator may be $0$.

Theorem (Algebra of Infinite Limits of Sequences I). Suppose $a_n\to a$ and $b_n\to \pm\infty$. Then $$\begin{array}{|ll|llll|} \hline \text{SR: }& a_n + b_n\to \pm\infty& \text{CMR: }& c\cdot b_n\to \pm\infty& \text{ for any real }c>0\\ \text{PR: }& a_n \cdot b_n\to \pm\operatorname{sign}(a)\infty& \text{QR: }& a_n/b_n\to 0, & b_n/a_n\to \pm\operatorname{sign}(a)\infty&\text{provided }a\ne 0 \\ \hline \end{array}$$

Theorem (Algebra of Infinite Limits of Sequences II). Suppose $a_n\to \pm\infty$ and $b_n\to \pm\infty$ with the same sign. Then $$\begin{array}{|ll|} \hline \text{SR: }& a_n + b_n\to \pm\infty& \\ \text{PR: }& a_n \cdot b_n\to +\infty& &\\ \hline \end{array}$$

Justified by these theorems, we follow the algebra of infinities: $$\begin{array}{|lll|} \hline \text{number } &+& (+\infty)&=+\infty\\ \text{number } &+& (-\infty)&=-\infty\\ +\infty &+& (+\infty)&=+\infty\\ -\infty &+& (-\infty)&=-\infty\\ \text{number } &/& (\pm\infty)&=0\\ \hline \end{array}$$ These are just shortcuts!

There is no $\infty -\infty$ (just as there is no $\infty /\infty$)... Why not?

Behind each $\infty$, there must be a sequence approaching $\infty$! However, the outcome is ambiguous; on the one hand we have: $$a_n=n \to+\infty,\ b_n=n \to+\infty\ \Longrightarrow\ a_n- b_n=0 \to 0;$$ on the other: $$a_n=n^2 \to+\infty,\ b_n=n \to+\infty\ \Longrightarrow\ a_n- b_n=n^2-n \to +\infty,$$ by Limits of Polynomials. Two seemingly legitimate answers for the same expression, $\infty -\infty$...

We have another indeterminate expression!
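The ambiguity of $\infty-\infty$ is easy to see numerically; this Python sketch (not part of the text) evaluates the two pairs of sequences at a large index:

```python
# Two expressions, both of the form "infinity minus infinity",
# with completely different outcomes: the expression is indeterminate.
n = 10**6
diff1 = n - n        # a_n = n,   b_n = n: the difference is identically 0
diff2 = n**2 - n     # a_n = n^2, b_n = n: the difference grows without bound
```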

## 5 More properties of limits of sequences

Only the tail of the sequence matters for convergence:

Theorem (Truncation Principle). A sequence is convergent if and only if all of its truncations are convergent: $$\big( a_n:\ n=p,p+1,... \big)\to a \ \Longleftrightarrow\ \big(a_n:\ n=p+1,p+2,...\big)\to a.$$

Exercise. Prove the theorem.

Non-strict inequalities between sequences $$a\leftarrow a_n \ge b_n \to b,$$ are preserved under limits: $$a\ge b.$$

Theorem (Comparison Test). If $a_n \ge b_n$ for all $n$ greater than some $N$, then $$\lim_{n\to\infty} a_n \ge \lim_{n\to\infty} b_n,$$ provided the sequences converge.

Proof. The geometry of the proof is clear: we want to separate the two horizontal lines representing the two limits by two non-overlapping bands, as shown above. Then, if narrow enough, the tails of the “larger” sequence would have to fit the “smaller” band. These bands correspond to two intervals around those two limits. In order for them to be disjoint, their half-width (that's $\varepsilon$!) should be at most half the distance between the two numbers.

The proof is by contradiction. Suppose $a$ and $b$ are the limits of $a_n$ and $b_n$ respectively and suppose also $a< b$. Let $$\varepsilon = \frac{b-a}{2}.$$ Then, what we are going to use at the end is $$a+\varepsilon=b-\varepsilon.$$

Now, we rewrite the definition for $a$ as limit:

• there exists a natural number $L$ such that, for every natural number $n > L$, we have

$$|a_n - a| < \varepsilon .$$ Now, we rewrite the definition for $b$ as limit:

• there exists a natural number $M$ such that, for every natural number $n > M$, we have

$$|b_n - b| < \varepsilon .$$ In order to combine the two statements, we need them to be satisfied for the same values of $n$. Let $$N=\max\{ L,M\}.$$ Then,

• for every number $n > N$, we have

$$|a_n - a| < \varepsilon ,\text{ or } a-\varepsilon<a_n<a+\varepsilon,$$

• for every number $n > N$, we have

$$|b_n - b| < \varepsilon ,\text{ or } b-\varepsilon<b_n<b+\varepsilon.$$ Taking one from either of the two pairs of inequalities, we have: $$a_n < a+\varepsilon=b-\varepsilon<b_n.$$ A contradiction. $\blacksquare$

Exercise. Show that replacing the non-strict inequality, $a_n \ge b_n$, with a strict one, $a_n > b_n$, won't produce a strict inequality in the conclusion of the theorem.

The situation is similar to that of the Uniqueness Theorem: if the opposite inequality were to hold, we could find two bands to contain the two sequences' tails so that the original inequality would fail:

Warning: from the inequality in the theorem, we can't conclude anything about the existence of the limit:

Having two inequalities, on both sides, may work better.

It is called a squeeze. If we can squeeze the sequence under investigation between two familiar sequences, we might be able to say something about its limit. Some further requirements will be necessary.

Theorem (Squeeze Theorem). If a sequence is squeezed between two sequences with the same limit, then its limit also exists and is equal to that number; i.e., if $$a_n \leq c_n \leq b_n \text{ for all } n > N,$$ and $$\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = c,$$ then the sequence $c_n$ converges and $$\lim_{n\to\infty} c_n = c.$$

Proof. The geometry of the proof is shown below:

Suppose $\varepsilon>0$ is given. As we know, we have for all $n$ larger than some $N$: $$c-\varepsilon < a_n < c+\varepsilon \text{ and } c-\varepsilon < b_n < c+\varepsilon.$$ Then we have: $$c-\varepsilon < a_n \le c_n \le b_n < c+\varepsilon;$$ i.e., $|c_n-c|<\varepsilon$ for all such $n$. $\blacksquare$

Example. Sometimes the choice of the squeeze is obvious. Consider: $$c_n=\frac{(-1)^n}{n}.$$ Examining the sequence reveals the two bounds:

In other words, we have: $$-\frac{1}{n} \le \frac{(-1)^n}{n} \le \frac{1}{n} .$$ Now, since both $a_n=-\frac{1}{n}$ and $b_n=\frac{1}{n}$ go to $0$, by the Squeeze Theorem, so does $c_n=\frac{(-1)^n}{n}$. $\square$
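The squeeze can be verified numerically; the Python sketch below (not part of the text) checks the two bounds at a range of indices:

```python
# Verify the squeeze -1/n <= (-1)^n/n <= 1/n at a sample of indices.
def c(n):
    return (-1) ** n / n

squeezed = all(-1 / n <= c(n) <= 1 / n for n in range(1, 1001))
```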

Example. Let's find the limit, $$\lim_{n \to \infty }\frac{1}{n} \sin n.$$

It cannot be computed by PR because $$\lim_{n \to \infty }\sin n$$ does not exist. Let's try a squeeze. This is what we know from trigonometry: $$-1 \le \sin n \le 1.$$ However, this squeeze proves nothing about our limit!

Let's try another squeeze: $$-\left| \frac{1}{n} \right| \le \frac{1}{n} \sin n \le \left| \frac{1}{n} \right| .$$ Now, since $\lim_{n \to \infty }(-\frac{1}{n}) =\lim_{n \to \infty }\frac{1}{n}=0$, by the Squeeze Theorem, we have: $$\lim_{n \to \infty }\frac{1}{n} \sin n=0.$$ $\square$
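Here is a quick numerical confirmation (a Python sketch, not from the text) that $\tfrac{1}{n}\sin n$ stays within the bounds and shrinks:

```python
import math

# Since |sin n| <= 1, we have |(1/n) sin n| <= 1/n, forcing the limit 0.
def term(n):
    return math.sin(n) / n

bounded = all(abs(term(n)) <= 1 / n for n in range(1, 1001))
```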

Exercise. Suppose $a_n$ and $b_n$ are convergent. Prove that $\max\{a_n,b_n\}$ and $\min\{a_n,b_n\}$ are also convergent. Hint: start with the case $\lim a_n>\lim b_n$.

The squeeze theorem is also known as the Two Policemen Theorem: if two policemen are escorting a prisoner (handcuffed) between them, and both officers go to the same(!) police station, then -- in spite of some freedom the handcuffs allow -- the prisoner will also end up in that station.

Another name is the Sandwich Theorem. It is, once again, about control. A sandwich can be a messy affair: ham, cheese, lettuce, etc. One wouldn't want to touch that and instead takes control of the contents by keeping them between the two buns. He then brings the two to his mouth and the rest of the sandwich along with them!

To make conclusions about divergence to infinity, we only need to control it from one side.

Theorem (Push Out Theorem). If $a_n \ge b_n$ for all $n$ greater than some $N$, then we have: $$\begin{array}{lll} \lim_{n\to\infty} a_n =-\infty&\Longrightarrow& \lim_{n\to\infty} b_n=-\infty;\\ \lim_{n\to\infty} a_n =+\infty&\Longleftarrow& \lim_{n\to\infty} b_n=+\infty. \end{array}$$

Exercise. Prove the theorem.

Exercise. Suppose a sequence is defined recursively by $$a_{n+1}=2a_n+1\text{ with } a_0=1.$$ Does the sequence converge or diverge?

## 6 Theorems of Introductory Analysis

The theorems in this section will be used to prove other theorems. The section can be skipped on the first reading.

We accept the following fundamental result without proof.

Theorem (Monotone Convergence Theorem). If a sequence is bounded and monotonic, i.e., it is either increasing, $a_n\le a_{n+1}$ for all $n$, or decreasing, $a_n\ge a_{n+1}$ for all $n$, then it is convergent.

The result is also known as the Completeness Property of Real Numbers.

Theorem (Nested Intervals Theorem). (1) A sequence of nested closed intervals has a non-empty intersection, i.e., if we have two sequences of numbers $a_n$ and $b_n$ that satisfy $$a_1\le a_2\le ... \le a_n\le ... \le b_n\le ... \le b_2\le b_1,$$ then they both converge, $$a_n\to a,\ b_n\to b,$$ and $$\bigcap_{n=1}^\infty [a_n,b_n]=[a,b].$$ (2) If, moreover, $$b_n-a_n\to 0,$$ then $$\bigcap_{n=1}^\infty [a_n,b_n]=\{a\}=\{b\}.$$

Proof. For part (1), observe that a point $x$ belongs to the intersection if and only if it satisfies: $$a_n\le x \le b_m,\ \forall n,m.$$ Meanwhile, the sequences converge by the Monotone Convergence Theorem. Therefore, $$a\le x\le b$$ by the Comparison Test.

For part (2), consider: $$0=\lim _{n\to \infty} (b_n-a_n)=\lim _{n\to \infty} b_n-\lim _{n\to \infty} a_n=b-a,$$ by SR. We then conclude that $a=b$. $\blacksquare$

We have indeed a “nested” sequence of intervals $$I=[a,b] \supset I_1=[a_1,b_1] \supset I_2=[a_2,b_2] \supset ...,$$ with a single point $A$ in common.

Definition. Given a set $S$ of real numbers, its upper bound is any number $M$ that satisfies: $$x\le M \text{ for any } x\text{ in } S.$$ Its lower bound is any number $m$ that satisfies: $$x\ge m \text{ for any } x\text{ in } S.$$

For $S=[0,1]$, any number $M\ge 1$ is its upper bound. However, these sets have no upper bounds: $$(-\infty,+\infty),\ [0,+\infty),\ \{0,1,2,3,...\}.$$

Definition. A set that has an upper bound is called bounded above and a set that has a lower bound is called bounded below. A set that has both upper and lower bounds is called bounded; otherwise it's unbounded.

Definition. For a set $S$, an upper bound for which there is no smaller upper bound is called a least upper bound; it is also called supremum and is denoted by $\sup S$. For a set $S$, a lower bound for which there is no larger lower bound is called a greatest lower bound; it is also called infimum and is denoted by $\inf S$.

Thus, $M=\sup S$ means that

• 1. $M$ is an upper bound of $S$, and
• 2. if $M'$ is another upper bound of $S$, then $M'\ge M$.

Now, if we have another $M'=\sup S$, then

• 1. $M'$ is an upper bound of $S$, and
• 2. if $M$ is another upper bound of $S$, then $M\ge M'$.

Therefore, $M=M'$.

Theorem. For a given set, there can be only one least upper bound.

Thus, we are justified to speak of the least upper bound.

Example. For the following sets the least upper bound is $M=3$:

• $S=\{1,2,3\}$;
• $S=[1,3]$;
• $S=(1,3)$.

The proof for the last one is as follows. Suppose $M'$ is an upper bound with $1<M'<3$. Let's choose $a=\frac{3+M'}{2}$. Then $M'<a<3$, so $a$ belongs to $S$ and $a>M'$! Therefore, $M'$ isn't an upper bound.

What if we limit $S$ to the rational numbers only in $(1,3)$? Then $a=\frac{3+M'}{2}$ won't belong to $S$ when $M'$ is irrational. The proof fails. $\square$

Theorem (Existence of $\sup$). Any bounded above set has a least upper bound. Any bounded below set has a greatest lower bound.

Proof. The idea of the proof is to construct nested intervals with the right-end points being upper bounds. What should be the left-end points?

Given a set $S$, let

• $U$ be the set of all upper bounds of $S$;
• $L$ be the set of all lower bounds of $U$.

Since $S$ is bounded above, $$U\ne \emptyset.$$ Now, if $S$ is a single point, we are done. If not, we have $x,y$ in $S$ such that $x<y$. Every upper bound of $S$ is at least $y>x$; therefore, $x$ belongs to $L$ and $$L\ne \emptyset.$$

We start the construction with:

• $a_1$ is any element of $L$ and $b_1$ is any element of $U$.

Suppose inductively that we have constructed two sequences of numbers $$a_i,\ b_i,\ i=1,2,3..., n,$$ such that:

• 1. $a_i$ is in $L$ and $b_i$ is in $U$;
• 2. $a_n\le...\le a_1\le b_1\le ...\le b_n$;
• 3. $b_i-a_i\le \frac{1}{2^{i-1}}(b_1-a_1)$.

We continue with the inductive step: let $$c=\frac{1}{2}(a_n+b_n).$$ We have two cases.

Case 1: $c$ belongs to $U$. Then choose $$a_{n+1}=a_n \text{ and } b_{n+1}=c.$$ Then, $$a_{n+1}=a_n\in L,\ b_{n+1}=c\in U.$$ Case 2: $c$ belongs to $L$. Then choose $$a_{n+1}=c \text{ and } b_{n+1}=b_n.$$ Then, $$a_{n+1}=c\in L,\ b_{n+1}=b_n\in U.$$

Furthermore, $$b_{n+1}-a_{n+1}=\frac{1}{2}(b_n-a_n)\le \frac{1}{2}\frac{1}{2^{n-1}}(b_1-a_1)=\frac{1}{2^{n}}(b_1-a_1).$$ Thus, all the conditions are satisfied, and our sequence of nested intervals has been inductively built. We apply the Nested Intervals Theorem and conclude that $$a_n\to d\leftarrow b_n.$$

Why is $d$ a least upper bound of $S$?

First, suppose $d$ is not an upper bound. Then there is $x\in S$ with $x>d$. If we choose $\varepsilon =x-d$, then from $b_n \to d$ we conclude that $b_n<x$ for all $n>N$ for some $N$. This contradicts the assumption that $b_n\in U$.

Second, suppose $d$ is not a least upper bound. Then there is an upper bound $y<d$. If we choose $\varepsilon =d-y$, then from $a_n \to d$ we conclude that $a_n>y$ for all $n>N$ for some $N$. This contradicts the assumption that $a_n\in L$. $\blacksquare$
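The bisection in this proof can be imitated numerically. Below is a hedged Python sketch (not part of the text): `is_upper` is an assumed, user-supplied test for "is an upper bound of $S$", illustrated with $S=\{x:\ x^2<2\}$, whose least upper bound is $\sqrt{2}$.

```python
def sup_bisect(is_upper, lo, hi, steps=60):
    """Bisection from the Existence-of-sup proof: lo is not an upper
    bound, hi is; keep halving the interval while preserving the roles."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if is_upper(mid):
            hi = mid   # Case 1: the midpoint is an upper bound
        else:
            lo = mid   # Case 2: the midpoint is not an upper bound
    return hi

# S = {x : x*x < 2}; c is an upper bound of S iff c >= 0 and c*c >= 2
s = sup_bisect(lambda c: c >= 0 and c * c >= 2, lo=0.0, hi=2.0)
```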

Theorem (Intermediate Point Theorem). A subset $J$ of the reals is an interval or a point if and only if it contains all of its intermediate points; i.e., $$J\ni y_1<c< y_2\in J \ \Longrightarrow\ c\in J.$$

Proof. The “if” part is obvious. Now assume that the condition is satisfied for set $J$. Suppose also that $J$ is bounded. Then these exist by the Existence of $\sup$ theorem: $$a=\inf J,\ b=\sup J.$$ Note that these might not belong to $J$. However, if $c$ satisfies $a< c< b$, then there are

• $y_1\in J$ such that $a<y_1<c$, and
• $y_2\in J$ such that $c<y_2<b$.

By the property, then we have: $c\in J$. Therefore, $J$ is an interval with $a,b$ its end-points. $\blacksquare$

Exercise. Prove the theorem for the unbounded case.

Theorem (Bolzano-Weierstrass Theorem). Every bounded sequence has a convergent subsequence.

Proof. Suppose $x_n$ is such a sequence. Then, it is contained in some interval $[a,b]$. The first part of the construction is to cut consecutive intervals in half and pick the half that contains infinitely many elements of the set $\{x_n:\ n=1,2,3...\}$.

Similarly to the previous proofs, we assume that we have already constructed sequences: $$a_i,\ b_i,\ i=1,2,3..., n,$$ such that:

• 1. $[a_i,b_i]$ contains infinitely many elements of $\{x_n:\ n=1,2,3...\}$;
• 2. $a_n\le...\le a_1\le b_1\le ...\le b_n$;
• 3. $b_i-a_i\le \frac{1}{2^{i-1}}(b_1-a_1)$.

We continue with the inductive step: let $$c=\frac{1}{2}(a_n+b_n).$$ We have two cases.

Case 1: interval $[a_n,c]$ contains infinitely many elements of $\{x_n:\ n=1,2,3...\}$. Then choose $$a_{n+1}=a_n \text{ and } b_{n+1}=c.$$ Case 2: interval $[a_n,c]$ does not contain infinitely many elements of $\{x_n:\ n=1,2,3...\}$, then $[c,b_n]$ does. Then choose $$a_{n+1}=c \text{ and } b_{n+1}=b_n.$$

As before, $$b_{n+1}-a_{n+1}=\frac{1}{2}(b_n-a_n)\le \frac{1}{2}\frac{1}{2^{n-1}}(b_1-a_1)=\frac{1}{2^{n}}(b_1-a_1).$$

The intervals are constructed as desired; the intervals are zooming in on the denser and denser parts of the sequence:

Now we apply the Nested Intervals Theorem to conclude that $$a_n\to d\leftarrow b_n.$$

The second part of the construction is to choose the terms of the subsequence $y_k$ of $x_n$, as follows. We just pick as $y_{k}$ any element of the set $\{x_n:\ n=1,2,3...\}$ in $[a_k,b_k]$ that comes later in the sequence than the ones already added, i.e., $y_1,y_2,...,y_{k-1}$. This is always possible because we always have infinitely many elements left to choose from. Once the subsequence $y_k$ is constructed, we have $y_k\to d$ by the Squeeze Theorem. $\blacksquare$
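A finite imitation of this construction can be run in Python. This is a hedged sketch, not part of the text: "contains infinitely many elements" is approximated by "contains at least as many of the sampled terms", so only finitely many halvings make sense.

```python
import math

def bw_pick(x, depth=8):
    """Finite sketch of the Bolzano-Weierstrass bisection: halve the
    current interval, keep the half holding at least as many terms,
    then pick a term that comes later than all previous picks."""
    a, b = min(x), max(x)
    last, picks, intervals = -1, [], []
    for _ in range(depth):
        c = (a + b) / 2
        left = [i for i, v in enumerate(x) if a <= v <= c]
        right = [i for i, v in enumerate(x) if c <= v <= b]
        a, b, pool = (a, c, left) if len(left) >= len(right) else (c, b, right)
        later = [i for i in pool if i > last]   # a later term of the sequence
        if not later:
            break
        last = later[0]
        picks.append(x[last])
        intervals.append((a, b))
    return picks, intervals

# a bounded sequence with no limit: x_n = sin n
x = [math.sin(n) for n in range(1, 5001)]
picks, intervals = bw_pick(x)
```

Each picked term lies in its (nested, shrinking) interval, so the picked subsequence is squeezed toward the common point of the intervals.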

## 7 Compositions


What about the limit of the new sequence? Can we say, similar to the four rules of limits, the limit of composition is the composition of the limits? Well, there is no such thing as composition of numbers...

Let's look at some examples.

Example. Sometimes the algebra is obvious. If $f$ is a linear polynomial, $$f(x)=mx+b,$$ and we have a sequence $x_n\to a$, we can use the Sum Rule and the Constant Multiple Rule to prove the following: $$\begin{array}{lll} \lim_{n\to \infty} f( x_n ) &=\lim_{n\to \infty} (m x_n +b) \\ &=m\lim_{n\to \infty} x_n +b\\ &=ma+b \\ &=f(a). \end{array}$$ $\square$

What we see is that the limit of the composition is the value of the function at the limit!

Example. Let's try $f(x)=x^2$ and a sequence that converges to $0$.

Bottom: how $x$ depends on $n$, middle: how $y$ depends on $x$, right: how $y$ depends on $n$. Can we prove what we see? An application of the Product Rule in this simple situation reveals: $$\begin{array}{lll} \lim_{n\to \infty} \big( x_n \big)^2 &=\lim_{n\to \infty} \left( x_n\cdot x_n \right)\\ &=\lim_{n\to \infty} x_n\cdot \lim_{n\to \infty}x_n \\ &=\left(\lim_{n\to \infty} x_n\right)^2, \end{array}$$ provided that limit exists. $\square$

A repeated use of PR produces a more general formula: if sequence $x_n$ converges then so does $(x_n)^p$ for any positive integer $p$, and $$\lim_{n\to\infty} \left[ (x_n)^p \right] = \left[ \lim_{n\to\infty} x_n \right]^p.$$ Combined with the Sum Rule and the Constant Multiple Rule this proves the following.

Theorem (Composition Rule for Polynomials). If sequence $x_n$ converges then so does $f(x_n)$ for any polynomial $f$, and $$\lim_{n\to\infty} f(x_n) = f\left[ \lim_{n\to\infty} x_n \right].$$
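A numerical illustration of the theorem (a Python sketch, not from the text, with an arbitrarily chosen sample polynomial):

```python
# For a polynomial f, lim f(x_n) = f(lim x_n); here x_n = 1 + 1/n -> 1.
def f(x):
    return x**3 - 2 * x   # sample polynomial; f(1) = -1

def x(n):
    return 1 + 1 / n
```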

Then, we conclude that limits behave well with respect to composition with some functions. In general, new sequences are produced via compositions with functions: given a sequence $x_n$ and a function $y=f(x)$, define $$y_n=f(x_n).$$

But what about other functions: $$f(x)=\sqrt{x},\ g(x)=\sin x,\ h(x)=e^x?$$

Example. This time we choose a sequence that approaches $0$ from both sides: $$x_n=(-1)^n\frac{1}{n^{0.8}} \text{ and } f(x)=-\sin 5x.$$

We see the same pattern! $\square$

Example. What if we choose $$x_n=\frac{1}{n} \text{ and } f(x)=\frac{1}{x}?$$ Then, obviously, we have $$y_n=\frac{1}{1/n}=n\to\infty!$$

$\square$

In Chapter 6, we will use this construction to study the limits of functions rather than those of sequences. A few examples of that are presented in the next section.

## 8 Famous limits

In this section, we will establish several important facts that will be used throughout the book.

First, trigonometry. The graph of $y=\sin x$ almost merges with the line $y=x$ around $0$. Moreover, plotting the points $(1/n,\sin 1/n)$ reveals a straight line with slope $1$:

Let's compare the two algebraically.

Theorem. $$\lim_{n\to \infty} \frac{\sin x_n}{x_n} =1,$$ for any sequence $x_n\to 0$ with $x_n\ne 0$.

Proof. The conclusion follows from the trigonometry fact: $$\cos x < \frac{\sin x}{x} < 1 \text{ for } 0<|x|<\frac{\pi}{2},$$ and the Squeeze Theorem. $\blacksquare$
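A numerical check along the sequence $x_n=1/n$ (a Python sketch, not part of the text):

```python
import math

# sin(x_n)/x_n for x_n = 1/n approaching 0: the ratio tends to 1.
def ratio(n):
    x = 1 / n
    return math.sin(x) / x
```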

The graph of $y=\cos x$ almost merges with the line $y=1$ when close to the $y$-axis. Moreover, plotting the points $(1/n,1-\cos 1/n)$ shows that the slope converges to $0$:

Let's compare the two algebraically.

Corollary. $$\lim_{n\to \infty} \frac{1 - \cos x_n}{x_n} = 0,$$ for any sequence $x_n\to 0$ with $x_n\ne 0$.

Exercise. Prove the corollary.

Corollary. $$\lim_{n\to \infty} \frac{\tan x_n}{x_n} =1,$$ for any sequence $x_n\to 0$ with $x_n\ne 0$.

Proof. It follows from the above theorem, the fact that $\cos x_n\to 1$ for any sequence $x_n\to 0$ and QR. $\blacksquare$
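Both corollaries can be confirmed numerically along $x_n=1/n$ (a Python sketch, not from the text):

```python
import math

# The two limits along x_n = 1/n: (1 - cos x)/x -> 0 and tan(x)/x -> 1.
def one_minus_cos_over(n):
    x = 1 / n
    return (1 - math.cos(x)) / x

def tan_over(n):
    x = 1 / n
    return math.tan(x) / x
```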

This is a confirmation:

Second, the exponents.

Example (compounded interest). Suppose we have money in the bank at APR $10\%$ compounded annually. Then after a year, given a $\$1,000$ initial deposit, you have $$1000 + 1000\cdot 0.10 = 1000(1 + 0.1) = 1000 \cdot 1.1.$$ Same every year. After $t$ years, it's $1000\cdot 1.1^{t}$.

What if it is compounded semi-annually, with the same APR? After $\frac{1}{2}$ year, we earn $1000\cdot 0.05$, or total $$1000 + 1000\cdot 0.05 = 1000\cdot 1.05;$$ after another $\frac{1}{2}$ year, $$\left(1000\cdot 1.05\right)\cdot 1.05 = 1000 \cdot 1.05^{2}.$$ After $t$ years, $$1000\cdot (1.05^{2})^{t} = 1000\cdot 1.05^{2t}.$$ Note that we are getting more money: $1.05^{2} = 1.1025 > 1.1$! Try compounding quarterly: $$1000\cdot 1.025^{4t}.$$ If compounded $n$ times per year, then we have $$1000 \cdot \left(1 + \frac{0.1}{n}\right)^{nt},$$ where $\frac{0.1}{n}$ is the interest over one period. Generally, for APR $r$ (given as a decimal) and for the initial deposit $A_{0}$, after $t$ years, the current amount is $$A(t) = A_{0}\left(1 + \frac{r}{n} \right)^{nt},$$ if compounded $n$ times per year. What if we compound more and more often; will we be paid unlimited amounts? No. $\square$

Theorem (Continuous compounding). The limit below exists: $$\lim_{n\to \infty} \left( 1+\frac{1}{n} \right)^n .$$

Proof. First, we show that the sequence $$a_n=\left(1+\frac{1}{n}\right)^{n}$$ is increasing. We have: $$\begin{array}{lll} \dfrac{a_{n+1}}{a_n}&=\dfrac{\left(1+\tfrac{1}{n+1}\right)^{n+1}}{\left(1+\tfrac{1}{n}\right)^n}=\dfrac{\left(\frac{n+2}{n+1}\right)^{n+1}}{\left(\frac{n+1}{n}\right)^n}\\ &=\left(\dfrac{n+2}{n+1}\right)^{n+1}\left(\dfrac{n}{n+1}\right)^{n+1}\left(\dfrac{n+1}{n}\right)^1\\ &=\left(\dfrac{n^2+2n+1-1}{n^2+2n+1}\right)^{n+1}\dfrac{n+1}{n}\\ &=\left(1-\dfrac{1}{(n+1)^2}\right)^{n+1}\dfrac{n+1}{n}. \end{array}$$ We use the Bernoulli Inequality: $$(1+a)^m > 1+ma,$$ for any $a>-1,\ a\ne 0$, and any integer $m>1$. We just choose $a=-\tfrac{1}{(n+1)^2}$ and $m=n+1$. Then $$\begin{array}{lll} \dfrac{a_{n+1}}{a_n} & > \left(1-\dfrac{1}{n+1}\right)\dfrac{n+1}{n}\\ &=\dfrac{n}{n+1}\dfrac{n+1}{n}\\ &=1. \end{array}$$ In a similar fashion we show that the sequence $$b_n=\left(1+\frac{1}{n}\right)^{n+1}$$ is decreasing. Since $a_n<b_n\le b_1$, we conclude that the former sequence is both increasing and bounded. Therefore, it converges by the Monotone Convergence Theorem. $\blacksquare$

We denote this limit by $e$: $$e=\lim_{n\to \infty} \left( 1+\frac{1}{n} \right)^n .$$ It is also known as the “Euler number”.

Example. We continue with the example... What if the interest is compounded $n$ times and $n \to \infty$? Then we have: $$\begin{array}{lll} \lim_{n \to \infty} A(t) & = \lim_{n \to \infty} A_{0} \left( 1 + \frac{r}{n} \right)^{nt} \\ &= A_{0} \lim_{n \to \infty} \left( 1 + \frac{r}{n} \right)^{nt} &\text{ by CMR} \\ & = A_{0} \left( \lim_{n \to \infty} \left(1 + \frac{r}{n}\right)^{n}\right)^{t} \\ & = A_{0} (e^{r})^{t}. \end{array}$$ Thus, with APR of $r$ and an initial deposit $A_{0}$, after $t$ years you have: $$A(t) = A_{0} e^{rt}.$$ We say that the interest is compounded continuously.

Suppose APR is $10\%$, $A_{0} = 1000$, $t = 1$. Then, $$A(1)=1000\cdot e^{0.1} \approx \$1,105,$$ so the interest, about $\$105$, exceeds the $\$100$ earned with annual compounding. How long does it take to triple your money with APR $=5\%$, compounded continuously? Set $A_{0} = 1$ and solve for $t$: $$\begin{array}{rll} 3 & = e^{0.05t} &\Longrightarrow\\ \ln 3 &= 0.05t &\Longrightarrow\\ t &= \dfrac{\ln 3}{0.05} \approx 22 \text{ years.} \end{array}$$ $\square$

Note that these results suggest a way of understanding the limit of a function at a point to be discussed in Chapter 6.

Exercise. Give formulas for the following sequences: (a) $a_n\to 0$ as $n\to \infty$ but it's not increasing or decreasing; (b) $b_n\to +\infty$ as $n\to \infty$ but it's not increasing.
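A quick numerical sanity check (a Python sketch, not part of the text) of the increasing approach of $\left(1+\frac{1}{n}\right)^n$ to $e$ and of the tripling-time computation:

```python
import math

# (1 + 1/n)^n increases toward e; continuous compounding, tripling time.
def a(n):
    return (1 + 1 / n) ** n

approx_e = a(10**6)
triple_time = math.log(3) / 0.05   # years to triple at 5% APR, continuous
```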
Applying the Binomial Theorem (Chapter 1) to the expression for $e$ yields: $$\left(1 + \frac{1}{n}\right)^n = 1 + {n \choose 1}\frac{1}{n} + {n \choose 2}\frac{1}{n^2} + {n \choose 3}\frac{1}{n^3} + \cdots + {n \choose n}\frac{1}{n^n}.$$ The $k$th term of this sum is $${n \choose k}\frac{1}{n^k} = \frac{1}{k!}\cdot\frac{n(n-1)(n-2)... (n-k+1)}{n^k}= \frac{1}{k!}\cdot\frac{n}{n}\cdot\frac{n-1}{n}\cdot\frac{n-2}{n}\cdot ... \cdot\frac{n-k+1}{n}.$$ As $n\to\infty$, each of these fractions approaches one, and therefore $$\lim_{n\to\infty} {n \choose k}\frac{1}{n^k} = \frac{1}{k!}.$$ Then: $$e=\sum_{k=0}^\infty\frac{1}{k!}=\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots.$$ The convergence follows from the Monotone Convergence Theorem.
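The partial sums of this series reach $e$ remarkably fast, as this Python sketch (not part of the text) shows:

```python
import math

# Partial sums of sum_{k=0}^m 1/k! converge to e very quickly.
def partial_sum(m):
    s, fact = 0.0, 1
    for k in range(m + 1):
        if k > 0:
            fact *= k    # fact = k!
        s += 1 / fact
    return s
```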