This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Calculus of sequences

### From Mathematics Is A Science

## Contents

- 1 What is calculus about?
- 2 The real number line
- 3 Sequences
- 4 Repeated addition and repeated multiplication
- 5 Exponential models
- 6 The algebra of exponents
- 7 The sequence of differences: velocity
- 8 The sequence of the sums: displacement
- 9 The fundamental relation between differences and sums
- 10 The algebra of sums and differences

## 1 What is calculus about?

The idea of calculus in a single picture:

Let's be more specific. We consider two situations.

First, imagine that our speedometer is broken. What do we do, if we want to estimate how fast we are driving? We look at the odometer several times -- say, every hour -- during the trip and record the mileage on a piece of paper. The list of our consecutive *locations* might look like this:

- initial reading: $10,000$ miles;
- after the first hour: $10,055$ miles;
- after the second hour: $10,095$ miles;
- after the third hour: $10,155$ miles;
- etc.

Let's plot the location against time:

But what do we know about the speed? Nothing without algebra! The algebra is simple; we use the well-known formula: $$\text{ speed }= \frac{\text{ distance }}{\text{ time }}.$$ The time period was chosen to be $1$ hour, so we need only to look at the distance covered during each of these one-hour periods:

- distance covered during the first hour: $10,055-10,000=55$ miles;
- distance covered during the second hour: $10,095-10,055=40$ miles;
- distance covered during the third hour: $10,155-10,095 =60$ miles;
- etc.
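The computation above is nothing but subtraction of consecutive readings; a minimal sketch in code (the variable names are ours):

```python
# Odometer readings taken every hour (miles), from the list above
readings = [10000, 10055, 10095, 10155]

# speed = distance / time, with time = 1 hour, so the speed over each
# one-hour period is just the difference of consecutive readings
speeds = [b - a for a, b in zip(readings, readings[1:])]

print(speeds)  # [55, 40, 60]
```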

This is how these new numbers appear in the original plot (top):

We also plot these new numbers against time (bottom). As you can see, we treat the outcome data as if the speed remains constant during each of these hour-long periods.

The problem is solved! Only within the limits of the information provided by the data, of course, but we have established that the speed has been -- roughly -- $55$, $40$, and $60$ miles an hour.

Now the opposite problem... Imagine this time that it is the odometer that is broken. If now we want to estimate how far we will have gone, we should look at the speedometer several times -- say, every hour -- during the trip and record its readings on a piece of paper. The result may look like this:

- during the first hour: $35$ miles an hour;
- during the second hour: $65$ miles an hour;
- during the third hour: $50$ miles an hour;
- etc.

Let's plot our speed against time:

Now, what does this tell us about our location? Nothing, without algebra! We use the same formula: $$\text{ distance }=\text{ speed }\times \text{ time }.$$ In contrast to the first example, though, we need one more piece of information: we must know where our trip started -- say, the $100$ mile mark. The time period was chosen to be $1$ hour, so we need only to add up the speeds at which -- we assume -- we drove during each of these one-hour periods:

- the location after the first hour: $100+35=135$ mile mark;
- the location after two hours: $135+65=200$ mile mark;
- the location after three hours: $200+50=250$ mile mark;
- etc.
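This computation is a running sum; a minimal sketch with the numbers above:

```python
speeds = [35, 65, 50]  # miles per hour, one reading per hour
location = 100         # the trip starts at the 100 mile mark

locations = []
for v in speeds:
    location += v * 1  # distance = speed * time, with time = 1 hour
    locations.append(location)

print(locations)  # [135, 200, 250]
```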

This is how these new numbers appear in the plot:

The problem is solved! Only within the limits of the information provided by the data, of course, but we have established that we have passed -- roughly -- the $135$, $200$, and $250$ mile marks during this time.

We next consider more complex examples. Here our ability to use negative numbers allows us to treat the data the same way even when we are moving in the opposite direction. In this case, we are dealing with the *velocity* instead of the speed.

First, from location to velocity... Suppose that this time we have $30$ data points; they are the locations of a moving object recorded every minute: $$\begin{array}{l|l|llllllllll} \text{ location } & \text{ miles } & 0.00 &0.10 &0.20 &0.30 &0.39 &0.48 &0.56 &0.64 &0.72 &0.78 &0.84 &...\\ \hline \text{ time } & \text{ min } &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &... \end{array}$$ This data is seen in the first two columns of the spreadsheet:

The data is furthermore illustrated as a “scatter plot” on the right.

To understand how fast we move over these one-minute intervals, we compute the *differences* of locations for each pair of consecutive locations:
$$\begin{array}{l|l|llllllllll}
\text{ velocity } & \text{ miles/min } &- &0.10 &0.10 &0.10 &0.09 &0.09 &0.09 &0.08 &0.07 &0.07 &0.06 &...\\
\text{ location } & \text{ miles } &0.00 &0.10 &0.20 &0.30 &0.39 &0.48 &0.56 &0.64 &0.72 &0.78 &0.84 &...\\
\hline
\text{ time } & \text{ min } &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &...
\end{array}$$
Practically, we use the spreadsheet with the following formula for each entry in the second column:
$$\texttt{ =RC[-1]-R[-1]C[-1]}.$$

This new data is illustrated as the second scatter plot. To emphasize the fact that the velocity data, unlike the location, is referring to time intervals rather than time instances, we plot it with horizontal segments. In fact, the data table can be rearranged as follows to make this point clearer: $$\begin{array}{l|c|c|c|c|c|c|c|c|c|c} \text{ velocity } &\cdot&0.10& \cdot&0.10& \cdot&0.10& \cdot&0.09& \cdot&0.09 &...\\ \text{ location } &0.00 &-&0.10 &-&0.20 &-&0.30 &-&0.39&-&...\\ \hline \text{ time } &0 &&1 &&2 &&3 &&4 &&... \end{array}$$

What has happened to the moving object can now be easily read from the second graph:

- the velocity was positive initially and it was moving in the positive direction;
- it was moving fairly fast but then started to slow down;
- it stopped for a very short period;
- then the velocity became negative as it started to move in the opposite direction;
- it started to speed up in that direction.

Thus, the latter set of data succinctly records some facts about the qualitative and quantitative behavior of the former. As the latter is *derived* from the former, the transition is described by:
$$\text{function} \quad \longrightarrow \quad \text{its derivative}.$$

Now, from velocity to location... Again, we consider $30$ data points. They are the velocities of a moving object recorded every minute: $$\begin{array}{l|l|llllllllll} \text{ velocity }&\text{ miles/min }&0.00 &0.10 &0.20 &0.30 &0.39 &0.48 &0.56 &0.64 &0.72 &0.78 &0.84 &...\\ \hline \text{ time } &\text{ min } &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &... \end{array}$$ This data is seen in the first two columns of the spreadsheet:

The data is furthermore illustrated as a scatter plot on the right. Again, we emphasize the fact that the velocity data is referring to time intervals and plot it with horizontal bars.

To find out where we are at the end of each of these one-minute intervals, we compute the *sums* of the velocities, one minute at a time:
$$\begin{array}{l|l|llllllllll}
\text{ location } &\text{ miles } &0.00 &0.10 &0.30 &0.59 &0.98 &1.46 &2.03 &2.67 &3.39 &4.17 &...\\
\text{ velocity } &\text{ miles/min } &-&0.00 &0.10 &0.20 &0.30 &0.39 &0.48 &0.56 &0.64 &0.72 &...\\
\hline
\text{ time } &\text{ min } &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &...
\end{array}$$
Practically, we use a spreadsheet with a formula for each row:
$$\texttt{ =R[-1]C+RC[-1]}.$$

The data is also illustrated as the second scatter plot on the right.

We again rearrange the data table to make the difference between two types of data clearer: $$\begin{array}{l|c|c|c|c|c|c|c|c|c|c} \text{ location } &0.00 &-&0.10 &-&0.30 &-&0.59&-&0.98&-&...\\ \text{ velocity } &\cdot&0.00& \cdot&0.10& \cdot&0.20& \cdot&0.30& \cdot&0.39 &...\\ \hline \text{ time } &0 &&1 &&2 &&3 &&4 &&... \end{array}$$

Thus, as the velocity data records some facts about the quantitative behavior of the location, we are able to accumulate this information to recover the location. This *backward* transition is described by:
$$\text{function} \quad \longrightarrow \quad \text{its antiderivative.}$$

The terminology is justified by the fact that the two operations we have considered *undo* the effect of each other:
$$\text{location} \quad \longrightarrow \quad \text{velocity} \quad \longrightarrow \quad \text{same location (up to initial position),} $$
and
$$\text{velocity} \quad \longrightarrow \quad \text{location (up to initial position)} \quad \longrightarrow \quad \text{same velocity.} $$
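A quick numerical check of this "undo" relation, a sketch using a few of the location values from the earlier table:

```python
locations = [0.00, 0.10, 0.20, 0.30, 0.39, 0.48]

# forward: location -> velocity, by differences of consecutive terms
velocities = [b - a for a, b in zip(locations, locations[1:])]

# backward: velocity -> location, by sums starting from the initial position
recovered = [locations[0]]
for v in velocities:
    recovered.append(recovered[-1] + v)

# the original location data is recovered (up to rounding)
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, locations)))  # True
```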

We can further increase the number of data points and, as we zoom out, the scatter plots will look like *continuous curves*! We now use what we understand about the behavior of the motion data in those tables to describe *continuous motion*. We simply look at the slope of the graph:

Here we even replace our dependent variable, location, with another, temperature. The results are just as applicable to any other quantity that depends on time, or on any other quantity.

**Exercise.** Your location is recorded every half-hour, shown below. Estimate your velocity as a function of time.
$$\begin{array}{r|c}
\text{time, }x&\text{location, }y\\
\hline
0&20\\
.5&30\\
1&20\\
1.5&20\\
2&50\\
\end{array}$$

## 2 The real number line

Sets get bigger and bigger and may seem to be infinite. Imagine facing a fence so long that you can't see its ends. We *zoom out* multiple times and there is still more left:

Is the size *infinite*? It may be. But for as long as this is *convenient*, we just assume that we can go on for as long as necessary.

We visualize the set as markings on a straight line, according to the order of the planks:

The assumption is that the line and the markings continue without stopping in both directions, which is commonly represented by “...”. The same idea applies to milestones. They are also ordered and might also continue indefinitely.

If we choose to speak of locations spaced over an infinite straight line we associate it with the set of *integers*, **denoted** by:
$${\bf Z}=\{...,-3,-2,-1,0,1,2,3,...\},$$
or its subset, the set of *natural numbers*:
$${\bf N}=\{0,1,2,3,...\} \subset {\bf Z}.$$

Suppose we *zoom in* on a piece of the fence. What if we see a shorter plank between the two?

If we keep zooming in, the result will look similar to a *ruler*:

It's as if we add *one mark* between every two and then repeat this process. We'd have to stop eventually as this ruler goes only to $1/16$ of an inch. If we add *nine marks* at a time, the result is a *metric ruler*:

Here, we go from meters to decimeters, to centimeters, to millimeters, etc. Is the depth *infinite*? It may be. But for as long as this is *convenient*, we just assume that we can go on for as long as necessary.

To see it another way, we allow more and more decimals in our numbers: $$\begin{array}{rlllllll} 1/3:&.3&.33&.333&.3333&.33333&...;\\ 1:&1.&1.1&1.01&1.001&1.0001&...;\\ \pi:&3.&3.1&3.14&3.141&3.1415&.... \end{array}$$

In order to visualize *all numbers*, we arrange the integers in a line first. The line of numbers is built in several steps.

Step 1: a line is drawn, called an *axis*, usually horizontal.

Step 2: one of the two directions on the line is chosen as *positive*, usually the one to the right, then the other is *negative*.

Step 3: a point $O$ is chosen as the *origin*.

Step 4: a segment of the line is chosen as the *unit* of length.

Step 5: the segment is used to measure distances to locations from the origin $O$ -- positive in the positive direction and negative in the negative direction -- and add marks to the line, the *coordinates*.

Step 6: the segments are further subdivided to fractions of the unit, etc.

The end result depends on what the building block is. It may contain gaps and look like a ruler (or a comb) as discussed above. It may also be solid and look like a tile or a domino piece:

So, we start with integers as locations and then also include fractions, i.e., *rational numbers*. However, we then realize that some of the locations have no counterparts among these numbers. For example, $\sqrt{2}$ is the length of the diagonal of a $1\times 1$ square (and a solution of the equation $x^2=2$); it's not rational. That's how the *irrational numbers* came into play. Together they form the set of *real numbers*. It is often **denoted** by ${\bf R}$.

We use this set-up to produce a correspondence:
$$\begin{array}{|c|}\hline \quad \text{location } P\ \longleftrightarrow\ \text{ number } x . \quad \\ \hline\end{array}$$
It works in *both directions*, as follows:

- First, suppose $P$ is a *location* on the line. We then find the nearest mark on the line. That's the “coordinate”, some *number* $x$, of $P$.
- Conversely, suppose $x$ is a *number*. We think of it as a “coordinate” and find its mark on the line. That's the *location* $P$ of $x$ on the line.

Once the coordinate system is in place, it is acceptable to think of locations as numbers and vice versa. In fact, we can write: $$P=x.$$

The result may be described as the “$1$-dimensional coordinate system”. It is also called the *real number line* or simply *the number line*.

We have created a visual model of the set of real numbers. Depending on the real number or a collection of numbers that we are trying to visualize, we choose what part of the real line we exhibit; for example, the zero may or may not be in the picture. We also have to choose an appropriate length of the unit segment in order for the numbers to fit in.

In addition to the ruler, another way to visualize numbers is with *colors*. In fact, in digital imaging the levels of gray are associated with the numbers from $0$ to $255$. We use a shorter scale, $\{1,2,...,20\}$, below (top):

It is also often convenient to associate blue with negative and red with positive numbers (bottom).

## 3 Sequences

The lists of numbers in the first section are *sequences*: the locations and the velocities.

**Example.** Watching a ping-pong ball fall and recording -- at equal intervals -- how high it is produces an ever-expanding string of numbers. It looks something like this:

We ignore, for now, the time and concentrate on the locations only. Suppose we have only the first few in a *list*:
$$36,\ 35,\ 32, \ 27,\ 20,\ 11,\ 0.$$
The picture above may come from combining the seven shots taken over this period of time. It can be visualized by placing the ball at every coordinate location on the real line, vertically or horizontally:

Though not uncommon, this method of visualization of motion, or sequences in general, has its drawbacks: overlapping may be inevitable and the *order* of events is lost without labels. A more popular approach is the following. The idea is to *separate time and space*:

This is called the *graph*. The location is plotted -- as it is -- vertically and the time horizontally. It is as if for every moment of time there is a separate real line, an axis!

As far as data is concerned, we have a list of *pairs* arranged in a table:
$$\begin{array}{r|ll}
\text{moment}&\text{height}\\
\hline
1&36\\
2&35\\
3&32\\
4&27\\
5&20\\
6&11\\
7&0
\end{array}$$
The table is just as effective a representation of the data if we flip it; it's also more compact:
$$\begin{array}{l|ll}
\text{moment:}&1&2&3&4&5&6&7\\
\hline
\text{height:}&36&35&32&27&20&11&0
\end{array}$$
$\square$

So, the most common way to visualize a sequence of *numbers* is as a sequence of *points* on a sequence of vertical axes:

It is also common to represent these numbers as vertical bars:

Warning: the graph is just a visualization.

To represent a sequence algebraically, we first give it a name, say, $a$, and then assign a special variation of this name to each term of the sequence: $$\begin{array}{ll|ll} \text{index:}&n&1&2&3&4&5&6&7&...\\ \hline \text{term:}&a_n&a_1&a_2&a_3&a_4&a_5&a_6&a_7&... \end{array}$$

**Example.** In the last example, we name the sequence $h$ for “height”. Then the *table* takes this form:
$$\begin{array}{l|ll}
\text{moment:}&1&2&3&4&5&6&7\\
\hline
\text{height:}&h_1&h_2&h_3&h_4&h_5&h_6&h_7&...\\
&||&||&||&||&||&||&||&...\\
\text{height:}&36&35&32&27&20&11&0&...
\end{array}$$
Abbreviated, it produces this *list*:
$$h_1=36,\ h_2=35,\ h_3=32, \ h_4=27,\ h_5=20,\ h_6=11,\ h_7=0.$$
$\square$

So, we use the following **notation**:
$$a_1=1,\ a_2=1/2,\ a_3=1/3,\ a_4=1/4,\ ...,$$
where $a$ is the *name* of the sequence and adding a subscript indicates which element of the sequence we are facing.

A sequence, but not an infinite one, can come from a list or a table. Infinite sequences often come from formulas.

**Example.** The sequence,
$$a_1=1,\ a_2=1/2,\ a_3=1/3,\ a_4=1/4,\ ...,$$
can also be represented by this formula:
$$a_n=1/n.$$
Indeed, replacing $n$ in this formula with $1$, then $2$, $3$, etc. produces the numbers on the list one by one. We write:
$$a_n=1/n,\ n=1,2,3,...$$
With a formula, we can use a spreadsheet to produce more values and plot them:

$\square$
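Outside a spreadsheet, the same terms can be produced by a few lines of code; a minimal sketch:

```python
# the formula a_n = 1/n generates the terms of the sequence one by one
def a(n):
    return 1 / n

terms = [a(n) for n in range(1, 5)]
print(terms)  # [1.0, 0.5, 0.333..., 0.25]
```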

Working backwards from a list, it is sometimes possible to provide a formula for the $n$-*th term of the sequence*.

**Example.** What is the formula for this sequence:
$$1,\ 1/2,\ 1/4,\ 1/8,\ ...?$$
First, we notice that the numerators are just $1$s. Second, the denominators are the powers of $2$. We write it in a more convenient form:
$$a_1=1,\ a_2=\frac{1}{2},\ a_3=\frac{1}{2^2},\ a_4=\frac{1}{2^3},\ ....$$
The pattern is clear: the index is one higher than the power, and the formula is
$$a_n=\frac{1}{2^{n-1}}.$$
We can plot more now:

$\square$

**Example (alternating).** What is the formula for this sequence:
$$1,\ -1,\ 1,\ -1,\ ...?$$

First, we notice that the absolute values of these numbers are just $1$s while the sign alternates. We write it in a more convenient form: $$a_1=1,\ a_2=-1,\ a_3=1,\ a_4=-1,\ ....$$ The pattern is clear and the correspondence can be written for the two cases: $$a_n=\begin{cases} -1&\text{ if } n \text{ is even},\\ 1&\text{ if } n \text{ is odd}. \end{cases}$$ The trick we can use for sequences but not for functions is to write: $$a_n=(-1)^{n+1}.$$ $\square$

**Exercise.** Point out a pattern in each of the following sequences and suggest a formula for its $n$th element whenever possible:

- (a) $1,\ 3,\ 5,\ 7,\ 9,\ 11,\ 13,\ 15,\ ...$;
- (b) $.9,\ .99,\ .999,\ .9999,\ ...$;
- (c) $1/2,\ -1/4,\ 1/8,\ -1/16,\ ...$;
- (d) $1,\ 1/2,\ 1/3,\ 1/4,\ ...$;
- (e) $1,\ 1/2,\ 1/4\ ,1/8,\ ...$;
- (f) $2,\ 3,\ 5,\ 7,\ 11,\ 13,\ 17,\ ...$;
- (g) $1,\ -4,\ 9,\ -16,\ 25,\ ...$;
- (h) $3,\ 1,\ 4,\ 1,\ 5,\ 1,\ 9,\ ...$.

Thus, *every* formula may create a sequence $a_n$.

When we say that a sequence increases, we mean that the graph *rises* and we say it decreases when its graph *drops* (seen zoomed out):

As you can see, the behavior varies even within these two categories.

The precise definition has to rely on considering *every pair of consecutive terms* of the sequence.

**Definition.** A sequence $a_n$ is called *increasing* if, for all $n$, we have:
$$a_n\le a_{n+1}.$$
It is called *decreasing* if, for all $n$, we have:
$$a_n\ge a_{n+1}.$$

**Example.** The sequences $\frac{1}{n}$ and $\frac{1}{2^{n-1}}$ are decreasing. The sequence $n^2$ is increasing. But $(-1)^{n}$ is neither increasing nor decreasing. $\square$
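For a finite portion of a sequence, the definition is easy to check term by term; a sketch (the helper names `is_increasing` and `is_decreasing` are ours):

```python
def is_increasing(terms):
    # a_n <= a_{n+1} for every pair of consecutive terms
    return all(a <= b for a, b in zip(terms, terms[1:]))

def is_decreasing(terms):
    # a_n >= a_{n+1} for every pair of consecutive terms
    return all(a >= b for a, b in zip(terms, terms[1:]))

ns = range(1, 10)
print(is_decreasing([1 / n for n in ns]))      # True
print(is_increasing([n ** 2 for n in ns]))     # True
print(is_increasing([(-1) ** n for n in ns]))  # False
```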

A major reason why we study sequences is that their terms can often be defined in a *consecutive manner*.

**Example (regular deposits).** A person starts to deposit $\$20$ every month into his bank account that already contains $\$ 1000$. Then, after the first month the account contains:
$$ \$1000+\$20=\$ 1020,$$
after the second:
$$ \$1020+\$20=\$ 1040,$$
and so on. Then, if $a_n$ is the amount in the bank account after $n$ months, we have a formula:
$$a_{n+1}=a_n+ 20.$$
For the spreadsheet, the formula is:
$$\texttt{=R[-1]C+20}.$$
Below, the current amount is shown in blue and the next -- computed from the current -- is shown in red:

Since repeated addition is multiplication, it is easy to derive a formula for the $n$th term: $$a_{n}=1000+ 20\cdot n,$$ assuming that $a_0=1000$.

The sequence is increasing. $\square$
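The recursive formula and the $n$th-term formula can be checked against each other; a sketch with the numbers from this example:

```python
a0, b = 1000, 20  # initial amount and the monthly increment

# recursive: a_{n+1} = a_n + 20
terms = [a0]
for _ in range(12):
    terms.append(terms[-1] + b)

# direct: a_n = 1000 + 20 * n
direct = [a0 + b * n for n in range(13)]

print(terms == direct)  # True
```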

Thus, in addition to tables and formulas, sequences can be defined by defining their elements in a consecutive manner. We say that a sequence $a_n$ is *recursive* when its next term is found from the current term (or several previous terms) by a formula.

**Definition.** A sequence defined (recursively) by the formula:
$$a_{n+1}=a_n+ b,$$
is called an *arithmetic progression* with $b$ its *increment*.

**Example (compounded interest).** We saw an arithmetic progression with increment $b=20$ in the last example. Also typical is the following situation. A person deposits $\$ 1000$ in his bank account. Suppose the account pays $1\%$ APR compounded annually. Then, after the first year, the accumulated interest is
$$ \$1000\cdot.01=\$ 10,$$
and the total amount becomes $\$1010$. After the second year we have the interest:
$$ \$1010\cdot .01=\$ 10.10,$$
and so on. In other words, the total amount is multiplied by $.01$ at the end of each year and then added to the total. An even simpler way to put this is to say that the total amount is multiplied by $1.01$ at the end of each year. Now if $a_n$ is the amount in the bank account after $n$ years, then we have a *recursive formula*:
$$a_{n+1}=a_n\cdot 1.01.$$
For the spreadsheet, the formula is:
$$\texttt{=R[-1]C*1.01}.$$

It is easy to derive the $n$th-term formula though: $$a_{n}=1000\cdot 1.01^n.$$ Only after repeating the step $100$ times does one see that this isn't just a straight line:

The sequence is increasing. $\square$
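The same check for the geometric progression; a sketch (floating-point arithmetic makes the comparison approximate):

```python
a0, r = 1000, 1.01  # initial deposit and the yearly multiplier

# recursive: a_{n+1} = a_n * 1.01
amounts = [a0]
for _ in range(100):
    amounts.append(amounts[-1] * r)

# direct: a_n = 1000 * 1.01^n
ok = all(abs(a - a0 * r ** n) < 1e-6 for n, a in enumerate(amounts))
print(ok)  # True
print(round(amounts[100], 2))  # roughly 2704.81 after 100 years
```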

**Definition.** A sequence defined (recursively) by the formula:
$$a_{n+1}=a_n\cdot r,$$
with $r\ne 0$, is called a *geometric progression* with $r$ its *ratio*. We say that this is *geometric growth* when $r>1$, and *geometric decay* when $r<1$.

Alternatively, it is called *exponential* growth and decay respectively.

**Example.** If the population of a city declines by $3\%$ every year, we have a geometric progression with ratio $r=.97$. The sequence is decreasing. $\square$

**Example.** What if we deposit money to our bank account *and* receive interest? The recursive formula is simple, for example:
$$a_{n+1}=a_n\cdot 1.05+200.$$
We just take $F(x)=x\cdot 1.05+200$ in the definition of recursiveness. Is there a *direct* formula? Yes, but it's too cumbersome to be of any use:
$$a_n=\bigg(...\big((a_0\cdot 1.05+200)\cdot 1.05+200\big)...\bigg)\cdot 1.05+200.$$
The sequence is increasing. $\square$
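Running the recursion is straightforward even when the direct formula is cumbersome; a sketch (the initial amount $a_0=1000$ is our assumption, not given in the text):

```python
a = 1000.0  # assumed initial amount
history = [a]
for _ in range(10):
    a = a * 1.05 + 200  # interest first, then the deposit
    history.append(a)

# every term is larger than the previous one: the sequence is increasing
print(all(x < y for x, y in zip(history, history[1:])))  # True
```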

This time the multiple varies...

**Definition.** Define a sequence recursively:
$$a_1=1,\ a_n=a_{n-1}\cdot n.$$
Then,
$$a_n=1\cdot 2 \cdot ... \cdot (n-1)\cdot n .$$
The result is called the *factorial* of $n$ and is denoted by
$$n!=1\cdot 2 \cdot ... \cdot (n-1)\cdot n.$$
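The recursion translates directly into code; a minimal sketch:

```python
def factorial(n):
    # a_1 = 1, a_n = a_{n-1} * n
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print([factorial(n) for n in range(1, 8)])  # [1, 2, 6, 24, 120, 720, 5040]
```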

The factorial exhibits a very fast growth:

You can see how it stays behind the geometric progression with ratio $r=10$ but then leaps ahead.

The factorial appears frequently in calculus and elsewhere. It suffices to point out for now that it counts in how many ways one can permute objects. We notice that it is about placing $n$ objects into $n$ slots, one by one: the first object has $n$ options, the second $n-1$, ... and the last has just one left. Since the choices are independent of each other, the total number of such placements is $n(n-1)\cdot ...\cdot 2\cdot 1=n!$.

**Theorem.** The number of *permutations* of $n$ objects is equal to $n$ factorial.

**Example (logistic).** Define a sequence recursively:
$$a_{n+1}=ra_n(1-a_n),$$
where $r>0$ is a parameter. For the spreadsheet, the formula is:
$$\texttt{=R2C2*R[-1]C*(1-R[-1]C)},$$
where $\texttt{R2C2}$ contains the value of $r$. For example, this is what we have for $r=3.9$ (here $a_1=.5$):

The sequence is called the *logistic sequence*. Its dynamics dramatically depends on $r$:

$\square$
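The spreadsheet iteration can be reproduced in a few lines; a sketch with the same $r=3.9$ and $a_1=.5$:

```python
r = 3.9  # the parameter
a = 0.5  # a_1
terms = [a]
for _ in range(20):
    a = r * a * (1 - a)  # a_{n+1} = r * a_n * (1 - a_n)
    terms.append(a)

# for 0 < r <= 4, the terms never leave the interval [0, 1]
print(all(0 <= t <= 1 for t in terms))  # True
```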

**Example (round robin).** If we have $n$ teams to play each other exactly once, how many games do we have to plan for? A table commonly used for such a tournament is below:

The table reveals the following. The first team is to play $n-1$ games. The second is also to play $n-1$ games, but one of them is already counted on the first list. The third is also to play $n-1$ games, but two of them are already counted on the first and second lists. And so on. The total is: $$(n-1)+(n-2)+...+2+1.$$ We can treat this as a recursive sequence: $$a_1=0,\ a_{n+1}=a_n+n.$$ How do we find an explicit, direct formula for the $n$th term of this sequence? The table tells us the answer. The total number of cells in the table is $n^2$. Without the diagonal ones, it's $n^2-n$. Finally, we take only half of those: $(n^2-n)/2$. As a purely mathematical conclusion, the sum of the first $m$ consecutive integers is the following: $$1+2+3+...+m=\frac{m(m+1)}{2}.$$ $\square$
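Both the recursive count and the closed formula $(n^2-n)/2$ can be confirmed numerically; a sketch (the function name is ours):

```python
def games(n):
    # each new team adds one game against every team already listed
    total = 0
    for k in range(1, n):
        total += k
    return total

# agrees with the closed formula for the first few values of n
print(all(games(n) == (n * n - n) // 2 for n in range(1, 50)))  # True
print(games(10))  # 45 games for a 10-team tournament
```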

Sequences are subject to algebraic operations: addition, subtraction, multiplication, and division. These operations produce new sequences. However, there are two operations that also produce *new* sequences that tell us a lot about the *original* sequence. We saw them in action in the last section: *differences and sums*.

## 4 Repeated addition and repeated multiplication

In this section, we will take a better look at the arithmetic and geometric progressions, side by side.

Remember this simple algebra:

- repeated addition is *multiplication*: $2 + 2 + 2 = 2 \cdot 3$.

One can say that that's how multiplication was “invented” -- as repeated addition. What about *repeated multiplication*?

- repeated multiplication is *power*: $2 \cdot 2 \cdot 2 = 2^{3}$.

**Theorem.** (1) The $n$th-term formula for an arithmetic progression with increment $b$ and initial term $a_0$ is:
$$a_{n}=a_0+ b\cdot n,\ n=1,2,3...$$
(2) The $n$th-term formula for a geometric progression with ratio $r$ and initial term $a_0$ is:
$$a_{n}=a_0\cdot r^n,\ n=1,2,3...$$

So, we face two similar *conventions* of algebra:

- $a$ added to itself $n$ times is replaced with $a\cdot n$; while
- $a$ multiplied by itself $n$ times is replaced with $a^n$.

We will pursue this big analogy further and re-discover some familiar algebraic properties. This is entirely about *counting* how many times you carry out the operation: adding $a$ or multiplying by $a$.

The first set of properties is about the algebraic operations carried out with the outputs of two functions (parallel repetitions). $$\begin{array}{r|rl|rl} n=1,2,3,...&\text{repeated addition}&=\text{multiplication}&\text{repeated multiplication}&=\text{power}\\ \hline \text{Convention:}&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\ \hline \text{Repeated }n \text{ times,}&\underbrace{a+a+a+...+a}_{n\text{ times}}\qquad&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}\qquad&=a^n\\ \text{then }m \text{ times more.}&\qquad+\underbrace{a+a+a+...+a}_{m\text{ times}}&\quad=a\cdot m&\qquad\cdot \underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{m\text{ times}}&\quad=a^m\\ \text{Count:}&\underbrace{\quad\qquad\qquad\qquad\qquad}_{n+m\text{ times}}&&\underbrace{\qquad\qquad\qquad}_{n+m\text{ times}}&\\ \hline \text{Property 1:}&a\cdot (n+m)&=a\cdot n+a\cdot m & a^{n+m}&=a^n\cdot a^m\\ \hline \end{array}$$

This property for addition,
$$a\cdot (n+m)=a\cdot n+a\cdot m,$$
is called the *Distributive Property*. It “distributes” multiplication over addition and undoes the effect of factoring.

Warning: we don't “distribute” exponentiation over addition: $a^{n+m} \ne a^n+ a^m$.

Warning: we also can't “distribute” exponentiation over addition this way: $(a+b)^n \ne a^n+b^n$.

The second set of properties is also about the algebraic operations carried out with the outputs of two functions. $$\begin{array}{r|rl|rl} n=1,2,3,...&\text{repeated addition}&=\text{multiplication}&\text{repeated multiplication}&=\text{power}\\ \hline \text{Convention:}&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\ \hline \text{Repeated }n \text{ times.}&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\ \text{Repeated }n \text{ times.}&+\underbrace{b+b+b+...+b}_{n\text{ times}}&=b\cdot n&\cdot \underbrace{b\cdot b\cdot b\cdot ...\cdot b}_{n\text{ times}}&=b^n\\ \text{Count:}&=\underbrace{(a+b)+...+(a+b)}_{n\text{ times}}&&=\underbrace{(a\cdot b)\cdot ...\cdot (a\cdot b)}_{n\text{ times}}&\\ \hline \text{Property 2:}&(a+b)\cdot n&=a\cdot n+b\cdot n & (a\cdot b)^n &=a^n\cdot b^n\\ \hline \end{array}$$

This property for addition,
$$(a+b)\cdot n=a\cdot n+b\cdot n,$$
is, once again, the *Distributive Property*. It “distributes” multiplication over addition. The corresponding property for multiplication,
$$(a\cdot b)^n =a^n\cdot b^n,$$
“distributes” exponentiation over multiplication.

In contrast to the properties above, the next set is about *compositions* (repeat the repeated).
$$\begin{array}{r|c|rl|rl}
n=1,2,3,...&&\text{repeated addition}&=\text{multiplication}&\text{repeated multiplication}&=\text{power}\\
\hline
\text{Convention:}&&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\
\hline
\text{Repeated }n \text{ times,}&1.&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n& \underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\
m \text{ times.}&2.&+\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\cdot\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\
&\vdots&\vdots\qquad&\quad\vdots&\vdots\quad&\quad\vdots\\
&m.&+\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\cdot\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\
\text{Count:}&&\underbrace{\quad\qquad\qquad\qquad\qquad}_{nm\text{ times}}&\ \ \underbrace{\quad}_{m\text{ times}}&\underbrace{\quad\qquad\qquad\qquad}_{nm\text{ times}}&\ \ \underbrace{\quad}_{m\text{ times}}\\
\hline
\text{Property 3:}&&a\cdot(n\cdot m)&=(a\cdot n)\cdot m & a^{n\cdot m}&=(a^n)^m\\
\hline
\end{array}$$

The third property for addition,
$$a\cdot(n\cdot m)=(a\cdot n)\cdot m,$$
is called the *Associativity Property* of addition. It means that multiplications can be re-grouped (the middle number can be “associated” with the last one or the next one) arbitrarily. The corresponding property for multiplication,
$$a^{(n\cdot m)}=(a^n)^m,$$
means that exponentiations can be re-grouped arbitrarily too.

**Example.** These properties/rules operate as *shortcuts*:
$$2^{3} \cdot 2^{2} = 2^{3+2} = 2^{5}.$$
and
$$ (2^{3})^{4} = 2^{3\cdot 4} = 2^{12}.$$
$\square$

So far, we are facing nothing but a *geometric progression* with ratio $a$:

What about $0$? Can it be the exponent? If so, what is the outcome -- and the meaning -- of repeating an algebraic operation *zero times*? We will need another *convention*! We choose, for $a\ne 0$:
$$\begin{array}{|c|}\hline \quad a^0=1. \quad \\ \hline\end{array}$$

But why? Why not any other number? Because we want the three properties still to be satisfied!

Let's check. We plug in $n=0$ or $m=0$ and use our convention: $$\begin{array}{r|ll|ll} \text{Property 1:} & a^{n+m} &=a^n\cdot a^m & n=0 & \Longleftrightarrow & a^{0+m}&=a^0\cdot a^m & \Longleftrightarrow & a^m=1\cdot a^m & \texttt{TRUE!}\\ \hline \text{Property 2:} & a^n b^n &=(ab)^n & n=0 & \Longleftrightarrow & a^0 b^0 &=(ab)^0 & \Longleftrightarrow & 1\cdot 1=1 & \texttt{TRUE!}\\ \hline \text{Property 3:} & a^{nm} &=(a^n)^m & n=0 & \Longleftrightarrow & a^{0m} &=(a^0) ^m & \Longleftrightarrow & a^0=1^m & \texttt{TRUE!}\\ & && m=0 & \Longleftrightarrow & a^{n0} &=(a^n)^0 & \Longleftrightarrow & a^0=1 & \texttt{TRUE!}\\ \end{array}$$ The three properties are still satisfied and we will continue to use them. Note how choosing anything but $a^0=1$ would have ruined them.

From now on, the formula for a geometric progression, $$a_n=ar^n,$$ can start with the zeroth element, $a_0=ar^0=a$.

This is the summary of the algebra of exponents:
$$\begin{array}{r|rl|rl}
\text{}&\text{Multiplication:}&&\text{Exponentiation:}\\
\hline
n=1,2,3,...&\underbrace{a+a+a+...+a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}&=a^n\\
n=0 & a\cdot 0 & =0 & a^0&=1\\
\hline
\text{Rules: } 1.&a\cdot (n+m)&=a\cdot n+a\cdot m & a^{n+m}&=a^n\cdot a^m\\
\hline
2.&(a+b)\cdot n&=a\cdot n+b\cdot n & (a\cdot b)^n&=a^n\cdot b^n\\
\hline
3.&a\cdot(n\cdot m)&=(a\cdot n)\cdot m & a^{n\cdot m}&=(a^n)^m\\
\hline
\end{array}$$
Note that the right-hand sides of the rules are matched: just replace “$+$” with “$\cdot$” and “$\cdot$” with “$\wedge $” in the first column and you get those in the second. For example, this is how Rule 1 works:
$$\begin{array}{ccc}
a&\cdot &(n+m)&=&a&\cdot& n&+&a&\cdot &m \\
a&\wedge &{n+m}&=&a&\wedge &n&\cdot& a&\wedge &m\\
\end{array}$$
This is not the case -- warning! -- for the left-hand sides of the rules. The reason is that some of the algebra in the left-hand sides has come from *counting* the repetitions, identically for both columns.
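
**Example.** The rules can be spot-checked by computer. A quick sketch in Python (our own code; the sample values and loop bounds are arbitrary):

```python
# Spot-check the three rules of exponents next to their
# multiplication counterparts, over a small range of integers.
for a in [2, 3, 5]:
    for b in [2, 7]:
        for n in range(4):
            for m in range(4):
                assert a * (n + m) == a * n + a * m      # Rule 1, multiplication
                assert a ** (n + m) == a ** n * a ** m   # Rule 1, exponentiation
                assert (a + b) * n == a * n + b * n      # Rule 2, multiplication
                assert (a * b) ** n == a ** n * b ** n   # Rule 2, exponentiation
                assert a * (n * m) == (a * n) * m        # Rule 3, multiplication
                assert a ** (n * m) == (a ** n) ** m     # Rule 3, exponentiation
```

$\square$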

We will further continue to expand the idea of exponent in Chapter 4.

## 5 Exponential models

**Example (compounded interest).** A person deposits $a_0$ dollars in his bank account. Suppose the account pays interest at the annual rate $R$, given in decimal form: $.1$ for $10$ percent, etc., compounded annually. At the end of each year, the interest, $R$ times the total, is added to the total; in other words, the total is multiplied by $1+R$. Now, if $a_n$ is the amount in the bank account after $n$ years, then we have a recursive formula:
$$a_{n+1}=a_n\cdot (1+R).$$
This is just a geometric progression and its $n$th-term formula is:
$$a_n=a_0\cdot (1+R)^n,\quad n=1,2,3,...$$

$\square$
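
**Example.** We can confirm with a computer that the recursive formula and the $n$th-term formula produce the same numbers. A sketch in Python (the deposit and the rate are sample values):

```python
# Compound interest: the recursive formula a_{n+1} = a_n * (1 + R)
# vs. the n-th term formula a_n = a_0 * (1 + R)**n.
a0, R = 1000.0, 0.1          # sample values: $1,000 at 10% APR

amounts = [a0]
for _ in range(5):
    amounts.append(amounts[-1] * (1 + R))   # the recursive formula

# The closed-form formula gives the same sequence:
for n, a in enumerate(amounts):
    assert abs(a - a0 * (1 + R) ** n) < 1e-9
```

$\square$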

**Example (bacteria multiplying).** Suppose we have a population of bacteria that doubles every day: we can imagine that each bacterium divides in two once a day.

Let $p_n$ be the number of bacteria after $n$ days: $$\underbrace{p_{n+1}}_{\text{population: at time } n+1} = \underbrace{2p_n}_{\text{ at time } n}.$$ To know $p_n$ for all $n$, we need to know $p_0$.

A verbal description of the model: *the rate of growth is proportional to the size of population*.

What does this mean? If the population doubles: $$\begin{array}{lll} \text{time } n &\text{time } n+1 & \\ y = 10 & y = 20 &\quad \Delta y = 10 \\ y =100 & y= 200 &\quad \Delta y = 100. \end{array}$$ If it triples: $$\begin{array}{lll} \text{time } n &\text{time } n+1 & \\ y = 10 & y = 30 &\quad \Delta y = 20 \\ y =100 & y= 300 &\quad \Delta y = 200. \end{array}$$ Either way, the change $\Delta y$ is proportional to $y$.

This is the solution: $$y_n = Ca^{kn}$$ for any $C$.

What is $C$? Given $$ y_n = Ca^{kn} ,$$ substitute $n=0$. Then $$y_0 = Ca^{k\cdot 0} = Ca^{0} = C .$$ So, $C$ is the initial population. We re-phrase our solution: $$y_n = y_0 a^{kn}. $$ $\square$
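
**Example.** A computational version of the doubling model, sketched in Python (the initial population is a sample value):

```python
# Doubling bacteria: p_{n+1} = 2 * p_n, with closed form p_n = p0 * 2**n.
p0 = 100                     # assumed initial population
p = p0
for n in range(1, 11):
    p = 2 * p                # the recursive model
    assert p == p0 * 2 ** n  # matches the n-th term formula
```

$\square$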

**Example (population loss).** A city loses $7\%$ of its population every year -- an *exponential decline*:
$$\overbrace{( \underbrace{\underbrace{(1,000 \cdot 0.93 )}_{\textrm{after 1 year}} \cdot 0.93}_{\textrm{after 2 years}} ) \cdot 0.93}^{\textrm{after 3 years}}.$$
Here the multiple is $< 1$; hence, the sequence is decreasing.

$\square$
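
**Example.** The decline is easy to reproduce by computer. A sketch in Python, with the initial population of $1,000$ taken from the formula above:

```python
# Exponential decline: losing 7% a year means multiplying by 0.93.
pop = 1000.0
for _ in range(3):
    pop *= 0.93
# After 3 years the population equals 1000 * 0.93**3.
assert abs(pop - 1000 * 0.93 ** 3) < 1e-9
assert pop < 1000            # the sequence is decreasing
```

$\square$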

**Example (radioactive decay and radiocarbon dating).** Once a tree is cut, its carbon starts to decay: it loses half of its mass over a certain period of time. The loss follows the exponential decay model:
$$Ca^{k\cdot n}.$$
The percentage of this element, $^{14}\text{C}$, left:

Idea:

- Know the element's decay constant, $k$.
- Measure the percentage of the element present vs. the amount normally present;
- Calculate the time when the tree was cut.

Half-life is $5730$ years (i.e., the time it takes to go from $100\%$ to $50\%$). Parchment has $74\%$ of $^{14}\text{C}$ left. How old is it?

Let's estimate first by assuming that decay is linear.

The amount lost, $26\%$, is about half of the $50\%$ lost over a half-life; so, the period is close to $$\frac{1}{2}\cdot\text{half-life} = \frac{1}{2}\cdot 5730= 2865.$$ (The actual age is younger.) $\square$
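
**Example.** The exact age can be computed from the exponential model: the fraction left after $t$ years is $0.5^{t/5730}$, so we solve $0.74=0.5^{t/5730}$ for $t$. A sketch in Python:

```python
from math import log

# Solve 0.74 = 0.5 ** (t / 5730) for t by taking logarithms:
# t = 5730 * log(0.74) / log(0.5).
half_life = 5730
t = half_life * log(0.74) / log(0.5)
assert t < 2865              # younger than the linear estimate
```

The answer is about $2489$ years. $\square$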

**Example (Newton's Law of Cooling).** The *rate of cooling* of an object is proportional to the difference between its temperature and the temperature of the atmosphere.

Let $T_n$ be the temperature at time $n$, and assume that $T_0 > R$, where $R$ is the room temperature.

What about *warming*? We can also consider the case of $T_0<R$:

$\square$
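
The law can be simulated. A minimal sketch in Python, assuming a discrete version of the law, $T_{n+1}=T_n+k\,(R-T_n)$, with made-up values for $k$, $R$, and $T_0$:

```python
# Newton's Law of Cooling, discrete version (all values are assumed):
# the change of temperature is proportional to R - T_n.
R, k = 20.0, 0.1             # room temperature and cooling constant
T = 90.0                     # T_0 > R: the object cools down
for _ in range(100):
    T = T + k * (R - T)
assert R < T < 90.0          # approaches room temperature from above
```

$\square$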

## 6 The algebra of exponents

Recall our analysis from earlier on where the exponents -- by analogy with multiplication -- come from: $$\begin{array}{r|rl|rl} \text{}&\text{Multiplication:}&&\text{Exponentiation:}\\ \hline n=1,2,3, ... &\underbrace{a+a+a+ ... +a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ... \cdot a}_{n\text{ times}}&=a^n\\ n=0 & a\cdot 0 & =0 & a^0&=1\\ \hline \text{Rules: } 1.&a(n+m)&=a\cdot n+a\cdot m & a^{n+m}&=a^n\cdot a^m\\ \hline 2.&(a+b)n&=a\cdot n+b\cdot n & (ab)^n&=a^n b^n\\ \hline 3.&a\cdot(nm)&=(a\cdot n)\cdot m & a^{nm}&=(a^n)^m\\ \hline \end{array}$$

**Example.** These rules are used as *shortcuts*:
$$2^{3} \cdot 2^{2} = 2^{3+2} = 2^{5},$$
and
$$ (2^{3})^{4} = 2^{3\cdot 4} = 2^{12}.$$
$\square$

Recall the **notation** and the terminology:
$$\begin{array}{rl}
&\text{exponent}\\
&\downarrow\\
&\small{n}\\
a\\
\uparrow\\
\text{base}
\end{array}$$
This is nothing but a *geometric progression*:

However, we are still missing some of the possible values of $n$ that interest us!

**Example.** Suppose bacteria double in number every day. If the current population is $1,024$, how many were there two days ago? We need:
$$n=-2, \quad 1,024 \cdot 2^{-2} = ?$$
It seems that we'll need to *divide*... $\square$

We face new circumstances, and we ask ourselves: can we proceed *without changing the rules*?

Can the exponents be *negative*?

We start with $n=-1$. Can $-1$ be the exponent? If it is, what would be the outcome -- and the meaning -- of repeating an algebraic operation *minus $1$ times*?!

We will need another *convention*! We choose, for $a\ne 0$
$$\begin{array}{|c|}\hline \quad a^{-1}=\frac{1}{a}. \quad \\ \hline\end{array}$$

The choice is dictated by our desire for the three properties to be satisfied!

Let's check. We plug in $n=\pm 1$ and $m=\pm 1$, use our convention, and then apply the above version of the corresponding property: $$\begin{array}{r|ll|ll} \text{Property 1:} & a^{n+m} &=a^n\cdot a^m & n=-1,\ m=1 & \Longleftrightarrow & a^{-1+1}&=a^{-1}\cdot a^1 & \Longleftrightarrow & a^{0}=\frac{1}{a}\cdot a & \texttt{TRUE!}\\ \hline \text{Property 2:} & a^n b^n &=(ab)^n & n=-1 & \Longleftrightarrow & a^{-1} b^{-1} &=(ab)^{-1} & \Longleftrightarrow & \frac{1}{a}\cdot \frac{1}{b}=\frac{1}{ab} & \texttt{TRUE!}\\ \hline \text{Property 3:} & a^{nm} &=(a^n)^m & n=-1,\ m=1 & \Longleftrightarrow & a^{(-1)1} &=(a^{-1})^1 & \Longleftrightarrow & a^{-1}=\frac{1}{a} & \texttt{TRUE!}\\ & && n=1,\ m=-1 & \Longleftrightarrow & a^{1(-1)} &=(a^1)^{-1} & \Longleftrightarrow & a^{-1}=a^{-1} & \texttt{TRUE!}\\ \end{array}$$ The three properties are still satisfied and we will continue to use them. Note how choosing anything but $a^{-1}=1/a$ would have ruined them.

This is our convention: multiplying by $a^{-1}$ means dividing by $a$. Now, the rest of the negative numbers.

While multiplication by a positive integer means repeated addition, multiplication by a *negative* integer means repeated *subtraction* (the inverse of addition):
$$a(-n)=-an=0-a-a-a-...-a.$$
Similarly, while exponentiation by a positive integer means repeated multiplication, exponentiation by a *negative* integer means repeated *division* (the inverse of multiplication):
$$a^{-n}=1\div a \div a \div a \div ... \div a.$$
Our convention, for any $n=1,2,3...$, will be:
$$\begin{array}{|c|}\hline \quad a^{-n}=\frac{1}{a^n}. \quad \\ \hline\end{array}$$

To confirm, we observe this: $$\begin{array}{ccc} \underbrace{a^{-1}\cdot a^{-1}\cdot a^{-1}\cdot ...\cdot a^{-1}}_{n\text{ times}}&=&\left(a^{-1}\right)^n\\ ||&&||&\\ 1 \underbrace{\div a\div a\div a\div ...\div a}_{n\text{ times}}&=&\left(\frac{1}{a}\right)^n\\ \end{array}$$

**Exercise.** Show that Properties 1-3 still hold.

**Example.** Once again, we see these properties as *shortcuts*:
$$\frac{2^{5} }{ 2^{2}} = 2^{5} \cdot 2^{-2} = 2^{5-2} = 2^{3}.$$
$\square$

This is the summary of what we have discovered: $$\begin{array}{r|rl|rl} \text{}&\text{Multiplication:}&&\text{Exponentiation:}\\ \hline \text{Conventions: }n=1,2, ... &\underbrace{a+a+a+ ... +a}_{n\text{ times}}&=a\cdot n&\underbrace{a\cdot a\cdot a\cdot ... \cdot a}_{n\text{ times}}&=a^n\\ \hline n=0 & a\cdot 0 & =0 & a^0&=1\\ \hline -n=-1,-2,-3, ... & 0\underbrace{-a-a-a- ... -a}_{n\text{ times}}&=a\cdot (-n) & 1 \underbrace{\div a\div a\div a\div ... \div a}_{n\text{ times}}&=a^{-n}\\ &=\underbrace{(-a)+(-a)+...+(-a)}_{n\text{ times}}&=(-a)\cdot n & = \underbrace{\frac{1}{a}\cdot\frac{1}{a}\cdot ...\cdot\frac{1}{a}}_{n\text{ times}}&=\left(\frac{1}{a}\right)^{n}\\ &=-(\underbrace{a+a+a+...+a}_{n\text{ times}})&=-(a\cdot n) & =1\div (\underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{n\text{ times}}) &=\frac{1}{a^n}\\ \hline \text{Rules: }\ 1.&a\cdot (n+m)&=a\cdot n+a\cdot m & a^{n+m}&=a^n\cdot a^m\\ \hline 2.&(a+b)\cdot n&=a\cdot n+b\cdot n & (a\cdot b)^n&=a^n b^n\\ \hline 3.&a\cdot(n\cdot m)&=(a\cdot n)\cdot m & a^{n\cdot m}&=(a^n)^m\\ \hline \end{array}$$

These are the possible values of $n$:
$$... ,-3,-2,-1,0,1,2,3, ...$$
Note that we *multiply* by $2$ as we move right and *divide* by $2$ as we move left:

A more general fact is true.

**Theorem (Monotonicity of Exponent).** The geometric progression with ratio $a > 0$ satisfies the following:

- it is increasing if $ a > 1$;
- it is decreasing if $a < 1$.

This is what the graphs look like if we zoom out:

**Exercise.** Prove the theorem.

The domain still misses some of the numbers that interest us! Suppose bacteria double in number every day starting with $1$. What happens after $10.5$ days? $$n=10.5, \quad 2^{10.5} = ?$$ We address this in Chapter 4.

## 7 The sequence of differences: velocity

In this section we start on the path of development of the idea that culminates with the concept of the derivative in Chapter 7.

The difference represents the change of the sequence, from any of its elements to the next.

If the sequence represents the location, the difference represents the displacement as discussed previously.

**Definition.** For a sequence $a_n$, its *difference* is a new sequence defined to be
$$b_n=a_{n+1}-a_n,$$
and **denoted** by:
$$\Delta a_n=a_{n+1}-a_n.$$

Warning: the notation for the difference is an abbreviation: $\Delta a_n$ stands for $(\Delta a)_n$; the operator $\Delta$ applies to the whole sequence rather than to its $n$th term.

Warning: if the original sequence starts with $n=q$, then the new sequence starts with $n=q+1$.

**Example (arithmetic progression).** An arithmetic progression has a constant difference, by definition:
$$\Delta (a_0+mn)=a_{n+1}-a_n=(a_0+m(n+1))-(a_0+mn)=m.$$

$\square$

**Example (geometric progression).** We can also notice that the difference of a geometric progression with $a_0>0$ is positive and increasing when its ratio $r$ is larger than $1$:
$$\Delta (a_0r^n)=a_{n+1}-a_n=a_0r^{n+1}-a_0r^n=a_0r^n(r-1).$$
It is negative and increasing (toward $0$) when $0<r<1$. $\square$

**Example (alternating sequence).** The difference of the alternating sequence $a_n=(-1)^n$ is computed below:
$$\Delta \left((-1)^n\right)=(-1)^{n+1}-(-1)^n=\begin{cases} (-1)-1,&n \text{ is even}\\ 1-(-1),&n\text{ is odd}\end{cases}=\begin{cases} -2,&n \text{ is even}\\ 2,&n\text{ is odd}\end{cases}=2(-1)^{n+1}.$$
$\square$

**Example.** We can use computers to speed up these computations. This is a formula for a spreadsheet:
$$\texttt{=RC[-1]-R[-1]C[-1] }.$$
Whether the sequence comes from a formula or is just a list of numbers, the above formula applies:

$\square$
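
The spreadsheet computation is easy to reproduce in code. A sketch in Python (the function name is our own):

```python
# The difference of a sequence: Delta a_n = a_{n+1} - a_n.
def difference(a):
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

# The odometer readings from the beginning of the chapter...
locations = [10000, 10055, 10095, 10155]
# ...produce the hourly speeds:
assert difference(locations) == [55, 40, 60]
```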

This is the time for some *theory*.

Consider this obvious statement about motion:

- “if I am standing still, my speed is zero”.

The *converse* of this statement is as follows:

- “if my speed is zero, I am standing still”.

If a sequence represents the position, we can restate this mathematically.

**Theorem (Constant).** A sequence is constant if and only if the sequence has a zero difference; i.e.,
$$a_n=\text{ constant }\ \Longleftrightarrow\ \Delta a_n=0.$$

**Theorem.** The difference of an arithmetic progression is constant and, conversely, if the difference of a sequence is a constant sequence, then the sequence is an arithmetic progression.

A matching statement about motion is:

- “if I am moving forward, my speed is positive”;

and the *converse*:

- “if my speed is positive, I am moving forward”.

We can restate this mathematically.

**Theorem (Monotonicity).** A sequence is increasing/decreasing or constant, if and only if the sequence has a positive/negative or zero difference; i.e.,
$$\begin{array}{ll}
a_n&\text{ is increasing }&\ \Longleftrightarrow\ &\Delta a_n&\ge 0,\\
a_n&\text{ is decreasing }&\ \Longleftrightarrow\ &\Delta a_n&\le 0.
\end{array}$$

Suppose now that there are *two* runners; we have a slightly less obvious fact about motion:

- “if the distance between two runners isn't changing, then they run with the same speed”,

and vice versa:

- “if two runners run with the same speed, the distance between them isn't changing”.

It's as if they are holding the two ends of a pole without pulling or pushing.

It is even possible that they speed up and slow down all the time. Once again, for sequences $a_n$ and $b_n$ representing their position, we can restate this idea mathematically in order to confirm that our theory makes sense.

**Corollary.** Two sequences differ by a constant if and only if they have equal differences; i.e.,
$$a_n-b_n=\text{ constant } \ \Longleftrightarrow\ \Delta a_n = \Delta b_n .$$

We can use the latter theorem to keep track of the distance between the two runners. A matching statement about motion is:

- “if the distance from one of the two runners to the other is increasing, the former's speed is higher”;

and the *converse*:

- “if the speed of one runner is higher than the other, the distance between them is increasing”.

We can restate this mathematically.

**Corollary.** The difference of two sequences is increasing/decreasing if and only if the former's difference is bigger/smaller than the latter's; i.e.,
$$\begin{array}{ll}
a_n-b_n&\text{ is increasing }& \ \Longleftrightarrow\ &\Delta a_n& \ge &\Delta b_n,\\
a_n-b_n&\text{ is decreasing }& \ \Longleftrightarrow\ &\Delta a_n& \le &\Delta b_n .
\end{array}$$

**Example (runners).** The graph shows the positions of three runners as functions of time, $n$. Describe what has happened.

They are all at the start line together and, at the end, they are all at the finish line. Furthermore, $A$ reaches the finish line first, then $B$, and then $C$, who also starts late. This is *how* each did it:

- $A$ starts fast and then slows down;
- $B$ maintains the same speed;
- $C$ starts late and then runs fast.

We can see that $A$ is running faster than $B$ because the distance between them is increasing. Later $A$ is slower, which is visible from the decreasing distance. We can discover this and the rest of the facts by examining the graphs of the *differences* of the sequences:

$\square$

**Definition.** Suppose $x_n$ and $y_n$ are two sequences. Then their *difference quotient* is defined to be the difference of $y_n$ over the difference of $x_n$:
$$\frac{\Delta y_n}{\Delta x_n}=\frac{y_{n+1}-y_n}{x_{n+1}-x_n}.$$

It is the relative change -- the rate of change -- of the two sequences. The numerator is the change of $y$, i.e., the difference of $y$: $$\Delta y_n=y_{n+1}-y_n.$$ The denominator is the change of $x$, i.e., the difference of $x$: $$\Delta x_n=x_{n+1}-x_n.$$

When $x$ is time, the sequence $x_n$ is often chosen to be an arithmetic progression. In that case, its difference is the *increment* of the sequence:
$$h=\Delta x_n.$$
Then, the difference quotient is simply a multiple of the difference of $y_n$:
$$\frac{\Delta y_n}{\Delta x_n}=\frac{y_{n+1}-y_n}{h}.$$
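
**Example.** The difference quotient is easily computed for tabulated data. A sketch in Python (the data are sample values; the function name is our own):

```python
# Difference quotient: (y_{n+1} - y_n) / (x_{n+1} - x_n).
def difference_quotient(x, y):
    return [(y[n + 1] - y[n]) / (x[n + 1] - x[n])
            for n in range(len(x) - 1)]

x = [0.0, 0.5, 1.0, 1.5]       # time, an arithmetic progression with h = 0.5
y = [0.0, 20.0, 45.0, 75.0]    # sample locations
assert difference_quotient(x, y) == [40.0, 50.0, 60.0]
```

$\square$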

## 8 The sequence of the sums: displacement

In the beginning of the chapter, we learned how adding up the progress of the car during each of the time periods -- a sequence -- gives us the whole displacement.

This is how we represent the sum of a sequence: $$\underbrace{a_{1}}_{\text{step }1}\quad \underbrace{+a_{2}}_{\text{step }2}\ + ... \quad\underbrace{+a_{i}}_{\text{step }i}\ + ... \quad \underbrace{+a_{n}}_{\text{step }n}. $$ This is repetitive and cumbersome.

The **notation** for the sum of a segment -- from $m$ to $n$ -- of a sequence $a_i$ is chosen to be:
$$a_m+a_{m+1}+...+a_n =\sum_{i=m}^{n}a_i.$$
Here the letter $\Sigma$ stands for the letter S meaning “sum”. This is how the notation is deconstructed:
$$\begin{array}{ccc}
\text{beginning and end values for }k\\
\downarrow\\
\displaystyle\sum_{k=0}^{3} \big(\underbrace{\ k^2 + k\ }_{\text{a specific sequence}}\big) =\underbrace{20}_{\text{a specific number}}
\end{array}$$
It is called the *sigma notation*.

The notation applies to all sequences, both finite and infinite.

**Example.** For example, this is how we *contract* the summation:
$$1^2 + 2^2 + 3^2 + ... + 17^2 = \sum_{k=1}^{17} k^2.$$
It is only possible if we find the $k$th-term formula for the sequence: $a_k=k^2$. And this is how we *expand* back from this compact notation, by plugging the values $k=1,2,...,17$ into the formula:
$$\sum_{k=1}^{17} k^2 = \underbrace{1^2}_{k=1} + \underbrace{2^2}_{k=2} + \underbrace{3^2}_{k=3} + ... + \underbrace{17^2}_{k=17}.$$
Similarly,
$$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+...+\frac{1}{32}=\sum_{k=0}^{5} \frac{1}{2^k}.$$
$\square$
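
**Example.** The sigma notation translates directly into code. A sketch in Python (note that `range(1, 18)` runs $k$ from $1$ to $17$):

```python
# Sum of k**2 for k = 1, ..., 17:
total = sum(k ** 2 for k in range(1, 18))
assert total == 1785

# Sum of 1/2**k for k = 0, ..., 5:
geometric = sum(1 / 2 ** k for k in range(6))
assert abs(geometric - 63 / 32) < 1e-12
```

$\square$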

In contrast to the difference, the sum must be computed recursively. One can see below how the terms of the original sequence are stacked up on top of each other, more and more:

**Definition.** For a sequence $a_n$ that starts at $n=p$, its *sum* is a new sequence $b_n$ defined and **denoted** for each $q\ge p$ by a recursive formula:
$$b_p=a_p,\quad b_{n+1}=b_n+a_{n+1};$$
in other words, we have:
$$b_q=\sum_{n=p}^q a_n=a_{p}+a_{p+1}+...+a_q.$$

For infinite sequences, the sum sequence is also called the *series*.

While adding, we can group the terms freely; this is called the *Associativity Property* of addition. At its simplest, it allows us to remove the parentheses:
$$\begin{array}{rlcr}
&(u_p+u_{p+1}+...+u_{q-1}+u_q)&+&(u_{q+1}+u_{q+2}+...+u_{r-1}+u_r)\\
=&u_p+u_{p+1}+...+u_{q-1}+u_q&+&u_{q+1}+u_{q+2}+...+u_{r-1}+u_r\\
=&u_p+u_{p+1}+&...&+u_{r-1}+u_r.
\end{array}$$

An abbreviated version is as follows.

**Theorem (Additivity).** The sum of the sums of two consecutive parts of a sequence is the sum of the total; i.e., for any sequence $\{u_n\}$ and for any $p,q,r$ with $p\le q< r$, we have:
$$\sum_{n=p}^q u_n +\sum_{n=q+1}^r u_n = \sum_{n=p}^r u_n .$$

Applied to *motion*, the theorem might say:
$$\begin{array}{lll}
&\text{distance covered during the 1st hour }\\
+&\text{distance covered during the 2nd hour }\\
=&\text{distance during the two hours}.
\end{array}$$
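
**Example.** The theorem is easy to verify numerically. A sketch in Python, using the hourly distances from the beginning of the chapter:

```python
# Additivity: the sums over consecutive parts add up to the sum over the total.
u = [55, 40, 60]                 # distances covered during hours 1, 2, 3
assert sum(u[:1]) + sum(u[1:]) == sum(u)     # hour 1 plus hours 2-3
assert sum(u[:2]) + sum(u[2:]) == sum(u)     # hours 1-2 plus hour 3
```

$\square$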

Can we compare the values of two sums? Consider this simple algebra: $$\begin{array}{lll} u&\le&U,\\ v&\le&V,\\ \hline u+v&\le&U+V.\\ \end{array}$$ The rule applies even if we have more than just two terms: $$\begin{array}{rcl} u_p&\le&U_p,\\ u_{p+1}&\le&U_{p+1},\\ \vdots&\vdots&\vdots\\ u_q&\le&U_q,\\ \hline u_p+...+u_q&\le&U_p+...+U_q. \end{array}$$

An abbreviated version is as follows.

**Theorem (Comparison Rule).** The sum of a sequence with smaller elements is smaller; i.e., if $u_n\le U_n$, then we have for any $p,q$ with $p\le q$:
$$\sum_{n=p}^{q} u_n \le\sum_{n=p}^{q} U_n.$$

Applied to *motion*, the theorem might say: the faster covers the longer distance.

**Example (runners).** The graph shows the velocities of three runners as functions of time, $n$. Describe what has happened.

It's easy:

- $A$ starts fast and then slows down;
- $B$ maintains the same speed;
- $C$ starts late and then runs fast.

But where are they?

$\square$

## 9 The fundamental relation between differences and sums

We know that *subtraction and addition are inverses*; it makes sense, then, that forming the sequence of differences and the sequence of sums will cancel each other in a similar manner!

Just comparing the illustrations above demonstrates that the two operations -- the difference and the sum -- undo the effect of each other:

As you can see, the sum stacks up the terms of the sequence on top of each other while the difference takes this back down.

**Example.** We know how to get velocity from the location -- and the location from the velocity. Of course, executing these two operations consecutively should bring us back where we started.

We now take another look at the *two* computations about motion -- a broken odometer and a broken speedometer -- presented in the beginning of this chapter.

First, this is how we use the velocity function to acquire the displacement; but each of the values of the latter is the sum of the former:

Second, this is how we use the location function to acquire the velocity:

But each of the values of the latter is a difference of the former! $\square$

Suppose we have a sequence $a_k$. We compute its *sum*:
$$b_n=\sum_{k=1}^n a_k,\ n=1,2,....$$
This defines a new sequence. The new sequence can also be written recursively:
$$b_0=0,\ b_{n+1}=b_n+a_{n+1}.$$

Suppose we have a sequence $b_n$. We compute its *difference*:
$$c_n=b_n-b_{n-1},\ n=1,2,....$$
This defines a new sequence.

The first question we would like to answer is, *what is the difference of the sum*?

We apply the second formula to the first:
$$\Delta\left(\sum_{k=1}^n a_k\right)=\sum_{k=1}^n a_k-\sum_{k=1}^{n-1} a_k=a_n.$$
So, the answer is, *the original sequence*.

**Theorem (Fundamental Relation I).** The difference of the sum of $a_n$ is $a_n$:
$$\Delta \big(\sum_{k=1}^n a_k \big)=a_n.$$

The two operations *cancel* each other!

The second question we would like to answer is, *what is the sum of the difference*?

We apply the first formula to the second:
$$\sum_{k=1}^n \Delta b_k=\sum_{k=1}^n \left(b_k-b_{k-1}\right)=(b_1-b_0)+(b_2-b_1)+...+(b_n-b_{n-1})=b_n-b_0.$$
So, the answer is, *the original sequence plus a constant*.

**Theorem (Fundamental Relation II).** The sum of the difference of $b_n$ is $b_n+C$, where $C$ is a constant:
$$\sum_{k=1}^n\left( \Delta b_k \right) =b_n+C.$$

The two operations -- almost -- cancel each other, again!
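
**Example.** Both relations can be confirmed numerically. A sketch in Python (the sequence is a sample):

```python
b = [3, 1, 4, 1, 5, 9, 2, 6]     # a sample sequence

# Sum, then difference: we recover the original terms.
sums = []
running = 0
for x in b:
    running += x
    sums.append(running)
recovered = [sums[0]] + [sums[n] - sums[n - 1] for n in range(1, len(sums))]
assert recovered == b

# Difference, then sum: we recover b_n up to the constant C = -b_0.
diffs = [b[n] - b[n - 1] for n in range(1, len(b))]
partial = 0
for n, d in enumerate(diffs, start=1):
    partial += d                  # telescoping sum
    assert partial == b[n] - b[0]
```

$\square$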

The result shouldn't be surprising considering the operations involved: $$\left. \begin{array}{rc} \text{difference, } \Delta a_n :& \text{ subtraction } \\ \text{sum, } \sum_{k=1}^n a_k :& \text{ addition } \end{array} \right\}\ \text{opposite!}$$

**Example.** For complex data, we use a spreadsheet. From a function to its sum:
$$\texttt{ =R[-1]C+RC[-1]}\ .$$
From a function to its difference:
$$\texttt{ =RC[-1]-R[-1]C[-1]}\ .$$
What if we combine the two consecutively? In this order first, from a function to its sum to the difference of the latter:

It's the same function! Now in the opposite order, from a function to its difference to the sum of the latter:

It's the same function! $\square$

This idea will be developed in Chapter 11 into the so-called *Fundamental Theorem of Calculus*.

*Calculus:* We will make the widths of those intervals smaller and smaller creating a sequence of approximations and then search for a long-term trend.

Free fall...

**Example.** To repeat an example from earlier, we watch a ping-pong ball falling down and record -- every $.05$ second -- how high it is:

We ignored the time and concentrated on the locations only:

The location is plotted -- as it is -- vertically and the time horizontally. Now we include the time too: it's just another sequence (an arithmetic progression). We have a new *table* with an extra column:
$$\begin{array}{r|ll}
&\text{time}&\text{height}\\
n&t_n&a_n\\
\hline
1&.00&36\\
2&.05&35\\
3&.10&32\\
4&.15&25\\
5&.20&20\\
6&.25&11\\
7&.30&0
\end{array}$$

$\square$

## 10 The algebra of sums and differences

What happens to the differences of sequences as we perform *algebraic operations* on them?

The idea of *addition* of the change is illustrated below:

Here, the bars that represent the change of the output variable are stacked on top of each other, then the heights are added to each other and so are the height differences. The algebra behind this geometry is very simple:
$$(A+B)-(a+b)=(A-a)+(B-b).$$
It's the *Associative Rule* of addition (combined with the *Commutative Rule*).

**Theorem (Sum Rule).** The difference of the sum of two sequences is the sum of their differences; i.e., for any two sequences $a_n,b_n$, their differences satisfy:
$$\Delta(a_n+b_n)=\Delta a_n+\Delta b_n.$$

**Proof.**
$$\begin{array}{lll}
\Delta (a_n + b_n)&=(a_{n+1} + b_{n+1})- (a_n + b_n)\\
&=(a_{n+1}-a_{n})+ (b_{n+1} - b_n)&\\
&=\Delta a_n+\Delta b_n.
\end{array}$$
$\blacksquare$

In terms of motion, if two runners are running *away* from each other starting from a common location, then the distance between them is the sum of the distances they have covered.

**Example.** Let's consider the sum of an arithmetic and a geometric progression:
$$\Delta (a+mn+ar^n)=\Delta (a+mn)+\Delta (ar^n)=m+ar^n(r-1).$$
$\square$
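
The Sum Rule is easy to verify numerically. A sketch in Python with sample values of $a$, $m$, $r$:

```python
# Sum Rule: Delta(a_n + b_n) = Delta a_n + Delta b_n,
# for an arithmetic plus a geometric progression.
a, m, r = 2, 3, 2                # sample parameters
arith = [a + m * n for n in range(8)]        # a + mn
geom = [a * r ** n for n in range(8)]        # a r^n
both = [x + y for x, y in zip(arith, geom)]

def diff(s):
    return [s[n + 1] - s[n] for n in range(len(s) - 1)]

# Compare with the formula m + a r^n (r - 1):
assert diff(both) == [m + a * r ** n * (r - 1) for n in range(7)]
```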

The idea of the *proportionality* of the change is illustrated below:

Here, if the heights triple then so do the height differences. The algebra behind this geometry is very simple:
$$kA-ka=k(A-a).$$
It's the *Distributive Rule*.

**Theorem (Constant Multiple Rule).** The difference of a multiple of a sequence is the multiple of the sequence's difference; i.e., for any sequence $a_n$, the differences satisfy:
$$\Delta(ka_n)=k\Delta a_n.$$

**Proof.**
$$\begin{array}{lll}
\Delta (ka_n )&=ka_{n+1} - ka_n\\
&=k(a_{n+1} - a_n)\\
&=k\Delta a_n.
\end{array}$$
$\blacksquare$

In terms of motion, if the distance is re-scaled, such as from miles to kilometers, then so is the velocity -- at the same proportion.

Next, the products of sequences. The idea is illustrated below:

As the width and the depth are increasing, so is the area of the rectangle. But the increase of the area cannot be expressed entirely in terms of the increases of the width and depth! This increase is split into two parts corresponding to the two terms in the right-hand side of the formula below.

**Theorem (Product Rule).** The difference of the product of two sequences is found as a combination of these sequences and either of the two differences; for any two sequences $a_n,b_n$ the differences satisfy:
$$\Delta (a_n \cdot b_n)=a_{n+1} \cdot \Delta b_n + \Delta a_n \cdot b_n.$$

**Proof.**
$$\begin{array}{lll}
\Delta (a_n \cdot b_n)&=a_{n+1} \cdot b_{n+1}- a_n \cdot b_n&\text{ ... insert terms...}\\
&=a_{n+1} \cdot b_{n+1}-a_{n+1} \cdot b_{n}+ a_{n+1} \cdot b_n- a_n \cdot b_n&\text{ ... factor...}\\
&=a_{n+1} \cdot (b_{n+1}-b_{n})+ (a_{n+1} - a_n) \cdot b_n&\text{}\\
&=a_{n+1} \cdot \Delta b_n+ \Delta a_n \cdot b_n.
\end{array}$$
$\blacksquare$

In terms of motion, it is as if two runners are unfurling a flag while running east and north respectively.

**Example.** Let's consider the product of an arithmetic and a geometric progression:
$$\Delta (mn \cdot ar^n)=ar^{n+1}\cdot \Delta (mn)+\Delta(ar^n)\cdot mn=mar^{n+1}+mnar^n(r-1).$$
$\square$

**Example (squares).** The difference of the square sequence $a_n=n^2$ is computed with the *Product Rule*:
$$\Delta (n^2)=\Delta (n\cdot n)=(n+1) \cdot \Delta (n) + \Delta (n) \cdot (n)=(n+1)+(n)=2n+1.$$
It's an arithmetic progression.

$\square$
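
The fact that the differences of the squares are the consecutive odd numbers is easy to check by computer. A sketch in Python:

```python
# The difference of n**2 is 2n + 1, the odd numbers.
squares = [n ** 2 for n in range(10)]
odds = [squares[n + 1] - squares[n] for n in range(9)]
assert odds == [2 * n + 1 for n in range(9)]   # 1, 3, 5, 7, ...
```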

Now, division.

**Example (reciprocals).** Let's find the formula for the difference of the reciprocals:
$$\Delta\left(\frac{1}{n}\right)=\frac{1}{n+1}-\frac{1}{n}=\frac{n-(n+1)}{n(n+1)}=-\frac{1}{n(n+1)}.$$
The sequence decreases but slower and slower.

$\square$

**Theorem (Quotient Rule).** The difference of the quotient of two sequences is found as a combination of these sequences and the two differences; for any two sequences $a_n,b_n$ with $b_n\ne 0$, the differences satisfy:
$$\Delta \left(\frac{a_n}{b_n}\right)=\frac{\Delta a_n \cdot b_n - a_n \cdot \Delta b_n}{b_nb_{n+1}}.$$

**Exercise.** Prove the theorem.

What happens to the sums under *algebraic operations* on the sequences involved? There are a few shortcut properties.

When two sequences are added to each other, what happens to their sums? This simple algebra, the *Associative Property* combined with the *Commutative Property*, tells the whole story:
$$\begin{array}{lll}
&u&+&U,\\
+\\
&v&+&V,\\
\hline
=&(u+v)&+&(U+V).\\
\end{array}$$
The rule applies even if we have more than just two terms; it's just re-arranging terms:
$$\begin{array}{rcl|lll}
u_p&+&U_p&(u_{p}+U_{p})+\\
u_{p+1}&+&U_{p+1}&(u_{p+1}+U_{p+1})+\\
\vdots&\vdots&\vdots&\quad\vdots\\
u_q&+&U_q&(u_q+U_q)\\
\hline
=(u_p+...+u_q)&+&(U_p+...+U_q)&=(u_p+U_p)+&...&+(u_q+U_q).
\end{array}$$

An abbreviated version is as follows.

**Theorem (Sum Rule).** The sum of the sums of two sequences is the sum of the sequence of sums; i.e., if $\{u_n\}$ and $\{U_n\}$ are sequences, then for any $p,q$ with $p\le q$, we have:
$$\sum_{n=p}^{q} u_n+\sum_{n=p}^{q}U_n=\sum_{n=p}^{q} (u_n +U_n) . $$

Applied to *motion*, the theorem might say: if two runners are running away from a post, their velocities are added and so are their distances to the post.

Next, when a sequence is multiplied by a constant, what happens to its sums? This simple algebra, the *Distributive Property*, tells the whole story:
$$\begin{array}{lll}
c\cdot(&u&+&U)\\
=&cu&+&cU.\\
\end{array}$$
The rule applies even if we have more than just two terms; it's just factoring:
$$\begin{array}{rcl|lll}
c&\cdot&u_p&c\cdot u_p+\\
c&\cdot&u_{p+1}&c\cdot u_{p+1}+\\
\vdots&\vdots&\vdots&\quad\vdots\\
c&\cdot&u_q&c\cdot u_q\\
\hline
=c\cdot u_p+&...&+c\cdot u_q&=c\cdot(u_p+...+u_q).
\end{array}$$

An abbreviated version is as follows.

**Theorem (Constant Multiple Rule).** The sum of a multiple of a sequence is the multiple of its sum; i.e., if $\{u_n\}$ is a sequence, then for any $p,q$ with $p\le q$ and any real $c$, we have:
$$ \sum_{n=p}^{q} (cu_n) = c\sum_{n=p}^{q} u_n .$$

Applied to *motion*, the theorem might say: if your velocity is tripled, then so is the distance you have covered.
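
**Example.** Both rules for sums can be confirmed numerically. A sketch in Python with sample values:

```python
# Constant Multiple Rule and Sum Rule for sums.
c = 3
u = [55, 40, 60, 50]             # sample distances
U = [1, 2, 3, 4]

assert sum(c * x for x in u) == c * sum(u)                     # constant multiple
assert sum(x + y for x, y in zip(u, U)) == sum(u) + sum(U)     # sum rule
```

$\square$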