This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Vector fields

### From Mathematics Is A Science

## Contents

- 1 What are vector fields?
- 2 Motion under forces: a discrete model
- 3 The algebra and geometry of vector fields
- 4 Summation along a curve: flow and work
- 5 Line integrals: work
- 6 Sums along closed curves reveal exactness
- 7 Path-independence of integrals
- 8 How a ball is spun by the stream
- 9 The Fundamental Theorem of Discrete Calculus of degree $2$
- 10 Green's Theorem: the Fundamental Theorem of Calculus for vector fields in dimension $2$

## 1 What are vector fields?

The first metaphor for a vector field is a *hydraulic system*.

**Example.** Suppose we have a system of pipes with water flowing through them. We model the process with a partition of the plane with its edges representing the pipes and nodes representing the junctions. Then a number is assigned to each edge representing the strength of the flow (in the direction of one of the axes). Such a system may look like this:

Here the strength of the flow is shown as the thickness of the arrow. This is a real-valued $1$-form.

Furthermore, there may be *leakage*. In addition to the amount of water that actually passes all the way through the pipe, we can keep a record of the amount that is lost.

That's another real-valued $1$-form. If we assume that the direction of the leakage is perpendicular to the pipe, the two numbers can be combined into a *vector*. The result is a vector-valued $1$-form. $\square$

Warning: the two real-valued $1$-forms can be reconstructed from the vector-valued $1$-form, but *not* as its two components: rather, as its projections on the corresponding edges.

The second metaphor for a vector field is a *flow-through*.

**Example.** The data from the last example can be used to illustrate a flow of liquid or another material from compartment to compartment through walls. A vector-valued $1$-form may look like this:

The situation is reversed in comparison to the last example: the component perpendicular to the edge is the relevant one.

This interpretation changes in dimension $3$, however: the component perpendicular to the *face* is the relevant one. $\square$

The third metaphor for a vector field is *velocities of particles*.

**Example.** Imagine little flags placed on the lawn; then their directions form a vector field, while the air flow that produced it remains invisible.

Each flag shows the direction (if not the magnitude) of the velocity of the flow at that location. Such flags are also placed on a model airplane in a wind-tunnel. A similar idea is used to model a fluid flow. The dynamics of each particle is governed by the velocity of the flow, at each location, the same at every moment of time. In other words, the vector field supplies a *direction to every location*.

How do we trace the path of a particle? Let's consider this vector field: $$V(x,y)=<y,-x>.$$ Even though the vector field is continuous, the path can be approximated by a parametric curve over a partition of an interval, as follows. At our current location and current time, we examine the vector field to find the velocity and then move accordingly to the next location. We start at this location: $$X_0=(0,2).$$ We substitute these two numbers into the equations: $$V(0,2)=<2,0>.$$ This is the direction we will follow. Our next location on the $xy$-plane is then: $$X_1=(0,2)+<2,0>=(2,2).$$ We again substitute these two numbers into $V$: $$V(2,2)=<2,-2>,$$ leading to the next step. Our next location on the $xy$-plane is: $$X_2=(2,2)+<2,-2>=(4,0).$$ One more step: $X_2$ is substituted into $V$ and our next location is: $$X_3=(4,0)+<0,-4>=(4,-4).$$

The sequence is spiraling away from the origin. Let's now carry out this procedure with a spreadsheet (with a smaller time increment). The formulas for $x_n$ and $y_n$ are respectively: $$\texttt{=R[-1]C+R[-1]C[1]*R3C1}, \qquad \texttt{=R[-1]C-R[-1]C[-1]*R3C1}.$$ These are the results:
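The same stepping procedure can be sketched in code; this is a hypothetical Python rendition of the spreadsheet, with $\Delta t=1$ so that it reproduces the hand computation above:

```python
# Trace a particle through the vector field V(x, y) = <y, -x> by repeated
# stepping: next location = current location + V(current location) * dt.
def V(x, y):
    return (y, -x)

def trace(start, dt, steps):
    path = [start]
    x, y = start
    for _ in range(steps):
        vx, vy = V(x, y)
        x, y = x + vx * dt, y + vy * dt
        path.append((x, y))
    return path

print(trace((0, 2), dt=1, steps=3))  # [(0, 2), (2, 2), (4, 0), (4, -4)]
```

A smaller `dt` (with more steps) produces the tighter spiral seen in the spreadsheet plot.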

In general, a vector field $V(x,y)=<f(x,y),\ g(x,y)>$ is used to create a system of two *ordinary differential equations* (ODEs):
$$X'(t)=V(X(t))\quad\text{ or }\quad <x'(t),\ y'(t)>=V(x(t),\ y(t))\quad\text{ or }\quad \begin{cases}
x'(t)&=f(x(t),\ y(t)),\\
y'(t)&=g(x(t),\ y(t)).
\end{cases}$$
Its solution is a pair of functions $x=x(t)$ and $y=y(t)$ that satisfy the equations for every $t$.

The equations mean that the vectors of the vector field are tangent to these trajectories. ODEs are discussed in Part IV. $\square$

The fourth metaphor for vector fields is a *location-dependent force*.

**Example (gravity).** Recall from the last chapter that *Newton's Law of Gravity* states that the force of gravity between two objects is given by the formula:
$$f(X) = G \frac{mM}{r^2},$$
where:

- $f$ is the *magnitude* of the force between the objects;
- $G$ is the gravitational constant;
- $m$ is the mass of the first object;
- $M$ is the mass of the second object;
- $r$ is the distance between the centers of the masses.

Now, let's assume that the first object is located at the origin. Then the vector of location of the second object is $X$ and the force is a multiple of this vector. If $F(X)$ is the *vector* of the force at the location $X$, then:
$$F(X)=-G mM\frac{X}{||X||^3}.$$
That's the vector form of the law! We plot the magnitude of the force as a function of two variables:

And this is the resulting vector field:

The motion is approximated in the manner described in the last example with the details provided in this chapter. $\square$

When the initial velocity of an object is *zero*, it will follow the direction of the force. For example, an object will fall directly onto the surface of the Earth. This idea bridges the gap between velocity fields and force fields.

**Definition.** A *vector field* is a function defined on a subset of ${\bf R}^n$ with values in ${\bf R}^n$.

Warning: Though unnecessary mathematically, for the purposes of visualization and modeling we think of the input of vector fields as points and outputs as vectors.

But what about the *difference* of a function of several variables? It's not vector-valued! Some vector fields, however, might have a difference behind them: the *projection* $p$ of a vector field $V$ on a partition is a function defined at the secondary nodes of the partition as the dot product of the vectors with the corresponding oriented edges:
$$p(C)=V(C)\cdot E,$$
where $C$ is the secondary node of the edge $E$. When the projection of $V$ is the difference of some function, we call $V$ *gradient*.

When no secondary nodes are specified, the formula $p(E)=V(E)\cdot E$ makes a real-valued $1$-form from a vector-valued one.

## 2 Motion under forces: a discrete model

Suppose we know the forces affecting a moving object. How can we predict its dynamics?

We simply generalize the $1$-dimensional analysis from Part I to the vector case.

Assuming a fixed mass, the total force gives us our acceleration. We need to compute:

- the velocity from the acceleration, and then
- the location from the velocity.

A fixed time increment $\Delta t$ is supplied ahead of time even though it can also be variable.

We start with the following three quantities that come from the setup of the motion:

- the initial time $t_0$,
- the initial velocity $V_0$, and
- the initial location $P_0$.

They are placed in the consecutive cells of the first row of the spreadsheet:
$$\begin{array}{c|c|c|c|c}
&\text{iteration } n&\text{time }t_n&\text{acceleration }A_n&\text{velocity }V_n&\text{location }P_n\\
\hline
\text{initial:}&0&3.5&--&<33,44>&<22,11>\\
\end{array}$$
As we progress in time and space, new numbers are placed in the next row of our spreadsheet. There is a *set of columns* for each vector, two or three depending on the dimension.

Just as before, we rely on *recursive formulas*.

The current acceleration $A_1$ is given in the first cells of the second row. The current velocity $V_1$ is found and placed in the second pair (or triple) of cells of the second row of our spreadsheet:

- current velocity $=$ initial velocity $+$ current acceleration $\cdot$ time increment.

The second quantity we use is the initial location $P_0$. The following is placed in the third set of cells of the second row:

- current location $=$ initial location $+$ current velocity $\cdot$ time increment.

This dependence is shown below: $$\begin{array}{c|c|c|cccc} &\text{iteration } n&\text{time }t_n&\text{acceleration }A_n&&\text{velocity }V_n&&\text{location }P_n\\ \hline \text{initial:}&0&3.5&--&&<33,44>&&<22,11>\\ &&&& &\downarrow& &\downarrow\\ \text{current:}&1&t_1&<66,77>&\to&V_1&\to&P_1\\ \end{array}$$

We continue with the rest in the same manner. As we progress in time and space, numbers and vectors are supplied and placed in each of the four sets of columns of our spreadsheet one row at a time: $$t_n,\ A_n,\ V_n,\ P_n,\ n=1,2,3,...$$

The first quantity in each row we compute is the time: $$t_{n+1}=t_n+\Delta t.$$

The next is the acceleration $A_{n+1}$. Where does it come from? It may come as pure data: the column is filled with numbers ahead of time, or it is filled as we progress in time and space. Alternatively, there is an explicit, functional dependence of the acceleration (or the force) on the rest of the quantities. The acceleration may be a function of the following:

- 1. the current time, e.g., $A_{n+1}=<\sin t_{n+1},\ \cos t_{n+1}>$, such as when we speed up the car, or
- 2. the last location, such as when the gravity depends on the distance to the planet (below), or
- 3. the last velocity, e.g., $A_{n+1}=-V_n$ such as when the air resistance works in the opposite direction of the velocity,

or all three.

The next iteration of the velocity, $V_{n+1}$, is computed:

- current velocity $=$ last velocity $+$ current acceleration $\cdot$ time increment,
- $V_{n+1}=V_n+A_{n+1}\cdot \Delta t$.

The values of the velocity are placed in the second set of columns of our spreadsheet.

The next iteration of the location, $P_{n+1}$, is computed:

- current location $=$ last location $+$ current velocity $\cdot$ time increment,
- $P_{n+1}=P_n+V_n\cdot \Delta t$.

The values of the location are placed in the third set of columns of our spreadsheet.

The result is a growing table of values:
$$\begin{array}{c|c|c|c|c|c}
&\text{iteration } n&\text{time }t_n&&\text{acceleration }A_n&\text{velocity }V_n&\text{location }P_n\\
\hline
\text{initial:}&0&3.5&&--&<33,44>&<22,11>\\
&1&3.6&&<66,77>&<38.5,45.1>&<25.3,13.0>\\
&...&...&&...&...&...\\
&1000&103.5&&<666,777>&<4,1>&<336,200>\\
&...&...&&...&...&...\\
\end{array}$$
The result may be seen as four sequences $t_n,\ A_n,\ V_n,\ P_n$ or as the table of values of three *vector-valued functions* of $t$.

**Exercise.** Implement a variable time increment: $\Delta t_{n+1}=t_{n+1}-t_n$.

**Example.** A rolling ball is unaffected by horizontal forces. Therefore, $A_n=0$ for all $n$. The recursive formulas for the horizontal motion simplify as follows:

- the velocity $V_{n+1}=V_n+A_n\cdot \Delta t=V_n=V_0$ is constant;
- the position $P_{n+1}=P_n+V_n\cdot \Delta t=P_n+V_0\cdot \Delta t$ grows at equal increments.

In other words, the position depends linearly on the time. $\square$

**Example.** A falling ball is unaffected by horizontal forces and the vertical force is constant: $A_n=A$ for all $n$. The first of the two recursive formulas for the vertical motion simplifies as follows:

- the velocity $V_{n+1}=V_n+A_n\cdot \Delta t=V_n+A\cdot \Delta t$ grows at equal increments;
- the position $P_{n+1}=P_n+V_n\cdot \Delta t$ grows at linearly increasing increments.

In other words, the position depends quadratically on the time. $\square$

**Example.** A falling ball is unaffected by horizontal forces and the vertical force is constant:
$$A_{n}=<0,-g>.$$
Now recall the setup considered previously: from a $200$-foot elevation, a cannon is fired horizontally at $200$ feet per second.

The initial conditions are:

- the initial location, $P_0=<0,200>$;
- the initial velocity, $V_0=<200,0>$.

Then we have recursive vector equations: $$V_{n+1}=V_n+<0,-g>\Delta t\ \text{ and }\ P_{n+1}=P_n+V_n\Delta t.$$ Implemented with a spreadsheet, the formulas produce these results:

$\square$
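The recursion can also be sketched in Python; the values $g=32$ ft/sec$^2$ and $\Delta t=0.1$ sec are assumptions made for illustration:

```python
# Cannonball fired horizontally: constant acceleration <0, -g>, with
# V_{n+1} = V_n + <0,-g> dt and P_{n+1} = P_n + V_n dt, as above.
def fly(dt=0.1, g=32.0):
    Vx, Vy = 200.0, 0.0    # initial velocity: horizontal, 200 ft/sec
    Px, Py = 0.0, 200.0    # initial location: 200 ft elevation
    while Py > 0:          # step until the ball reaches the ground
        Px, Py = Px + Vx * dt, Py + Vy * dt   # location from last velocity
        Vx, Vy = Vx, Vy - g * dt              # velocity from acceleration
    return Px, Py

x, y = fly()
print(round(x))  # 720: the approximate horizontal range in feet
```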

**Example.** Let's apply what we have learned to *planetary motion*. The problem above about a ball thrown in the air has a solution: its trajectory is a *parabola*.

However, we also know that if we throw really-really hard (like a rocket), the ball will start to orbit the Earth following an *ellipse*.

The motion of two planets (or the Sun and a planet, or a planet and a satellite, etc.) is governed by *Newton's Law of Gravity*. From this law, another law of motion can be derived. Consider *Kepler's Laws of Planetary Motion*:

- 1. The orbit of a planet is an ellipse with the Sun at one of the two foci.
- 2. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.
- 3. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.

To confirm the law, we use the formulas above but this time the acceleration depends on the location, as follows:

The resulting trajectory does seem to be an ellipse (confirmed by finding its foci):

Note that Kepler's Second Law implies that the motion is different from the one provided by the standard parametrization of the ellipse.

Our computation can produce other kinds of trajectories such as a hyperbola:

$\square$
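A sketch of the computation, with the acceleration now a function of the location via Newton's law, $A=-kX/||X||^3$; the constant $k=1$, the initial data, and the time step are hypothetical choices:

```python
from math import hypot

# One step of the recursion: acceleration from the location, then velocity,
# then location, following the dependence A -> V -> P.
def step(P, V, dt, k=1.0):
    r = hypot(P[0], P[1])
    Ax, Ay = -k * P[0] / r**3, -k * P[1] / r**3
    V = (V[0] + Ax * dt, V[1] + Ay * dt)
    P = (P[0] + V[0] * dt, P[1] + V[1] * dt)
    return P, V

P, V = (1.0, 0.0), (0.0, 1.0)   # speed 1 at distance 1: a near-circular orbit
for _ in range(1000):
    P, V = step(P, V, dt=0.001)
print(P)
```

With these initial conditions the trajectory stays close to the unit circle; other initial conditions produce the ellipses and hyperbolas mentioned above.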

**Example.** The Earth revolves around the Sun and the Moon revolves around the Earth. The result derived from such a generic description should look like the one on the left.

Now, let's use the actual data:

- (1) The average distance between the Earth and the Sun is $149.60$ million km.
- (2) The average distance between the Moon and the Earth is $385,000$ km.
- (3) The Moon orbits the Earth once every $27.323$ days.

The paths are plotted on the right. As you can see, not only does the Moon never go backwards, but its orbit is in fact convex! (By “convex orbit” we mean that the region inside the orbit is convex: any two points inside are connected by a segment that is also inside.) $\square$

**Example.** Below we have: a hypothetical star (orange) is orbited by a planet (blue) which is also orbited by its moon (purple). Now we vary the number of times per year the moon orbits the planet, from $20$ to $1/3$.

$\square$

## 3 The algebra and geometry of vector fields

Vector fields appear in all dimensions. The idea is the same: there is a flow of liquid or gas and we record how fast a single particle at every location is moving.

**Example (dimension $1$).** The flow is in a pipe. The same idea applies to a canal with the water that has the exact same velocity at all locations across it.

Of course these are just numerical functions:

This is just another way to visualize them. $\square$

**Example (dimension $2$).** Not every vector field of dimension $n>1$ is gradient and, therefore, some of them cannot be visualized as flows on a surface under nothing but gravity. A vector field of dimension $n=2$ is then seen as a flow on the plane: liquid in a pond or the air over a surface of the Earth.

The metaphor applies under the assumption that the air or water has the exact same velocity at every location regardless of the elevation. $\square$

**Example (dimension $3$).** This time, a vector field is thought of as a flow without any restrictions on the velocities of the particles.

$\square$

A model of stock prices as a flow will lead to a $10,000$-dimensional vector field. This necessitates our use of *vector notation*. We also start thinking of the inputs, just like the outputs, as vectors (of the same dimension). For example, the two “radial” vector fields in the last section have the same representation:
$$V(X)=X.$$
An even simpler vector field is a constant:
$$V(X)=V_0.$$

At a fixed location, a vector field is just a vector. It is a location-dependent (but time-independent!) vector, but a vector nonetheless. That is why all algebraic operations for vectors are applicable to vector fields.

First, *addition*. Imagine that we have a river -- with the velocities of the water particles represented by vector field $V$ -- and then wind starts -- with the velocities of the air particles represented by vector field $W$.

One can argue that the resulting dynamics of water particles will be represented by the vector field $V+W$.

Second, *scalar multiplication*. If the velocities of the water particles in a pipe are represented by vector field $V$ and we then double the pressure (i.e., pump twice as much water), we expect the new velocities to be represented by the vector field $2V$.

Reversing the flow will be represented by the vector field $-V$.

Furthermore, the scalar might also be location-dependent, i.e., we are multiplying our vector field in ${\bf R}^n$ by a (scalar) function of $n$ variables.

**Example.** The computations with specific vector fields are carried out

- one location at a time and
- one component at a time.

$\square$
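These operations can be sketched in Python, with vector fields represented as functions returning vectors (the names are hypothetical):

```python
# Pointwise algebra of vector fields: values are combined one location
# at a time and one component at a time.
def add(V, W):
    return lambda x, y: tuple(a + b for a, b in zip(V(x, y), W(x, y)))

def scale(c, V):
    return lambda x, y: tuple(c * a for a in V(x, y))

radial = lambda x, y: (x, y)        # V(X) = X
constant = lambda x, y: (1.0, 0.0)  # V(X) = V_0

S = add(radial, scale(2.0, constant))
print(S(3.0, 4.0))  # (5.0, 4.0)
```

A location-dependent scalar, as mentioned above, would simply replace `c` with a function of `x` and `y`.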

Now *geometry*.

What is the magnitude of a vector? As a function, it takes a vector as an input and produces a number as the output. It's just another function of $n$ variables. We can apply it to vector fields via composition: $$f(X)=||V(X)||.$$ The result is a function of $n$ variables that gives us the magnitude of the vector $V(X)$ at location $X$. The construction is exemplified by the “scalar” version of Newton's Law of Gravity.

Furthermore, we can use this function to modify vector fields in a special way:
$$W(X)=\frac{V(X)}{||V(X)||}.$$
The result is a new vector field with exactly the same directions of the vectors but with *unit length*. The domain of the new vector field might change, as it is undefined at those $X$ where $V(X)=0$.

This construction is called *normalization*.

**Example.** The “accelerated outflow” presented in the first section is no longer accelerated after normalization:
$$W(X)=\frac{X}{||X||}.$$
The speed is constant!

The price we pay for making the vector field well-behaved is the appearance of a hole in the domain, $X\ne 0$. $\square$
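Normalization can be sketched as follows (the function name is hypothetical); note the explicit failure at the hole where $V(X)=0$:

```python
from math import hypot

# W(X) = V(X)/||V(X)||: same directions, unit magnitude, smaller domain.
def normalize(V):
    def W(x, y):
        vx, vy = V(x, y)
        r = hypot(vx, vy)
        if r == 0:
            raise ValueError("undefined where V(X) = 0")  # the hole at the origin
        return (vx / r, vy / r)
    return W

W = normalize(lambda x, y: (x, y))  # the radial field V(X) = X
print(W(3.0, 4.0))  # (0.6, 0.8): same direction, unit length
```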

**Exercise.** Show that the hole can't be repaired, in the following sense: there is no such vector $U$ that $||W(X)-U||\to 0$ as $X\to 0$ (i.e., this is a non-removable discontinuity).

**Exercise.** What if we do the dot product of two vector fields?

If we can rotate a vector, can we rotate a vector field $V$? In dimension $2$, the normal vector field of a vector field $V=<u,v>$ on the plane is given by $$V^\perp=<u,v>^\perp=<-v,u>.$$

We then have a special operation on vector fields. For example, the normal of a constant vector field is also constant. However, the normal of the rotation vector field is the radial vector field.
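This operation can be sketched the same way; applying it to the rotation vector field $<y,-x>$ recovers the radial field, as claimed:

```python
# The normal of a plane vector field: <u, v> -> <-v, u>.
def normal(V):
    def W(x, y):
        u, v = V(x, y)
        return (-v, u)
    return W

rotation = lambda x, y: (y, -x)   # the rotation vector field
R = normal(rotation)
print(R(3.0, 4.0))  # (3.0, 4.0): the radial field V(X) = X
```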

## 4 Summation along a curve: flow and work

**Example.** We look at this as a system of *pipes* with the numbers indicating the rate of the flow in each pipe (along the directions of the axes).

What is the total flow along this “staircase”? We simply add the values located on these edges:
$$W=1+0+0+2 +(-1)+1+(-2)=1.$$
But these edges just happen to be positively oriented. What if we, instead, go around the first square? We have the following:
$$W=1+0-2-0=-1.$$
Going *against* one of the oriented edges makes us count the flow with the opposite sign. $\square$
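The two sums above can be sketched as follows, with the orientation signs supplied by hand (a hypothetical encoding of the staircase and the square):

```python
# Sum of a 1-form along an oriented curve: add the edge values,
# flipping the sign for edges traversed against their orientation.
def curve_sum(values, signs):
    return sum(v * s for v, s in zip(values, signs))

# the "staircase": all seven edges traversed positively
print(curve_sum([1, 0, 0, 2, -1, 1, -2], [1, 1, 1, 1, 1, 1, 1]))  # 1
# around the first square: two edges traversed against orientation
print(curve_sum([1, 0, 2, 0], [1, 1, -1, -1]))  # -1
```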

Recall that an oriented edge $E_i$ of a partition in ${\bf R}^n$ is a vector that goes with or against the edge, and any collection of such edges $C=\{E_i:\ i=1,...,n\}$ is seen as an *oriented curve*.

**Definition.** Suppose $C$ is an oriented curve in ${\bf R}^n$ that consists of oriented edges $E_i,\ i=1,...,n$, of a partition in ${\bf R}^n$. If a function $G$ is defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and, in particular, at the secondary nodes $\{Q_i\}$ of the edges of the curve, then *the sum of $G$ along the curve $C$* is defined and **denoted** to be the following:
$$\sum_C G=\sum_{i=1}^n G(Q_i).$$

**Example.**

$\square$

When the secondary nodes aren't specified, this sum is the sum of the real-valued $1$-form $G$.

Unlike the arc-length, the sum depends on the direction of the trip.

This dependence is however very simple: the *sign* is reversed when the direction is reversed.

**Theorem (Negativity).**
$$\sum_{-C} G =-\sum_C G .$$

**Theorem (Linearity).** For any two functions $F$ and $G$ defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and any two numbers $\lambda$ and $\mu$, we have:
$$\sum_C(\lambda F+\mu G)=\lambda\sum_CF+\mu\sum_{C}G.$$

**Theorem (Additivity).** For any two oriented curves of edges $C$ and $K$ with no edges in common and that together form an oriented curve of edges $C\cup K$, we have:
$$\sum_{C\cup K}F=\sum_CF+\sum_K F.$$

Let's examine another problem: the *work of a force*. Suppose a ball is thrown.

This force is directed down, just as the movement of the ball. The work done on the ball by this force as it falls is equal to the (signed) magnitude of the force, i.e., the weight of the ball, multiplied by the (signed) distance to the ground, i.e., the displacement. All horizontal motion is ignored as unrelated to the gravity. When an object is moved up from the ground, the work performed by the gravitational force is negative.

Of course, we are speaking of *vectors*.

In the $1$-dimensional case, suppose that the force $F$ is constant and the displacement $D$ is along a straight line. Then the *work* $W$ is equal to their product:
$$W=F\cdot D.$$
The force may vary however with location: spring, gravitation, air pressure.

**Example.** In the case of an object attached to a *spring*, the force is proportional to the (signed) distance of the object to its equilibrium:
$$F=-kx.$$

$\square$

In summary, if a function $F$ on segment $[a,b]$ is called a *force function* then its Riemann integral $\int_a^bF\, dx$ is called the *work* of the force over interval $[a,b]$.
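For instance, for the spring above, the work of the spring force as the object moves from the equilibrium $x=0$ to $x=d$ is:
$$W=\int_0^d F\, dx=\int_0^d(-kx)\, dx=-\frac{kd^2}{2}.$$
It is negative because the force opposes the displacement.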

Let's now proceed to the $n$-dimensional case but start with a constant force and linear motion...

This time, the force and the displacement may be misaligned.

In addition to motion “with the force” and “against the force”, the third possibility emerges: what if we move perpendicular to the force? Then the work is zero. This is the case of horizontal motion under gravity force, which is constant close to the surface of the Earth.

What if the direction of our path varies but only within the standard square *grid* on the plane? We realize that there is a force vector associated with each edge of our trip and possibly with every edge of the grid. However, only one of these vector components matters: the horizontal when the edge is horizontal and the vertical when the edge is vertical. It is then sufficient to assign this *single* number to each edge to indicate the force applied to this part of the trip.

**Example.** As a familiar interpretation, we can look at this as a system of *pipes* with the numbers indicating the speed of the flow in each pipe (along the directions of the axes). If, for example, we are moving through a grid with $\Delta x\times \Delta y$ cells, the work along the “staircase” is
$$W=1\cdot \Delta x+0\cdot \Delta y+0\cdot \Delta x+2\cdot \Delta y+(-1)\cdot \Delta x+1\cdot \Delta y+(-2)\cdot \Delta x.$$
When $\Delta x=\Delta y=1$, this is simply the sum of the values provided:
$$W=1+0+0+2+(-1)+1+(-2)=1.$$
What if we, instead, go around the first square? Then
$$W=1+0-2-0=-1.$$
Going *against* one of the oriented edges makes us count the work with the opposite sign: in that case, the displacement is a negative multiple of the edge. $\square$

When the direction of the force isn't limited to the grid anymore, it can take, of course, one of the diagonal directions. In fact, there is a whole circle of possible directions.

The vector of the force, too, can take all available directions. In order to find and discard the irrelevant part of the force $F$, we decompose it into *parallel and normal components* relative to the displacement:
$$F=F_\perp+F_{||}.$$
The relevant (“collinear”) component of the force $F$ is its projection on the displacement vector; its signed magnitude is:
$$||F_{||}||=||F||\cos \alpha,$$
where $\alpha$ is the angle of $F$ with $D$.

Of course, we are talking about the *dot product*. The *work of the force vector $F$ along the displacement vector $D$* is defined to be their dot product:
$$W=F\cdot D.$$

The work is proportional to the magnitude of the force and to the magnitude of the displacement. It is also proportional to the projection of the former on the latter (the relevant part of the force) and of the latter on the former (the relevant part of the displacement). It makes sense.

In our interpretation of a vector field as a system of *pipes*, there is a vector associated with each pipe indicating the speed of the flow in the pipe (along the direction of the pipe) as well as the leakage (perpendicular to this direction). Then, the relevant part of the force is found as the (scalar) *projection* of the vector of the force on the vector of displacement. This is the difference between real-valued and vector-valued $1$-forms.

Thus, the work is represented as the dot product of the vector of the force and the vector of displacement.

**Definition.** Suppose $C$ is an oriented curve in ${\bf R}^n$ that consists of oriented edges $E_i,\ i=1,...,n$, of a partition in ${\bf R}^n$. If a vector field $F$ is defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and, in particular, at the edges $\{Q_i\}$ of the curve, then *the Riemann sum of $F$ along curve $C$* is defined and **denoted** to be the following:
$$\sum_C F \cdot \Delta X=\sum_{i=1}^n F(Q_i)\cdot E_{i}.$$

In other words, the Riemann sum of a vector field $F$ is the sum of a certain real-valued function, $F\cdot E$, along a curve, as defined in the beginning of the section.

When the vector field $F$ is called a *force field*, then the sum of $F$ along $C$ is also called the *work of force $F$ along curve* $C$. Note that only the part of the force field that the curve passes through affects the work.

**Example.**

$\square$
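A sketch of the Riemann sum $\sum_C F\cdot \Delta X$ in Python, with the force samples and the oriented edges paired up by hand (the data is hypothetical):

```python
# Riemann sum of a vector field along a curve of oriented edges:
# the sum of the dot products F(Q_i) . E_i.
def dot(A, B):
    return A[0] * B[0] + A[1] * B[1]

def riemann_sum(forces, edges):
    return sum(dot(F, E) for F, E in zip(forces, edges))

forces = [(-1, 0), (0, 2), (1, 2)]   # F(Q_1), F(Q_2), F(Q_3)
edges  = [(0, 1), (1, 0), (1, 1)]    # E_1, E_2, E_3
print(riemann_sum(forces, edges))    # 3
```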

The properties follow the ones above.

**Theorem (Negativity).**
$$\sum_{-C} F \cdot \Delta X=-\sum_C F \cdot \Delta X.$$

**Theorem (Linearity).** For any two vector fields $F$ and $G$ defined at the secondary nodes at the edges of the partition in ${\bf R}^n$ and any two numbers $\lambda$ and $\mu$, we have:
$$\sum_C(\lambda F+\mu G)\cdot \Delta X=\lambda\sum_CF\cdot \Delta X+\mu\sum_{C}G\cdot \Delta X.$$

**Theorem (Additivity).** For any two oriented curves $C$ and $K$ with only finitely many points in common and that together form an oriented curve $C\cup K$, we have:
$$\sum_{C\cup K}F\cdot \Delta X=\sum_CF\cdot \Delta X+\sum_K F\cdot \Delta X.$$

## 5 Line integrals: work

A more general setting is that of a motion through space, ${\bf R}^n$, with a *continuously changing force*.

We first assume that we move from point to point along a *straight line*.

**Example.** Away from the ground, the *gravity* is proportional to the reciprocal of the square of the distance of the object to the center of the planet:
$$F(X)=-\frac{kX}{||X||^3}.$$
The *pressure* and, therefore, the medium's resistance to motion may change arbitrarily. Multiple springs create a $2$-dimensional variability of forces:

$\square$

The definition of work applies to straight travel... or to travel along multiple straight edges:

If these segments are given by the displacement vectors $D_1,...,D_n$ and the force for each is given by the vectors $F_1,...,F_n$, then the work is defined to be the simple sum of the work along each: $$W=F_1\cdot D_1+...+F_n\cdot D_n.$$

**Example.** If the force is constant $F_i=F$, we simplify,
$$W=F\cdot D_1+...+F\cdot D_n=F\cdot (D_1+...+D_n),$$
and discover that the total work is the dot product of the force and the *total displacement*.

This makes sense. This is a simple example of “path-independence”. Furthermore, the round trip will require zero work... unless one has to walk to school “$5$ miles -- uphill both ways!” The issue isn't as simple as it seems: even though it is impossible to make a round trip while walking uphill all the way, it is possible during such a trip to walk against the wind all the way, even though the wind doesn't change. It all depends on the nature of the vector field. $\square$

**Example.** In order to compute the work of a vector field along a curve made of straight edges, all we need is the formula:
$$W=F_1\cdot D_1+...+F_n\cdot D_n.$$
In order for the computation to make sense, the edges of the path and the vectors of the force have to be paired up! Here's a simple example:

We pick the value of the force from the *initial* point of each edge:
$$W=<-1,0>\cdot <0,1>+<0,2>\cdot <1,0>+<1,2>\cdot <1,1>=3.$$

$\square$

**Example.** It is possible that there is *no vector field* and the force is determined entirely by our motion. For example, the air or water resistance is directed against our velocity (and is proportional to the speed).

The computations remain the same. $\square$

The general setup for defining and computing work along a curve is identical to what we have done several times.

Suppose we have a sequence of points $P_i,\ i=0,1,...,n$, in ${\bf R}^n$. We will treat this sequence as an oriented curve $C$ by representing it as the path of a *parametric curve* as follows. Suppose we have a sampled partition of an interval $[a,b]$:
$$a=t_0\le c_1\le t_1\le ... \le c_n\le t_n=b.$$
We define a parametric curve by:
$$X(t_i)=P_i,\ i=0,1,...,n.$$

However, it doesn't matter how fast we go along this path. It is the path itself -- the locations we visit -- that matters. The direction of the trip matters too. This is then about an *oriented curve*. In the meantime, non-constant vectors along the path typically come from a *vector field*, $F=F(X)$. If its vectors change incrementally, one may be able to compute the work by a simple summation, as above.

We then find a regular parametrization of the latter: a parametric curve $X=X(t)$ defined on the interval $[a,b]$. We divide the path into small segments with end-points $X_i=X(t_i)$ and then sample the force at the points $Q_i=X(c_i)$.

Then the work along each of these segments is approximated by the work with the force being constantly equal to $F(Q_i)$: $$\text{ work along }i\text{th segment}\approx \text{ force }\cdot \text{ length}=F(Q_i)\cdot \Delta X_i,$$ where $\Delta X_i$ is the displacement along the $i$th segment. Then, $$\text{total work }\approx \sum_{i=1}^n F(Q_i)\cdot (X_{i}- X_{i-1})=\sum_{i=1}^n F(X(c_i))\cdot (X(t_{i})-X(t_{i-1})).$$ This is the formula that we have used and will continue to use for approximations. Note that this is just the sum of a discrete $1$-form.

**Example.** Estimate the work of the force field
$$F(x,y)=<xy,\ x-y>$$
along the upper half of the unit circle directed counterclockwise. First we parametrize the curve:
$$X(t)=<\cos t,\ \sin t>,\ 0\le t\le \pi.$$
We choose $n=4$ intervals of equal length with the left-ends as the secondary nodes:
$$\begin{array}{lll}
t_0=0& t_1=\pi/4& t_2=\pi/2& t_3=3\pi/4&t_4=\pi\\
c_1=0& c_2=\pi/4& c_3=\pi/2& c_4=3\pi/4&\\
X_0=(1,\ 0)& X_1=(\sqrt{2}/2,\ \sqrt{2}/2)& X_2=(0,1)& X_3=(-\sqrt{2}/2,\ \sqrt{2}/2)& X_4=(-1,0)\\
Q_1=(1,\ 0)& Q_2=(\sqrt{2}/2,\ \sqrt{2}/2)& Q_3=(0,1)& Q_4=(-\sqrt{2}/2,\ \sqrt{2}/2)\\
F(Q_1)=<0,\ 1>& F(Q_2)=<1/2,\ 0>& F(Q_3)=<0,\ -1>& F(Q_4)=<-1/2,\ -\sqrt{2}>
\end{array}$$
Then,
$$\begin{array}{lll}
W&\approx <0,1>\cdot <\sqrt{2}/2-1,\ \sqrt{2}/2> + <1/2,0>\cdot <-\sqrt{2}/2,\ 1-\sqrt{2}/2>\\
&+<0,-1>\cdot <-\sqrt{2}/2,\ \sqrt{2}/2-1> + <-1/2,\ -\sqrt{2}>\cdot <-1+\sqrt{2}/2,\ -\sqrt{2}/2>\\
&=\frac{5}{2}-\frac{\sqrt{2}}{2}\approx 1.79.
\end{array}$$
$\square$
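The estimate can be sketched and refined in code (left-end sample points, as above):

```python
from math import cos, sin, pi

def F(x, y):       # the force field <xy, x - y>
    return (x * y, x - y)

def X(t):          # parametrization of the upper half-circle
    return (cos(t), sin(t))

def work(n):
    # W ~ sum of F(X(c_i)) . (X(t_i) - X(t_{i-1})), c_i = left ends
    W = 0.0
    for i in range(n):
        t0, t1 = pi * i / n, pi * (i + 1) / n
        fx, fy = F(*X(t0))
        dx, dy = X(t1)[0] - X(t0)[0], X(t1)[1] - X(t0)[1]
        W += fx * dx + fy * dy
    return W

print(round(work(4), 2))     # 1.79: the four-interval estimate
print(round(work(1000), 2))  # a much finer partition: close to the line integral
```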

To bring the full power of the calculus machinery, we, once again, proceed to convert the expression into the Riemann sum of a certain function over this partition: $$\text{total work }\approx \sum_{i=1}^n F(X(c_i))\cdot \frac{X(t_{i})-X(t_{i-1})}{t_{i}-t_{i-1}}(t_{i}-t_{i-1})=\sum_a^b\left((F\circ X)\cdot \frac{\Delta X}{\Delta t}\right)\Delta t.$$ Then, we define the work of the force as the limit, if it exists, of these Riemann sums, i.e., the Riemann integral.

**Definition.** Suppose $C$ is an oriented curve in ${\bf R}^n$. For a vector field $F$ in ${\bf R}^n$, the *line integral of $F$ along $C$* is **denoted** and defined to be the following:
$$\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$
where $X=X(t),\ a\le t\le b$, is a regular parametrization of $C$.

When the vector field $F$ is interpreted as a *force field*, the integral of $F$ along $C$ is also called the *work of force $F$ along curve* $C$.

The first factor of the integrand, $F(X(t))$, shows how the force varies with time during our trip. Just as always, the Leibniz notation reveals the meaning: $$\int_CF\cdot dX=\int_a^b (F\circ X)\cdot \frac{dX}{dt}\, dt.$$

Once all the vector algebra is done, this is just a familiar numerical integral from Part I. Furthermore, when $n=1$, this *is* a familiar numerical integral from Part I. Indeed, suppose $y=F(x)$ is just a numerical function and $C$ is the interval $[A,B]$ in the $x$-axis.

Then we have:
$$\int_CF\cdot dX=\int_{x=A}^{x=B} F\, dx=\int_{t=a}^{t=b}F(x(t))x'(t)\, dt,$$
where $x=x(t)$ serves as a parametrization of this interval so that $x(a)=A$ and $x(b)=B$. This is just an interpretation of the *integration by substitution* formula.

**Example.** Compute the work of a constant vector field, $F=<-1,2>$, along a straight line, the segment from $(0,0)$ to $(1,3)$. First parametrize the curve and find its derivative:
$$X(t)=<1,3>t,\ 0\le t\le 1,\ \Longrightarrow\ X'(t)=<1,3>.$$
Then,
$$W=\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt=\int_0^1<-1,2>\cdot <1,3>\, dt=\int_0^1 5\, dt=5.$$
$\square$

**Example.** Compute the work of the *radial vector field*, $F(X)=X=<x,y>$, along the upper *half-circle* from $(1,0)$ to $(-1,0)$. First parametrize the curve and find its derivative:
$$X(t)=<\cos t,\ \sin t >,\ 0\le t\le \pi,\ \Longrightarrow\ X'(t)=<-\sin t,\cos t>.$$
Then,
$$\begin{array}{lll}
W&=\int_CF\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt\\
&=\int_0^\pi <\cos t,\ \sin t >\cdot <-\sin t,\ \cos t>\, dt\\
&=\int_0^\pi (\cos t (-\sin t)+\sin t\cos t)\, dt\\
&=0.
\end{array}$$
$\square$
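Both answers can be checked against the definition with a midpoint Riemann sum. Here is a minimal Python sketch; the helper `line_integral` and the step count are my assumptions, not the text's method:

```python
import math

def line_integral(F, X, Xp, a, b, n=10000):
    # midpoint Riemann sum of F(X(t)) . X'(t) dt over [a, b]
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        fx, fy = F(*X(t))
        dx, dy = Xp(t)
        total += (fx * dx + fy * dy) * h
    return total

# constant field <-1, 2> along the segment X(t) = <1, 3>t
W1 = line_integral(lambda x, y: (-1.0, 2.0),
                   lambda t: (t, 3 * t), lambda t: (1.0, 3.0), 0.0, 1.0)
# radial field <x, y> along the upper half-circle
W2 = line_integral(lambda x, y: (x, y),
                   lambda t: (math.cos(t), math.sin(t)),
                   lambda t: (-math.sin(t), math.cos(t)), 0.0, math.pi)
print(W1, W2)   # about 5 and about 0
```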

**Theorem.** The work is independent of parametrization.

Thus, just as we used parametric curves to study a function of several variables, we use them to study a vector field. Note, however, that only the part of the vector field visited by the parametric curve affects the line integral.

Unlike the arc-length, the work depends on the direction of the trip.

This dependence is however very simple: the *sign* is reversed when the direction is reversed.

**Theorem (Negativity).**
$$\int_{-C}F\cdot dX=-\int_CF\cdot dX.$$

**Example.** Is the work positive or negative?

When all the angles between the force vectors and the direction of motion are acute, the work is positive. $\square$

**Exercise.** Finish the example.

**Exercise.** How much work does it take to move an object attached to a spring $s$ units from the equilibrium?

**Exercise.** How much work does it take to move an object $s$ units from the center of a planet?

**Exercise.** What is the value of the line integral of the gradient of a function along one of its level curves?

**Theorem (Linearity).** For any two vector fields $F$ and $G$ and any two numbers $\lambda$ and $\mu$, we have:
$$\int_{C}(\lambda F+\mu G)\cdot dX=\lambda\int_CF\cdot dX+\mu\int_{C}G\cdot dX.$$

**Theorem (Additivity).** For any two oriented curves $C$ and $K$ with only finitely many points in common and that together form an oriented curve $C\cup K$, we have:
$$\int_{C\cup K}F\cdot dX=\int_CF\cdot dX+\int_K F\cdot dX.$$

Let's look at the *component representation of the integral*. Starting with dimension $n=1$, the definition,
$$\int_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$
becomes ($F=f,\ X=x,\ C=[A,B]$):
$$\int_A^B f(x)\, dx=\int_a^b f(x(t)) x'(t)\, dt,$$
where $A=x(a)$ and $B=x(b)$. In ${\bf R}^2$, we have the following component representation of a vector field $F$ and the increment of $X$:
$$F=<p,q> \text{ and } dX=<dx,dy>.$$
Then the line integral of $F$ along $C$ is **denoted** by:
$$\int_C F\cdot dX=\int_C <p,q>\cdot <dx,dy>=\int_C p\, dx+ q\, dy.$$
Here, the integrand is a *differential form of degree $1$*:
$$p\, dx+ q\, dy$$
The notation matches the formula of the definition. Indeed, the curve's parametrization $X=X(t),\ a\le t\le b$, has a component representation:
$$X=<x,y>,$$
therefore,
$$\int_a^bF(X(t))\cdot X'(t)\, dt=\int_a^bF(x(t),y(t))\cdot <x'(t),y'(t)>\, dt=\int_a^b p(x(t),y(t))x'(t)\, dt+\int_a^b q(x(t),y(t))y'(t)\, dt.$$
Similarly, in ${\bf R}^3$, we have a component representation of a vector field $F$ and the increment of $X$:
$$F=<p,q,r> \text{ and } dX=<dx,dy,dz>.$$
Then the line integral of $F$ along $C$ is **denoted** by:
$$\int_C F\cdot dX=\int_C p\, dx+ q\, dy+ r\, dz.$$

**Example.**

Let's review the recent integrals that involve parametric curves. Suppose $X=X(t)$ is a parametric curve on $[a,b]$.

$\bullet$ The first is the (component-wise) *integral of the parametric curve*:
$$\int_a^bX(t)\, dt,$$providing the displacement from the known velocity, as functions of time.

$\bullet$ The second is the *arc-length integral*:
$$\int_C f\, ds=\int_a^bf(X(t))||X'(t)||\, dt,$$
providing the mass of a curve of variable density.

$\bullet$ The third is the *line integral along an oriented curve*:
$$\int_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt,$$
providing the work of the force field.

The main difference between the first and the other two is that in the former case the parametric curve is the *integrand* (and the output is another parametric curve) and in the latter it provides the *domain of integration* (and the output is a number).

## 6 Sums along closed curves reveal exactness

**Example.** Let's consider the curve around a single square of the partition. Suppose $G$ is a *constant* function on the partition (left): it has the same value on each horizontal edge and the same value on each vertical edge. Then the flow along the curve is *zero*!

Note that $G$ is exact: $G=\Delta f$ with the values of $f$ given by:
$$f=\left[ \begin{array}{ll}\hline
2&3\\1&2\\
\hline\end{array} \right].$$
Suppose $G$ is *rotational* (right). Then the flow is *not zero*! Note that $G$ isn't exact, as demonstrated in the first section of the chapter. $\square$

Suppose $C$ is an oriented curve that consists of oriented edges $Q_i,\ i=1,...,m$, of a partition of a region $D$ in ${\bf R}^n$.

Suppose a function $G$ defined on the secondary nodes is the difference in $D$, $G=\Delta f$, of some function $f$ defined on the primary nodes of the partition. We carry out a familiar computation: we just add all of these and cancel the repeated nodes: $$\begin{array}{lll} \sum_{C} G&=G(Q_1)&+G(Q_2)&+...&+G(Q_m)\\ &=G(P_{0}P_{1})&+G(P_{1}P_{2})&+...&+G(P_{m-1}P_{m})\\ &=\big[f(P_{1})-f(P_{0})\big]&+\big[f(P_{2})-f(P_{1})\big]&+...&+\big[f(P_{m})-f(P_{m-1})\big]\\ &=-f(P_0)&&&+f(P_m)\\ &=f(B)-f(A). \end{array}$$ We have proven the following.

**Theorem (Fundamental Theorem of Calculus for differences II).** Suppose a function $G$ defined on the secondary nodes is exact, i.e., $G=\Delta f$ for some function $f$ defined on the primary nodes of the partition of region $D$. If an oriented curve $C$ in $D$ starts at node $A$ and ends at node $B$, then we have:
$$\sum_C G=f(B)-f(A).$$

Now, the right-hand side is independent of our choice of $C$ as long as it goes from $A$ to $B$! We formalize this property below.

**Definition.** A function defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is called *path-independent over $D$* if its sum along any oriented curve depends only on the start- and the end-points of the curve; i.e.,
$$\sum_C G=\sum_K G,$$
for any two curves of edges $C$ and $K$ from node $A$ to node $B$ that lie entirely in $D$.

What can we say about the sums of such functions along *closed* curves?

The path-independence allows us to compare the curve to any curve with the same end-points. What is the simplest one? Consider this: if there are no pipes, there is no flow! We are talking about a special kind of path, a *constant curve*: $K=\{A\}$. Let's compare it to another curve $C$.

The curve $K$ is trivial; therefore, we have: $$\sum_C G=\sum_K G=0.$$ So, path-independence implies zero sums along any closed curve.

The converse is also true. Suppose we have two curves $C$ and $K$ from $A$ to $B$. We create a new, *closed* curve from them. We glue $C$ and the reversed $K$ together:
$$Q=C\cup -K.$$
It goes from $A$ to $A$.

Then, from *Additivity* and *Negativity* we have:
$$0=\sum_Q G=\sum_C G+\sum_{-K} G=\sum_C G-\sum_{K} G.$$
Therefore,
$$\sum_C G=\sum_{K} G.$$
In summary, we have the following.

**Theorem (Path-independence).** A function defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is path-independent if and only if all of its sums along closed curves in the partition are equal to zero.

Suppose we have a path-independent function $G$ defined on the edges of a partition of some set $D$ in ${\bf R}^n$. We know it to be exact, but how do we find $f$ with $\Delta f=G$? The idea comes from Part II. First, we choose an arbitrary node $A$ in $D$ and then carry out a summation along *every* possible curve from $A$. We define for each node $X$ in $D$:
$$f(X)=\sum_C G,$$
where $C$ is any curve from $A$ to $X$. A choice of $C$ doesn't matter because $G$ is path-independent.

To ensure that this function is well defined we need an extra requirement.

**Theorem (Fundamental Theorem of Calculus for differences I).** On a partition of a path-connected region $D$ in ${\bf R}^n$, if $G=\Delta f$, the function below is well-defined for a fixed $A$ in $D$:
$$g(X)=\sum_C G,$$
where $C$ is any curve from $A$ to $X$ within the partition of $D$, and, furthermore,
$$\Delta g=G.$$

**Proof.** Because the region is path-connected, there is always a curve from $A$ to any $X$. $\blacksquare$
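The construction in the theorem can be sketched in Python for a rectangular grid. This is an illustration under my own assumptions: the grid, the node function `f0`, and the particular curves (along the bottom row, then straight up) are not the text's data:

```python
# Recover a potential from an exact edge function G = (Delta f) on a grid.
def recover_potential(p, q, nx, ny):
    # p[i][j]: value of G on the horizontal edge from node (i,j) to (i+1,j)
    # q[i][j]: value of G on the vertical edge from node (i,j) to (i,j+1)
    f = [[0.0] * (ny + 1) for _ in range(nx + 1)]
    # sum G along one particular curve from A = (0,0) to each node:
    # first along the bottom row, then straight up the column
    for i in range(1, nx + 1):
        f[i][0] = f[i - 1][0] + p[i - 1][0]
    for i in range(nx + 1):
        for j in range(1, ny + 1):
            f[i][j] = f[i][j - 1] + q[i][j - 1]
    return f

# build an exact G from a known node function f0 with f0(0,0) = 0,
# then recover it by summation
nx, ny = 3, 2
f0 = lambda i, j: i * i - 2 * j + i * j
p = [[f0(i + 1, j) - f0(i, j) for j in range(ny + 1)] for i in range(nx)]
q = [[f0(i, j + 1) - f0(i, j) for j in range(ny)] for i in range(nx + 1)]
f = recover_potential(p, q, nx, ny)   # matches f0 (up to the constant f0(0,0))
```

Any other choice of curves gives the same values, precisely because $G$ is path-independent.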

What about vector fields? If $F$ is a vector field, we apply the above analysis to its projection $G=F\cdot \Delta X$. The sums become Riemann sums...

**Example.** Suppose $C$ is an oriented curve that consists of oriented edges $Q_i,\ i=1,...,m$, of a partition of a region $D$ in ${\bf R}^n$ and
$$Q_i=P_{i-1}P_i\text{ with } P_0=P_m=A.$$

Suppose $F$ is a constant vector field in $D$: $F(X)=G$ for all $X$ in $D$.

Then the work of $F$ along $C$ is the following Riemann sum:
$$\begin{array}{ll}
\sum_C F \cdot \Delta X&=\sum_{i=1}^m F(Q_i)\cdot Q_{i}\\
&=\sum_{i=1}^m F\cdot Q_{i}\\
&=F\cdot \sum_{i=1}^m P_{i-1}P_i\\
&=F\cdot \sum_{i=1}^m (P_i-P_{i-1})\\
&=F\cdot \big[(P_1-P_{0})+(P_2-P_{1})+...+(P_m-P_{m-1})\big]\\
&=F\cdot \big[-P_{0}+P_m\big]\\
&=0.
\end{array}$$
It is *zero*! $\square$

**Example.** The story is the exact opposite for the *rotation vector field*:
$$F=<-y,x>.$$

Let's consider a single square of the partition; for example, $S=[1,2]\times [1,2]$.

Suppose the curve $C$ goes counterclockwise and the secondary nodes are the starting points of the edges. Then the work of $F$ along $C$ is the following Riemann sum:
$$\begin{array}{ll}
\sum_C F \cdot \Delta X&=\sum_{i=1}^4 F(Q_i)\cdot Q_{i}\\
&=F(1,1)\cdot <1,0>+F(2,1)\cdot <0,1>+F(2,2)\cdot <-1,0>+F(1,2)\cdot <0,-1>\\
&=<-1,1>\cdot <1,0>+<-1,2>\cdot <0,1>+<-2,2>\cdot <-1,0>+<-2,1>\cdot <0,-1>\\
&=-1+2+2-1\\
&=2.
\end{array}$$
It is *not zero*! $\square$
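The two Riemann sums can be reproduced with a short Python sketch; the constant vector $<3,-2>$ below is an arbitrary choice for illustration, not from the text:

```python
# corners of the square S = [1,2] x [1,2], counterclockwise
P = [(1.0, 1.0), (2.0, 1.0), (2.0, 2.0), (1.0, 2.0)]

def edge_sum(F):
    # sum of F(Q_i) . (P_i - P_{i-1}), sampling at the starting node of each edge
    W = 0.0
    for i in range(4):
        x0, y0 = P[i]
        x1, y1 = P[(i + 1) % 4]
        fx, fy = F(x0, y0)
        W += fx * (x1 - x0) + fy * (y1 - y0)
    return W

W_const = edge_sum(lambda x, y: (3.0, -2.0))   # constant field: 0
W_rot = edge_sum(lambda x, y: (-y, x))         # rotation field: 2
print(W_const, W_rot)
```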

For a vector field $F$ whose projection is exact, $F\cdot \Delta X=\Delta f$, the above formula for differences takes the following form: $$\sum_C F\cdot \Delta X=f(B)-f(A).$$

Not only the proof but also the formula itself looks like the familiar Fundamental Theorem of Calculus for numerical integrals from Part II.

**Definition.** A vector field $F$ defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is called *path-independent* if its projection $F\cdot \Delta X$ is; i.e., the Riemann sum along any oriented curve depends only on the start- and the end-points of the curve:
$$\sum_C F\cdot \Delta X=\sum_K F\cdot \Delta X,$$
for any two curves of edges $C$ and $K$ from node $A$ to node $B$ that lie entirely in $D$.

For the sum along a *closed* curve, we note once again: if we stay home, we don't do any work! We have for a path-independent vector field $F$:
$$\sum_C F\cdot \Delta X=\sum_K F\cdot \Delta X=0.$$

Conversely, suppose we have two curves $C$ and $K$ from $A$ to $B$. We create a new, *closed* curve from them, from $A$ to $A$, by gluing $C$ and the reversed $K$ together:
$$Q=C\cup -K.$$

From the corresponding result for differences we derive the following.

**Theorem (Path-independence of vector fields).** A vector field defined on the secondary nodes of a partition of a region $D$ in ${\bf R}^n$ is path-independent if and only if all of its Riemann sums along closed curves of edges in $D$ are equal to zero.

## 7 Path-independence of integrals

Again, let's consider *constant* force fields along closed curves, i.e., curves parametrized by some $X=X(t),\ a\le t\le b$, with $X(a)=X(b)=A$.

Line integrals along closed curves have special **notation**:
$$\oint_C F\cdot dX.$$

**Example.** Once again, what is the work of a constant force field along a closed curve such as a circle?
Consider two diametrically opposite points on the circle. The directions of the tangents to the curve are opposite while the vector field is the same. Therefore, the terms $F\cdot X'$ in the *work* integral are negatives of each other. So, because of this symmetry, the two opposite halves of the circle produce works that are negatives of each other and cancel. The work must be *zero*! Let's confirm this for $F=<p,q>$ and the standard parametrization of the circle:
$$\begin{array}{ll}
W&=\oint_C F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt\\
&=\int_0^{2\pi} <p,q>\cdot <\cos t,\ \sin t>'\, dt\\
&=\int_0^{2\pi}<p,q>\cdot <-\sin t,\ \cos t>\, dt\\
&=\int_0^{2\pi}(-p\sin t+q\cos t)\, dt\\
&=\left(p\cos t+q\sin t\right)\bigg|_0^{2\pi}\\
&=0.
\end{array}$$
So, work cancels out during this round trip. $\square$

**Example.** The story is the exact opposite for the *rotation vector field*:
$$F=<-y,x>.$$

Consider any point. The direction of the tangent to the curve is the same as that of the vector field. Therefore, the terms $F\cdot X'$ cannot cancel. The work is *not* zero! Let's confirm this result:
$$\begin{array}{ll}
W&=\oint_C F\cdot dX=\int_a^bF(X(t))\cdot X'(t)\, dt\\
&=\int_0^{2\pi}<-y,x>\bigg|_{x=\cos t,\ y=\sin t}\cdot <\cos t,\ \sin t>'\, dt\\
&=\int_0^{2\pi}<-\sin t,\ \cos t>\cdot <-\sin t,\ \cos t>\, dt\\
&=\int_0^{2\pi}(\sin^2 t+\cos^2 t)\, dt\\
&=\int_0^{2\pi}1\, dt\\
&=2\pi.
\end{array}$$
We have walked against the wind all the way in this round trip! The same logic applies to any location-dependent multiple of $F$ as long as the symmetry is preserved. For example, the familiar one below qualifies:
$$G(X)=\frac{F(X)}{||X||^2}.$$
Even though, as we know, this vector field passes the *Gradient Test*, it has a positive line integral over a circle:
$$W=\oint_C G\cdot dX=\int_a^b G(X(t))\cdot X'(t)\, dt>0,$$
because the integrand is positive. $\square$
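Both round trips are easy to reproduce numerically; a minimal Python sketch (the constant vector $<3,-2>$ is an arbitrary choice for illustration):

```python
import math

def loop_work(F, n=10000):
    # midpoint Riemann sum of F(X(t)) . X'(t) dt around the unit circle
    h = 2 * math.pi / n
    W = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        fx, fy = F(math.cos(t), math.sin(t))
        W += (fx * (-math.sin(t)) + fy * math.cos(t)) * h
    return W

W_const = loop_work(lambda x, y: (3.0, -2.0))   # constant field: about 0
W_rot = loop_work(lambda x, y: (-y, x))         # rotation field: about 2*pi
print(W_const, W_rot)
```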

The difference between the two outcomes may be explained by the fact that the constant vector field is *gradient*:
$$<p,q>=\nabla f, \text{ where } f(x,y)=px+qy,$$
while the rotation vector field is *not*:
$$<-y,x>\ne\nabla f, \text{ for any } z=f(x,y).$$

Is there anything special about line integrals of gradient vector fields over curves that aren't closed? We reach the same conclusion as in the discrete case: the line integral depends only on the potential function of $F$. But the latter is an antiderivative of $F$! This shows that the result below is just an analog of the original Fundamental Theorem of Calculus II (there will be an FTC I later).

**Theorem (Fundamental Theorem of Calculus of gradient vector fields II).** If on a subset of ${\bf R}^n$, we have $F=\nabla f$ and an oriented curve $C$ in ${\bf R}^n$ starts at point $A$ and ends at $B$, then
$$\int_C F\cdot dX=f(B)-f(A).$$

**Proof.** Suppose we have:
$$F=\nabla f,$$
and an oriented curve $C$ in ${\bf R}^n$ that starts at point $A$ and ends at $B$:

Then, after parametrizing $C$ with $X=X(t),\ a\le t\le b$, we have via the *Fundamental Theorem of Calculus* from Part II and the *Chain Rule* from Part I:
$$\begin{array}{lll}
W&=\int_C F\cdot dX\\
&=\int_a^b F(X(t))\cdot X'(t)\, dt\\
&=\int_a^b \nabla f(X(t))\cdot X'(t)\, dt&\text{...we recognize the integrand as a part of CR...}\\
&=\int_a^b \frac{d}{dt} f(X(t))\, dt&\text{...we apply now FTC II...}\\
&=f(X(t))\bigg|_a^b\\
&=f(X(b))-f(X(a))\\
&=f(B)-f(A).
\end{array}$$
$\blacksquare$
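The theorem can be tested numerically for any potential. Here is a sketch with a hypothetical potential $f(x,y)=xy^2$ and a hypothetical curve, both my choices for illustration, not the text's:

```python
# f(x, y) = x*y^2, so F = grad f = <y^2, 2xy>;
# the curve X(t) = (t, 2t^2) runs from A = (0,0) to B = (1,2)
def f(x, y):
    return x * y ** 2

def F(x, y):
    return (y ** 2, 2 * x * y)

n = 20000
h = 1.0 / n
W = 0.0
for i in range(n):
    t = (i + 0.5) * h          # midpoint rule on [0, 1]
    x, y = t, 2 * t ** 2       # X(t)
    dxdt, dydt = 1.0, 4 * t    # X'(t), computed by hand
    fx, fy = F(x, y)
    W += (fx * dxdt + fy * dydt) * h
# W approximates f(B) - f(A) = f(1, 2) - f(0, 0) = 4
print(W)
```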

For dimension $n=1$, we just take $y=F(x)$ to be a numerical function with antiderivative $f$ and $C$ is the interval $[A,B]$ in the $x$-axis.

We also choose $x=x(t)$ to be a parametrization of this interval so that $x(a)=A$ and $x(b)=B$. Then we have from above:
$$\int_CF\cdot dX=\int_{x=A}^{x=B} F\, dx=\int_{t=a}^{t=b}F(x(t))x'(t)\, dt=f(x(t))\bigg|_{t=a}^{t=b}=f(x(b))-f(x(a))=f(B)-f(A).$$
We have another interpretation of *substitution in definite integrals*.

Not only the proof but also the formula itself looks like the familiar Fundamental Theorem of Calculus for numerical integrals from Part II. Because it is restricted to gradient vector fields, this is just a *preliminary version*.

Warning: Before applying the formula, confirm that the vector field is *gradient*! The example of $F=<-y,x>$ is to be remembered at all times.

So, if $F$ is a gradient vector field, then
$$\oint_C F\cdot dX=0,$$
for any closed curve $C$ in its domain.
Therefore, the work is zero on net so that there is no gain or loss of energy. This is the reason why gradient vector fields are also called *conservative*.

Not only is the expression on the right in $$\int_C F\cdot dX=f(B)-f(A)$$ independent of the parametrization of the curve $C$; it is also independent of our choice of $C$ as long as it goes from $A$ to $B$!

**Definition.** A vector field $F$ defined on a subset $D$ of ${\bf R}^n$ is called *path-independent* if its line integral along a curve depends only on the start- and the end-points of the curve; i.e.,
$$\int_C F\cdot dX=\int_K F\cdot dX,$$
for any two curves $C$ and $K$ from point $A$ to point $B$ that lie entirely in $D$.

What if $A=B$? What can we say about line integral along a *closed* curve $C$? As an example, consider this: if we stay home, we don't do any work! We are talking about a constant curve, $K=\{A\}$. Let's compare it to another curve $C$.

The parametrization of $K$ is trivial: $X(t)=A$ on the whole interval $[a,b]$. Therefore, $X'(t)=0$ and we have: $$\int_C F\cdot dX=\int_K F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt=\int_a^b F(X(t))\cdot 0\, dt=0.$$

The converse is also true. Suppose we have two curves $C$ and $K$ from $A$ to $B$. Just as in the last section, we create a new, *closed* curve from them. We glue $C$ and the reversed $K$ together:
$$Q=C\cup -K.$$
It goes from $A$ to $A$.

Then, from *Additivity* and *Negativity* we have:
$$0=\int_Q F\cdot dX=\int_C F\cdot dX+\int_{-K} F\cdot dX=\int_C F\cdot dX-\int_{K} F\cdot dX.$$
Therefore,
$$\int_C F\cdot dX=\int_{K}F\cdot dX.$$
In summary, we have the following.

**Theorem.** A vector field defined on a subset $D$ of ${\bf R}^n$ is path-independent if and only if all of its line integrals along closed curves in $D$ are equal to zero.

We have established the following.

**Theorem.** All gradient vector fields are path-independent.

**Proof.**

- $F$ is a gradient vector field; then
- $\oint_C F\cdot dX=0$ for every closed curve $C$ in $D$; then
- $F$ is path-independent.

$\blacksquare$

Recall that we considered the Riemann integral (the area under the graph) but with a *variable upper limit*. It is illustrated below: $x$ runs from $a$ to $b$ and beyond.

Then the *Fundamental Theorem of Calculus II* states that for any continuous function $F$ on $[a,b]$, the function defined by
$$\int_{a}^{x} F(t) \, dt $$
is an antiderivative of $F$ on $(a,b)$. In the new setting, we have a path-independent vector field $F$ defined on some set $D$ in ${\bf R}^n$ and we need to find its potential function, i.e., a function the gradient of which is $F$: $\nabla f=F$. First, we choose an arbitrary point $A$ in $D$ and then do a lot of line integration. We define for each $X$ in $D$:
$$f(X)=\int_CF\cdot dX,$$
where $C$ is any curve from $A$ to $X$. A choice of $C$ doesn't matter because $F$ is path-independent by assumption.

There is an extra requirement.

**Theorem (Fundamental Theorem of Calculus of gradient vector fields I).** For any gradient vector field $F$ defined on a path-connected region $D$ in ${\bf R}^n$, the function defined for a fixed $A$ in $D$ by:
$$f(X)=\int_CF\cdot dX,$$
where $C$ is any curve from $A$ to $X$ within $D$, is a potential function of $F$ on $D$.

## 8 How a ball is spun by the stream

Suppose we have a vector field that describes the velocity field of a fluid flow. Let's place a ping-pong ball within the flow. We put it on a pole so that the ball remains fixed while it can freely rotate. We see the particles bombarding the ball and think of the vector field of the flow as a *force field*. Due to the ball's rough surface, the fluid flowing past it will make it spin around the pole.

It is clear that a constant vector field will produce no spin or rotation. However, it is not the rotation of the vectors that we speak of. We are not asking: is a specific particle of water making a circle? but rather: does the combined motion of the particles make the ball spin? For example, this is what we see in the image on the right.

- The ball in the center is in the middle of a whirl and will be clearly spun in the counterclockwise direction.
- The ball at the bottom is in the part of the stream with a constant direction but not magnitude. Will it spin?
- The ball at the top is being pushed in various directions at the same time and its spin seems very uncertain.

How do we predict and how do we measure the amount of rotation?

**Example.** The answer is simple when the force is applied to just one side of the ball as in the case of all racket sports:

Let's take a closer look at the ball in the stream. For simplicity, let's assume that we can detect only four distinct values of the vector field on the four sides (on the grid) of the ball.

We also assume at first that these four vectors are tangent to the surface of the ball. In other words, this is just a *vector field*. What is the net effect of these forces on the ball? Think of the ball as a tiny *wind-mill* with the four forces pushing (or pulling) its four blades. We just go around the ball (counterclockwise starting at the bottom) adding these numbers:
$$1+1-2+1=1>0.$$
The ball will spin counterclockwise!

In order to measure the amount of spin, let's assume that this is a *unit* square. Then, of course, the sum above is just a line sum from last chapter representing the *work* performed by the force of the flow to spin the ball.

Let's look at this quantity from the coordinate point of view. We observe that the forces pointing in the same direction but applied on opposite sides cancel each other. We see this effect if we re-arrange the terms: $$W=\text{horizontal: } 1-2\ +\text{ vertical: } 1+1.$$ We then represent each vector in terms of its $x$ and $y$ components: $$\text{force }=\quad\begin{array}{|lcr|} \hline \bullet-& \to\to&-\bullet\\ |&&|\\ \downarrow& &\uparrow\\ |&&|\\ \bullet-&\to&-\bullet\\ \hline \end{array} \quad=\quad \begin{array}{|lcr|} \hline \bullet-& 2&-\bullet\\ |&&|\\ -1& &1\\ |&&|\\ \bullet-&1&-\bullet\\ \hline \end{array}$$ The expression can then be seen as:

- $W=-$(the vertical change of the horizontal values) $+$ (the horizontal change of the vertical values).

$\square$

According to the *Exactness Test* for dimension $2$, a function $G$ defined on the edges of a partition is *not* exact when $\Delta_y p\ne\Delta_x q$. We form the following function to study this further.

**Definition.** For a function $G$ defined on the edges of a partition of the $xy$-plane, the *difference* of $G$ is a function of two variables defined at the $2$-cells of the partition and **denoted** by:
$$\Delta G=\Delta_x q-\Delta_y p,$$
where $p$ and $q$ are the $x$- and $y$-components of $G$ (i.e., its values on the horizontal and vertical edges respectively).

It is as if we cover the whole stream with those little balls and study their rotation.

**Definition.** If its difference is zero, a function defined on the edges of a partition is called *closed*.

The negative rotation simply means rotation in the opposite direction.

**Example.** Vector fields typically have vectors that change directions, i.e., *rotate*. What if they don't? Let's consider a flow with a constant direction but variable magnitude:
$$\text{force }=\quad\begin{array}{|lcr|}
\hline
\bullet-& \to\to&-\bullet\\
|&&|\\
\cdot& &\cdot\\
|&&|\\
\bullet-&\to&-\bullet\\
\hline
\end{array} \quad=\quad
\begin{array}{|lcr|}
\hline
\bullet-& 2&-\bullet\\
|&&|\\
0& &0\\
|&&|\\
\bullet-&1&-\bullet\\
\hline
\end{array}$$
The difference is $-1$ but where is the rotation? Well, the speed of the water on one side is faster than on the other and this difference is the cause of the ball's spinning. $\square$

With this new concept, we can restate the Exactness Test.

**Theorem (Exactness Test dimension $2$).** If $G$ is exact, it is closed; briefly:
$$\Delta (\Delta h)=0.$$

Let's try a more general point of view: vector fields.

**Example.** We represent each vector in terms of its $x$- and $y$-components:
$$\text{force }=\quad\begin{array}{|lcr|}
\hline
\bullet-& \to\to&-\bullet\\
|&&|\\
\downarrow& &\uparrow\\
|&&|\\
\bullet-&\to&-\bullet\\
\hline
\end{array} \quad=\quad
\begin{array}{|lcr|}
\hline
\bullet-& <2,0>&-\bullet\\
|&&|\\
<0,-1>& &<0,1>\\
|&&|\\
\bullet-&<1,0>&-\bullet\\
\hline
\end{array}$$
The expression can then be seen as:

- $W=-$(the vertical change of the horizontal vectors) $+$ (the horizontal change of the vertical vectors);

or: $$W=\text{horizontal: } -(2-1)+\ \text{ vertical: } 1-(-1).$$

Of course, only the vertical/horizontal components of the vectors acting along the vertical/horizontal edges matter! So the result should remain the same if we modify the vectors to make the other components non-zero:

Then, we have: $$F=\begin{array}{|lcr|} \hline \bullet-& <2,0>&-\bullet\\ |&&|\\ <1/2,-1>& &<1,1>\\ |&&|\\ \bullet-&<1,-1>&-\bullet\\ \hline \end{array}$$ The value of $W$ above remains the same even though the forces are directed off the tangent of the ball! The difference is between a real-valued $1$-form and a vector-valued $1$-form.

If $F=<p,q>$, we have component-wise:
$$p=\begin{array}{|lcr|}
\hline
\bullet-& 2&-\bullet\\
|&&|\\
1/2& &1\\
|&&|\\
\bullet-&1&-\bullet\\
\hline
\end{array}\quad\leadsto\ \Delta_y p =2-1=1,\qquad q=
\begin{array}{|lcr|}
\hline
\bullet-& 0&-\bullet\\
|&&|\\
-1& &1\\
|&&|\\
\bullet-&-1&-\bullet\\
\hline
\end{array}\leadsto\ \Delta_x q =1-(-1)=2.$$
Then,
$$W=-\Delta_y p+\Delta_x q=-1+2=1.$$
This is the familiar *rotor* from last chapter!

Here is another way to arrive to this quantity. If $C$ is the border of the square oriented in the counterclockwise direction, the line sum along $C$ gives us the following: $$\begin{array}{ll} W&=\sum_C F\\ &=\begin{array}{|cccc|} \hline & <2,0>\cdot<-1,0>&+\\ <1/2,-1>\cdot<0,-1>& + &<1,1>\cdot<0,1>\\ +&<1,-1>\cdot<1,0>\\ \hline \end{array}\\ &=\begin{array}{|cccc|} \hline & -2&+\\ 1& + &1\\ +&1\\ \hline \end{array}\\ &=1. \end{array}$$ $\square$
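In Python, the bookkeeping of this example reduces to a few lines; the variable names are mine, the values come from the tables above:

```python
# edge values of F = <p, q> on the unit square:
# only the component along each edge contributes
p_bottom, p_top = 1.0, 2.0      # horizontal components on the horizontal edges
q_left, q_right = -1.0, 1.0     # vertical components on the vertical edges

# work around the square, counterclockwise starting at the bottom:
# 1 + 1 - 2 + 1 = 1
W = p_bottom + q_right - p_top - q_left

# the same number via the changes: -(vertical change of p) + (horizontal change of q)
rot = -(p_top - p_bottom) + (q_right - q_left)
print(W, rot)   # 1.0 1.0
```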

According to the *Gradient Test* for dimension $2$, a vector field $F=<p,q>$ is *not* gradient when $\frac{\Delta p}{\Delta y}\ne\frac{\Delta q}{\Delta x}$. We form the following function of two variables to study this further.

**Definition.** For a vector field $F$ defined on the secondary nodes (the $1$-cells) of a partition of a region in the $xy$-plane, the *rotor* of $F$ is a function defined on the tertiary nodes (the $2$-cells) of the partition and **denoted** by:
$$\operatorname{rot} F=\frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y},$$
where $p$ and $q$ are the $x$- and $y$-components of $F$ (i.e., its values on the horizontal and vertical edges respectively).

**Definition.** If the rotor is zero, the vector field is called *irrotational*.

One can see a high value of the rotor in the center and zero around it in the following example:

**Example.** From the equality of the mixed partial difference quotients, it follows that the rotor of the gradient of a function gives values exactly equal to $0$:

$\square$

With this new concept, we can restate the Gradient Test.

**Corollary (Gradient Test dimension $2$).** If a vector field is gradient, then it's irrotational.

**Example.**

What about the $3$-*dimensional* space? Once again, we place a small ball within the flow in such a way that the ball remains fixed while being able to rotate. If the ball has a rough surface, the fluid flowing past it will make it spin. In the discrete case, each face, i.e., a $2$-cell, of the partition is subject to the $2$-dimensional analysis presented above. In other words, the ball located within a face rotates around the axis perpendicular to the face.

According to the *Exactness Test* for dimension $3$, $G=<p,q,r>$ is *not* exact when one of these fails:
$$\Delta_y p=\Delta_x q,\ \Delta_z q=\Delta_y r,\ \Delta_x r=\Delta_z p.$$
We form the following vector field to study this further.

**Definition.** For a function $G$ defined on the secondary nodes (edges) of a partition of the $xyz$-space, the *difference* of $G$ is a function defined at the tertiary nodes ($2$-cells) of the partition and **denoted** by:
$$\Delta G=\begin{cases}
\Delta_y r-\Delta_z q&\text{ on the faces parallel to the }yz\text{-plane},\\
\Delta_z p-\Delta_x r&\text{ on the faces parallel to the }xz\text{-plane},\\
\Delta_x q-\Delta_y p&\text{ on the faces parallel to the }xy\text{-plane},
\end{cases}$$
where $p$, $q$, and $r$ are the $x$-, $y$-, and $z$-components of $G$ respectively. If the difference is zero, $G$ is called *closed*.

Of course, the $3$-dimensional difference is made of the three $2$-dimensional ones with respect to each of the three pairs of coordinates.

**Theorem (Exactness Test dimension $3$).** If $G$ is exact, it is closed; briefly:
$$\Delta (\Delta h)=0.$$

Same statement as for dimension $2$!

According to the *Gradient Test* for dimension $3$, a vector field $V=<p,q,r>$ is *not* gradient when one of these fails:
$$\frac{\Delta p}{\Delta y}=\frac{\Delta q}{\Delta x},\ \frac{\Delta q}{\Delta z}=\frac{\Delta r}{\Delta y} ,\ \frac{\Delta r}{\Delta x}=\frac{\Delta p}{\Delta z}.$$

**Definition.** For a vector field $F$ defined on the edges of a partition of the $xyz$-space, the *curl* of $F$ is a function defined at the $2$-cells of the partition and **denoted** by:
$$\operatorname{curl} F=\begin{cases}
\frac{\Delta r}{\Delta y}-\frac{\Delta q}{\Delta z}&\text{ on the faces parallel to the }yz\text{-plane},\\
\frac{\Delta p}{\Delta z}-\frac{\Delta r}{\Delta x}&\text{ on the faces parallel to the }xz\text{-plane},\\
\frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y}&\text{ on the faces parallel to the }xy\text{-plane},
\end{cases}$$
where $p$, $q$, and $r$ are the $x$-, $y$-, and $z$-components of $F$ respectively. If the curl is zero, $F$ is called *irrotational*.

Of course, the curl is made of the three rotors with respect to the three pairs of coordinates.
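The three face components can be computed mechanically. Here is a Python sketch using forward difference quotients over one cell; the helper `curl` and the test field are my assumptions, not the text's notation:

```python
# Discrete curl of F = <p, q, r>, following the definition above,
# with the components given as functions of (x, y, z)
def curl(p, q, r, dx=1.0, dy=1.0, dz=1.0):
    def diff(f, axis, x, y, z):
        h, step = [(dx, (dx, 0, 0)), (dy, (0, dy, 0)), (dz, (0, 0, dz))][axis]
        return (f(x + step[0], y + step[1], z + step[2]) - f(x, y, z)) / h
    def at(x, y, z):
        return (
            diff(r, 1, x, y, z) - diff(q, 2, x, y, z),   # yz-faces
            diff(p, 2, x, y, z) - diff(r, 0, x, y, z),   # xz-faces
            diff(q, 0, x, y, z) - diff(p, 1, x, y, z),   # xy-faces
        )
    return at

# the gradient of f(x, y, z) = xyz has components <yz, xz, xy>;
# its curl vanishes, in line with the Gradient Test
c = curl(lambda x, y, z: y * z,
         lambda x, y, z: x * z,
         lambda x, y, z: x * y)(1.0, 2.0, 3.0)
print(c)   # (0.0, 0.0, 0.0)
```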

**Corollary (Gradient Test dimension $3$).** If a vector field is gradient, then it's irrotational.

Same statement!

**Example.**

$\square$

The two theorems can be restated in an even more concise form, in terms of the compositions of these *functions of functions*:
$$\Delta\Delta=0.$$
When no secondary nodes are specified, we deal with discrete forms. Then, if we travel along the following diagram, we end up at *zero* no matter what the starting point is:
$$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{llll}
0\text{-forms }&\ra{\Delta}&\ 1\text{-forms }& \ra{\Delta}&\ 2\text{-forms}.
\end{array}$$

## 9 The Fundamental Theorem of Discrete Calculus of degree $2$

Suppose curve $C$ is the border of the rectangle $R$ oriented in the counterclockwise direction.

Suppose the flow is given by these numbers as defined on each of the edges of the rectangle:
$$G=\begin{array}{|ccc|}
\hline
\bullet& p_3&\bullet\\
q_4& &q_2\\
\bullet&p_1&\bullet\\
\hline
\end{array},\quad C=\begin{array}{|ccc|}
\hline
\bullet& \leftarrow &\bullet\\
\downarrow& &\uparrow\\
\bullet&\to&\bullet\\
\hline
\end{array}$$
Then the line integral along $C$ is the following:
$$\begin{array}{ll}
W&=\sum_C G\\
&\begin{array}{cccc}
=&&& -p_3&\\
&+&-q_4& + &q_2\\
&+&&p_1
\end{array}\\
&=-p_3-q_4+ q_2+p_1\\
&=(q_2-q_4)-(p_3-p_1)\quad (\text{i.e., horizontal change of }q\text{ and vertical change of }p)\\
&=\Delta_x G- \Delta_y G\\
&=\Delta G.
\end{array}$$
As you can see, rearranging the four terms of the work that come from the trip around the square reveals the following: first, the difference of the vertical flow on the two sides of the cell; second, the difference of the horizontal flow on the other two sides; and, finally, the difference of these two quantities, which indicates the total flow. It is the *difference* of $G$.

We have a preliminary result below.

**Theorem.** In a partition of a plane region $R$, if $C$ is a simple closed curve that constitutes the boundary of a single $2$-cell $D$ of the partition by going counterclockwise around $D$, we have the following for any function $G$ defined on the secondary nodes of the partition:
$$\sum_{C} G=\Delta G.$$
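The single-cell computation above is easy to replicate. A minimal sketch with four arbitrary edge values, named $p_1,p_3,q_2,q_4$ as in the picture:

```python
# Edge values of a 1-form G on one square cell, as in the picture:
p1, p3 = 2.0, 5.0   # bottom and top horizontal edges
q2, q4 = 7.0, 3.0   # right and left vertical edges

# Counterclockwise sum along the boundary: bottom and right with "+",
# top and left with "-".
boundary_sum = p1 + q2 - p3 - q4

# The difference of G on the cell: horizontal change of q minus
# vertical change of p.
delta_G = (q2 - q4) - (p3 - p1)

print(boundary_sum == delta_G)  # True
```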

What if we have a more complex object in the stream? How do we measure the amount of flow around it?

We approach the problem as follows: we suppose that there are many little balls in the flow forming some shape and then find the amount of the flow around the balls. Note that every ball will try to rotate all of its adjacent balls in the same direction at the same speed with no more flow required. This idea of cancellation of spin takes an algebraic form below.

We will start with a single rectangle and then build more and more complex regions on the plane from the rectangles of our grid -- as if each contains a ball -- while maintaining the formula.

Let's put two rectangles together. Suppose we have two adjacent ones, $R_1$ and $R_2$, bounded by curves $C_1$ and $C_2$. We write the Fundamental Theorem for either and then add the two:
$$\begin{array}{lll}
&\sum_{C_1} G&=\sum_{R_1} \Delta G\\
+\\
&\sum_{C_2} G&=\sum_{R_2} \Delta G\\
\hline
&\sum_{C_1\cup C_2} G&=\sum_{R_1\cup R_2} \Delta G
\end{array}$$
In the right-hand side, we have a single sum according to *Additivity* of sums and, in the left-hand side, we have a single sum, also according to *Additivity*. Here $C_1\cup C_2$ is the curve that consists of $C_1$ and $C_2$ traveled consecutively.

Now, this is an unsatisfactory result because ${C_1\cup C_2}$ doesn't bound ${R_1\cup R_2}$. Fortunately, the left-hand side can be simplified: the two curves share an edge but travel it in the *opposite* directions.

We have a cancellation according to *Negativity* for sums. The result is:
$$\sum_{\partial D} G=\sum_{D} \Delta G,$$
where $D$ is the union of the two rectangles and $\partial D$ is its boundary. We have constructed the Fundamental Theorem for this more complex region!

We continue on adding one rectangle at a time to our region $D$ and cancelling the edges shared with others, producing a bigger and bigger curve $C=\partial D$ that bounds $D$:

We can add as many rectangles as we like, producing a larger and larger region made of the rectangles and bounded by a single closed curve made of edges... unless we circle back!

Then the boundary curve might break into two... We will ignore this possibility for now and state the second preliminary version of the main theorem.

**Theorem.** In a partition of a plane region $R$, if $C$ is a simple closed curve that constitutes the boundary of $R$ by going counterclockwise around $R$, we have for any function $G$ defined on the secondary nodes of the partition:
$$\sum_{C=\partial R} G=\sum_{R} \Delta G.$$
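The theorem can be tested numerically on a larger region. The sketch below (the grid size and the random edge values are arbitrary choices) sums $\Delta G$ over all cells of an $m\times n$ block of squares and compares the result with the counterclockwise sum of $G$ along the boundary:

```python
import random

random.seed(1)
m, n = 6, 8  # the region R: an m-by-n block of square cells
# A 1-form G: one value per horizontal edge (p) and per vertical edge (q).
p = [[random.uniform(-1, 1) for j in range(n)] for i in range(m + 1)]
q = [[random.uniform(-1, 1) for j in range(n + 1)] for i in range(m)]

# Right-hand side: the difference of G summed over all cells.
rhs = sum((q[i][j+1] - q[i][j]) - (p[i+1][j] - p[i][j])
          for i in range(m) for j in range(n))

# Left-hand side: the counterclockwise sum along the boundary of R.
lhs = (sum(p[0][j] for j in range(n))        # bottom, left to right
       + sum(q[i][n] for i in range(m))      # right side, upward
       - sum(p[m][j] for j in range(n))      # top, right to left
       - sum(q[i][0] for i in range(m)))     # left side, downward

print(abs(lhs - rhs) < 1e-12)  # True
```

All the interior edges cancel in the right-hand side, leaving exactly the boundary terms, which is the content of the theorem.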

What if the $1$-form is itself a difference, $G=\Delta f$? Then its difference is zero and, therefore, our formula takes this form:
$$\sum_{\partial D} G=\sum_{D} \Delta G=\sum_{D}0 =0.$$
The sum along any closed curve is then zero and, according to the *Path-independence Theorem*, $G$ is path-independent. Then,
$$\sum_{C} G=f(B)-f(A),$$
for any curve $C$ from $A$ to $B$, where $f$ is a potential function of $G$. We have arrived at the Fundamental Theorem of Calculus for differences. It follows that the Fundamental Theorem is its generalization. However, as the Fundamental Theorem of Calculus for parametric curves, i.e., degree $1$, indicates, *there is more than one fundamental theorem for each dimension*!

What if our function doesn't depend on $y$, i.e., $G(x,y)=q(x)$, while $R$ is a rectangle $[a,b]\times [c,d]$? In the left-hand side of the formula, the sums along the two horizontal sides of $R$ cancel each other:
$$\sum_{C} G=q(b)-q(a).$$
In the right-hand side of the formula, we have:
$$\sum_{R} \Delta G=\sum_{[a,b]\times [c,d]} \Delta_x G=\sum_{[a,b]} \Delta q.$$
We have arrived at the original *Fundamental Theorem of Discrete Calculus* (degree $1$) from Part II:
$$q(b)-q(a)=\sum_{[a,b]} \Delta q.$$
Not only have we derived the degree $1$ from degree $2$, but also both theorems have the same form! We realize that in the above formula,
$$\sum_{\{a,b\}}q=\sum_{[a,b]} \Delta q,$$
the right-hand side is an integral of a $1$-form over a ($1$-dimensional) region, $R=[a,b]$, while the left-hand side is the sum of a $0$-form over the boundary, $\partial R=\{a,b\}$, properly oriented, of that region.
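The degree-$1$ formula is just telescoping, which a few lines of Python make plain (the sample values are arbitrary):

```python
# A function q sampled at the nodes a = x0 < x1 < ... < xN = b.
q = [3.0, 1.5, 4.0, -2.0, 0.5, 7.0]   # arbitrary sample values

# The sum of the differences over [a, b] telescopes to q(b) - q(a).
total = sum(q[k+1] - q[k] for k in range(len(q) - 1))

print(total == q[-1] - q[0])  # True
```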

Now, what if the boundary curve does break into two when we add a new square? In the example below the square is added along with four of its edges. As a result, we add the two vertical edges while the two horizontal ones cancel as before. Thus a new square is seamlessly added, but we also see the appearance of a *hole*:

The difference is dramatic: not only is the boundary of the region now made of two curves, but also the one outside goes counterclockwise (as before) while the one inside goes clockwise! However, either curve has the region to its *left*.

Our formula, $$\sum_{C} G=\sum_{R} \Delta G,$$ doesn't work anymore, even though the meaning of the right-hand side is still clear. But what should be the meaning of the left-hand side? It should be the total sum of $G$ over all boundary curves of $R$, correctly oriented!

Thus, what is fundamental is the relation between a region $R$ in a partition and its boundary $\partial R$.

**Theorem (Fundamental Theorem of Discrete Calculus of degree $2$).** In a partition of a plane region $R$, we have the following for any function $G$ defined on the secondary nodes of the partition:
$$\sum_{\partial R} G=\sum_{R} \Delta G.$$

**Example.** We know that for a region bounded by a simple closed curve, the sum along *any* closed curve is $0$. Let's take a look at what happens in regions with *holes*. Consider this *rotation* function $G$:

Its values are $\pm 1$ with directions indicated except for the four edges in the middle with values of $\pm 3$. The function is defined on the $3\times 3$ region $R$ that excludes the middle square. By direct examination we show that the difference of $G$ is zero at every face of $R$:
$$\Delta G=0.$$
So, $G$ passes the *Exactness Test*; however, is it exact? We demonstrate now that it is not. Indeed, the sum of $G$ along the outer boundary of $R$ isn't zero:
$$\sum_C G=12.$$
How does it work with our theorem:
$$\sum_{C} G=\sum_{R} \Delta G?$$
It seems that the left-hand side is positive while the right-hand side is zero... What we have overlooked is that $G$ and, therefore, its difference are undefined at the middle square! So, $C$ doesn't bound $R$. In fact, the boundary of $R$ includes another curve, $C'$, going clockwise. Then,
$$\sum_{C} G+\sum_{C'} G=\sum_{R} \Delta G=0.$$
Therefore, we have:
$$\sum_{C} G=\sum_{-C'} G.$$
So, moving from the larger path to the smaller (or vice versa) doesn't change the sum! Also notice that the sums from one corner to the opposite are $6$ and $-6$. There is no path-independence! $\square$
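We can build a concrete instance of this phenomenon in code. The sketch below is not the exact field of the example above (with its values $\pm 1$ and $\pm 3$) but a discrete analog of the polar angle: the $1$-form records the wrapped change of the angle along each edge of a grid centered at the origin. Its difference vanishes on every cell except the middle one, yet the sum along the outer boundary is $2\pi$, not $0$:

```python
import math

# Nodes of a 4x4 grid centered at the origin, so the middle cell
# contains the puncture.
xs = [-1.5, -0.5, 0.5, 1.5]

def angle(i, j):
    # node (i, j) sits at the point (x, y) = (xs[j], xs[i])
    return math.atan2(xs[i], xs[j])

def inc(a, b):
    """Change of the angle along an edge, wrapped into (-pi, pi]."""
    d = b - a
    while d <= -math.pi:
        d += 2 * math.pi
    while d > math.pi:
        d -= 2 * math.pi
    return d

# The 1-form G: the wrapped change of the angle along each edge.
p = [[inc(angle(i, j), angle(i, j + 1)) for j in range(3)] for i in range(4)]
q = [[inc(angle(i, j), angle(i + 1, j)) for j in range(4)] for i in range(3)]

# The difference of G vanishes on every cell except the middle one.
for i in range(3):
    for j in range(3):
        if (i, j) == (1, 1):
            continue  # this cell contains the origin; Delta G here is 2*pi
        d = (q[i][j + 1] - q[i][j]) - (p[i + 1][j] - p[i][j])
        assert abs(d) < 1e-12

# Yet the counterclockwise sum along the outer boundary is 2*pi, not 0.
outer = (sum(p[0][j] for j in range(3)) + sum(q[i][3] for i in range(3))
         - sum(p[3][j] for j in range(3)) - sum(q[i][0] for i in range(3)))
print(abs(outer - 2 * math.pi) < 1e-12)  # True
```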

To summarize, even when the difference is -- within the region -- zero, the sum along a path that *goes around the hole* may be non-zero:

Furthermore, the sum remains the same for all closed curves as long as they make exactly the same number of turns around the hole! The meaning of path-independence changes accordingly; it all depends on how the curve goes between the holes:

Next, we consider the relation of this line integral that represents the *work* performed by the flow to spin the ball and the rotor of the vector field.

Recall that we have a vector field $F=<p,q>$, the velocity field of a fluid flow, with a ping-pong ball within it that can freely rotate but not move. We measure the amount of rotation as the work performed by the force of the flow rotating the ball.

Let's first suppose we have a grid on the plane with rectangles: $$\Delta x\times \Delta y.$$ Suppose that the flow rotates this rectangle just like the ball before.


Thus, the Riemann sum of the vector field along the boundary of a rectangle is equal to the (double) Riemann sum of the rotor over this rectangle and, furthermore, over any region made of such rectangles.

**Theorem (Fundamental Theorem of Discrete Calculus for vector fields).** In a partition of a plane region $R$, we have the following for any vector field $F$ defined on the secondary nodes of the partition:
$$\sum_{\partial R} F\cdot \Delta X=\sum_{R} \operatorname{rot} F\, \Delta A.$$

**Proof.** The proof is independent of the last theorem. Suppose curve $C$ is the border of the rectangle $R$ oriented in the counterclockwise direction. Suppose the vector field is given by these vectors defined on each of the edges of the rectangle:
$$F=\begin{array}{|ccc|}
\hline
\bullet& <p_3,q_3>&\bullet\\
<p_4,q_4>& &<p_2,q_2>\\
\bullet&<p_1,q_1>&\bullet\\
\hline
\end{array},\quad \Delta X=
\begin{array}{|ccc|}
\hline
\bullet& <-\Delta x,0>&\bullet\\
<0,-\Delta y>& &<0,\Delta y>\\
\bullet&<\Delta x,0>&\bullet\\
\hline
\end{array}$$
Then the Riemann sum along $C$ is:
$$\begin{array}{ll}
W&=\sum_C F\cdot \Delta X\\
&\begin{array}{cccc}
=&&& <p_3,q_3>\cdot<-\Delta x,0>&\\
&+&<p_4,q_4>\cdot<0,-\Delta y>& + &<p_2,q_2>\cdot<0,\Delta y>\\
&+&&<p_1,q_1>\cdot<\Delta x,0>\\
=&&& -p_3\Delta x&\\
&+&-q_4\Delta y& + &q_2\Delta y\\
&+&&p_1\Delta x
\end{array}\\
&=-p_3\Delta x-q_4\Delta y+ q_2\Delta y+p_1\Delta x\\
&=(q_2-q_4)\Delta y-(p_3-p_1)\Delta x\\
&=\frac{q_2-q_4}{\Delta x}\Delta x\Delta y-\frac{p_3-p_1}{\Delta y}\Delta y\Delta x\\
&=\left(\frac{\Delta q}{\Delta x}-\frac{\Delta p}{\Delta y}\right)\Delta x\Delta y.
\end{array}$$
$\blacksquare$

## 10 Green's Theorem: the Fundamental Theorem of Calculus for vector fields in dimension $2$

According to the *Gradient Test* for dimension $2$, a vector field $F=<p,q>$ is *not* gradient when $p_y\ne q_x$. We form the following function of two variables to study this further (as if we cover the whole stream with those little balls).

**Definition.** The *rotor* of a vector field $F=<p,q>$ differentiable on an open region in the plane is a function of two variables defined on the region and **denoted** by
$$\operatorname{rot}F=q_x-p_y.$$

**Definition.** If the rotor is zero, the vector field is called *irrotational*.

One can see a high value of the rotor in the center and zero around it in the following example:

The negative rotation simply means rotation in the opposite direction.

**Example.** All vector fields have vectors that change directions, i.e., “rotate”. What if they don't? Let's consider a vector field with a constant direction but variable magnitude. Let's try:
$$F(x,y)=<y^2,0>.$$

Then $$\operatorname{rot}F=q_x-p_y=0-2y\ne 0.$$ The rotation is again non-zero. In fact, the graph of the rotor shows that the rotation will be clockwise above the $x$-axis and counterclockwise below it. The effect is seen when a person lies on top of two adjacent -- up and down -- escalators:

$\square$

**Example.** From the equality of the mixed partial derivatives, it follows that the rotor of the gradient of a function gives values exactly equal to $0$:

$\square$

With this new concept, we can restate the Gradient Tests from the last chapter.

**Theorem (Gradient Test dimension $2$).** Suppose $F$ is a vector field on an open region in ${\bf R}^2$ with continuously differentiable component functions. If $F$ is gradient (i.e., $F=\operatorname{grad}h$), then it's irrotational: $\operatorname{rot}F=0$; briefly:
$$\operatorname{rot}(\operatorname{grad}h)=0.$$
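This identity can be illustrated by direct computation. In the sketch below, the particular $h$ is an arbitrary choice, and its partial derivatives are computed by hand; the rotor of the gradient vanishes at every sample point:

```python
# Check rot(grad h) = 0 for h(x, y) = x^3*y - 2*x*y^2, using
# hand-computed partial derivatives (an illustration, not a proof).
def p(x, y): return 3*x**2*y - 2*y**2      # h_x
def q(x, y): return x**3 - 4*x*y           # h_y
def p_y(x, y): return 3*x**2 - 4*y         # (h_x)_y
def q_x(x, y): return 3*x**2 - 4*y         # (h_y)_x: the same, by Clairaut

pts = [(0.3, -1.2), (2.0, 5.0), (-1.5, 0.7)]
print(all(q_x(x, y) - p_y(x, y) == 0 for x, y in pts))  # True
```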

**Example.**

What about $3$-dimensional vector fields? Once again, suppose we have a vector field that describes the velocity field of a fluid flow. We place a small ball within the flow in such a way that the ball remains fixed while being able to rotate. If the ball has a rough surface, the fluid flowing past it will make it spin. The ball can rotate around any axis.

We can restate the *Gradient Test* for dimension $3$ as follows.

**Theorem (Gradient Test dimension $3$).** Suppose $F$ is a vector field on an open region in ${\bf R}^3$ with continuously differentiable component functions. If $F$ is gradient (i.e., $F=\operatorname{grad}h$), then it's irrotational with respect to all three pairs of coordinates:
$$\operatorname{rot}_{y,z}<q,r>=0,\ \operatorname{rot}_{z,x}<r,p>=0,\ \operatorname{rot}_{x,y}<p,q>=0.$$

The subscripts indicate with respect to which two variables we differentiate, while the third is kept fixed.

In fact, we can form the following vector field, called the *curl* of $F$, that takes care of all three rotors:
$$\operatorname{curl}F=\operatorname{rot}_{y,z}<q,r>i+\operatorname{rot}_{z,x}<r,p>j+\operatorname{rot}_{x,y}<p,q>k=<r_y-q_z,p_z-r_x,q_x-p_y>.$$
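The identity $\operatorname{curl}(\operatorname{grad}h)=0$ can likewise be illustrated, with an arbitrarily chosen $h$ and hand-computed partial derivatives:

```python
# curl(grad h) = 0 for h(x, y, z) = x*y*z + x^2*z, with hand-computed partials.
def grad(x, y, z):
    return (y*z + 2*x*z, x*z, x*y + x**2)   # (h_x, h_y, h_z) = (p, q, r)

def curl(x, y, z):
    # Partials of p, q, r computed by hand for this particular h.
    r_y, q_z = x, x
    p_z, r_x = y + 2*x, y + 2*x
    q_x, p_y = z, z
    return (r_y - q_z, p_z - r_x, q_x - p_y)

print(curl(1.0, -2.0, 3.5))  # (0.0, 0.0, 0.0)
```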

In particular, when the vector field $F=pi+qj+rk$ has a zero $z$-component, $r=0$, while $p$ and $q$ don't depend on $z$, the curl is reduced to the rotor: $$\operatorname{curl}(pi+qj)= \operatorname{rot}<p,q>k.$$

**Example.**

$\square$

**Exercise.** Define a $4$-dimensional analog of the rotor.

The two theorems can be restated in an even more concise form, in terms of the compositions of these *functions of functions*:
$$\operatorname{rot}\operatorname{grad}=0\text{ and } \operatorname{curl}\operatorname{grad}=0.$$
Once again, we end up at *zero* no matter what the starting point is:
$$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{lllll}
\text{ functions of two variables }&\ra{\operatorname{grad}}& \text{ vector fields in }{\bf R}^2 & \ra{\operatorname{rot}}&\text{ functions of two variables}, \\
\text{ functions of three variables }& \ra{\operatorname{grad}}&\text{ vector fields in }{\bf R}^3 &\ra{\operatorname{curl}}&\text{ vector fields}.
\end{array}$$

The analysis of the work integral in the continuous case is similar to the one for the discrete case. How do we measure the amount of rotation of the ball, i.e., the work performed by the force of the flow rotating it?

We suppose that there are many little balls in the flow forming some shape and then find the amount of their total rotation, i.e., the work performed by the force of the flow rotating the balls.

Just as before, we start with a single rectangle and then build more and more complex regions on the plane from the rectangles of our grid -- as if each contains a ball -- while maintaining the formula. If we have just two adjacent squares, $R_1$ and $R_2$, bounded by curves $C_1$ and $C_2$, we write Green's formula for either and then add the two:
$$\begin{array}{lll}
&\oint_{C_1} F\cdot dX&=\iint_{R_1} \operatorname{rot} F\, dA\\
+\\
&\oint_{C_2} F\cdot dX&=\iint_{R_2} \operatorname{rot} F\, dA\\
\hline
&\oint_{C_1\cup C_2} F\cdot dX&=\iint_{R_1\cup R_2} \operatorname{rot} F\, dA
\end{array}$$
In the right-hand side, we have a single integral according to *Additivity* of double integrals and in the left-hand side, we have a single integral according to *Additivity* of line integrals. Here $C_1\cup C_2$ is the curve that consists of $C_1$ and $C_2$ traveled consecutively. The left-hand side is simplified: the two curves share an edge but travel it in the *opposite* directions.

We have a cancellation according to *Negativity* for line integrals. The result is:
$$\oint_{\partial D} F\cdot dX=\iint_{D} \operatorname{rot} F\, dA,$$
where $D$ is the union of the two rectangles and $\partial D$ is its boundary. We continue on adding one rectangle at a time to our region $D$ and cancelling the edges shared with others, producing a bigger and bigger curve $C=\partial D$ that bounds $D$:

Or we can add whole regions...

It is possible, however, that the boundary curve might cease to be a single closed curve!

**Theorem (Fundamental Theorem of Calculus for vector fields).** Suppose a plane region $R$ is bounded by piece-wise differentiable curve $C$ (possibly made of several disconnected pieces). Then for any vector field $F$ with continuously differentiable components on an open set containing $R$, we have:
$$\oint_{C} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA.$$

**Proof.** We only demonstrate the proof for a region $R$ that has a partition that also produces a partition of $C$. We sample $F$ at the secondary nodes of the partition of $C$ and $\operatorname{rot} F$ at the tertiary nodes of the partition of $R$. We then use the *Fundamental Theorem of Discrete Calculus for vector fields*:
$$\sum_{\partial R} F\cdot \Delta X=\sum_{R} \operatorname{rot} F\, \Delta A.$$
We take the limits of these two Riemann sums over the partitions with the mesh approaching zero. $\blacksquare$

This is also known as *Green's Formula*. Written component-wise, it takes the following form:
$$\int_C p\, dx+ q\, dy=\iint_{R} (q_x-p_y)\, dxdy.$$
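Green's Formula lends itself to a numerical check. The sketch below picks a sample field $F=<-y^3,x^3>$ (an arbitrary choice, not from the text), for which $\operatorname{rot}F=3x^2+3y^2$, and compares midpoint Riemann sums of the two sides over the unit square; both approach $2$:

```python
# Numerical check of Green's formula on R = [0,1]x[0,1] for
# F = <p, q> = <-y^3, x^3>, where rot F = q_x - p_y = 3x^2 + 3y^2.
N = 400
h = 1.0 / N

# Double integral of the rotor over R (midpoint sums).
rhs = sum((3*((i+0.5)*h)**2 + 3*((j+0.5)*h)**2) * h * h
          for i in range(N) for j in range(N))

# Line integral around the boundary, counterclockwise: int p dx + q dy.
def p(x, y): return -y**3
def q(x, y): return x**3
lhs = (sum(p((k+0.5)*h, 0.0) * h for k in range(N))      # bottom: dx > 0
       + sum(q(1.0, (k+0.5)*h) * h for k in range(N))    # right: dy > 0
       - sum(p((k+0.5)*h, 1.0) * h for k in range(N))    # top: dx < 0
       - sum(q(0.0, (k+0.5)*h) * h for k in range(N)))   # left: dy < 0

print(abs(lhs - rhs) < 1e-3, abs(lhs - 2.0) < 1e-3)  # True True
```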

Let's trace the theorem back to some familiar things.

What if the vector field is gradient? Then its rotor is zero and, therefore, our formula takes this form:
$$\oint_{\partial D} F\cdot dX=\iint_{D} \operatorname{rot} F\, dA=\iint_{D}0\, dA =0.$$
The line integral along any closed curve is then zero and, according to the *Path-independence Theorem*, $F$ is path-independent. Then,
$$\int_{C} F\cdot dX=f(B)-f(A),$$
for any curve $C$ from $A$ to $B$, where $f$ is a potential function of $F$. We have arrived at the Fundamental Theorem of Calculus for gradient vector fields. It follows that Green's Theorem is its generalization. This confirms the role of Green's Theorem as *the* Fundamental Theorem of Calculus for all vector fields in dimension $2$.

What if the vector field doesn't depend on $y$, i.e., $F(x,y)=F(x)=<p(x),q(x)>$, while $R$ is a rectangle $[a,b]\times [c,d]$? First, the left-hand side of the formula... The line integrals along the two horizontal sides of $R$ cancel each other. We are left with:
$$\oint_{\partial D} F\cdot dX=F(b)\cdot A-F(a)\cdot A,$$
where $A$ is the vector that represents the vertical sides of $R$ (oriented vertically). Then,
$$\oint_{\partial D} F\cdot dX=(q(b)-q(a))(d-c).$$
Now the right-hand side of the formula... The rotor is simply $q'(x)$. Then,
$$\iint_{R} \operatorname{rot} F\, dA=\iint_{[a,b]\times [c,d]} q'(x)\, dxdy=\int_a^b\int_c^d q'(x)\, dxdy=\int_a^b q'(x)\, dx\, (d-c).$$
We have arrived at the original *Fundamental Theorem of Calculus* from Part II:
$$q(b)-q(a)=\int_a^b q'(x)\, dx.$$

**Example.** Consider this *rotation vector field*, $V=<-y,x>,$ and especially its multiple:
$$F=\frac{V}{||V||^2}=\frac{1}{x^2+y^2}<-y,\ x>=\left< -\frac{y}{x^2+y^2},\ \frac{x}{x^2+y^2}\right>=<p,q>.$$

We previously demonstrated the following:
$$\begin{array}{lll}
p_y=\frac{\partial}{\partial y}\frac{-y}{x^2+y^2}=-\frac{1\cdot (x^2+y^2)-y\cdot 2y}{(x^2+y^2)^2}=\frac{y^2-x^2}{(x^2+y^2)^2}\\
q_x=\frac{\partial}{\partial x}\frac{x}{x^2+y^2}=\frac{1\cdot (x^2+y^2)-x\cdot 2x}{(x^2+y^2)^2}=\frac{y^2-x^2}{(x^2+y^2)^2}\\
\end{array}\ \Longrightarrow\operatorname{rot}F=q_x-p_y=0$$
So, the rotor of the vector field is zero and it passes the *Gradient Test*; however, is it gradient? We demonstrate now that it is not. Indeed, suppose $X=X(t)$ is a counterclockwise parametrization of a circle $C$ centered at the origin. Then $F(X(t))$ is parallel to $X'(t)$ and points in the same direction. Therefore, $F(X(t))\cdot X'(t)>0$. It follows that the line integral along the circle is positive:
$$W=\oint_C F\cdot dX=\int_a^b F(X(t))\cdot X'(t)\, dt>0.$$
It is as if we have climbed a spiral staircase! How does it work with our theorem:
$$\oint_{C} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA?$$
The left-hand side is positive while right-hand side is zero... however, the vector field and, therefore, its rotor are undefined at the origin! So, $C$ doesn't bound $R$.

A hole is what makes a spiral staircase possible by providing a place for the pole. Now, we'd need $R$ to be a *ring* so that the boundary of $R$ would include another curve, maybe a smaller circle, $C'$, going clockwise. Then,
$$\oint_{C} F\cdot dX+\oint_{C'} F\cdot dX=\iint_{R} \operatorname{rot} F\, dA=0.$$
Therefore, we have:
$$\oint_{C} F\cdot dX=\oint_{-C'} F\cdot dX.$$
So, moving from the larger circle to the smaller (or vice versa) doesn't change the line integral, i.e. the work. A remarkable result! It is seen as even more remarkable once we realize that the integral remains the same for all closed curves as long as they make exactly the same number of turns around the origin! $\square$
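The remarkable result can be confirmed numerically: the work of $F=V/||V||^2$, $V=<-y,x>$, along a circle of radius $1$ and along a circle of radius $2$ is the same, namely $2\pi$. A minimal sketch via Riemann sums of $F(X(t))\cdot X'(t)$:

```python
import math

# Work of F = V/|V|^2 with V = <-y, x> along a circle of radius r,
# approximated by a midpoint Riemann sum of F(X(t)) . X'(t) dt.
def work(r, n=1000):
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * 2 * math.pi / n
        x, y = r * math.cos(t), r * math.sin(t)
        Fx, Fy = -y / (x*x + y*y), x / (x*x + y*y)
        dx, dy = -r * math.sin(t), r * math.cos(t)     # X'(t)
        total += (Fx*dx + Fy*dy) * 2 * math.pi / n
    return total

print(abs(work(1.0) - 2*math.pi) < 1e-6,
      abs(work(2.0) - 2*math.pi) < 1e-6)  # True True
```

Indeed, $F(X(t))\cdot X'(t)=1$ identically, so the work is the length of the parameter interval, $2\pi$, for every radius.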

To summarize, even when the rotor is -- within the region -- zero, the line integral along a curve that goes around the hole may be non-zero.

Furthermore, the integral remains the same for all closed curves as long as they make exactly the same number of turns around the origin! The meaning of path-independence changes accordingly; it all depends on how the curve goes between the holes:

**Example.** Imagine that we need to find the area of a piece of land we have no access to, such as a fortification or a pond. Conveniently, Green's Formula allows us to compute the area of a region without visiting the inside but by just taking a trip around it. We just need to pick an appropriate vector field:
$$F=<0,x>\ \Longrightarrow\ p=0,\ q=x\ \Longrightarrow\ p_y=0,\ q_x=1.$$
Then the formula takes the following form:
$$\begin{array}{ll}
\iint_{R} (q_x-p_y)\, dxdy&=\int_C p\, dx+ q\, dy,\\
\iint_{R} 1\, dxdy&=\int_C 0\, dx+ x\, dy,\\
\text{area of }R&=\int_C x\, dy.\\
\end{array}$$
For example, the area of the disk $R$ of radius $r$ is a certain line integral around the circle $C$. We take $C$ to be parametrized the usual way:
$$x=r\cos t,\ y=r\sin t.$$
Then,
$$\text{area of the disk}= \int_C x\, dy=\int_0^{2\pi}r\cos t\,(r\sin t)'\, dt=r^2\int_0^{2\pi}\cos^2 t\, dt=r^2\Big(\frac{t}{2}+\frac{\sin 2t}{4}\Big)\Big|_0^{2\pi}=r^2\cdot\frac{2\pi}{2}=\pi r^2.$$
$\square$
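The last computation is easy to confirm numerically by approximating the circle with an inscribed polygon and evaluating $\int_C x\, dy$ edge by edge (the number of vertices below is an arbitrary choice):

```python
import math

# Area of the disk of radius r computed as the line integral of x dy
# around the circle, approximated by an inscribed polygon with n vertices.
def area_by_line_integral(r, n=20000):
    pts = [(r * math.cos(2*math.pi*k/n), r * math.sin(2*math.pi*k/n))
           for k in range(n)]
    total = 0.0
    for k in range(n):
        x0, y0 = pts[k]
        x1, y1 = pts[(k + 1) % n]
        total += (x0 + x1) / 2 * (y1 - y0)   # trapezoid rule for x dy
    return total

print(abs(area_by_line_integral(3.0) - math.pi * 9) < 1e-4)  # True
```

The same few lines compute the area of *any* region from a walk along its boundary, which is exactly the surveying trick described above.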