When studying calculus in university, students inevitably learn about integrals and the various methods for solving definite and indefinite integrals. These methods mirror the rules of differentiation: integration by substitution complements the chain rule, and integration by parts complements the product rule. There are others, like partial fractions for rational functions, and trig substitution for certain quadratic and inverse polynomials.

There are a handful of other tricks, of course, but the list of methods you learn in undergrad for solving integrals analytically isn’t very long (I was always fond of algebraic integration, or integration by reduction, which was particularly useful for Fourier analysis).

So when I recently saw a blog post about a method for analytic integration, often used by Richard Feynman, that I had never seen before, it reminded me of a trick I had come across that was particularly handy in vector calculus and electrodynamics. I never learned it in class, but I think more people should know about it.

## Vector Calculus

Helmholtz’s decomposition theorem (aka the Fundamental Theorem of Vector Calculus) states:

Let \(\mathbf{F}\) be a vector field on a bounded domain \(V \subseteq \mathbb{R}^3\), that is twice continuously differentiable. Then \(\exists\) a scalar function \(\Phi\) and vector field \(\mathbf{A}\) such that \(\mathbf{F} = -\nabla \Phi + \nabla \times \mathbf{A}\).

All vector fields with these properties can thus be decomposed into a curl-free component (\(-\nabla \Phi\)) and a divergence-free component (\(\nabla \times \mathbf{A}\)).

The Wikipedia page for Helmholtz decomposition gives a good description of this. What’s nice about the proof of this theorem is that it’s constructive: it tells you exactly what \(\Phi\) and \(\mathbf{A}\) are.

\[\Phi(\mathbf{r}) = \frac{1}{4\pi} \int_V \frac{\nabla^\prime \cdot \mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dV^\prime - \frac{1}{4\pi} \oint_{\partial V} \hat{\mathbf{n}}^\prime \cdot \frac{\mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dS^\prime\]

\[\mathbf{A}(\mathbf{r}) = \frac{1}{4\pi} \int_V \frac{\nabla^\prime \times \mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dV^\prime - \frac{1}{4\pi} \oint_{\partial V} \hat{\mathbf{n}}^\prime \times \frac{\mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dS^\prime\]

If \(V = \mathbb{R}^3\) and \(\mathbf{F}\) decays faster than \(\frac{1}{r}\) as \(r \rightarrow \infty\), then the above simplifies to:

\[\Phi(\mathbf{r}) = \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\nabla^\prime \cdot \mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dV^\prime\]

\[\mathbf{A}(\mathbf{r}) = \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\nabla^\prime \times \mathbf{F}(\mathbf{r}^\prime)}{|\mathbf{r} - \mathbf{r}^\prime|} dV^\prime\]

There is a clear symmetry in the definition of these functions, and you can see how all the divergence is packed into \(\Phi\) and all the curl is packed into \(\mathbf{A}\). It’s also clear, though, that even for simple vector fields, these integrals are difficult to solve analytically.

So the question is, are there simple tricks to calculate \(\Phi\) and \(\mathbf{A}\) given \(\mathbf{F}\)?

## Poincare’s Lemma

Given the Helmholtz decomposition theorem solutions, if you’re lucky you may be able to solve those integrals analytically. If you’re not so lucky, the next most obvious approach to solving \(-\nabla \Phi = \mathbf{F}\) and \(\nabla \times \mathbf{A} = \mathbf{G}\) for general vector fields \(\mathbf{F}\) and \(\mathbf{G}\) is to try to solve the coupled linear system of PDEs. But making statements about coupled linear systems of PDEs, generally, is hard (see the Navier-Stokes existence and smoothness problem).

Are there other ways we can do this? Since \(\Phi\) is a scalar function, it is typically easier to find: tricks like exact differentials and the symmetry of second derivatives of \(\Phi\) often make solving for the scalar potential straightforward. The vector potential, however, is not so simple.
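One such trick worth mentioning: if \(\mathbf{F} = -\nabla \Phi\) is defined along the segment from the origin to \(\mathbf{r}\), then \(\Phi(\mathbf{r}) = -\int_0^1 \mathbf{F}(t\mathbf{r}) \cdot \mathbf{r}\, dt\) up to an additive constant. A quick `sympy` sketch (the example field here is my own choice):

```python
import sympy as sp

x, y, z, t = sp.symbols("x y z t")
r = sp.Matrix([x, y, z])

# Example conservative field: F = -grad(Phi) for Phi = x**2 + y*z
F = sp.Matrix([-2*x, -z, -y])

# Line-integral formula: Phi(r) = -∫_0^1 F(t r) · r dt  (up to a constant)
F_tr = F.subs([(x, t*x), (y, t*y), (z, t*z)], simultaneous=True)
Phi = -sp.integrate(F_tr.dot(r), (t, 0, 1))

# Recovers the scalar potential we started from
assert sp.expand(Phi) == x**2 + y*z
```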

As one does during undergrad, searching Math StackExchange yields a few answers referencing a theorem called “Poincare’s Lemma” (see here, here, and here).

\[\mathbf{F} = \nabla \times \mathbf{A} \implies \mathbf{A} = \int_0^1 \mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\, dt\]

It’s worth noting that \(\mathbf{A}\) here is not unique. Since \(\nabla \times (\nabla f) = 0\) for all scalar functions \(f\), \(\mathbf{A}\) is unique only modulo a conservative vector field.
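To make the formula concrete, here is a small `sympy` sketch of the lemma (the helper names are mine), applied to a divergence-free field of my own choosing:

```python
import sympy as sp

x, y, z, t = sp.symbols("x y z t")
r = sp.Matrix([x, y, z])

def vector_potential(F):
    """Poincare's lemma: A(r) = ∫_0^1 F(t r) × (t r) dt."""
    F_tr = F.subs([(x, t*x), (y, t*y), (z, t*z)], simultaneous=True)
    return sp.integrate(F_tr.cross(t * r), (t, 0, 1))

def curl(A):
    return sp.Matrix([
        sp.diff(A[2], y) - sp.diff(A[1], z),
        sp.diff(A[0], z) - sp.diff(A[2], x),
        sp.diff(A[1], x) - sp.diff(A[0], y),
    ])

F = sp.Matrix([y, z, x])   # divergence-free: ∂(y)/∂x + ∂(z)/∂y + ∂(x)/∂z = 0
A = vector_potential(F)    # gives (z² - xy, x² - yz, y² - xz) / 3

# Sanity check: the curl of A recovers F
assert (curl(A) - F).applyfunc(sp.simplify) == sp.zeros(3, 1)
```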

Unfortunately, searching around for “Poincare’s lemma” doesn’t help much. Poincare was a remarkably prolific mathematician, so there are many, *many* lemmas named after him, and it’s difficult to find information about this specific one.

But to see what this lemma does, it’s worth going through a couple of examples of how it works.

## A simple example

Suppose \(\mathbf{F}(\mathbf{x}) = z \hat{x} + x \hat{y} + y \hat{z}\). Find an \(\mathbf{A}\) such that \(\mathbf{F}(\mathbf{x}) = \nabla \times \mathbf{A}\).

We can clearly see that \(\mathbf{F}\) is twice continuously differentiable on \(\mathbb{R}^3\) and that \(\nabla \cdot \mathbf{F} = 0\). The most straightforward method would be purely algebraic: solving the coupled system of differential equations.

\[\begin{bmatrix} z \\ x \\ y \end{bmatrix} = \begin{bmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \\ \end{bmatrix} \times \begin{bmatrix} A_x \\ A_y \\ A_z \end{bmatrix}\]

\[\begin{array}{rcl} z & = & \frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z} \\ x & = & \frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x} \\ y & = & \frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y} \\ \end{array}\]

Even a simple system like this is surprisingly difficult to solve. Using Poincare’s lemma, we instead have

\[\begin{array}{rl} \mathbf{A} = & \large\int_0^1 \mathbf{F}(t \mathbf{r}) \times (t\mathbf{r})\, dt \\ = & \int_0^1 (tz \hat{x} + tx \hat{y} + ty \hat{z}) \times (tx\hat{x} + ty\hat{y} + tz\hat{z})\, dt \\ = & \begin{bmatrix} xz - y^2 \\ xy - z^2 \\ yz - x^2 \end{bmatrix} \int_0^1 t^2\, dt \\ = & \frac{1}{3}\begin{bmatrix} xz - y^2 \\ xy - z^2 \\ yz - x^2 \end{bmatrix} \end{array}\]

We can check that this satisfies the equation \(\mathbf{F} = \nabla \times \mathbf{A}\):

\[\begin{array}{rl} & \nabla \times \frac{1}{3}\begin{bmatrix} xz - y^2 \\ xy - z^2 \\ yz - x^2 \end{bmatrix} \\ = & \frac{1}{3}\begin{bmatrix} 2z + z \\ 2x + x \\ 2y + y \end{bmatrix} \\ = & \begin{bmatrix} z \\ x \\ y \end{bmatrix} \\ = & \mathbf{F} \end{array}\]

By turning this coupled PDE problem into a simple integration problem, we arrive at a solution with minimal effort.
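If you’d rather let a computer algebra system do the bookkeeping, the same check is a few lines of `sympy` (a quick sketch):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# The vector potential from the worked example
A = sp.Rational(1, 3) * sp.Matrix([x*z - y**2, x*y - z**2, y*z - x**2])

# Curl in Cartesian coordinates
curl_A = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])

# Recovers F = (z, x, y)
assert curl_A == sp.Matrix([z, x, y])
```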

## When Poincare’s lemma applies

Because I’ve been able to find so little about Poincare’s lemma, it’s unclear to me exactly what its assumptions and caveats are. Clearly, it cannot be used in all cases with all vector fields.

For example, suppose \(\mathbf{F}(\mathbf{x}) = \frac{1}{r^2} \hat{\theta}\), written in spherical coordinates. Then, if Poincare’s method can be used here, we’d get something like this:

\[\begin{array}{rcl} \mathbf{A} & = & \large\int_0^1 \mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\, dt \\ & = & \large\int_0^1 \frac{1}{t^2 r^2} \hat{\theta} \times (tr \hat{r})\, dt \\ & = & \frac{-1}{r} \hat{\phi} \large\int_0^1 \frac{1}{t}\, dt \end{array}\]

which clearly isn’t well defined: the integral diverges. For the integrand to make sense, \(\mathbf{F}(t\mathbf{r})\) must be well-defined for all \(t \in [0, 1]\); then \(\mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\) will be as well. This means \(\mathbf{F}\) needs to be defined along the entire segment connecting \(\mathbf{0}\) and \(\mathbf{r}\). A set that contains this segment for each of its points is known as a star domain (star-shaped with respect to the origin). If we further assume that \(\nabla \cdot \mathbf{F} = 0\), then \(\mathbf{F}\) being defined and divergence-free on a star domain that includes an open ball around \(\mathbf{r}\) will ensure that Poincare’s lemma holds (provided that \(\mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\) is integrable over \(t \in [0, 1]\)).
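The failure above comes down to a single one-dimensional integral, which `sympy` confirms is divergent:

```python
import sympy as sp

t = sp.symbols("t", positive=True)

# The magnitude of the integrand above is proportional to 1/t near t = 0,
# so the t-integral from Poincare's lemma diverges for this field:
print(sp.integrate(1/t, (t, 0, 1)))  # oo
```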

## Proof of Poincare’s lemma

Let \(\mathbf{F}\) be a divergence-free vector field defined on a star-shaped open set \(S \subseteq \mathbb{R}^3\) containing the origin and \(\mathbf{r}\). Moreover, let \(\mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\) be integrable over \(t \in [0, 1]\). Let \(\mathbf{G} = \large\int_0^1 \mathbf{F}(t\mathbf{r}) \times (t\mathbf{r})\, dt\). Then \(\nabla \times \mathbf{G} = \mathbf{F}\).

**Proof**:
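A sketch of the standard argument (assuming we may differentiate under the integral sign). Write \(\mathbf{G} = \int_0^1 t\, \mathbf{F}(t\mathbf{r}) \times \mathbf{r}\, dt\), and apply the identity \(\nabla \times (\mathbf{P} \times \mathbf{Q}) = \mathbf{P}(\nabla \cdot \mathbf{Q}) - \mathbf{Q}(\nabla \cdot \mathbf{P}) + (\mathbf{Q} \cdot \nabla)\mathbf{P} - (\mathbf{P} \cdot \nabla)\mathbf{Q}\) with \(\mathbf{P} = \mathbf{F}(t\mathbf{r})\) and \(\mathbf{Q} = \mathbf{r}\). Using \(\nabla \cdot \mathbf{r} = 3\), \(\nabla \cdot \mathbf{F}(t\mathbf{r}) = t\, (\nabla \cdot \mathbf{F})(t\mathbf{r}) = 0\), \((\mathbf{F}(t\mathbf{r}) \cdot \nabla)\,\mathbf{r} = \mathbf{F}(t\mathbf{r})\), and \((\mathbf{r} \cdot \nabla)\,\mathbf{F}(t\mathbf{r}) = t \left[(\mathbf{r} \cdot \nabla)\mathbf{F}\right](t\mathbf{r})\), we get:

\[\nabla \times \left(\mathbf{F}(t\mathbf{r}) \times \mathbf{r}\right) = 2\,\mathbf{F}(t\mathbf{r}) + t \left[(\mathbf{r} \cdot \nabla)\mathbf{F}\right](t\mathbf{r})\]

so that

\[\nabla \times \mathbf{G} = \int_0^1 \left( 2t\, \mathbf{F}(t\mathbf{r}) + t^2 \left[(\mathbf{r} \cdot \nabla)\mathbf{F}\right](t\mathbf{r}) \right) dt = \int_0^1 \frac{d}{dt}\left[ t^2\, \mathbf{F}(t\mathbf{r}) \right] dt = \mathbf{F}(\mathbf{r})\]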

## Conclusions

While solving for scalar and vector potentials isn’t easy, Poincare’s lemma is a good trick that can make finding vector potentials much more straightforward. Yes, you may run into issues with star-domains and integrability, but for simple enough functions this isn’t much of a concern, and solving this integral is usually much quicker than Helmholtz’s general form.

This method can also work in some special cases where \(\nabla \cdot \mathbf{F} \ne 0\), but I’ll leave that as an exercise to the reader. At the very least, Poincare’s lemma always gave me a starting point to work with. Even if it didn’t work, it often pointed me in the direction of what the vector potential should look like. Adding gradients of whatever scalar potential you like gives you some wiggle room to simplify equations. From there, you can play around with functional forms to get at the right answer.
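That wiggle room is easy to confirm symbolically: adding the gradient of any scalar field to \(\mathbf{A}\) leaves its curl unchanged. A minimal `sympy` sketch, using the vector potential from the earlier example and a scalar field of my own choosing:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def curl(A):
    return sp.Matrix([
        sp.diff(A[2], y) - sp.diff(A[1], z),
        sp.diff(A[0], z) - sp.diff(A[2], x),
        sp.diff(A[1], x) - sp.diff(A[0], y),
    ])

# Vector potential from the worked example, plus the gradient of an arbitrary f
A = sp.Rational(1, 3) * sp.Matrix([x*z - y**2, x*y - z**2, y*z - x**2])
f = sp.sin(x) * y + z**3
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

# curl(A + grad f) == curl(A): A is only unique up to a conservative field
assert (curl(A + grad_f) - curl(A)).applyfunc(sp.simplify) == sp.zeros(3, 1)
```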