On these pages I am investigating the evolution of interactions in the MacArthur/Levins resource competition model.

This page explains the adaptive dynamics concepts and questions I'm using, and includes Sage source code for the models.

Child pages use this theory and code to investigate specific cases of the models.

## MacArthur-Levins population dynamics

$\frac{dX_{i}}{dt}=b_{i}X_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}R_{\ell}-m_{i})$
$\frac{dR_{\ell}}{dt}=r_{\ell}(K_{\ell}-R_{\ell})-\sum _{i}c_{{i\ell}}X_{i}$

where $X_{i}$ is the density of population $i$, $R_{\ell}$ is the abundance of resource $\ell$ and the parameters are $b_{i}$, an intrinsic population growth rate, $m_{i}$, mortality rate, $c_{{i\ell}}$, the rate at which population $i$ captures resource $\ell$, $w_{\ell}$, the amount a unit of resource $\ell$ contributes to population growth, $r_{\ell}$, the resupply rate of resource $\ell$, and $K_{\ell}$ its maximum possible abundance.
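Before making any simplifying assumptions, it may help to see the model run. Here is a minimal plain-Python sketch (not the Sage code referred to above; all parameter values are illustrative assumptions of mine) that integrates the two equations with a crude Euler step for two populations and two resources:

```python
# Euler integration of the MacArthur-Levins equations above.
# All parameter values are illustrative assumptions, not from the text.

def maclev_step(X, R, b, m, c, w, r, K, dt):
    """Advance (X, R) by one Euler step of dX_i/dt and dR_l/dt."""
    dX = [b[i] * X[i] * (sum(c[i][l] * w[l] * R[l] for l in range(len(R))) - m[i])
          for i in range(len(X))]
    dR = [r[l] * (K[l] - R[l]) - sum(c[i][l] * X[i] for i in range(len(X)))
          for l in range(len(R))]
    return ([x + dt * dx for x, dx in zip(X, dX)],
            [y + dt * dy for y, dy in zip(R, dR)])

# Illustrative parameters: two symmetric populations, two resources.
b = [1.0, 1.0]; m = [0.2, 0.2]; w = [1.0, 1.0]
r = [5.0, 5.0]; K = [2.0, 2.0]
c = [[0.5, 0.3], [0.3, 0.5]]   # c[i][l]: rate at which pop i captures resource l

X, R = [0.1, 0.1], list(K)
for _ in range(20000):          # integrate to t = 20
    X, R = maclev_step(X, R, b, m, c, w, r, K, dt=0.001)
# The populations grow and draw the resources down to a joint
# equilibrium below K.
```

With these symmetric parameters the two populations settle at equal densities, and the resources equilibrate well below their maxima $K_{\ell}$.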

This is a fine model on its own, but I am interested in Lotka-Volterra models, which are models in which there is a simple number $a_{{ij}}$ describing how populations $i$ and $j$ interact with each other. Using a Lotka-Volterra model will let us look at how these interaction terms $a_{{ij}}$ change, telling us how evolution drives these populations to become more competitive, antagonistic, or mutualistic.

In the above model the populations $i$ interact indirectly by taking resources from each other, but we can make the interactions direct by making simplifying assumptions, and this is what MacArthur and Levins did.

We do this by assuming that the resources come to equilibrium very quickly compared to the populations. Under this assumption, we can hold the population sizes $X_{i}$ fixed and solve the second equation above for $R_{\ell}$ by setting $\frac{dR_{\ell}}{dt}=0$:

$\hat{R}_{\ell}=K_{\ell}-\frac{1}{r_{\ell}}\sum _{i}c_{{i\ell}}X_{i}$.

Then we simply use that value of $R_{\ell}$ in the first equation, and we have a system of population sizes only:

$\frac{dX_{i}}{dt}=b_{i}X_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}(K_{\ell}-\frac{1}{r_{\ell}}\sum _{j}c_{{j\ell}}X_{j})-m_{i})$.

We can rearrange the terms of this to have the standard Lotka-Volterra form:

$\frac{dX_{i}}{dt}=k_{i}X_{i}+\sum _{j}a_{{ij}}X_{i}X_{j}$,

where

$k_{i}=b_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}K_{\ell}-m_{i})$ (ordinarily I'd call this $r_{i}$, but the name is already in use),
$a_{{ij}}=-b_{i}\sum _{\ell}\frac{c_{{i\ell}}c_{{j\ell}}w_{\ell}}{r_{\ell}}$.
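As a sanity check on the rearrangement, here is a short plain-Python snippet (parameter values are arbitrary assumptions, for illustration only) confirming that the Lotka-Volterra form with these $k_{i}$ and $a_{{ij}}$ agrees with the reduced equation at an arbitrary test point:

```python
# Check that k_i X_i + sum_j a_ij X_i X_j equals
# b_i X_i (sum_l c_il w_l (K_l - (1/r_l) sum_j c_jl X_j) - m_i).
# Parameter values are arbitrary, for illustration only.

b = [1.0, 0.8]; m = [0.2, 0.3]; w = [1.0, 2.0]
r = [5.0, 4.0]; K = [2.0, 1.5]
c = [[0.5, 0.3], [0.3, 0.5]]
X = [0.7, 1.1]                 # an arbitrary test point
n, nres = len(X), len(K)

k = [b[i] * (sum(c[i][l] * w[l] * K[l] for l in range(nres)) - m[i])
     for i in range(n)]
a = [[-b[i] * sum(c[i][l] * c[j][l] * w[l] / r[l] for l in range(nres))
      for j in range(n)] for i in range(n)]

lv = [k[i] * X[i] + sum(a[i][j] * X[i] * X[j] for j in range(n))
      for i in range(n)]
reduced = [b[i] * X[i] * (sum(c[i][l] * w[l] *
                              (K[l] - sum(c[j][l] * X[j] for j in range(n)) / r[l])
                              for l in range(nres)) - m[i])
           for i in range(n)]
# lv and reduced agree term by term, up to floating-point rounding.
```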

I'll be interested in how the interaction terms $a_{{ij}}$ change as the populations coevolve, because this expresses whether competition becomes stronger or weaker. When two coevolving populations compete for resources, we expect them to differentiate from each other and lessen the competition, but I'd like to look at both that and other cases, and do some detailed analysis of when competition lessens and when it intensifies.

## Adaptive dynamics in the MacArthur-Levins model

And now here's where we look at how the populations in the Mac-Lev model evolve. I won't review the details of how adaptive dynamics is done, but in summary: suppose that certain aspects of the population dynamics depend on the characteristics of the population, and those characteristics are able to mutate. Then over time, mutants will arise, and if they are better able to thrive in the environment they encounter than their predecessors, they will gradually replace them, and the characteristics of the population will slowly change.

In this model, it's $b_{i}$, $c_{{i\ell}}$, and $m_{i}$ that have to do with the populations, so we introduce a phenotype variable $u_{i}$ representing the characteristics of the population and suppose that $b_{i}$, $c_{{i\ell}}$ and $m_{i}$ are determined by the value of $u$. Then we can find out how $u$ changes in time in response to the conditions created by the population dynamics, and also how the other values such as $X_{i}$, $R_{\ell}$, $a_{{ij}}$, $k_{i}$, change as $u$ evolves.

The change in $u$, to make a long story short, is driven by how the population growth rate of a rare mutant varies with the mutant's phenotype $u$. That is, there's an "invasion speed" $\mathcal{I}$ that is closely related to the population growth rate $\frac{dX_{i}}{dt}$, and the change in $u$ (that is, $\frac{du}{dt}$) is in proportion to $\frac{\partial\mathcal{I}}{\partial u}$.

To be precise:

$\mathcal{I}(u_{i}|E)=\lim _{{X_{i}\to 0}}\frac{1}{X_{i}}\frac{dX_{i}}{dt}$

defines the invasion speed (where $E$ is the environment that an individual population $i$ experiences, including the rest of population $i$ and all other populations), and then

$\frac{du_{i}}{dt}=\left.\gamma\hat{X}_{i}\frac{\partial\mathcal{I}(v|u_{1},\ldots,u_{n})}{\partial v}\right|_{{v=u_{i}}}$,

where $\gamma$ is a coefficient accounting for the frequency and size of available mutations. In simple cases, it's a constant, while when available mutations are not quite so simple and uniform, it may not be (as we will see).
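To make this concrete, here is a sketch of the whole adaptive-dynamics loop for the simplest case: one population, one resource, and a scalar phenotype $u$. The trade-off functions $c(u)$ and $m(u)$ are assumptions of mine for illustration, not part of the model above; the invasion speed is differentiated numerically and $u$ is stepped by Euler's method:

```python
# Adaptive dynamics of a scalar phenotype u in the one-population,
# one-resource Mac-Lev model.  The trade-off c(u), m(u) is an
# illustrative assumption: capture improves linearly with u while
# mortality rises quadratically.

b, w, r, K, gamma = 1.0, 1.0, 1.0, 2.0, 0.1

def c(u): return u                   # capture rate (assumed)
def m(u): return 0.1 + 0.5 * u**2    # mortality (assumed)

def X_hat(u):
    """Resident equilibrium of the reduced LV system, k + a*X = 0."""
    k = b * (c(u) * w * K - m(u))
    a = -b * c(u)**2 * w / r
    return -k / a

def invasion(v, u):
    """I(v|E): per-capita growth of a rare v-mutant among u-residents."""
    R = K - c(u) * X_hat(u) / r      # resource level set by the resident
    return b * (c(v) * w * R - m(v))

u, h, dt = 0.5, 1e-6, 0.05
for _ in range(2000):
    grad = (invasion(u + h, u) - invasion(u - h, u)) / (2 * h)
    u += dt * gamma * X_hat(u) * grad   # du/dt = gamma * X_hat * dI/dv
```

With this particular trade-off the selection gradient vanishes where $m(u)=u^{2}$, that is at $u=\sqrt{0.2}\approx 0.447$, and the loop settles there; a resident at that point has $\mathcal{I}(u|u)=0$, as a resident at its own equilibrium must.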

## Adaptive geometry of ecological parameters

Now let's go deeper into what happens in that evolutionary change. We're considering several different ways to describe the evolving population, and now I want to give them clearer notation:

- $\mathbf{u}_{i}$, the phenotype of population $i$;
- $\mathbf{p}(\mathbf{u}_{i})$, the vector of ecological parameters $(b_{i},m_{i},c_{{i1}},\ldots,c_{{im}})$ determined by the phenotype;
- $\mathbf{A}_{i}$, the vector of interaction terms $(a_{{i1}},\ldots,a_{{in}},k_{i})$ experienced by population $i$.

These are vectors that are functions of one another. Using a suitable vector notation in our calculus, we can write the adaptive dynamics of $\mathbf{u}$ directly, ignoring the intermediate variables $\mathbf{p}$ and $\mathbf{A}$ for the moment:

$\frac{d\mathbf{u}_{i}}{dt}=\left.\gamma\hat{X}_{i}\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial\mathbf{v}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}=\gamma\hat{X}_{i}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$.

The $T$ sign is for vector or matrix transposition: with the notation I'm using, $\mathbf{u}_{i}$ is a column vector - a point in a vector space of values - and $\frac{\partial\mathcal{I}}{\partial\mathbf{u}_{i}}$ is a row vector - a thing that operates on points. Since $\frac{d\mathbf{u}_{i}}{dt}$ is a column, we need to transpose the derivative of $\mathcal{I}$ to make it match. The $\partial _{1}$ sign is for the partial derivative with respect to the first argument.

Now we can use the chain rule to see how $\mathbf{p}$ changes:

$\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{d\mathbf{u}_{i}}{dt}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}(\partial _{1}\mathcal{I}(\mathbf{p}(\mathbf{u}_{i}))\frac{d\mathbf{p}}{d\mathbf{u}_{i}})^{T}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}^{T}\partial _{1}\mathcal{I}(\mathbf{p}(\mathbf{u}_{i}))^{T}$.

Here we see the usefulness of this row-and-column-vector notation, because it lets us use the chain rule in a natural way: when we write $\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{d\mathbf{u}_{i}}{dt}$, we're multiplying a matrix by a column vector to get another column vector, in just the way we want.

I like to write $S(\mathbf{u})=\partial _{1}\mathcal{I}(\mathbf{u})^{T}$ for the "selection gradient" of $\mathbf{u}$ -- the direction of increasing fitness in the space of possible $\mathbf{u}$ values. I'm very interested in the role of this selection gradient in various spaces, as we'll see. Notice that in the derivation above I switched from $S(\mathbf{u}_{i})=\left(\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial\mathbf{v}}\right)^{T}_{{\mathbf{v}=\mathbf{u}_{i}}}$ to

$S(\mathbf{p}(\mathbf{u}_{i}))=\left(\begin{array}{c}\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial b(\mathbf{v})}\\ \vdots\\ \frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial c_{{im}}(\mathbf{v})}\end{array}\right)_{{\mathbf{v}=\mathbf{u}_{i}}}$.

These different selection gradients are very different objects: they're vectors expressing the direction of selection in different spaces -- here, the space of $\mathbf{u}$ values vs. the space of $\mathbf{p}$ values. Soon enough we'll also be looking at the selection gradient in $\mathbf{A}$ space.

Using the above chain rule manipulations, we can write

$\frac{d\mathbf{u}_{i}}{dt}=\gamma\hat{X}_{i}S(\mathbf{u}_{i})$
$\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}^{T}S(\mathbf{p})$.

Like $\mathbf{u}$ or any other vector, $\mathbf{p}$ would evolve in the direction of $S(\mathbf{p})$ if it were to mutate in all directions equally. However, it doesn't do that: since $\mathbf{p}$ is a function of $\mathbf{u}$, it only mutates in directions given by mutations in $\mathbf{u}$. The matrix multiplying $S(\mathbf{p})$ in the dynamics of $\mathbf{p}$ acts as a projection matrix, restricting the motion of $\mathbf{p}$ to directions allowed by the mapping between $\mathbf{u}$ and $\mathbf{p}$. (To be precise, $\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}^{T}S(\mathbf{p})$ is not exactly the orthogonal projection of $S(\mathbf{p})$ onto the subspace parametrized by $\mathbf{u}$ unless the columns of $\frac{\partial\mathbf{p}}{\partial\mathbf{u}}$ are orthonormal; otherwise the projection is distorted by a positive-definite transformation. But it definitely transforms the vector into a direction allowed by the restriction to points parametrized by $\mathbf{u}$.)
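Here is a tiny numeric illustration of that point, with an assumed toy map $\mathbf{p}(u)=(u,u^{2})$ (one-dimensional phenotype, two-dimensional parameter space), showing that $\frac{\partial\mathbf{p}}{\partial u}\frac{\partial\mathbf{p}}{\partial u}^{T}S$ always points along the tangent direction of the curve of attainable $\mathbf{p}$ values, whatever $S$ is:

```python
# J J^T S lies in the range of J = dp/du: p can only move along
# directions reachable by mutations in u.  The map p(u) = (u, u^2)
# is an assumed toy example.

def J(u):                          # dp/du, a 2x1 column, for p(u) = (u, u**2)
    return [1.0, 2.0 * u]

u = 0.7
S = [0.3, -1.1]                    # an arbitrary selection gradient in p-space
Ju = J(u)
JTS = sum(Ju[k] * S[k] for k in range(2))   # the scalar J^T S
JJtS = [Ju[i] * JTS for i in range(2)]      # J (J^T S)

# JJtS is a scalar multiple of the column J, hence tangent to the curve p(u):
cross = JJtS[0] * Ju[1] - JJtS[1] * Ju[0]
```

Note also that $S^{T}JJ^{T}S=(J^{T}S)^{2}\geq 0$: however the projection distorts $S(\mathbf{p})$, the constrained motion never points against the selection gradient.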

The motion of $\mathbf{A}$ works the same way, except that it is influenced by all the populations in the system, not just one, because two phenotypes are involved in each $a_{{ij}}$ value. For that reason, the dynamics of $\mathbf{A}$ has some extra terms...

But before we go into that, let's look at the selection gradient and the dynamics of $\mathbf{p}(\mathbf{u})$.

## Adaptive geometry of interaction terms

Now, let's get to how $a_{{ij}}$ and $k_{i}$ "would like to evolve" and why they move in the directions they do.

Expanding out the dynamics of the $\mathbf{A}$ vector is more involved than doing it for the $\mathbf{p}$ vector, because $\mathbf{A}$ depends on all the different phenotypes $\mathbf{u}_{j}$, not just the one we're concerned with. Also, we have to keep careful track of $\mathbf{u}_{i}$, because it appears in $\mathbf{A}$ in two different ways: on the left-hand side of each $a_{{ij}}=a(\mathbf{u}_{i},\mathbf{u}_{j})$ term, which describes how population $i$ (the "patient") is affected by an encounter with population $j$ (the "agent"); and also on the right-hand side of the $a_{{ii}}=a(\mathbf{u}_{i},\mathbf{u}_{i})$ term, as the "agent" in an encounter between $i$ and $i$. This distinction is important because every encounter has two effects, the effect on oneself and the effect on the other, and we'll see that selection treats these two effects very differently.

So to be clear, we'll work with the notation

$\mathbf{A}_{i}=\left.\mathbf{A}(\mathbf{v},\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$ $=\left(\begin{array}{c}a(\mathbf{v},\mathbf{u}_{1})\\ \vdots\\ a(\mathbf{v},\mathbf{u}_{n})\\ k(\mathbf{v})\end{array}\right)_{{\mathbf{v}=\mathbf{u}_{i}}}$,

to distinguish the two different roles of $\mathbf{u}_{i}$ (suppressing the intermediate variable $\mathbf{p}$), and we'll refer to the two different partial derivatives that relate to $\mathbf{u}_{i}$ as $\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})=\left.\frac{\partial\mathbf{A}_{i}}{\partial\mathbf{v}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$ and $\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{i})=\left.\frac{\partial\mathbf{A}_{i}}{\partial\mathbf{u}_{i}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$.
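A quick way to see what $\partial _{1}$ and $\partial _{2}$ mean in practice: for the diagonal entry $a_{{ii}}=a(\mathbf{v},\mathbf{u}_{i})|_{{\mathbf{v}=\mathbf{u}_{i}}}$, the total sensitivity to $\mathbf{u}_{i}$ is the sum of the patient-role and agent-role sensitivities. Here's a finite-difference check on an assumed toy interaction function (scalar phenotypes, quadratic form chosen arbitrarily):

```python
# For a_ii = a(v, u)|_{v=u}, the total derivative in u splits into the
# "patient" part (d1, first slot) plus the "agent" part (d2, second slot).
# The interaction function a(v, u) is an arbitrary toy assumption.

def a(v, u):
    return -(v * u + 0.5 * v**2)   # effect on a v-patient of meeting a u-agent

u, h = 0.9, 1e-6
d_total = (a(u + h, u + h) - a(u - h, u - h)) / (2 * h)   # both roles vary
d1 = (a(u + h, u) - a(u - h, u)) / (2 * h)                # patient role only
d2 = (a(u, u + h) - a(u, u - h)) / (2 * h)                # agent role only
# d_total == d1 + d2: the chain rule applied to the two appearances of u
```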

So first of all, if we use $\mathbf{A}_{i}$ as an intermediate variable, the dynamics of $\mathbf{u}_{i}$ is

$\frac{d\mathbf{u}_{i}}{dt}=\gamma\hat{X}_{i}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$
$=\gamma\hat{X}_{i}(\partial _{1}\mathcal{I}(\mathbf{A}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i}))^{T}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{i})^{T}$.

Then

$\frac{d\mathbf{A}_{i}}{dt}=\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\frac{d\mathbf{u}_{i}}{dt}+\sum _{{j=1}}^{n}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\frac{d\mathbf{u}_{j}}{dt}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{i})^{T}+\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{j})^{T}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}S(\mathbf{A}_{i})+\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}S(\mathbf{A}_{j})$.

This expression is made up of two somewhat complex terms. The first term, the $S(\mathbf{A}_{i})$ term, expresses the change in the interactions experienced by population $i$ due to change in population $i$ in its patient role. I refer to this as the direct effect of selection on population $i$. The second term, the sum of $S(\mathbf{A}_{j})$ vectors, expresses the change in population $i$'s interactions due to change in all the different agents it encounters, including population $i$ itself in its agent role. I refer to this as an indirect effect of selection on the various populations.

What would happen if there were no constraints on $\mathbf{A}_{i}$, that is, if all its terms could mutate independently rather than being tied to the $\mathbf{u}$ variables? In that case, we would simply have

$\frac{d\mathbf{A}_{i}}{dt}=\gamma\hat{X}_{i}S(\mathbf{A}_{i})$.

In the constrained case, as with $\mathbf{p}$, the motion due to $S(\mathbf{A}_{i})$ is modified by a transformation matrix. Here, though, there's also another term dealing with the effect of changing $\mathbf{u}$ values as the second argument to $a(\mathbf{u}_{i},\mathbf{u}_{j})$. Let's expand that out one step further:

${\color{blue}\frac{d\mathbf{A}_{i}}{dt}}={\color{green}\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}S(\mathbf{A}_{i})}+{\color{violet}\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}S(\mathbf{A}_{j})}$
$={\color{green}D(\mathbf{u}_{i})}+{\color{violet}I(\mathbf{u}_{i}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}$.

Here I'm using colors to distinguish three different vector quantities, which I'll soon plot in the same colors: the total motion $\frac{d\mathbf{A}_{i}}{dt}$ in blue, the direct effect $D(\mathbf{u}_{i})$ in green, and the indirect effect $I(\mathbf{u}_{i}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})$ in violet.

## Supplementary materials

Here are the Sage classes that do the work for the Mac-Lev models above. They use generalized Sage machinery that's stored at SageDynamics.

Now here's the MacArthur-Levins resource competition model.