Muller's Ratchet

Image credit: Physics World

Sham, Dean, and I recently found ourselves on the Cape Cod Rail Trail furiously debating evolution (it’s not clear we’re capable of not furiously debating anything - but let’s move on). They started a digression on how hard it is for evolution to hold onto a gradient - deriving the time until a new deleterious mutation fixates (spreads to everyone in the population). I hadn’t heard about this, so I resolved to understand it! It seems deep - how can the stochastic nature of mutations prevent a gradient signal from being followed? Sham quipped “if you want to say that life shouldn’t really exist, then I’m fine with that,” and it sort of stuck with me.

Ok, fine - so I needed to read about this phenomenon called “Muller’s Ratchet”. To my surprise, Google had no great resources. When does that ever happen? So here it was, my chance to finally make a mark on the world. To be the response that Google gives someone. A scientist’s only dream. So let’s get into it:

The Moran Process

Consider a finite population of size $N$ with two species, $A$ and $B$, and let’s say there are $i$ individuals of species $A$. That means there are $N-i$ individuals of species $B$.

If we keep the population fixed at $N$, at any time $t$, only a few things can happen:

  • An $A$ individual dies and a $B$ individual is born, which transitions the number of $A$ individuals from (say) $i$ to $i-1$
  • A $B$ individual dies and an $A$ individual is born, which transitions the number of $A$ individuals from $i$ to $i+1$
  • Finally, an $A$ individual dies and another $A$ individual is born (or a $B$ dies and a $B$ is born), which keeps the number of $A$ individuals at $i$

Denote the probability of transitioning (for the $A$’s) from $i$ to $i+1$ by $u_i$ and the probability of transitioning from $i$ to $i-1$ individuals by $d_i$. Therefore, we know that $u_0 = 0$ and $d_N = 0$.

This is another way of saying that the transition matrix for this process is tri-diagonal.
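
If you want to play with this, here's a minimal simulation sketch (the function names and the particular birth-death microdynamics are my own choices, with a relative fitness $r$ for $A$ thrown in for later): each step, one individual reproduces with probability proportional to its fitness, and one individual, picked uniformly at random, dies.

```python
import random

def moran_step(i, N, r=1.0):
    """One step of the Moran process with i copies of A out of N.

    One individual reproduces (A is chosen with probability proportional
    to its relative fitness r) and one individual, chosen uniformly at
    random, dies. Returns the new number of A individuals.
    """
    birth_is_A = random.random() < (r * i) / (r * i + (N - i))
    death_is_A = random.random() < i / N
    return i + int(birth_is_A) - int(death_is_A)

def run_to_fixation(i, N, r=1.0):
    """Iterate until A fixes (returns True) or goes extinct (returns False)."""
    while 0 < i < N:
        i = moran_step(i, N, r)
    return i == N

# e.g. estimate the fixation probability of a single neutral mutant when N = 20
est = sum(run_to_fixation(1, 20) for _ in range(10_000)) / 10_000
print(est)   # should hover around 1/N = 0.05
```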

Understanding the probability of fixation in a finite population

Let’s create some notation you might not have seen before. We’ll call $p_i$ the probability of reaching state $N$ starting from $i$. With that, we get the following recurrence relation:

$$\begin{align} p_0 &= 0 \\\ p_N &= 1 \\\ p_i &= d_i p_{i-1} + u_i p_{i + 1} + (1-u_i-d_i) p_i \end{align}$$

Thinking about what these recurrence relations are saying makes them pretty easy to understand: the first one just says that if species $A$ drops to $0$ individuals, it can never reach state $N$. And that makes sense! If there are no $A$ individuals left to reproduce, they certainly won’t make it to fixation. The second one says the converse: if $A$ already occupies all $N$ spots, there are no $B$ individuals left to reproduce, so $A$ has already reached fixation.

Okay, so the third one is a little hairier. The probability of being absorbed at state $N$ from $i$ is the sum of three terms: (i) the probability of going from $i$ to $i-1$ times the probability of going from $i-1$ to $N$, (ii) the probability of going from $i$ to $i+1$ times the probability of going from $i+1$ to $N$, and (iii) the probability of staying at $i$ and then making it all the way to $N$.
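
One way to sanity-check this recurrence is to just solve it as a linear system for some concrete $u_i$ and $d_i$. Here's a sketch (my own illustration), assuming the neutral Moran rates $u_i = d_i = \frac{i(N-i)}{N^2}$; it recovers the classic neutral answer $p_i = i/N$.

```python
import numpy as np

def fixation_probs(u, d):
    """Solve p_i = d_i p_{i-1} + u_i p_{i+1} + (1 - u_i - d_i) p_i
    with boundary conditions p_0 = 0 and p_N = 1."""
    N = len(u) - 1
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0          # p_0 = 0
    A[N, N] = 1.0          # p_N = 1
    b[N] = 1.0
    for i in range(1, N):  # interior: d_i p_{i-1} - (u_i + d_i) p_i + u_i p_{i+1} = 0
        A[i, i - 1] = d[i]
        A[i, i] = -(u[i] + d[i])
        A[i, i + 1] = u[i]
    return np.linalg.solve(A, b)

N = 10
i = np.arange(N + 1)
u = d = i * (N - i) / N**2            # assumed neutral Moran rates
p = fixation_probs(u, d)
print(np.allclose(p, i / N))          # True: neutral fixation probability is i/N
```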

This recurrence relation is pretty useful! In fact, we can kind of see that $p_i$ and $p_{i+1}$ are going to have a lot of terms in common that will cancel each other out. So let’s define a new quantity!

$$ q_i = p_i - p_{i-1} $$

This difference quantity should help us make a lot of the math easier! Especially because we get these neat formulas:

$$ \sum^N_{i=1} q_i = p_N - p_0 = 1 $$

and

$$ \sum^i_{j=1} q_j = p_i - p_0 = p_i $$
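
Both identities are just telescoping sums - writing the second one out term by term makes that obvious:

$$ \sum^i_{j=1} q_j = (p_1 - p_0) + (p_2 - p_1) + \cdots + (p_i - p_{i-1}) = p_i - p_0 $$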

Okay, awesome. So we have $q_1 = p_1$, but what about $q_2$? $q_3$? (gasp) $q_i$?

$$\begin{align} p_i &= d_i p_{i-1} + u_i p_{i + 1} + (1-u_i-d_i) p_i \\\ \implies p_i &= d_i p_{i-1} + p_i - u_i p_i - d_i p_i + u_i p_{i+1} \\\ \implies 0 &= -d_i q_i + u_i q_{i+1} \end{align}$$

If we define the quantity $\mu_i = \frac{d_i}{u_i}$, then we’d have:

$$ q_{i+1} = \mu_i q_i $$

So we get $q_2 = \mu_1 q_1$ and $q_3 = \mu_2 \mu_1 q_1$ and the general formula:

$$ q_k = \biggl( \prod^{k-1}_{j=1} \mu_j \biggr) p_1 $$

Since we know that $\sum^N_{i=1} q_i = 1$, we can use that to get:

$$\begin{align} \sum^N_{i=1} q_i = 1 &= p_1 + \sum^{N-1}_{j=1} \biggl( \prod^j_{k=1} \mu_k \biggr) p_1 \\\ \implies p_1 &= \frac{1}{1+ \sum^{N-1}_{j=1} \bigl( \prod^j_{k=1} \mu_k \bigr) } \\\ \end{align} $$

Finally, we need to relate $p_i$ to $p_1$. We know that:

$$\begin{align} \sum^i_{j=1} q_j = p_i = p_1 + \sum^{i-1}_{j=1} \biggl( \prod^j_{k=1} \mu_k \biggr) p_1 \\\ \implies p_i = \frac{1 + \sum^{i-1}_{j=1} \bigl( \prod^j_{k=1} \mu_k \bigr) }{1+ \sum^{N-1}_{j=1} \bigl( \prod^j_{k=1} \mu_k \bigr) } \\\ \end{align} $$
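
As a quick sketch of this formula in code (the names are mine, and the neutral case $\mu_k = 1$ is just an assumed example), the sums of products are one `cumprod` away:

```python
import numpy as np

def p_from_mu(mu, i):
    """p_i from the ratios mu_k = d_k / u_k, k = 1..N-1 (mu[0] is mu_1):
    p_i = (1 + sum_{j<i} prod_{k<=j} mu_k) / (1 + sum_{j<N} prod_{k<=j} mu_k)."""
    prods = np.cumprod(mu)               # prod_{k=1}^{j} mu_k for j = 1..N-1
    return (1.0 + prods[: i - 1].sum()) / (1.0 + prods.sum())

N = 10
print(p_from_mu(np.ones(N - 1), 3))      # neutral (mu_k = 1): p_3 = 3/N = 0.3
```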

Drift with constant selection

So what happens if we have a constant selection pressure (say $r$) for $A$? How does the probability of fixation change?

We’ll follow the convention that $r > 1$ corresponds to a selective advantage and $r < 1$ to a selective disadvantage.

With a constant fitness $r$ for $A$, the ratio $\mu_i = \frac{d_i}{u_i} = \frac{1}{r}$ is the same at every state $i$. Plugging that into the equation above, we’d get:

$$\begin{align} p_i &= \frac{1 + \sum^{i-1}_{j=1} \prod^j_{k=1} \frac{1}{r} }{1+ \sum^{N-1}_{j=1} \prod^j_{k=1} \frac{1}{r} } \\\ \implies p_i &= \frac{1 + \sum^{i-1}_{j=1} \frac{1}{r^j} }{1+ \sum^{N-1}_{j=1} \frac{1}{r^j} } \\\ \end{align}$$

Each of these sums is a finite geometric series, $1 + \sum^{i-1}_{j=1} \frac{1}{r^j} = \frac{1-\frac{1}{r^i}}{1-\frac{1}{r}}$, and the $1-\frac{1}{r}$ factors in the numerator and denominator cancel. So, we can write:

$$ p_i = \frac{1-\frac{1}{r^i}}{1-\frac{1}{r^N}} $$
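
Here's a small sketch of this closed form (the function name is my own). Note that the formula is $0/0$ at exactly $r = 1$, but its limit there is the neutral $i/N$:

```python
def p_fix(i, N, r):
    """Fixation probability of A from i copies with constant fitness r."""
    if r == 1.0:                          # the formula is 0/0 at r = 1 ...
        return i / N                      # ... but its limit is the neutral i/N
    return (1 - r ** -i) / (1 - r ** -N)

print(p_fix(1, 100, 1.1))                 # single advantageous mutant, ~0.091
print(p_fix(50, 100, 1.0))                # neutral, exactly 0.5
```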

Asymptotics

We can read off the two following limits of the single-mutant fixation probability $\rho(r, N) := p_1$ (the first for an advantageous mutant, $r > 1$): $$ \lim_{N \rightarrow \infty} \rho(r, N) = 1-\frac{1}{r} \quad \quad \lim_{r \rightarrow 1} \rho(r, N) = \frac{1}{N} $$
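
A quick numerical check of both limits (just an illustrative sketch):

```python
def rho(r, N):
    """Fixation probability of a single mutant with relative fitness r."""
    return (1 - 1 / r) / (1 - 1 / r ** N)

# N -> infinity (with r > 1): converges to 1 - 1/r = 0.1666...
print([round(rho(1.2, N), 4) for N in (10, 100, 1000)])
# r -> 1: converges to 1/N = 0.01
print([round(rho(r, 100), 4) for r in (1.1, 1.01, 1.001)])
```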

Muller’s Ratchet

So far, we haven’t really talked about mutations or any sort of ratchet. So, let’s introduce the idea. Suppose a single $A$ mutant shows up in the population with a selective disadvantage ($r < 1$). What would be its chance of fixation, i.e. the $\rho(r, N)$ from above?

$$ \rho(r, N) := p_1 = \frac{1-\frac{1}{r}}{1-\frac{1}{r^N}} $$
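
To get a feel for the numbers (the choice $r = 0.99$ is purely illustrative): a slightly deleterious mutant has a real shot at fixing when $N$ is small, and essentially none when $N$ is large - which is part of why the ratchet bites hardest in small populations.

```python
def rho(r, N):
    """Fixation probability of a single mutant with relative fitness r."""
    return (1 - 1 / r) / (1 - 1 / r ** N)

# Slightly deleterious mutant (r = 0.99) versus the neutral baseline 1/N.
for N in (20, 100, 1000):
    print(N, rho(0.99, N), 1 / N)
```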

Dhruv Madeka
Senior Staff Research Engineer, Google

I’m a Senior Staff Research Engineer working on LLMs and GenAI.