Archive

Posts Tagged ‘probability’

Probability Puzzles

February 16th, 2010

Today at dinner with Thanh, Hu and Joel I heard about a paradox that was new to me. Probability is full of cute problems that challenge our understanding of the basic concepts. The most famous of them is the Monty Hall Problem, which asks:

You are on a TV game show and there are {3} doors – one of them contains a prize, say a car, and the other two doors contain things you don’t care about, say goats. You choose a door. Then the TV host, who knows where the prize is, opens one door you haven’t chosen and that he knows has a goat. Then he asks if you want to stick with the door you have chosen or change to the other door. What should you do?

You’ve probably already come across this question at some point in your life, and the answer is that changing doors doubles your probability of getting the prize. There are several ways of convincing your intuition:

  • Do the math: when you chose the door, there were three options, so the prize is behind the door you chose with probability {\frac{1}{3}} and behind the other door with probability {\frac{2}{3}} (note that the presenter can always open some door with a goat, so conditioning on that event doesn’t give you any new information).
  • Do the actual experiment (computationally), as done here, or as in the sketch right after this list. One can always ask a friend to help, get some goats and perform the actual experiment.
  • To convince yourself that “it doesn’t matter” is not correct, think of {100} doors. You choose one, the TV host opens {98} of them, and then he asks if you want to change or stick with your first choice. Wouldn’t you change?
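Here is a minimal sketch of the computational experiment in Python (the trial count is an arbitrary choice of mine):

```python
import random

def monty_hall(trials=100_000):
    """Estimate win probabilities for sticking vs. switching."""
    stick, switch = 0, 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the chosen one nor the prize
        # (deterministic tie-breaking does not affect the estimates).
        opened = next(d for d in range(3) if d != choice and d != prize)
        other = next(d for d in range(3) if d != choice and d != opened)
        stick += (choice == prize)
        switch += (other == prize)
    return stick / trials, switch / trials

print(monty_hall())  # roughly (0.333, 0.667)
```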

I’ve seen TV shows where this happened, and I acknowledge that other things may be involved: there might be behavioral and psychological issues associated with the Monty Hall problem – possibly the kind that would interest Dan Ariely, whose book I began reading today and which looks quite fun. But the problem they told me about at dinner today was a different one: the envelope problem:

There are two envelopes and you are told that one of them contains twice the amount contained in the other. You choose one of the envelopes at random and open it: it contains {100} bucks. Now, you don’t know whether the other envelope has {50} bucks or {200} bucks. Then someone asks if you want to pay {10} bucks to change to the other envelope. Should you change?

Now, consider two different solutions to this problem: the first is fallacious and the second is correct:

  1. If I don’t change, I get {100} bucks, if I change I pay a penalty of {10} and I get either {50} or {200} with equal probability, so my expected prize if I change is {\frac{200+50}{2}-10 = 115 > 100}, so I should change.
  2. I know there is one envelope with {x} and one with {2x}, then my expected prize if I don’t change is {\frac{x + 2x}{2} = \frac{3}{2}x}. If I change, my expected prize is {\frac{x + 2x}{2} - 10 < \frac{3}{2}x}, so I should not change.

The fallacy in the first argument is perceiving a probability distribution where there is none. Either the other envelope contains {50} bucks or it contains {200} bucks – we just don’t know which, but there is no probability distribution there – it is a deterministic choice by the game designer. Most of those paradoxes result from either an ill-defined probability space, as in Bertrand’s Paradox, or a wrong comprehension of the probability space, as in Monty Hall or in several paradoxes exploring the same idea: the Three Prisoners, Sleeping Beauty, the Boy or Girl Paradox, …


Very recently there was a thrilling discussion about a variant of the envelope paradox in the xkcd blag – the blog accompanying that amazing webcomic – in a blog post with a very intriguing problem. A better idea is to go there and read the discussion, but if you are not doing so, let me summarize it here. The problem is:

There are two envelopes, each containing a distinct real number. You pick one envelope at random, open it and see the number; then you are asked to guess whether the number in the other envelope is larger or smaller than the one you saw. Can you guess correctly with probability greater than {\frac{1}{2}}?

A related problem: you are playing the envelope game with numbers {A} and {B} (where {A < B}) in the envelopes. You pick one envelope at random, look at its content, and then decide whether or not to switch. Is there a strategy that gives you expected earnings greater than {\frac{A+B}{2}}?

The very unexpected answer is yes!!! The strategy that Randall presents in the blog (there is a link to the source here) is: let {X} be a random variable on {{\mathbb R}} such that for each {a<b} we have {P(a < X < b) > 0} – for example, a normal or logistic random variable.

Sample {X}, then open the envelope and find a number {S}. Now, if {X < S}, say the other number is lower, and if {X > S}, say the other number is higher. You get it right with probability

\displaystyle  P(\text{picked }A) P(X > A) + P(\text{picked }B) P(X < B) = \frac{1}{2} (1 + P(A < X < B))

which is impressive. If {Y} denotes your earnings when you follow your guess, then your expected earnings are:

\displaystyle \begin{aligned} &P(\text{picked }A) \mathop{\mathbb E}[Y \vert \text{picked }A] + P(\text{picked }B) \mathop{\mathbb E}[Y \vert \text{picked }B] = \\ & = \frac{1}{2} [P(X<A) A + P(X>A) B] + \frac{1}{2} [P(X<B) B + P(X>B) A] \\ &= \frac{1}{2}[A [P(X<A) + P(X>B)] + B [P(X>A) + P(X<B)]] > \frac{A+B}{2} \\ \end{aligned}
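To see both formulas in action, here is a quick Python sketch of this threshold strategy; the values of {A} and {B} and the normal threshold distribution are arbitrary choices of mine:

```python
import random

def threshold_strategy(A=3.0, B=7.0, trials=200_000):
    """Guess 'higher' when a random threshold X exceeds the observed number S."""
    assert A < B
    correct, earnings = 0, 0.0
    for _ in range(trials):
        s, other = random.choice([(A, B), (B, A)])  # pick an envelope at random
        x = random.normalvariate(0.0, 10.0)         # X with P(a < X < b) > 0 for all a < b
        guess_higher = x > s
        correct += guess_higher == (other > s)
        earnings += other if guess_higher else s    # switch only on a 'higher' guess
    return correct / trials, earnings / trials

print(threshold_strategy())  # win probability > 0.5, earnings > (A + B) / 2 = 5.0
```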

The xkcd blag pointed to this cool archive of puzzles and riddles. I was also told that the xkcd puzzle forum is a source of excellent puzzles, such as this one:

You are the most eligible bachelor in the kingdom, and as such the King has invited you to his castle so that you may choose one of his three daughters to marry. The eldest princess is honest and always tells the truth. The youngest princess is dishonest and always lies. The middle princess is mischievous and tells the truth sometimes and lies the rest of the time. As you will be forever married to one of the princesses, you want to marry the eldest (truth-teller) or the youngest (liar) because at least you know where you stand with them. The problem is that you cannot tell which sister is which just by their appearance, and the King will only grant you ONE yes or no question which you may only address to ONE of the sisters. What yes or no question can you ask which will ensure you do not marry the middle sister?

copied from here.

Categories: puzzles

Looking at probability distributions

November 12th, 2009

I’ve been taking two probability classes this semester, and in them I saw proofs of many interesting theorems I knew about previously but had never seen proved: the Central Limit Theorem, the Laws of Large Numbers and so on. Also, some theory that looks somewhat ugly in undergrad courses becomes very clear with the proper formal treatment. Today I was thinking about the main take-home message a computer scientist could take from those classes and, at least for me, this message is the various ways of looking at probability distributions. I had heard about moments, the Laplace transform, the Fourier transform and other tools like that, but I had never realized their true power. Probably, still today, most of their true power is hidden from me, but I am starting to look at them in a different way. Let me try to go over a few examples of different ways we can look at probability distributions and show cases where they are interesting.

Most ways of looking at probability distributions are associated with multiplicative systems: a multiplicative system {Q} is a set of real-valued functions with the property that if {f_1, f_2 \in Q} then {f_1 \cdot f_2 \in Q}. These kinds of sets are powerful because of the Multiplicative Systems Theorem:

Theorem 1 (Multiplicative Systems Theorem) If {Q} is a multiplicative system, {H} is a linear space containing {1} (the constant function {1}) and is closed under bounded convergence, then {Q \subseteq H} implies that {H} contains all bounded {\sigma(Q)}-measurable functions.

The theorem might look a bit cryptic if you are not familiar with the definitions, but it boils down to the following translation:

Theorem 2 (Translation of the Multiplicative Systems Theorem) If {Q} is a “general” multiplicative system and {X} and {Y} are random variables such that {\mathop{\mathbb E}[f(X)] = \mathop{\mathbb E}[f(Y)]} for all {f \in Q}, then {X} and {Y} have the same distribution.

where “general” excludes some troublesome cases like {Q = \{1\}} or the set of all constant functions. In technical terms, we want {\sigma(Q) = \sigma\{f^{-1}((-\infty,a]); a\in {\mathbb R}, f \in Q \}} to be the Borel {\sigma}-algebra. But let’s not worry about those technical details and just look at the translated version. We now discuss several kinds of multiplicative systems:

  1. The most common description of a random variable is by the cumulative distribution function {F(u) = P\{X \leq u\}}. This is associated with {Q = \{ 1_{(-\infty,u]}(x); u \in {\mathbb R} \}}; notice that simply {F(u) = \mathop{\mathbb E}[1_{(-\infty,u]}(X)]}.
  2. We can characterize a random variable by its moments: the variable {X} is characterized by the sequence {M_n = \int_{\mathbb R} x^n \mu_X(dx)}. Given the moments {M_1, M_2, \hdots}, the variable is totally characterized, i.e., if two variables have the same moments, then they have the same distribution by the Multiplicative Systems Theorem. This description is associated with the system {Q = \{x^n; n = 1, 2, \hdots\}}.
  3. Moment Generating Function: If {X} is a variable that assumes only integer values, we can describe it by {p_0, p_1, p_2, \hdots}, where {p_n = P\{ X = n \}}. An interesting way of representing those probabilities is the moment generating function {\psi_X(z) = \mathop{\mathbb E}[z^X] = \sum_{n=0}^\infty p_n z^n}. This is associated with the multiplicative system {Q = \{z^x; 0 \leq z < 1\}}.

    Now suppose we are given two discrete independent variables {X} and {Y}. What do we know about {X+Y}? It is easy to compute its expectation, its variance, … but what about more complicated things? What is the distribution of {X+Y}? Moment generating functions answer this question very easily, since:

    \displaystyle \psi_{X+Y} (z) = \mathop{\mathbb E} [z^{X+Y}] = \mathop{\mathbb E} [z^{X}] \cdot \mathop{\mathbb E}[z^{Y}] = \psi_X(z) \cdot \psi_Y(z)

    If we know moment generating functions, we can calculate expectations very easily, since {\mathop{\mathbb E}[X] = \psi'_X(1)}. For example, suppose we have a process like this: there is one bacterium at time {t = 0}. In each timestep, each bacterium alive at that moment, independently of the others, either dies (with probability {p_0}), stays alive without reproducing (with probability {p_1}), or has {k} offspring (with probability {p_{1+k}}), where {\sum_0^\infty p_n = 1}. The question is: what is the expected number of bacteria at time {t}?

    It looks like a complicated problem with just elementary tools, but it is a simple problem if we have moment generating functions. Just let {X_{ti}} be the variable associated with the {i^{th}} bacterium at time {t}: it is {0} if the bacterium dies, {1} if it stays the same and {k+1} if it has {k} offspring. Let also {N_t} be the number of bacteria at time {t}. We want to know {\mathop{\mathbb E} N_t}. First, see that:

    \displaystyle N_t = \sum_{i=1}^{N_{t-1}} X_{ti}

    Now, let’s write that in terms of moment generating functions:

    \displaystyle \psi_{N_t}(z) = \mathop{\mathbb E}[z^{X_{t1} + \hdots + X_{tN_{t-1}}}] = \sum_{k=0}^\infty P\{N_{t-1} = k\} \cdot \mathop{\mathbb E}[z^{X_{t1} + \hdots + X_{tk}} \vert N_{t-1} = k]

    which is just:

    \displaystyle \psi_{N_t}(z) = \sum_{k=0}^\infty P\{N_{t-1} = k\} \cdot \mathop{\mathbb E}[z^{X_{t1} + \hdots + X_{tk}}] = \sum_{k=0}^\infty P\{N_{t-1} = k \} \cdot \psi_X (z)^k

    since the variables are all independent and identically distributed. Now, notice that:

    \displaystyle \psi_{N_{t-1}}(z) = \sum_{k=0}^\infty P\{N_{t-1} = k\} \cdot z^k

    by the definition of moment generating function, so we effectively proved that:

    \displaystyle \psi_{N_{t}}(z) = \psi_{N_{t-1}}(\psi_X(z)) = \psi_X \circ \psi_X \circ \hdots \circ \psi_X(z)

    We proved that {\psi_{N_{t}}(z)} is just {\psi_X(z)} iterated {t} times. Now, calculating the expectation is easy, using the facts that {\psi_X(1) = 1} and {\psi'_X(1) = \mathop{\mathbb E} X}. Just see that {\mathop{\mathbb E} N_t = \psi_{N_{t}}'(1) = \psi_{N_{t-1}}'(\psi_X(1)) \cdot \psi'_X(1) = \psi_{N_{t-1}}'(1) \cdot \mathop{\mathbb E} X }. Then, by induction, {\mathop{\mathbb E} N_t = (\mathop{\mathbb E} X)^t}. Using similar techniques we can prove a lot more about this process, just by analyzing the behavior of the moment generating function; the sketch below checks the expectation formula numerically.
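A small Monte Carlo sketch of this branching process in Python; the particular distribution {p} below is an arbitrary choice of mine:

```python
import random

def mean_population(p, t, runs=20_000):
    """Estimate E[N_t] when each individual is replaced by n individuals
    with probability p[n] (p[0]: dies, p[1]: survives, p[1+k]: k offspring,
    i.e. k+1 individuals in total)."""
    values = list(range(len(p)))
    total = 0
    for _ in range(runs):
        n = 1
        for _ in range(t):
            n = sum(random.choices(values, weights=p, k=n))
        total += n
    return total / runs

p = [0.2, 0.3, 0.3, 0.2]                        # arbitrary offspring distribution
mean_x = sum(n * pn for n, pn in enumerate(p))  # E[X] = 1.5
print(mean_population(p, t=4), mean_x ** 4)     # both approximately 5.06
```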

  4. Laplace Transform: Now, moving to continuous variables, if {X} is a continuous non-negative variable we can define its Laplace transform as {\phi_X(u) = \mathop{\mathbb E} [e^{-u X}] = \int_0^\infty e^{-ux} \mu_X(dx)}, where {\mu_X(dx)} stands for the distribution of {X}, for example, {\rho_X(x) dx}. This is associated with the multiplicative system {Q = \{e^{-ux}; u \geq 0\}}. Again, by the Multiplicative Systems Theorem, if {\phi_X = \phi_Y}, then the two variables have the same distribution. The Laplace transform has the same nice properties as the moment generating function; for example, for independent {X} and {Y}, {\phi_{X+Y}(u) = \mathop{\mathbb E}[e^{-(X+Y)u}] = \mathop{\mathbb E}[e^{-Xu}] \cdot \mathop{\mathbb E}[e^{-Yu}] = \phi_X(u) \cdot \phi_Y(u)}.

    And it allows us to do tricks similar to the one I just showed for moment generating functions. One common trick, used for example in the proof of Chernoff bounds, is, given independent non-negative random variables:

    \displaystyle P\left\{\sum_i X_i > u\right\} = P\left\{e^{\sum_i X_i} > e^u\right\} \leq \frac{\mathop{\mathbb E}[e^{\sum_i X_i} ]}{e^u} = \frac{\prod_i \mathop{\mathbb E}[e^{X_i} ]}{e^u}

    where we also used Markov’s Inequality: {\mathop{\mathbb E}[X] \geq \beta P\{X \geq \beta\}}. Passing to exponential moments – essentially a Laplace transform – is the main ingredient in the Chernoff bound, and it allows us to sort of “decouple” the random variables in the sum. There are several other cases where the Laplace transform proves itself very useful and turns things that looked very complicated when we saw them in undergrad courses into simple and clear things. One clear example of that is the motivation for the Poisson random variable:

    If {T_i} are independent exponentially distributed random variables with rate {\lambda}, then {\rho_{T_i}(t) = \lambda e^{- \lambda t}}. An elementary calculation shows that their Laplace transform is {\phi_{T_i}(u) = \frac{\lambda}{\lambda+u}}. Let {S_n = T_0 + T_1 + T_2 + \hdots + T_n}, i.e., the time of the {(n+1)^{th}} arrival. We want to know the distribution of {S_n}. How do we find it?

    \displaystyle \phi_{S_n}(u) = \mathop{\mathbb E}[e^{-u(T_0 + \hdots + T_n)}] = \phi_T(u)^{n+1} = \left( \frac{\lambda}{\lambda+u} \right)^{n+1}

    Now we need to find {\rho_{S_n}} such that {\int_0^\infty \rho_{S_n}(t) e^{-ut} dt = \left( \frac{\lambda}{\lambda+u} \right)^{n+1}}. It is just a matter of solving this equation, and we get {\rho_{S_n}(t) = \frac{\lambda (\lambda t)^n}{n!} e^{-\lambda t}}. Now, the Poisson variable {N_t} counts the number of arrivals in {[0,t]}, and therefore:

    \displaystyle \begin{aligned} P\{N_t = n\} & = P\{S_{n-1} < t < S_n\} = P\{S_n > t\} - P\{S_{n-1} \geq t\} \\ & = \int_t^\infty \rho_{S_n}(s) ds - \int_t^\infty \rho_{S_{n-1}}(s) ds = \frac{(\lambda t)^n}{n!} e^{-\lambda t} \end{aligned}
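A quick numerical check of this formula in Python (the values of {\lambda} and {t} are arbitrary choices of mine):

```python
import math
import random

def arrivals(t, lam):
    """Count arrivals in [0, t] when interarrival times are exponential(lam)."""
    n, clock = 0, random.expovariate(lam)
    while clock < t:
        n += 1
        clock += random.expovariate(lam)
    return n

lam, t, runs = 2.0, 3.0, 100_000
counts = [arrivals(t, lam) for _ in range(runs)]
for n in range(5):
    empirical = counts.count(n) / runs
    formula = (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)
    print(n, round(empirical, 4), round(formula, 4))  # columns should agree
```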

  5. Characteristic Function or Fourier Transform: Taking {Q = \{e^{i\lambda x}; \lambda \in {\mathbb R}\}} we get the Fourier Transform {f_X(\lambda) = \mathop{\mathbb E} [e^{i \lambda X}]}, which has some of the nice properties of the previous transforms and some additional ones. Characteristic functions were the main actors in the development of all the probability techniques that led to the main result of 19th century Probability Theory: the Central Limit Theorem. We know that moment generating functions and Laplace transforms completely characterize distributions, but it is not clear how to recover a distribution once we have its transform. For the Fourier Transform there is a clear and simple way of doing that, by means of the Inversion Formula:

    \displaystyle \rho_X(x) = \frac{1}{2 \pi} \int_{\mathbb R} e^{-i \lambda x} f_X(\lambda) d\lambda

    One fact that always puzzled me was: why is the normal distribution {\rho_N (x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2/2} } so important? What does it have that is so special as to be the limiting distribution in the Central Limit Theorem, i.e., if {X_1, X_2, \hdots, X_n} is a sequence of independent random variables and {S_n = \sum_1^n X_i - \sum_1^n \mathop{\mathbb E} X_i}, then {S_n / \sqrt{\text{var}\, S_n} \rightarrow N(0,1)} under some natural conditions on the variables? The reason the normal is so special is that it is a “fixed point” of the Fourier Transform: one can check that {f_N(\lambda) = e^{-\lambda^2/2}}. And there we have something special about it that makes me believe the Central Limit Theorem.
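As a numerical illustration, here is a Python sketch comparing the empirical characteristic function of standardized sums of uniforms with {e^{-\lambda^2/2}}; the summand distribution and the sample sizes are my arbitrary choices:

```python
import cmath
import math
import random

def empirical_cf(samples, lam):
    """Empirical characteristic function E[exp(i * lam * X)]."""
    return sum(cmath.exp(1j * lam * x) for x in samples) / len(samples)

# Standardized sums of n uniform variables: mean n/2, variance n/12.
n, runs = 50, 20_000
samples = [
    (sum(random.random() for _ in range(n)) - n / 2) / math.sqrt(n / 12)
    for _ in range(runs)
]
for lam in (0.5, 1.0, 2.0):
    print(lam, abs(empirical_cf(samples, lam)), math.exp(-lam ** 2 / 2))
```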

————————-

This blog post was based on lectures by Professor Dynkin at Cornell.

Categories: theory

Random Spanning Trees

November 3rd, 2009

BigRedBits is again pleased to have Igor Gorodezky as a guest blogger, directly from UCLA. I leave you with his excellent post on Wilson’s algorithm.

——————————————

Igor again, with another mathematical dispatch from UCLA, where I’m spending the semester eating and breathing combinatorics as part of the 2009 program on combinatorics and its applications at IPAM. In the course of some reading related to a problem with which I’ve been occupying myself, I ran across a neat algorithmic result – Wilson’s algorithm for uniformly generating spanning trees of a graph. With Renato’s kind permission, let me once again make myself at home here at Big Red Bits and tell you all about this little gem.

The problem is straightforward, and I’ve essentially already stated it: given an undirected, connected graph {G}, we want an algorithm that outputs uniformly random spanning trees of {G}. In the early ’90s, Aldous and Broder independently discovered an algorithm for accomplishing this task. This algorithm generates a tree {T} by, roughly speaking, performing a random walk on {G} and adding the edge {(u,v)} to {T} every time that the walk steps from {u} to {v} and {v} is a vertex that has not been seen before.
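For concreteness, here is a compact Python sketch of the Aldous–Broder idea; the adjacency-list representation is my own choice, not code from either paper:

```python
import random

def aldous_broder(adj, root):
    """Random-walk the graph, keeping the edge by which each vertex is
    first entered; adj maps each vertex to a list of its neighbors."""
    seen = {root}
    tree = []
    v = root
    while len(seen) < len(adj):
        u = random.choice(adj[v])
        if u not in seen:       # first visit to u: record the entering edge
            seen.add(u)
            tree.append((v, u))
        v = u
    return tree
```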

Wilson’s algorithm (D. B. Wilson, “Generating random spanning trees more quickly than the cover time,” STOC ’96) takes a slightly different approach. Let us fix a root vertex {r}. Wilson’s algorithm can be stated as a loop-erased random walk on {G} as follows.

Algorithm 1 (Loop-erased random walk) Maintain a tree {T}, initialized to consist of {r} alone. While there remains a vertex {v} not in {T}: perform a random walk starting at {v}, erasing loops as they are created, until the walk encounters a vertex in {T}, then add to {T} the cycle-erased simple path from {v} to {T}.

We observe that the algorithm halts with probability 1 (its expected running time is actually polynomial, but let’s not concern ourselves with these issues here), and outputs a random directed spanning tree oriented towards {r}. It is a minor miracle that this tree is in fact sampled uniformly from the set of all such trees. Let us note that this offers a solution to the original problem, as sampling {r} randomly and then running the algorithm will produce a uniformly generated spanning tree of {G}.
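Here is a sketch of the loop-erased random walk in Python, using the same adjacency-list convention as the sketch above. Recording a successor map and overwriting it on revisits is a standard way to implement loop erasure; it is my implementation choice, not pseudocode from Wilson’s paper:

```python
import random

def wilson(adj, r):
    """Return parent pointers of a random spanning tree oriented towards r."""
    parent = {r: None}
    for v in adj:
        if v in parent:
            continue
        # Walk from v until hitting the tree; overwriting the successor of a
        # revisited vertex erases any loop through it.
        succ, u = {}, v
        while u not in parent:
            succ[u] = random.choice(adj[u])
            u = succ[u]
        # Attach the loop-erased path from v to the tree.
        u = v
        while u not in parent:
            parent[u] = succ[u]
            u = succ[u]
    return parent

# Tiny example graph (an arbitrary choice): a 4-cycle with one chord.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(wilson(adj, 0))
```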

It remains, then, to prove that the algorithm produces uniform spanning trees rooted at {r} (by which we mean directed spanning trees oriented towards {r}). To this we dedicate the remainder of this post.

1. A “different” algorithm

Wilson’s proof is delightfully sneaky: we begin by stating and analyzing a seemingly different algorithm, the cycle-popping algorithm. We will prove that this algorithm has the desired properties, and then argue that it is equivalent to the loop-erased random walk (henceforth LERW).

The cycle-popping algorithm works as follows. Given {G} and {r}, associate with each non-root vertex {v} an infinite stack of neighbors. More formally, to each {v \neq r} we associate

\displaystyle  \mathcal S_v = [u_{0}, u_{1}, \dots ]

where each {u_{i}} is uniformly (and independently) sampled from the set of neighbors of {v}. Note that each stack is not a random walk, just a list of neighbors. We refer to the left-most element above as the top of {\mathcal S_v}, and by popping the stack {\mathcal S_v} we mean removing this top vertex from {\mathcal S_v}.

Define the stack graph {G_{\mathcal S}} to be the directed graph on {V} that has an edge from {v} to {u} if {u} is at the top of the stack {\mathcal S_v}. Clearly, if {G} has {n} vertices then {G_{\mathcal S}} is an oriented subgraph of {G} with {n-1} edges. The following lemma follows immediately.

Lemma 1 Either {G_{\mathcal S}} is a directed spanning tree oriented towards {r} or it contains a directed cycle.

If there is a directed cycle {C} in {G_{\mathcal S}} we may pop it by popping {\mathcal S_v} for every {v \in C}. This eliminates {C}, but of course might create other directed cycles. Without resolving this tension quite yet, let us go ahead and formally state the cycle-popping algorithm.

Algorithm 2 (Cycle-popping algorithm) Create a stack {\mathcal S_v} for every {v \neq r}. While {G_{\mathcal S}} contains any directed cycles, pop a cycle from the stacks. If this process ever terminates, output {G_{\mathcal S}}.

Note that by the lemma, if the algorithm ever terminates then its output is a spanning tree rooted at {r}. We claim that the algorithm terminates with probability 1, and moreover generates spanning trees rooted at {r} uniformly.

To this end, some more definitions: let us say that given a stack {\mathcal S_v}, the vertex {u_i} is at level {i}. The level of a vertex in a stack is static, and is defined when the stack is created. That is, the level of {u_i} does not change even if {u_i} advances to the top of the stack as a result of the stack getting popped.

We regard the sequence of stack graphs produced by the algorithm as leveled stack graphs: each non-root vertex {v} is assigned the level of its stack. Observe that the level of {v} in {G_{\mathcal S}} is the number of times that {\mathcal S_v} has been popped. In the same way, we regard cycles encountered by the algorithm as leveled cycles, and we can regard the tree produced by the algorithm (if indeed one is produced) as a leveled tree.

The analysis of the algorithm relies on the following key lemma (Theorem 4 in Wilson’s paper), which tells us that the order in which the algorithm pops cycles is irrelevant.

Lemma 2 For a given set of stacks, either the cycle-popping algorithm never terminates, or there exists a unique leveled spanning tree {T} rooted at {r} such that the algorithm outputs {T} irrespective of the order in which cycles are popped.

Proof: Fix a set of stacks {\{ \mathcal S_v \}_{v \neq r}}. Consider a leveled cycle {C} that is pop-able, i.e. there exist leveled cycles {C_1, C_2, \dots, C_k=C} that can be popped in sequence. We claim that if the algorithm pops any cycle not equal to {C}, then there still must exist a series of cycles that ends in {C} and that can be popped in sequence. In other words, if {C} is pop-able then it remains pop-able, no matter which cycles are popped, until {C} itself is actually popped.

Let {C'} be a cycle popped by the algorithm. If {C'=C_1} then the claim is clearly true. Also, if {C'} shares no vertices with {C_1, \dots, C_k}, then the claim is true again. So assume otherwise, and let {C_i} be the first in the series {C_1, \dots, C_k} to share a vertex with {C'}. Let us show that {C'=C_i} by contradiction.

If {C_i \neq C'}, then {C_i} and {C'} must share a vertex {w} that has different successors in {C'} and {C_i}. But by definition of {C_i}, none of the {C_1, \dots, C_{i-1}} contain {w}, and this implies that {w} has the same level in {C'} and {C_i}. Therefore its successor in both cycles is the same, a contradiction. This proves {C_i=C'}.

Moreover, the argument above proves that {C_i} and {C'} are equal as leveled cycles (i.e. every vertex has the same level in both cycles). Hence

\displaystyle  C'=C_i, C_1, C_2, \dots, C_{i-1}, C_{i+1}, \dots, C_k=C

is a series of cycles that can be popped in sequence, which proves the original claim about {C}.

We conclude that given a set of stacks, either there is an infinite number of pop-able cycles, in which case there will always be an infinite number and the algorithm will never terminate, or there is a finite number of such cycles. In the latter case, every one of these cycles is eventually popped, and the algorithm produces a spanning tree {T} rooted at {r}. The level of each non-root vertex {v} in {T} is given by (one plus) the number of popped cycles that contained {v}. \Box

Wilson summarizes the cycle-popping algorithm thusly: “[T]he stacks uniquely define a tree together with a partially ordered set of cycles layered on top of it. The algorithm peels off these cycles to find the tree.”

Theorem 3 The cycle-popping algorithm terminates with probability 1, and the tree that it outputs is a uniformly sampled spanning tree rooted at {r}.

Proof: The first claim is easy: {G} has a spanning tree, therefore it has a directed spanning tree oriented towards {r}. The stacks generated in the first step of the algorithm will contain such a tree, and hence the algorithm will terminate, with probability 1.

Now, consider a spanning tree {T} rooted at {r}. We’ll abuse notation and let {T} be the event that {T} is produced by the algorithm. Similarly, given a collection of leveled cycles {\mathcal C}, we will write {\mathcal C} for the event that {\mathcal C} is the set of leveled cycles popped by the algorithm before it terminates. Finally, let {\mathcal C \wedge T} be the event that the algorithm popped the leveled cycles in {\mathcal C} and terminated, with the resulting leveled tree being equal to {T}.

By the independence of the stack entries, we have {\Pr[\mathcal C \wedge T] = \Pr[\mathcal C] \cdot p}, where {p} is the probability that the algorithm’s output is a leveled version of {T}, a quantity which a moment’s reflection will reveal is independent of {T}. Now,

\displaystyle  \Pr[T] = \sum_{\mathcal C} \Pr[\mathcal C \wedge T] = p \sum_{\mathcal C} \Pr[\mathcal C]

which, as desired, is independent of {T}. \Box

2. Conclusion

We have shown that the cycle-popping algorithm generates spanning trees rooted at {r} uniformly. It remains to observe that the LERW algorithm is nothing more than an implementation of the cycle-popping algorithm! Instead of initially generating the (infinitely long) stacks and then looking for cycles to pop, the LERW generates stack elements as necessary via random walk (computer scientists might recognize this as the Principle of Deferred Decisions). If the LERW encounters a loop, then it has found a cycle in the stack graph {G_{\mathcal S}} induced by the stacks that the LERW has been generating. Erasing the loop is equivalent to popping this cycle. We conclude that the LERW algorithm generates spanning trees rooted at {r} uniformly.

Categories: theory