Probability Puzzles

February 16th, 2010

Today at dinner with Thanh, Hu and Joel I heard about a paradox I hadn’t come across before. Probability is full of cute problems that challenge our understanding of the basic concepts. The most famous of them is the Monty Hall Problem, which asks:

You are on a TV game show and there are {3} doors – one of them contains a prize, say a car, and the other two doors contain things you don’t care about, say goats. You choose a door. Then the TV host, who knows where the prize is, opens one door you haven’t chosen and that he knows has a goat. Then he asks whether you want to stick with the door you chose or switch to the other door. What should you do?

Probably you’ve already come across this question at some point in your life, and the answer is that changing doors doubles your probability of getting the prize. There are several ways of convincing your intuition:

  • Do the math: when you chose the door, there were three options, so the prize is behind the door you chose with probability {\frac{1}{3}} and behind one of the other doors with probability {\frac{2}{3}} (note that the presenter can always open some door with a goat, so conditioning on that event doesn’t give you any new information).
  • Do the actual experiment (computationally) as done here. One can always ask a friend to help, get some goats and perform the actual experiment.
  • To convince yourself that “it doesn’t matter” is not correct, think of {100} doors. You choose one, the TV host opens {98} of them, and he asks if you want to change or stick with your first choice. Wouldn’t you change?
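The computational experiment from the second bullet takes only a few lines; here is a minimal Python sketch (the function name `monty_hall` is mine):

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game; return the win rates of staying vs switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither our choice nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != choice and d != opened)
        stay_wins += (choice == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials
```

Running it gives win rates close to {\frac{1}{3}} for staying and {\frac{2}{3}} for switching, as the argument above predicts.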

I’ve seen TV shows where this happened, and I acknowledge that other things may be involved: there might be behavioral and psychological issues associated with the Monty Hall problem – possibly the kind that would interest Dan Ariely, whose book I began reading today and which looks quite fun. But the problem they told me about at dinner today was another one: the envelope problem:

There are two envelopes and you are told that one of them contains twice the amount that is in the other. You choose one of the envelopes at random and open it: it contains {100} bucks. Now, you don’t know if the other envelope has {50} bucks or {200} bucks. Then someone asks if you want to pay {10} bucks to change to the other envelope. Should you change?

Now, consider two different solutions to this problem: the first is fallacious and the second is correct:

  1. If I don’t change, I get {100} bucks, if I change I pay a penalty of {10} and I get either {50} or {200} with equal probability, so my expected prize if I change is {\frac{200+50}{2}-10 = 115 > 100}, so I should change.
  2. I know there is one envelope with {x} and one with {2x}, then my expected prize if I don’t change is {\frac{x + 2x}{2} = \frac{3}{2}x}. If I change, my expected prize is {\frac{x + 2x}{2} - 10 < \frac{3}{2}x}, so I should not change.

The fallacy in the first argument is perceiving a probability distribution where there is none. Either the other envelope contains {50} bucks or it contains {200} bucks – we just don’t know which, but there is no probability distribution there – it is a deterministic choice by the game designer. Most of those paradoxes are the result of either an ill-defined probability space, as in Bertrand’s Paradox, or a wrong comprehension of the probability space, as in Monty Hall and in several paradoxes exploring the same idea: the Three Prisoners, Sleeping Beauty, the Boy or Girl Paradox, …


There was very recently a thrilling discussion about a variant of the envelope paradox in the xkcd blag – the blog accompanying that amazing webcomic. The best idea is to go there and read the discussion, but if you are not doing so, let me summarize it here. The problem is:

There are two envelopes, each containing a distinct real number. You pick one envelope at random, open it and see the number; then you are asked to guess whether the number in the other envelope is larger or smaller than the one you saw. Can you guess correctly with probability greater than {\frac{1}{2}}?

A related problem: suppose you are playing the envelope game with numbers {A} and {B} (where {A < B}). You pick one envelope at random, look at its content and then decide whether or not to switch. Is there a strategy that gives you expected earnings greater than {\frac{A+B}{2}}?

The very unexpected answer is yes! The strategy that Randall presents in the blog (there is a link to the source here) is: let {X} be a random variable on {{\mathbb R}} such that for each {a<b} we have {P(a < X < b) > 0}, for example the normal distribution or the logistic distribution.

Sample {X}, then open the envelope and find a number {S}. Now, if {X < S}, say the other number is lower; if {X > S}, say the other number is higher. You get it right with probability

\displaystyle  P(\text{picked }A) P(X > A) + P(\text{picked }B) P(X < B) = \frac{1}{2} (1 + P(A < X < B))

which is impressive. If you follow your guess, your expected earnings {Y} are:

\displaystyle \begin{aligned} &P(\text{picked }A) \mathop{\mathbb E}[Y \vert \text{picked }A] + P(\text{picked }B) \mathop{\mathbb E}[Y \vert \text{picked }B] = \\ & = \frac{1}{2} [P(X<A) A + P(X>A) B] + \frac{1}{2} [P(X<B) B + P(X>B) A] \\ &= \frac{1}{2}[A [P(X<A) + P(X>B)] + B [P(X>A) + P(X<B)]] > \frac{A+B}{2} \\ \end{aligned}
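Both claims are easy to check by simulation; here is a minimal Python sketch (the helper `guess_game` and the choice {X \sim N(0,1)} are mine). With envelopes {A = -1} and {B = 1}, the formula predicts a success probability of {\frac{1}{2}(1 + P(-1 < X < 1)) \approx 0.84} and expected earnings above {\frac{A+B}{2} = 0}:

```python
import random

def guess_game(a, b, trials=200_000):
    """Two-envelope guessing game with threshold X ~ N(0, 1): open a random
    envelope, sample X, and guess 'the other is higher' iff X > what we saw.
    Returns (fraction of correct guesses, average earnings when following the guess)."""
    correct, earnings = 0, 0.0
    for _ in range(trials):
        s, other = (a, b) if random.random() < 0.5 else (b, a)
        x = random.gauss(0.0, 1.0)
        guess_higher = x > s
        correct += (guess_higher == (other > s))
        earnings += other if guess_higher else s  # switch iff we guess 'higher'
    return correct / trials, earnings / trials
```

The simulated success rate and earnings match the two displayed formulas for any {A < B} you try, which is a nice sanity check on the derivation.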

The xkcd blag pointed to this cool archive of puzzles and riddles. I was also told that the xkcd puzzle forum is a source of excellent puzzles, such as this one:

You are the most eligible bachelor in the kingdom, and as such the King has invited you to his castle so that you may choose one of his three daughters to marry. The eldest princess is honest and always tells the truth. The youngest princess is dishonest and always lies. The middle princess is mischievous and tells the truth sometimes and lies the rest of the time. As you will be forever married to one of the princesses, you want to marry the eldest (truth-teller) or the youngest (liar) because at least you know where you stand with them. The problem is that you cannot tell which sister is which just by their appearance, and the King will only grant you ONE yes or no question which you may only address to ONE of the sisters. What yes or no question can you ask which will ensure you do not marry the middle sister?

copied from here.

Categories: puzzles

Walrasian Equilibrium I

February 15th, 2010

Currently I’ve been trying to understand more about the dynamics of markets and the basic concepts of microeconomic theory and, as always, writing a blog post helps me keep my ideas clear. First, why are markets interesting from a computer scientist’s/mathematician’s point of view?

  • Markets are multi-objective optimization problems: one can think of a possible state of the market as a point {x = (x_1, \hdots, x_N) \in \Omega} in a space of possible states. Each player in the market controls one variable, say {x_i}, and is interested in maximizing one objective function {f_i(x)}. So, player {i} is trying to set {x_i = \text{argmax}_{x_i} f_i(x_i, x_{-i})}.
  • Markets are a computational model: one can think of a market as a way of performing a certain computation – of extracting some kind of information, as in prediction markets, stock exchanges, … If we think of it as a computational device, we are asking the usual questions: given those preferences, which are private to each of the agents, calculate “fair” prices for the items.
  • Markets are distributed systems where each part of the system has a selfish interest.

A market is composed of a set {L} of commodities, a set {I} of consumers and a set {J} of producers. Now, we describe how to characterize each of them:

  • Each consumer is defined by a set {X_i \subseteq {\mathbb R}^L} of combinations of commodities he is interested in (typically we take {X_i = {\mathbb R}^L_+}) and a utility function {u_i : X_i \rightarrow {\mathbb R}} expressing his interest in each bundle of commodities. Consumer {i} will try to maximize {u_i(\cdot)} over a further restricted {X_i}.
  • Each producer is defined by a set {Y_j \subset {\mathbb R}^L} of production plans it has the capacity to produce.
  • Endowments: Each consumer comes to the market with an initial endowment {\omega_i \in {\mathbb R}^L}, so for {\ell \in L}, {\omega_{i\ell}} is the amount of commodity {\ell} that consumer {i} originally has. The initial total endowment of the market is given by {\overline{\omega} = \sum_{i \in I} \omega_i}, which is a vector indicating how much of each commodity originally exists in the market.
  • Shares: consumers have shares in the companies, so for {i \in I}, {j \in J}, consumer {i} has {\theta_{ij}} shares of company {j}, such that {\sum_i \theta_{ij} = 1}.

Something very crucial is missing from this picture: a way to compare commodities and something that makes exchanges possible. The answer is to attribute prices to the items. How should we attribute prices to the items so that the market works well? A price vector is a vector {p \in {\mathbb R}^L_+}. Consider the following scenario after prices {p_\ell, \ell \in L} are established for the commodities:

  • by producing {y_j \in Y_j}, company {j \in J} gets profit {p \cdot y_j}, so each company will try to maximize its profit producing {y_j^* \in \text{argmax}_{y_j \in Y_j} p \cdot y_j}.
  • each consumer sells his initial endowment and receives his share of the profits of the companies he owns. So, consumer {i} gets wealth {w_i = p \cdot \omega_i + \sum_{j \in J} \theta_{ij} p \cdot y_j^*}.
  • now consumer {i} uses the money he has to buy the best bundle he can afford, which is {x_i^* = \text{argmax} \{u_i(x_i); x_i \in X_i, p \cdot x_i \leq w_i\}}.

The amount of each commodity in the market must be conserved, which is possible only if:

\displaystyle \sum_{i \in I} x_i^* = \sum_{i \in I} \omega_i + \sum_{j \in J} y_j^*

First, it is not clear whether such a price vector exists. If it exists, is it unique? If this is an equilibrium, is it the best outcome for the consumers? How can those prices be set in practice without a centralized authority? Can people lie? Below, let’s collect a couple of questions I’ll try to answer (yes, no or unknown) in this and the following posts.

Question 1: Does a price vector {p \geq 0} that generates an equilibrium always exist?

Question 2: If it exists, is it unique?

Question 3: Can we describe an efficient method to find {p}?

Question 4: Is the equilibrium best for the consumers in the following sense: if {(x^*,y^*,p)} is an equilibrium, is there a feasible {(x,y)} such that {u_i(x_i) \geq u_i(x_i^*)} for all {i}, with {u_i(x_i) > u_i(x_i^*)} for at least one consumer? (Such an {(x,y)} is called a Pareto improvement.)

Question 5: A central authority could use knowledge of the functions {u_i} and endowments {\omega_i} to calculate the price vector {p} using some method. Can a consumer {i} be better off by lying about his utility and endowment?

Question 6: How do prices get defined without a central authority? Is there a dynamic/game-theoretic model for that?

For simplicity, let’s think of Exchange Economies, which are economies with no producers. Let’s define it formally:

Definition 1 An exchange economy is composed of a set of commodities {L} and a set of consumers {I}, each with a utility {u_i : X_i \rightarrow {\mathbb R}} and an initial endowment {\omega_i \in {\mathbb R}^L_+}.

Definition 2 A price vector {p \in {\mathbb R}^L_+} is a Walrasian equilibrium for an exchange economy {(L, I, \{X_i, u_i, \omega_i\}_{i \in I})} if there is {x_i^* \in X_i} such that:

  1. {x_i^*} s.t. {u_i(x_i^*) = \max \{u_i(x_i); x_i \in X_i, p \cdot x_i \leq p \cdot \omega_i\}}
  2. {\sum_i x_i^* \leq \sum_i \omega_i}
  3. {p \cdot (\sum_i x_i^* - \omega_i)= 0}

The first condition says that each consumer maximizes his utility given the prices; the second says that consumers can’t buy more commodities than are available in the market; and the third, called Walras’ Law, says that if there is a surplus of a certain product, it must have price zero. The third is by far the most unnatural of them, but it can be easily justified in some circumstances: say that utilities are non-satiated if for each {x_i \in X_i} and {\epsilon > 0} there is {x'_i \in X_i}, {\Vert x_i - x'_i \Vert < \epsilon}, such that {u_i(x'_i) > u_i(x_i)}. If {u_i} is differentiable, that means {\nabla u_i \neq 0}; for example, a linear function {u_i(x_i) = \sum_\ell u_{i\ell} x_{i\ell}} with some {u_{i\ell} > 0} is non-satiated. In that case, if Walras’ Law failed we would have {\sum_i (p \cdot x_i^* - p \cdot \omega_i) < 0}, so some player would have a money surplus and could use it to increase his utility, contradicting the optimality of {x_i^*}.

Now, for each price vector {p} we define the excess demand function {z_i(p) = x_i(p) - \omega_i} and {z(p) = \sum_i z_i(p)}. Under non-satiated utilities, by the last argument, {p} is an equilibrium vector iff {z(p) \leq 0}. Moreover, if the {u_i} are also strongly monotone, i.e., {u_i(x_i + \xi) > u_i(x_i)} for each {\xi \geq 0}, {\xi \neq 0}, then: {p} is an equilibrium iff {z(p) = 0}, which means that the market clears:

\displaystyle \sum_i x_i^* = \sum_i \omega_i

The easiest question to answer is Question 4, and the answer is sometimes referred to as the First Fundamental Theorem of Welfare Economics:

Theorem 3 Given non-satiated preferences, each equilibrium {(x,p)} is Pareto optimal, i.e., there is no other feasible allocation {x'} such that {u_i(x'_i) \geq u_i(x_i)} for all {i}, with the inequality strict for at least one consumer.

Proof: Suppose there were such an allocation. Since {u_i(x'_i) \geq u_i(x_i)}, we have {p \cdot x'_i \geq p \cdot \omega_i}: if {p \cdot x'_i < p \cdot \omega_i}, then by non-satiation we could improve the utility of {x'_i} while staying within the budget, contradicting the optimality of {x_i} for that budget. And clearly {u_i(x'_i) > u_i(x_i)} implies {p \cdot x'_i > p \cdot \omega_i}.

Summing over {i}, we get {\sum_i p \cdot x'_i > \sum_i p \cdot \omega_i}, which is a contradiction: since {x'} is feasible, {\sum_i x'_i \leq \sum_i \omega_i} and therefore {\sum_i p \cdot x'_i \leq \sum_i p \cdot \omega_i}. \Box

Now, let’s tackle Question 1. We assume linearity of the utilities: {u_i(x_i) = \sum_{\ell \in L} u_{i \ell} x_{i \ell}} with {u_{i \ell} > 0}. This gives us strong monotonicity and locally non-satiated preferences.

Theorem 4 Under linear utilities, there is always an equilibrium price vector {p}.

Consider the correspondence {z} defined above: {z(p) = \sum_i (x_i(p) - \omega_i)}, where {x_i(p)} is the set of bundles of best possible utility. Since we are using linear utilities, we can’t guarantee there will be only one such bundle, so instead of considering functions, consider {x_i} and {z} as correspondences: {x_i, z: {\mathbb R}^L_+ \rightarrow \mathcal{P}({\mathbb R}^L)}, i.e., {x_i(p) \subseteq {\mathbb R}^L_+} is the set of all allocations that maximize {u_i(x_i)} subject to {p\cdot x_i \leq p \cdot \omega_i}. Since the {u_i} are linear functionals, we can calculate {x_i(p)} by a Fractional Knapsack algorithm: we sort commodities {\ell \in L} by {\frac{p_\ell}{u_{i\ell}}} and buy in cost-benefit order (the commodities that provide more utility per buck spent come first). Most of the time there will be just one solution, but at points where {\frac{p_\ell}{u_{i\ell}} = \frac{p_k}{u_{i k}}}, the set {x_i(p)} might be a convex region. This correspondence is upper hemicontinuous, which is the correspondence analogue of continuity for functions. As Wikipedia defines:
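For linear utilities this demand is easy to compute; a minimal Python sketch (the helper names `demand` and `excess_demand` are mine, and ties are resolved to a single extreme point of the convex region rather than returning the whole set):

```python
def demand(u, omega, p):
    """One optimal bundle for a consumer with linear utility u·x, endowment
    omega and strictly positive prices p: all wealth goes to a commodity
    with the best utility-per-dollar ratio (an extreme point at ties)."""
    wealth = sum(pl * wl for pl, wl in zip(p, omega))
    best = max(range(len(p)), key=lambda l: u[l] / p[l])
    x = [0.0] * len(p)
    x[best] = wealth / p[best]
    return x

def excess_demand(consumers, p):
    """z(p) = sum_i (x_i(p) - omega_i) for a list of (u_i, omega_i) pairs."""
    z = [0.0] * len(p)
    for u, omega in consumers:
        x = demand(u, omega, p)
        for l in range(len(p)):
            z[l] += x[l] - omega[l]
    return z
```

Since each consumer spends exactly his wealth, {p \cdot z(p) = 0} holds at every price vector, which is Walras’ Law from the list of properties below.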

Definition 5 A correspondence {\Gamma : A \rightarrow B} is said to be upper hemicontinuous at the point {a \in A} if for any open neighbourhood {V} of {\Gamma(a)} there exists a neighbourhood {U} of a such that {\Gamma(x)} is a subset of {V} for all {x} in {U}.

It is not hard to see that {z} is upper hemicontinuous according to that definition. Our goal is to prove that there is one price vector {p} for which {\overline{\omega} \in \sum_i x_i(p)}, i.e., {0 \in z(p)}. To prove that, we use Kakutani’s Fixed Point Theorem. Before we go into that, we’ll explore some other properties of {z}:

  • 0-Homogeneous: {z(\alpha p) = z(p), \forall \alpha > 0}
  • Walras’ Law: {p \cdot z(p) = 0}. For any {\sum_i (x_i - \omega_i) \in z(p)} we know {\sum_i (p \cdot x_i - p \cdot \omega_i) \leq 0} by the definition of {x_i}. If it were not zero, some {i} would have a money surplus, which is absurd given that preferences are strongly monotone.
  • Bounded: {z(p)} is bounded from below, i.e., {z_\ell(p) \geq -s, \forall p, \ell} for some {s > 0}. Simply take {s = \max_\ell \overline{\omega}_\ell}.
  • Boundary behavior: if {p^k \rightarrow p \neq 0} with {p_\ell = 0}, then {z_\ell(p^k) \rightarrow \infty}. That is clear from the fractional knapsack algorithm when one desirable item gets price zero.

Now, we are in shape for applying Kakutani’s Fixed Point Theorem:

Theorem 6 (Kakutani, 1941) Let {A} be a non-empty compact convex set. If {f:A \rightarrow \mathcal{P}(A)} is an upper hemicontinuous correspondence such that {f(a)} is a convex non-empty set for all {a \in A}, then {f} has a fixed point, i.e., {\exists x \in A} s.t. {x \in f(x)}.

Since prices are {0}-homogeneous, consider the simplex {\Delta = \{p \geq 0; \sum_\ell p_\ell = 1\}}, its relative interior {\Delta^0 = \{p > 0; \sum_\ell p_\ell = 1\}} and the boundary {\partial \Delta = \Delta \setminus \Delta^0}. Now we define the following price correcting correspondence {f:\Delta \rightarrow \mathcal{P}(\Delta)}.

If some price {p \in \Delta^0} is set, it generates demand {d \in z(p)}. For that demand, the price that would maximize profit would be {q}, i.e. {q\cdot d \geq q' \cdot d} for all {q' \in \Delta}. It is natural to re-adjust the prices to {q}. So we define for {p \in \Delta^0}:

\displaystyle f(p) = \{q \in \Delta; q \text{ is a best response price to some } d \in z(p)\}

and for {p \in \partial \Delta}:

\displaystyle f(p) = \{q \in \Delta; q \cdot p = 0\}

Now, I claim that this correspondence satisfies the conditions in Kakutani’s Theorem. We skip a formal proof of this fact; it is intuitive in the interior {\Delta^0}, so let’s give the intuition for why it holds as we approach the boundary: if {\Delta^0 \ni p^k \rightarrow p \in \partial \Delta}, then {p^k_\ell \rightarrow 0} for some {\ell}, and the demand explodes: {z_\ell(p^k) \rightarrow \infty}. As a result, the best thing to do is to set the prices of those commodities much higher than the rest. Therefore, in the limit, the best-response prices concentrate on the commodities whose demand explodes – exactly the ones whose price goes to zero – which is consistent with the definition of {f} on the boundary.

Now, after waving our hands about the upper hemicontinuity of {f}, we have by Kakutani’s Theorem a point {p \in \Delta} such that {p \in f(p)}. By the definition of {f} we must have {p \in \Delta^0} (because for {p \in \partial \Delta}, {p \notin f(p)}). Now, I claim {z(p) = 0}. In fact, if {z(p) \neq 0}, we still have {p \cdot z(p) = 0} by Walras’ Law, so there is some {\ell} with {z_\ell (p) < 0}; then {q_\ell = 0} for every best response {q \in f(p)}, so {f(p) \subseteq \partial \Delta}, contradicting {p \in f(p)} with {p \in \Delta^0}. Therefore {z(p) = 0}.

In the next blog post (or series of blog posts, let’s see) we discuss issues related to the other questions: uniqueness, dynamics, game-theoretical considerations, …

Bounded Degree Spanning Tree and an Uncrossing Lemma

November 17th, 2009

I’ve been reading about the Bounded Degree Spanning Tree problem and I thought of writing some of what I am learning here. It illustrates a beautiful technique called Iterated Rounding and uses the combinatorial idea of uncrossing. I’ll try to give a high-level idea of the argument and give references for the details. The first result of this kind (although there were previous results with weaker guarantees) was given by Goemans in Minimum Bounded Degree Spanning Trees, but the result based on iterated rounding and a subsequent improvement are due to Singh and Lau in a series of papers. A main reference is Approximating minimum bounded degree spanning trees to within one of optimal.

The bounded degree spanning tree problem is as follows: consider a graph {G = (V,E)} with edge weights and, for some nodes {W \subseteq V}, a degree bound {b_v, v \in W}. We want to find, among the spanning trees in which the degree of each {v \in W} is {\leq b_v}, the one with minimum cost. It is clearly a hard problem: taking all weights equal to {1} and {b_v = 2} for all nodes gives the Hamiltonian Path problem, which is NP-complete. So we will settle for a different kind of approximation. Let OPT be the cost of the optimal solution: we will show an algorithm that gives a spanning tree of cost {\leq OPT} in which each node {v} has degree {\leq b_v + 2} (this can be improved to {b_v + 1} with a more sophisticated algorithm, also based on Iterated Rounding).

As always, the first step to design an approximation algorithm is to relax it to an LP. We consider the following LP:

\displaystyle \begin{aligned} \min \; & \sum_{e\in E} c_e x_e \quad \text{s.t.} \\ & \sum_{e \in E} x_e = \vert V \vert - 1 \\ & \sum_{e \in E(S)} x_e \leq \vert S \vert - 1 \quad \forall S \subseteq V, \vert S \vert \geq 2 \\ & \sum_{e \in \delta(v)} x_e \leq b_v \quad \forall v \in W \\ & x_e \geq 0 \quad \forall e \in E \end{aligned}

The first constraint expresses that a spanning tree has exactly {\vert V \vert - 1} edges, the second prevents the formation of cycles and the third enforces the degree bounds. For {W = \emptyset} we have the standard Minimum Spanning Tree problem, and for that problem the polytope is integral. With the degree bounds, we lose this nice property. We can solve this LP using the Ellipsoid Method; the separation oracle for the constraints {\sum_{e \in E(S)} x_e \leq \vert S \vert - 1} is implemented by a flow computation.

Iterated Rounding

Now, let’s go ahead and solve the LP. It would be great if we had an integral solution: we would be done. Unfortunately that is not the case, but we can still hope it is almost integral in some sense: for example, some edges are integral and we can take them into the final solution and recurse on a smaller graph. This is not far from the truth, and that’s the main idea of iterated rounding. We will show that the support of the optimal solution {E(x) = \{e \in E; x_e > 0\}} has some nice structure. Consider the following lemma:

Lemma 1 For any basic solution {x} of the LP, either there is a {v} with just one incident edge in the support {E(x)}, or there is a {v \in W} with at most {3} edges incident to it in the support.

If we can prove this lemma, we can solve the problem in the following way: we begin with an empty tree; then we solve the LP and look at the support {E(x)}. There are two possibilities according to the lemma:

  • If there is a node {v} with just one edge {e = (u,v)} incident to it in the support, we add {e} to the tree, remove {v} from {V}, decrease {b_u}, set {E = E(x)} (the trick is to remove in each iteration the edges of {E} that are not in the support; clearly, removing those edges doesn’t hurt the objective value) and run the algorithm again. Notice that the LP solved in the recursion has value at most the current LP value minus {c_e}. So if by induction we get a spanning tree respecting the new degree bounds plus two, with value at most the new LP value, we can just add {e} and we have a solution with value at most the value of the original LP, respecting the degree bounds plus two.
  • Otherwise, there is a node {v \in W} with at most {3} edges incident to it in the support. We then remove the degree bound on that vertex (i.e. remove {v} from {W}), set {E = E(x)} (again eliminating the edges not in the support) and run the algorithm again. Clearly, a node still in {W} has {b_v \geq 1}, and since there are only three edges incident to it in the support, at most three more edges can become incident to it during the rest of the computation. So its final degree exceeds the original {b_v} by at most {2}.

The algorithm eventually stops, since in each iteration we have fewer edges or fewer nodes in {W}, and the solution is as desired. The main effort is therefore to prove the lemma. But first, let’s look at what the lemma says: it is of the following kind: “any basic solution of the LP has some nice property, which involves the support being not too big (at least at some point)”. So it involves proving that the support is not too large. That is our next task in proving the lemma, and then we will be done, with:

Theorem 2 The algorithm described above produces a spanning tree of cost {\leq Z^*} (the LP value, and therefore {\leq OPT}) in which each node {v \in W} has degree {\leq b_v + 2}.

Bounding the size of the support

We would now like to prove a result like the Lemma above: that in the solution of the LP there is either a {v} with degree {1} in {E(x)} or a node in {W} with degree {\leq 3}. Suppose the opposite: that {(V,E(x))} has all nodes with degree {\geq 2} and all nodes in {W} with degree {\geq 4}. This implies that there is a large number of edges in the support. From the degrees, we know that:

\displaystyle \vert E(x) \vert \geq \frac{1}{2} \left( 2( \vert V \vert - \vert W \vert ) + 4 \vert W \vert \right) = \vert V \vert + \vert W \vert

We want to prove that the support {E(x)} of the LP can’t be too large. The first question is: how do we estimate the size of the support of a basic solution? The constraints look like this:

[figure bst_fig1: the constraint matrix of the LP]

A basic solution can be represented by picking {\vert E \vert} rows of the matrix and making them tight. So, if we have a general {Ax \leq b} LP, we pick some {n \times n} submatrix {A'} of {A} and the basic solution is just {x = A'^{-1} b'}. The rows of matrix {A'} can be of three types: {\chi_S}, corresponding to {\sum_{e \in E(S)} x_e \leq \vert S \vert - 1}; {\chi_v}, corresponding to {\sum_{e \in \delta(v)} x_e \leq b_v}; or {\delta_e}, corresponding to {x_e \geq 0}. There are {\vert E \vert} rows in total. The size of the support {E(x)} is at most the number of rows of the form {\chi_S, \chi_v} in the basic solution. Therefore, the idea for bounding the size of the support is to prove that all basic solutions can be represented by a small number of rows of the form {\chi_S, \chi_v}. And this is done using the following:

Lemma 3 Assuming {E = E(x)}, for any basic solution {x}, there is {Z \subseteq W} and a family {\mathcal{S}} of sets such that:

  1. The restrictions correspondent to {S \in \mathcal{S}} and {v \in Z} are tight for {x}
  2. {\{ \chi_S; S \in \mathcal{S} \} \cup \{ \chi_v; v \in Z \}} is an independent set
  3. {\vert \mathcal{S} \vert + \vert Z \vert = \vert E(x) \vert}
  4. {\mathcal{S}} is a laminar family

The first 3 items are straightforward properties of basic solutions. The fourth one means that for any two sets {S_1, S_2 \in \mathcal{S}}, one of three things happens: {S_1 \subseteq S_2}, {S_2 \subseteq S_1} or {S_1 \cap S_2 = \emptyset}. Now, based on the previous lemma and on the following result, which can be easily proved by induction, we will prove Lemma 1.

Lemma 4 If {\mathcal{S}} is a laminar family over the set {V} where each set {S \in \mathcal{S}} contains at least {2} elements, then {\vert \mathcal{S} \vert \leq \vert V \vert - 1}.
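The bound in Lemma 4 is tight: recursively bisecting a ground set of {n} elements yields a laminar family with exactly {n-1} sets of size at least {2}. A quick Python sketch to check both laminarity and the count (the helper names are mine):

```python
def bisect_family(ground):
    """A laminar family over `ground`: the whole set plus the families
    obtained by recursively halving it (keeping only sets with >= 2 elements)."""
    if len(ground) < 2:
        return []
    mid = len(ground) // 2
    return ([tuple(ground)]
            + bisect_family(ground[:mid])
            + bisect_family(ground[mid:]))

def is_laminar(family):
    """Check that any two members are nested or disjoint."""
    sets = [set(s) for s in family]
    return all(a <= b or b <= a or not (a & b)
               for i, a in enumerate(sets) for b in sets[i + 1:])
```

The count satisfies the recurrence {f(n) = 1 + f(\lfloor n/2 \rfloor) + f(\lceil n/2 \rceil)} with {f(1) = 0}, which solves to exactly {n-1}, matching the lemma.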

Now, the proof of Lemma 1 is easy. Let’s do it and then come back to prove Lemma 3. Simply note that {\vert E(x) \vert = \vert \mathcal{S} \vert + \vert Z \vert \leq \vert V \vert - 1 + \vert W \vert}, which contradicts {\vert E(x) \vert \geq \vert V \vert + \vert W \vert}.

Uncrossing argument

And now we arrive at the technical heart of the proof, which is proving Lemma 3. It says that any basic solution can be written as a “structured” basic solution. We start with any basic feasible solution. It already satisfies (1)-(3), so we need to change that representation to satisfy condition (4) as well. We need to get rid of crossing elements, i.e., {S,T \in \mathcal{S}} of the form:

[figure bst_fig2: two crossing sets {S} and {T}]

We do that by means of the:

Lemma 5 (Uncrossing Lemma) If {S} and {T} are intersecting and tight (tight in the sense that their respective constraint is tight), then {S \cup T} and {S \cap T} are also tight and:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

Which corresponds to that picture:

[figure bst_fig3: uncrossing {S} and {T} into {S \cap T} and {S \cup T}]

Proof: First, we note that {x(E(S))} is a supermodular function, i.e.:

\displaystyle x(E(S)) + x(E(T)) \leq x(E(S \cap T)) + x(E(S \cup T))

We can see that by case analysis: every edge appearing on the left side appears on the right side with at least the same multiplicity. Notice also that the inequality is strict iff there are support edges from {S\setminus T} to {T \setminus S}. Now, we have:

\displaystyle \begin{aligned} (\vert S \vert - 1) + (\vert T \vert - 1) & = (\vert S \cap T \vert - 1) + (\vert S \cup T \vert - 1) \geq \\ & \geq x(E(S \cap T)) + x(E(S \cup T)) \geq \\ & \geq x(E(S)) + x(E(T)) = (\vert S \vert - 1) + (\vert T \vert - 1) \end{aligned}

where the first relation is trivial, the second is by feasibility, the third is by supermodularity and the last one is by tightness. So, all of them hold with equality and therefore {S \cap T} and {S \cup T} are tight. We also proved that:

\displaystyle x(E(S \cap T)) + x(E(S \cup T)) = x(E(S)) + x(E(T))

so there can be no edge from {S\setminus T} to {T \setminus S} in {E(x)} and therefore, thinking just of edges in {E(x)} we have:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

\Box
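The case analysis in the supermodularity step can be checked mechanically on a small example; in the sketch below (the names are mine) the edge {(1,4)} goes from {S \setminus T} to {T \setminus S}, so the inequality is strict and the gap is exactly the weight of that crossing edge:

```python
def x_E(S, edges_x):
    """x(E(S)): total fractional weight of the edges with both endpoints in S."""
    return sum(w for (u, v), w in edges_x.items() if u in S and v in S)

# Fractional edge weights on a 4-cycle; (1, 4) crosses from S\T to T\S.
edges_x = {(1, 2): 0.5, (2, 3): 0.7, (3, 4): 0.3, (1, 4): 0.2}
S, T = {1, 2, 3}, {2, 3, 4}

lhs = x_E(S, edges_x) + x_E(T, edges_x)          # x(E(S)) + x(E(T))
rhs = x_E(S & T, edges_x) + x_E(S | T, edges_x)  # x(E(S∩T)) + x(E(S∪T))
# rhs - lhs equals the weight crossing from S\T to T\S (edge (1, 4) here).
```

This is why tightness of {S} and {T} forces the crossing edges out of the support: equality is only possible when no support edge goes from {S\setminus T} to {T \setminus S}.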

Uncrossing arguments are found everywhere in combinatorics. Now, we show how the Uncrossing Lemma can be used to prove Lemma 3:

Proof: Let {x} be any basic solution. It can be represented by a pair {(Z, \mathcal{C})}, where {Z \subseteq W} and {\mathcal{C}} is a family of sets. We will show that the same basic solution can be represented by {(Z, \mathcal{L})}, where {\mathcal{L}} is a laminar family of the same size as {\mathcal{C}}.

Let {\mathcal{S}} be the family of all sets that are tight under {x} and {\mathcal{L}} a maximal laminar family of tight sets in {\mathcal{S}} such that {\{\chi_S; S \in \mathcal{L} \} \cup \{\chi_v; v \in Z \}} is independent. I claim that {\vert \mathcal{L} \vert = dim(span(\mathcal{S}))}.

In fact, suppose {\vert \mathcal{L} \vert < dim(span(\mathcal{S}))}; then there are sets in {\mathcal{S}} we could add to {\mathcal{L}} without violating independence – the problem is that those sets would cross some set of {\mathcal{L}}. Pick such an {S \in \mathcal{S}} intersecting as few sets of {\mathcal{L}} as possible. The set {S} intersects some {T \in \mathcal{L}}. Since both are tight, we can use the Uncrossing Lemma and get:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

since {\chi_S \notin span(\mathcal{L})}, we can’t have simultaneously {\chi_{S \cap T}} and {\chi_{S \cup T}} in {span(\mathcal{L})}. Let’s consider two cases:

  1. {\chi_{S \cap T} \notin span(\mathcal{L})}: then {\chi_{S \cap T} \in span(\mathcal{S}) \setminus span(\mathcal{L})} and {S \cap T} intersects fewer sets of {\mathcal{L}} than {S}, since every set of {\mathcal{L}} that intersects {S \cap T} must intersect {S} as well (because no set of {\mathcal{L}} can cross {T}).
  2. {\chi_{S \cup T} \notin span(\mathcal{L})}: then {\chi_{S \cup T} \in span(\mathcal{S}) \setminus span(\mathcal{L})} and {S \cup T} intersects fewer sets of {\mathcal{L}} than {S}, since every set of {\mathcal{L}} that intersects {S \cup T} must intersect {S} as well.

In either case we have a contradiction to the choice of {S}, so we proved that {\vert \mathcal{L} \vert = dim(span(\mathcal{S}))}: we can generate the whole space of tight constraints with a laminar family. \Box

And this finishes the proof. Let’s go over everything we’ve done: we started with an LP and wanted to prove that the support of each basic solution is not too large. We wanted that in order to show that there is either a node with degree one in the support or a node in {W} with small ({\leq 3}) degree. To prove that the support is small, we showed that any basic solution has a representation in terms of a laminar family, and then used the fact that laminar families of sets can’t be very large. For that, we used the celebrated Uncrossing Lemma.

Note: Most of this is based on my notes from David Williamson’s Approximation Algorithms class. I spent some time thinking about this algorithm and therefore decided to post it here.