Almost Perfect Bisection 1

At ICS 2011, Yuan Zhou from CMU presented a new algorithm for finding bisections in an almost bipartite graph. He raised an interesting open question to me, which motivated me to go through his paper and two related ones. The problem is the following: given a graph that is guaranteed to have a bisection cutting a 1-\epsilon fraction of the edges, how well can we efficiently find some (possibly different) bisection cutting a 1-g(\epsilon) fraction of the edges? It is known that, assuming the Unique Games Conjecture, g(\epsilon) cannot be smaller than \Omega(\epsilon^{1/2}). Zhou et al.'s paper gives a result of g(\epsilon)=O(\epsilon^{1/3}\log(1/\epsilon)), while two earlier results aim at a different benchmark of g(\epsilon)=O(\epsilon\log n). The latter will be covered in a separate post tomorrow.

Algorithm

The main idea of the algorithm is as follows. After a preprocessing step, we may assume that the input graph is already bipartite, V=A\cup B, where the two parts need not have the same size. This step loses only an O(\epsilon^{1/2}) fraction of the edges: it uses the well-known Goemans-Williamson SDP relaxation, which is guaranteed to find a cut of value 1-O(\epsilon^{1/2}) in any graph that has a cut of value 1-\epsilon. The later steps of the algorithm make extensive use of this bipartition A,B to find a good bisection.
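For reference, here is a minimal sketch of the Goemans-Williamson relaxation plus random-hyperplane rounding that this preprocessing relies on. This is a generic MAX-CUT sketch, not the paper's exact preprocessing; the use of cvxpy and networkx is my own choice for illustration.

import cvxpy as cp
import numpy as np
import networkx as nx

def gw_cut(G: nx.Graph):
    """Goemans-Williamson SDP relaxation + one random hyperplane."""
    nodes = list(G.nodes())
    n = len(nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    X = cp.Variable((n, n), PSD=True)           # X[i, j] plays the role of <v_i, v_j>
    objective = cp.Maximize(sum((1 - X[idx[u], idx[v]]) / 2 for u, v in G.edges()))
    cp.Problem(objective, [cp.diag(X) == 1]).solve()
    # Recover unit vectors from the PSD matrix, then round with a random hyperplane.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.maximum(vals, 0))     # rows of V are the SDP vectors
    side = V @ np.random.randn(n) >= 0
    A = {u for u in nodes if side[idx[u]]}
    return A, set(nodes) - A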

The first step is the decomposition step. The algorithm partitions the vertices into V=\cup_i W_i, where each W_i induces either a \lambda-expander or a subgraph small enough that vol(W_i)\leq \delta\, vol(V). Here vol(W) denotes the sum of the degrees of the vertices in W. This decomposition ensures that the number of edges crossing between different W_i's is of the order \sqrt{\lambda}\log(1/\delta)\,|E|. Achieving this requires Cheeger's inequality together with a subroutine guaranteeing that if a graph G has small conductance (small \lambda_2), one can split it into G=A\cup\bar{A} with vol(A)\leq vol(G)/2 such that the number of crossing edges is at most \sqrt{2\lambda_2}\,vol(A). Running this subroutine recursively yields the desired decomposition.
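Below is an illustrative sketch of this recursive decomposition (my own simplified version, not the paper's procedure): split via a Cheeger sweep cut until each piece is either a \lambda-expander, certified by the second eigenvalue of its normalized Laplacian, or has small volume.

import numpy as np
import networkx as nx

def vol(G, nodes):
    return sum(d for _, d in G.degree(nodes))

def cheeger_sweep(G):
    """Return (lambda_2, sweep-cut side A) for graph G."""
    nodes = list(G.nodes())
    L = nx.normalized_laplacian_matrix(G, nodelist=nodes).toarray()
    vals, vecs = np.linalg.eigh(L)
    lam2, fiedler = vals[1], vecs[:, 1]
    order = [nodes[i] for i in np.argsort(fiedler)]
    total = vol(G, nodes)
    best_A, best_phi, prefix = None, np.inf, set()
    for v in order[:-1]:                      # sweep over prefixes of the Fiedler order
        prefix.add(v)
        denom = min(vol(G, prefix), total - vol(G, prefix))
        if denom == 0:
            continue
        phi = nx.cut_size(G, prefix) / denom  # conductance of this prefix cut
        if phi < best_phi:
            best_phi, best_A = phi, set(prefix)
    return lam2, best_A

def decompose(G, lam, delta, total_vol):
    """Partition G's vertices into lambda-expanders and low-volume pieces."""
    if vol(G, G.nodes()) <= delta * total_vol:
        return [set(G.nodes())]               # small piece: stop here
    lam2, A = cheeger_sweep(G)
    if lam2 >= lam:
        return [set(G.nodes())]               # certified lambda-expander
    B = set(G.nodes()) - A
    return (decompose(G.subgraph(A).copy(), lam, delta, total_vol) +
            decompose(G.subgraph(B).copy(), lam, delta, total_vol))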

In the second and the third steps, I will assume that if the optimal bisection cut is V=S\cup T, then for each subgraph W_i the quantity |S_i|:=|W_i\cap S| is known to the algorithm. In the fourth step, I will explain how dynamic programming removes this assumption.

The second step processes the \lambda-expanders W_i. The algorithm is guaranteed to find, in polynomial time, a cut W_i=A_i\cup B_i that is almost perfect and satisfies |A_i|=|S_i| and |B_i|=|T_i|. The reason is that we eventually need to assemble an almost-perfect bisection, so we want this partial cut on W_i to be as close to S,T as possible; in particular, its two sides must have the same sizes as S_i and T_i.

Using the definition of an expander graph, one can show that if a \lambda-expander has two cuts that each cut a 1-\delta fraction of its edges, then the two cuts differ on at most an O(\delta/\lambda) fraction of the edges. We already have a perfect cut A|_{W_i}\cup B|_{W_i} of W_i, and the global optimal bisection S\cup T induces another almost-perfect cut S_i\cup T_i, so the two cannot be too far apart. The key lemma states that we can move some low-degree nodes from B|_{W_i} to A|_{W_i} (or the other way around) to obtain another almost-perfect cut A_i\cup B_i while guaranteeing |A_i|=|S_i| and |B_i|=|T_i|.
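As a toy illustration of the rebalancing move (my own simplification; the actual lemma selects the moved vertices more carefully to control the number of newly uncut edges), one can greedily shift lowest-degree vertices across the cut until the sizes match:

def rebalance(G, A, B, target_left_size):
    """Move low-degree vertices across the cut until |A| equals the target size."""
    A, B = set(A), set(B)
    while len(A) > target_left_size:
        v = min(A, key=G.degree)      # cheapest vertex to move: fewest incident edges
        A.remove(v); B.add(v)
    while len(A) < target_left_size:
        v = min(B, key=G.degree)
        B.remove(v); A.add(v)
    return A, B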

The third step processes the small subgraphs with vol(W_i)\leq \delta\, vol(V). Assuming the optimal bisection satisfies |S_i|\leq \theta|W_i|, the algorithm finds in polynomial time some other cut A_i\cup B_i satisfying |A_i|\leq(\theta+\Delta)|W_i| and \mathrm{uncut}(A_i,B_i)\leq O(\sqrt{\mathrm{uncut}(S_i,T_i)}+\mathrm{uncut}(S,T)/\Delta). Note that after summing over all i, the latter bound becomes O(\sqrt{\epsilon}+\epsilon/\Delta).

The technique here is similar to the original Goemans-Williamson SDP for MAX-CUT, and the term \sqrt{\mathrm{uncut}(S_i,T_i)} comes directly from that algorithm. The extra term \mathrm{uncut}(S,T)/\Delta comes from the requirement to make |A_i| as small as possible (in fact \leq(\theta+\Delta)|W_i|). Unlike the usual rounding, which sets x_i=+1 if \langle v_0,v_i\rangle>0 and -1 otherwise, the new algorithm partitions the nodes into a collection of pairs of sets Z_k^+,Z_k^- according to the value of \langle v_0,v_i\rangle. Each pair Z_k^+,Z_k^- consists of antipodal nodes, and the algorithm picks the smaller of the two to put into A_i. The details are omitted here.
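To make the bucketing concrete, here is a rough sketch of how such a pairing rule could look; this is my own reading of the description above, and the bucket boundaries (num_buckets, uniform widths) are placeholder choices rather than the paper's actual parameters.

import numpy as np

def pair_rounding(nodes, vecs, v0, num_buckets=10):
    """Group nodes into pairs (Z_k^+, Z_k^-) by <v_0, v_i>; keep the smaller set of each pair."""
    buckets = [([], []) for _ in range(num_buckets)]
    for u in nodes:
        x = float(np.dot(v0, vecs[u]))                 # inner product, lies in [-1, 1]
        k = min(int(abs(x) * num_buckets), num_buckets - 1)
        (buckets[k][0] if x >= 0 else buckets[k][1]).append(u)
    A = set()
    for Zp, Zm in buckets:
        A.update(Zp if len(Zp) <= len(Zm) else Zm)     # smaller side of each pair goes to A_i
    return A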

The fourth step is a simple dynamic programming algorithm. Since the previous steps assumed the exact values |S_i| (or equivalently |T_i|), we need dynamic programming to enumerate over all such possibilities. The state Q[a,b] stores the maximum number of cut edges achievable if we only consider the first a subgraphs W_1,\ldots,W_a and require exactly b nodes on one side. The DP itself is straightforward, and by the guarantees of the previous two steps, some entry must correspond to a cut that is very close to the behavior of the optimal S,T.
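Here is a minimal sketch of that DP, assuming we already have a table best_cut[i][s] (a hypothetical name) giving the number of edges cut inside W_i by the step-2/3 cut when its left side is forced to have size s; edges between different pieces are simply not counted, matching the analysis.

def bisection_dp(sizes, best_cut, n):
    """sizes[i] = |W_i|; best_cut[i][s] = cut edges inside W_i with left side of size s."""
    NEG = float("-inf")
    Q = [NEG] * (n + 1)                 # Q[b]: best total over pieces seen so far,
    Q[0] = 0                            # with exactly b nodes on the left side
    for i, w in enumerate(sizes):
        new_Q = [NEG] * (n + 1)
        for b in range(n + 1):
            if Q[b] == NEG:
                continue
            for s in range(w + 1):      # choose |A_i| = s for piece W_i
                if b + s <= n:
                    new_Q[b + s] = max(new_Q[b + s], Q[b] + best_cut[i][s])
        Q = new_Q
    return Q[n // 2]                    # best value with exactly n/2 nodes on the left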

Remark

The final theorem states that the fraction of uncut edges is bounded by O(\sqrt{\lambda}\log(1/\delta)+\sqrt{\epsilon}+\epsilon/\lambda+\epsilon/\Delta). The bottleneck lies in the first and the third terms, which come from the edges crossing between the decomposition pieces and from the uncut edges inside the expander pieces, respectively. Choosing the parameters appropriately finishes the proof: for instance, setting \lambda=\Theta(\epsilon^{2/3}) balances \sqrt{\lambda} against \epsilon/\lambda at roughly \epsilon^{1/3}, and taking \delta polynomial in \epsilon makes the logarithmic factor O(\log(1/\epsilon)), matching the claimed O(\epsilon^{1/3}\log(1/\epsilon)) bound.

The paper also provides an improved algorithm: for any constant tolerance \delta, if we allow the output cut A\cup B=V to satisfy 1/2-\delta<\frac{|A|}{|V|}<1/2+\delta, then we can obtain a 1-O(\sqrt{\epsilon}) guarantee instead of 1-O(\epsilon^{1/3}). The rough idea is to decompose into (\alpha,\lambda)-expanders instead of \lambda-expanders. This definition roughly requires that every bisection within tolerance \alpha cuts more than a \lambda fraction of the edges. It is a weaker requirement because it constrains only near-balanced bisections rather than every partition. Using the traditional SDP (with modifications) directly, one can find a good enough bisection in every such (\alpha,\lambda)-expander. This technique removes the third term in the bound above, but requires a fairly involved analysis, which I omit here.
