Department of Computer Science | Institute of Theoretical Computer Science | CADMO

Theory of Combinatorial Algorithms

Prof. Emo Welzl and Prof. Bernd Gärtner

Almost Uniform Sampling of Unique Sink Orientations

Towards The Peak - Problem Section


Almost Uniform Sampling of Unique Sink Orientations

Let $\varphi$ be a unique sink orientation on the d-cube. Take a direction i and consider any two vertices v, w of the d-cube such that $i\in v$ and $i\not\in w$ (that is, v lies in the ``top facet'' and w in the ``bottom facet'' with respect to direction i). Suppose that within the subcube they span, v and w have the same out-map except possibly in direction i, i.e.,

   \begin{displaymath}
\left((s_{\varphi}(v)\oplus s_{\varphi}(w))\cap (v\oplus
w)\right)\setminus \{i\} = \emptyset.
\end{displaymath}

Then, by necessity, the edges emanating from v and w, respectively, in direction i must have the same orientation. The transitive closure of this relation defines an equivalence relation on the edges in direction i such that equivalent edges must have the same orientation. Observe that this equivalence relation only depends on the unique sink orientations $\underline{\varphi}(i)$ and $\overline{\varphi}(i)$ defined by restricting $\varphi$ to the two facets that are separated by direction i.
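As an illustration, the transitive closure can be computed by a union-find pass over all pairs (v, w). The following Python sketch is ours, not part of the source; it assumes a USO is represented by its out-map, a list s in which s[v] is the bitmask of outgoing directions at vertex v (vertices being the bitmasks 0 .. 2^d - 1), and identifies each direction-i edge by its bottom endpoint.

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def edge_classes(s, d, i):
    """Equivalence classes of the direction-i edges of a USO.

    The USO is given by its out-map: s[v] is the bitmask of outgoing
    directions at vertex v.  Each direction-i edge is identified by its
    bottom endpoint w (the endpoint with i not in w)."""
    bit = 1 << i
    bottoms = [w for w in range(1 << d) if not w & bit]
    parent = {w: w for w in bottoms}
    for v in range(1 << d):
        if not v & bit:
            continue                      # v ranges over the top facet
        for w in bottoms:
            # forcing condition: ((s(v) XOR s(w)) AND (v XOR w)) \ {i} empty
            if ((s[v] ^ s[w]) & (v ^ w) & ~bit) == 0:
                parent[find(parent, v & ~bit)] = find(parent, w)
    classes = {}
    for w in bottoms:
        classes.setdefault(find(parent, w), []).append(w)
    return sorted(sorted(c) for c in classes.values())
```

For instance, in the uniform orientation of the square, s = [0, 1, 2, 3] (all edges oriented toward the sink 0), the two direction-0 edges form singleton classes, whereas in the square USO s = [3, 0, 1, 2] (source 0, sink 1) they are forced into a single class.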

Conversely, if we fix two (d-1)-dimensional unique sink orientations $\underline{\varphi}$ and $\overline{\varphi}$ on these facets, these orientations define an equivalence relation on the edges in direction i. Moreover, if we orient the edges in direction i, we obtain a unique sink orientation on the whole cube iff we give the same orientation to all edges in a common equivalence class.

Based on these observations, Pavel Valtr and Jiří Matoušek designed the following Markov chain, whose stationary distribution is the uniform distribution on the set $\USO_d$ of all unique sink orientations on the d-dimensional cube.

Let $\varphi$ be a given unique sink orientation on the d-cube. Choose a direction $i\in \{1,\ldots,d\}$ uniformly at random. The edges in direction i are then partitioned into a certain number $k=k(\underline{\varphi}(i),\overline{\varphi}(i))$ of equivalence classes. Independently for each equivalence class, with probability 1/2 we flip (reverse) all the edges in that class.
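A single step of the chain can be sketched as follows (again in Python and with our own naming, not the source's; edge_classes repeats the class computation from above so that the block is self-contained, and is_uso verifies the defining property by brute force on small cubes):

```python
import random

def edge_classes(s, d, i):
    """Classes of direction-i edges forced to share an orientation;
    s[v] is the bitmask of outgoing directions at vertex v."""
    bit = 1 << i
    bottoms = [w for w in range(1 << d) if not w & bit]
    parent = {w: w for w in bottoms}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in range(1 << d):
        if v & bit:                       # v in the top facet
            for w in bottoms:
                if ((s[v] ^ s[w]) & (v ^ w) & ~bit) == 0:
                    parent[find(v & ~bit)] = find(w)
    classes = {}
    for w in bottoms:
        classes.setdefault(find(w), []).append(w)
    return list(classes.values())

def chain_step(s, d, rng=random):
    """One transition: pick a direction uniformly at random, then flip
    each equivalence class of that direction independently with
    probability 1/2."""
    s = list(s)
    i = rng.randrange(d)
    bit = 1 << i
    for cls in edge_classes(s, d, i):
        if rng.random() < 0.5:
            for w in cls:                 # reverse the i-edge above w
                s[w] ^= bit
                s[w | bit] ^= bit
    return s

def is_uso(s, d):
    """Brute-force check that every face has a unique sink."""
    for free in range(1 << d):            # free directions of the face
        fixed = ((1 << d) - 1) ^ free
        base = fixed
        while True:                       # all positions of the face
            sinks = sum(1 for v in range(1 << d)
                        if (v & fixed) == base and (s[v] & free) == 0)
            if sinks != 1:
                return False
            if base == 0:
                break
            base = (base - 1) & fixed
    return True
```

Iterating chain_step from any unique sink orientation, every orientation visited is again a unique sink orientation, since each class is flipped as a whole.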

This defines a Markov chain on $\USO_d$. For $\varphi,\psi\in\USO_d$, let $p_{\varphi\psi}$ denote the probability that starting from $\varphi$, we arrive at $\psi$ in one step as described above. Thus, $P=[p_{\varphi\psi}]$ is the transition matrix of our Markov chain.

Note that $p_{\varphi\psi}=0$ unless $\psi$ and $\varphi$ agree on all edges except possibly on some in a fixed direction, that is, unless $\underline{\varphi}(i)=\underline{\psi}(i)$ and $\overline{\varphi}(i)=\overline{\psi}(i)$ for some i (which is unique if $\varphi\neq\psi$). In this case,

\begin{displaymath}p_{\varphi\psi}=p_{\psi\varphi}=\frac{1}{d\,2^{k}},\end{displaymath}

where $k=k(\underline{\varphi}(i),\overline{\varphi}(i))$ denotes the number of equivalence classes into which the edges in direction i are partitioned; given the choice of i, each of the $2^{k}$ possible flip patterns is equally likely.

In particular, it follows that $p_{\varphi\varphi}>0$ for all $\varphi\in\USO_d$. Translated into the terminology of random processes, the first observation means that our Markov chain is symmetric, while the second one implies that it is aperiodic. Moreover, it is also irreducible, i.e., for any two $\varphi,\psi\in\USO_d$, there is a positive probability of getting from $\varphi$ to $\psi$ in a finite number of transitions. This is the case because one can get from any $\varphi$ to the uniform orientation $\upsilon$ (and back, by symmetry): we can make all edges in a given direction i uniformly oriented by flipping suitable equivalence classes of edges in this direction, and since these flips leave the edges of the other directions untouched, proceeding one direction at a time yields the uniform orientation.
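The flipping argument for irreducibility can be made concrete. The sketch below (our code, with the same out-map representation as before) combs one direction at a time: in a USO all edges of a class share an orientation, so flipping exactly the upward-pointing classes makes all direction-i edges point downward, and later directions do not disturb earlier ones.

```python
def edge_classes(s, d, i):
    """Classes of direction-i edges forced to share an orientation."""
    bit = 1 << i
    bottoms = [w for w in range(1 << d) if not w & bit]
    parent = {w: w for w in bottoms}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in range(1 << d):
        if v & bit:
            for w in bottoms:
                if ((s[v] ^ s[w]) & (v ^ w) & ~bit) == 0:
                    parent[find(v & ~bit)] = find(w)
    classes = {}
    for w in bottoms:
        classes.setdefault(find(w), []).append(w)
    return list(classes.values())

def is_uso(s, d):
    """Brute-force check that every face has a unique sink."""
    for free in range(1 << d):
        fixed = ((1 << d) - 1) ^ free
        base = fixed
        while True:
            sinks = sum(1 for v in range(1 << d)
                        if (v & fixed) == base and (s[v] & free) == 0)
            if sinks != 1:
                return False
            if base == 0:
                break
            base = (base - 1) & fixed
    return True

def route_to_uniform(s, d):
    """Comb each direction in turn so that all its edges point downward,
    ending in the uniform orientation with out-map s[v] = v; returns the
    list of intermediate orientations (all of them USOs)."""
    s = list(s)
    trace = [list(s)]
    for i in range(d):
        bit = 1 << i
        for cls in edge_classes(s, d, i):
            w = cls[0]                 # all edges of a class agree
            if s[w] & bit:             # class points upward: flip it
                for u in cls:
                    s[u] ^= bit
                    s[u | bit] ^= bit
        trace.append(list(s))
    return trace
```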

These three observations imply that the Markov chain converges to a stationary distribution, i.e., a probability distribution $\pi=(\pi_\varphi)_{\varphi\in\USO_d}$ such that $\pi P=\pi$, and that $\pi$ is, in fact, the uniform distribution: a symmetric transition matrix is doubly stochastic, so the uniform distribution is stationary.
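For the 2-cube, these claims can be verified exhaustively. The following sketch (our code; brute-force enumeration is feasible only for very small d) builds the transition matrix of the process described above in exact rational arithmetic, assigning probability $1/(d2^{k})$ to each of the $2^{k}$ flip patterns in a chosen direction, and checks that it is symmetric and doubly stochastic with positive diagonal on the 12 USOs of the square.

```python
from fractions import Fraction
from itertools import product

def edge_classes(s, d, i):
    """Classes of direction-i edges forced to share an orientation."""
    bit = 1 << i
    bottoms = [w for w in range(1 << d) if not w & bit]
    parent = {w: w for w in bottoms}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in range(1 << d):
        if v & bit:
            for w in bottoms:
                if ((s[v] ^ s[w]) & (v ^ w) & ~bit) == 0:
                    parent[find(v & ~bit)] = find(w)
    classes = {}
    for w in bottoms:
        classes.setdefault(find(w), []).append(w)
    return list(classes.values())

def is_uso(s, d):
    """Brute-force check that every face has a unique sink."""
    for free in range(1 << d):
        fixed = ((1 << d) - 1) ^ free
        base = fixed
        while True:
            sinks = sum(1 for v in range(1 << d)
                        if (v & fixed) == base and (s[v] & free) == 0)
            if sinks != 1:
                return False
            if base == 0:
                break
            base = (base - 1) & fixed
    return True

def all_usos(d):
    """Enumerate all USOs of the d-cube over all edge orientations."""
    n = 1 << d
    edges = [(w, i) for w in range(n) for i in range(d) if not w & (1 << i)]
    usos = []
    for bits in product([0, 1], repeat=len(edges)):
        s = [0] * n
        for (w, i), up in zip(edges, bits):
            if up:
                s[w] |= 1 << i            # edge oriented upward from w
            else:
                s[w | (1 << i)] |= 1 << i  # edge oriented downward
        if is_uso(s, d):
            usos.append(tuple(s))
    return usos

def transition_matrix(d):
    """Exact transition matrix of the chain on all USOs of the d-cube."""
    usos = all_usos(d)
    index = {s: j for j, s in enumerate(usos)}
    P = [[Fraction(0)] * len(usos) for _ in usos]
    for a, s in enumerate(usos):
        for i in range(d):
            classes = edge_classes(s, d, i)
            k = len(classes)
            for flips in product([0, 1], repeat=k):
                t = list(s)
                for cls, f in zip(classes, flips):
                    if f:
                        for w in cls:
                            t[w] ^= 1 << i
                            t[w | (1 << i)] ^= 1 << i
                P[a][index[tuple(t)]] += Fraction(1, d * 2 ** k)
    return P, usos
```

The lookup index[tuple(t)] would fail if a flip pattern ever produced a non-USO, so building the matrix also exercises the iff-characterization above.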

The question remains, of course, whether we can say anything about the mixing rate.


Ingo Schurr and Uli Wagner ({schurr,uli}@inf.ethz.ch)
2001-09-04