
Linear and $O(n\log n)$ Time Minimum-Cost Matching Algorithms for Quasi-convex Tours

Samuel R. Buss and Peter N. Yianilos

Abstract:

Let $G$ be a complete, weighted, undirected, bipartite graph with $n$ red nodes, $n^\prime$ blue nodes, and symmetric cost function $c(x,y)$. A maximum matching for $G$ consists of $\min\{n,n^\prime\}$ edges from distinct red nodes to distinct blue nodes. Our objective is to find a minimum-cost maximum matching, i.e., one for which the sum of the edge costs has minimal value. This is the weighted bipartite matching problem or, as it is sometimes called, the assignment problem.

We report a new and very fast algorithm for an abstract special case of this problem. Our first requirement is that the nodes of the graph are given as a `quasi-convex tour'. This means that they are provided circularly ordered as $x_1,\ldots,x_N$ where $N = n + n^\prime$, and that for any $x_i, x_j, x_k, x_\ell$, not necessarily adjacent but in tour order, with $x_i,x_j$ of one color and $x_k,x_\ell$ of the opposite color, the following inequality holds:


\begin{displaymath}
c(x_i,x_\ell) + c(x_j,x_k) \le c(x_i,x_k) + c(x_j,x_\ell)
\end{displaymath}

If $n = n^\prime$, our algorithm then finds a minimum-cost matching in $O(N \log N)$ time. Given an additional condition of `weak analyticity', the time complexity is reduced to $O(N)$. In both cases only linear space is required. In the special case where the circular ordering is a line-like ordering, these results apply even if $n \ne
n^\prime$.

Our algorithm is conceptually elegant, straightforward to implement, and free of large hidden constants. As such we expect that it may be of practical value in several problem areas.

Many natural graphs satisfy the quasi-convexity condition. These include graphs whose nodes lie on a line or circle with the canonical tour ordering and costs given by any concave-down function of arclength, and graphs whose nodes lie on an arbitrary convex planar figure with costs given by Euclidean distance.

The weak-analyticity condition applies to points lying on a circle with costs given by Euclidean distance, and we thus obtain the first linear-time algorithm for the minimum-cost matching problem in this setting (and also where costs are given by the $L_1$ or $L_\infty$ metrics).

Given two symbol strings over the same alphabet, we may imagine one to be red and the other blue, and use our algorithms to compute string distances. In this formulation, the strings are embedded in the real line and multiple independent assignment problems are solved, one for each distinct alphabet symbol.

While these examples are somewhat geometrical, it is important to remember that our conditions are purely abstract; hence, our algorithms may find application to problems in which no direct connection to geometry is evident.


\begin{keywords}
assignment problem,
bipartite weighted matching,
computation...
...time,
Monge property,
quadrangle inequality,
string comparison
\end{keywords}


\begin{AMS}
05C70, 05C85, 05C90, 52A37, 68Q20, 68R10, 68U15, 90C27
\end{AMS}

Introduction

The above abstract gives a short overview of the contents of the paper; we give an in-depth discussion of our definitions, results and algorithms below. First, however, we give a quick review of prior related work on matching. We shall consider graphs $G$ with $N$ nodes, partitioned into a set of $n$ red nodes and a set of $n^\prime$ blue nodes, where $N = n + n^\prime$. $G$ is balanced if it has equal numbers of red and blue nodes. There is a symmetric cost function $c(x,y)$, which gives the cost of an edge from node $x$ to node $y$, with $x$ and $y$ of distinct colors. A matching is a set of edges with no endpoints in common that match all the nodes of one color with the same number of nodes of the opposite color. The cost of a matching is the sum of the costs of its edges. The problem of finding a minimum-cost matching for a general bipartite graph is known to have an $O(N^3)$ time algorithm (see Lawler [18] for this and other background on matching), and for graphs with nodes in the plane with Euclidean distance as the cost function, there is an $O(N^{2.5}\log N)$ time algorithm due to Vaidya [22].

The minimum-cost matching problem is substantially easier in the case where the nodes are in line-like order or are circularly ordered. The simplest versions of line-like/circular orderings are where the points lie on a line or lie on a curve homeomorphic to a circle, and the cost $c(x,y)$ of an edge between $x$ and $y$ is equal to the shortest arclength distance between the nodes. The matching problem for this arclength cost function has been studied by Karp-Li [14], Aggarwal et al. [1], Werman et al. [23] and others, and is the `Skis and Skiers' problem of Lawler [18]. Karp-Li have given linear time algorithms for this matching problem; Aggarwal et al. have generalized the linear time algorithm to the transportation problem.

A more general version of the matching problem for graphs in line-like order has been studied by Gilmore-Gomory [10] (see [18]). In this version, the cost of an edge from a red node $x$ forward to a blue node $y$ is defined to equal $\int_x^y f$ and from a blue node $x$ forward to a red node $y$ to equal $\int_x^y g$, for some functions $f$ and $g$. This matching problem has a linear time algorithm provided $f+g\ge 0$.

Another version of the matching problem for line-like graphs is considered by Aggarwal et al. [1]: they use graphs which satisfy a ``Monge'' property, which states that the inequality (1.1) below holds except with the direction of the inequality sign reversed. They give a linear time algorithm for the matching problem for (unbalanced) Monge graphs.

In the prior work most closely related to this paper, Marcotte and Suri [20] consider the matching problem for a circularly ordered, balanced tour in which the nodes are the vertices of a convex polygon and the cost function is equal to Euclidean distance. This matching problem is substantially more complicated than the comparatively simple `Skis and Skiers' type problems; nonetheless, Marcotte and Suri give an $O(N \log N)$ time algorithm which solves this minimum-cost matching problem. For the case where the nodes are the vertices of a simple polygon and the cost function is equal to the shortest Euclidean distance inside the polygon, they give an $O(N\log^2 N)$ time algorithm.

The main results of this paper apply to all the above matching problems on circularly ordered or line-like tours, with the sole exception of unbalanced Monge graphs. For the `Skis and Skiers' and the Gilmore-Gomory problems, Theorem 1.9 gives new linear time algorithms which find minimum-cost matchings different from the traditional ones (and our algorithms are more complicated than is necessary for these simple problems). Our algorithms subsume those of Marcotte and Suri and give some substantial improvements. First, with the weak analyticity condition, we have linear time algorithms for many important cases, whereas Marcotte and Suri's algorithm takes $O(N \log N)$ time. Second, our assumption of quasi-convexity is considerably more general than their planar geometrical setting and allows diverse applications. Third, our algorithms are conceptually simpler than the divide-and-conquer methods used by Marcotte and Suri, and we expect that our algorithms are easier to implement.

All of our algorithms have been implemented as reported in [5]; a brief overview of this implementation is given in section 3.4.

We list some sample applications of our algorithms in Examples 1-8 below. One example of a matching problem solution is shown in Figure 1.1. For this figure, a 74 node bipartite graph was chosen with nodes on the unit circle. For this matching problem, the cost of an edge is equal to the Euclidean distance between its endpoints. The edges shown form a minimum-cost matching.

Figure 1.1: The minimum cost matching for a 74 node graph on the circle with Euclidean distance as the cost function.
\scalebox{0.5}{\includegraphics{circle_fig.ps}}

Our quasi-convex property is equivalent to the ``inverse quadrangle inequality'' used, for instance, by [8], but is weaker than the similar ``inverse Monge property'' of [4]. In fact, we show below that any Monge matching problem may be trivially transformed into a quasi-convex matching problem, but not vice-versa.

Dynamic programming problems based on cost functions which satisfy the (inverse) quadrangle inequality, and some closely related matrix-search problems, have been studied by many authors. However, we have discovered no direct connection between our quasi-convex matching problem and the problems solved by these authors.

The notion of a Monge array [13] is related to that of quasi-convexity, but the Monge condition is stronger (i.e., quasi-convexity is strictly more general). Because of the similarity between the definitions of the two properties, we take the time to illustrate this point in detail. To understand the Monge property in our bipartite setting, imagine the cost function to be an array, and impose the restriction that its first argument selects a red point, and the second a blue point. The array is Monge provided that for all $i,j,k,\ell$ satisfying $1 \leq i < j \leq n$ and $1 \leq k < \ell \leq n^\prime$, we have:

\begin{displaymath}
c(R_i,B_k) + c(R_j,B_\ell) \leq c(R_i,B_\ell) + c(R_j,B_k)
\end{displaymath}

Now given a graph with a Monge cost array, we convert it (in linear time) to a quasi-convex tour by simply visiting the red vertices first, in Monge order, followed by the blue vertices, in reverse Monge order. The quasi-convexity inequality is then an immediate consequence of the Monge property and our reverse ordering of the blue vertices. This reversal is necessary because the sense of the Monge inequality is opposite that of quasi-convexity.
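
This conversion is simple enough to state as code. The following is a minimal sketch under our own conventions (the node representation and the \verb|tour| output array are illustrative assumptions, not taken from the implementation of [5]): it lays out the red indices in Monge order followed by the blue indices in reverse Monge order.

\begin{verbatim}
#include <stddef.h>

/* Sketch: build a quasi-convex tour ordering from a Monge cost array.
 * Red nodes R_1..R_n index the rows of the array, blue nodes B_1..B_n'
 * its columns.  The tour visits R_1,...,R_n and then B_n',...,B_1,
 * so the quasi-convexity inequality follows from the Monge inequality. */
typedef struct { int is_red; size_t index; } node_t;

void monge_to_quasiconvex_tour(size_t n, size_t n_prime, node_t *tour)
{
    size_t t = 0;
    for (size_t i = 0; i < n; i++) {        /* red nodes, Monge order */
        tour[t].is_red = 1; tour[t].index = i; t++;
    }
    for (size_t j = n_prime; j > 0; j--) {  /* blue nodes, reversed */
        tour[t].is_red = 0; tour[t].index = j - 1; t++;
    }
}
\end{verbatim}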

However, not every quasi-convex tour can be rearranged to form a Monge array; we now exhibit such a quasi-convex tour. Its nodes lie along the real line, and costs are given by the square root of inter-node distance; tour order is from left to right. Given a subtour of the form $R_i R_j B_a B_b$, it is easily shown that in any Monge reordering, $B_a \prec B_b$ iff $R_j \prec R_i$. Similarly, given a subtour of the form $R_j B_a B_b R_i$, $B_a \prec B_b$ iff $R_j \prec R_i$. Our counterexample then consists of any tour having a subtour of the form $R B B R R B B$. To understand why, we attach subscripts, resulting in $R_i B_a B_b R_j R_k B_c B_d$, and proceed to apply the two rules above to get the implications:

\begin{displaymath}
B_a \prec B_b ~~\Longrightarrow~~
R_i \prec R_j ~~\Longrightarrow~~
B_d \prec B_c ~~\Longrightarrow~~
R_j \prec R_k.
\end{displaymath}

Also

\begin{displaymath}
B_a \prec B_b ~~\Longrightarrow~~
R_k \prec R_j,
\end{displaymath}

which is a contradiction. Symmetrically, the same conclusion is reached if one begins instead with $B_b \prec B_a$, whence no Monge rearrangement exists.

We now give the definitions necessary to state the main results of this paper. We think of the nodes of the graph $G$ as forming either a line-like or a circular tour; in the case of a circular tour, we think of the node $x_1$ as following again after $x_N$.


\begin{definition}
A sequence of nodes $x_{i_1},x_{i_2},\ldots,x_{i_\ell}$ is in {\em input order} if $i_1<i_2<\cdots<i_\ell$. The sequence is in {\em tour order} if there is some $k$ such that $x_{i_k},\ldots,x_{i_\ell},x_{i_1},\ldots,x_{i_{k-1}}$ is in input order.
\end{definition}


\begin{definition}
The nodes $x_1,\ldots,x_N$ form a {\em quasi-convex} tour if and only if, for all nodes $x_i,x_j,x_k,x_\ell$ in tour order with $x_i,x_j$ of one color and $x_k,x_\ell$ of the opposite color,
\begin{eqnarray}
c(x_i,x_\ell)-c(x_i,x_k) & \le & c(x_j,x_\ell) - c(x_j,x_k).
\end{eqnarray}
\end{definition}

Reordering terms in (1.1) gives

\begin{displaymath}
c(x_i,x_\ell)+c(x_j,x_k) ~ \le ~ c(x_i,x_k)+c(x_j,x_\ell).
\end{displaymath}

To give a geometric intuition to quasi-convexity, note that when $x_i, x_j, x_k, x_\ell$ are the vertices of a quadrilateral, the inequality states that the sum of the lengths of diagonals is greater than or equal to the sum of the lengths of two of the sides.


\begin{definition}
The tour $x_1,\ldots,x_N$ of $G$ is {\em line-like} if and only if it is quasi-convex and, for all $x_i,x_j,x_k$ in input order,
\begin{displaymath}
c(x_i,x_k) ~\ge~ c(x_j,x_k)
\end{displaymath}
if $x_k$ is of opposite color from $x_i$ and $x_j$.
\end{definition}

The property of quasi-convexity is defined independently of the starting point of the tour; i.e., the nodes of the tour can be `rotated' without affecting quasi-convexity. Obviously, the definition of line-like tours is sensitive to the choice of starting point of the tour.

Our main theorems give either $O(N \log N)$ or $O(N)$ time algorithms for all of the following examples, with the exception of example 7:

  1. Let the nodes $x_1,\ldots,x_N$ be sequentially ordered points on a line (e.g., they are real numbers indicating points on the $x$-axis), and let $\vert\vert x_j-x_i\vert\vert$ be the Euclidean distance from $x_i$ to $x_j$. Let $f$ be any concave down function, so $f^{\prime\prime}(x)\le 0$ for all $x$. If the cost function is defined by
    $\displaystyle c(x_i,x_j) ~=~ f(\vert\vert x_j-x_i\vert\vert),$     (1.2)

    then $x_1,\ldots,x_N$ form a quasi-convex tour; a short derivation of this fact is given just after this list. Prior work for examples 1 and 2 gave linear time matching algorithms only for the case where $f(x)$ is a linear function [14,1].

  2. Now let the points $x_1,\ldots,x_N$ lie on a smooth curve $C$ which is homeomorphic to a circle, with the points listed in, say, counterclockwise order. And let $\vert\vert x_j-x_i\vert\vert$ equal the shortest arclength along $C$ from $x_i$ to $x_j$. Again let $f(x)$ be any concave down function. With the cost function given by equation (1.2), the nodes $x_1,\ldots,x_N$ form a quasi-convex tour.

  3. Suppose $x_1,\ldots,x_N$ lie, in that order, on a circle. Let $c(x_i,x_j)$ equal the Euclidean distance from $x_i$ to $x_j$. Since Euclidean distance is a concave down function of the circular arclength, this is a special case of example 2 and the nodes form a quasi-convex tour. In this case, the weak analyticity condition always holds and Main Theorem 1.9 gives an $O(N)$ time algorithm. The best prior algorithm was $O(N \log N)$ time [20].

  4. More generally, if $x_1,\ldots,x_N$ are the vertices of a convex polygon listed in, say, counterclockwise order, and if the cost function is equal to Euclidean distance, then the nodes form a quasi-convex tour. The prior algorithm for this case was $O(N \log N)$ time [20] and our algorithms are either $O(N)$ or $O(N \log N)$ time depending on whether the weak analyticity condition holds.

  5. Some non-convex polygons also have vertices which form a quasi-convex tour. For example, in a polygon shaped as in Figure 1.2, the vertices $A, B, C, D$ will form a quasi-convex tour, provided the angle $\theta$ is not too large. (This is the reason we use `quasi-convex' instead of `convex' to describe tours which satisfy equation (1.1).)

    Figure 1.2: A quasi-convex polygon which is not geometrically convex.
    \scalebox{0.7}{\includegraphics{trek.ps}}

  6. Examples 4 and 5 are also quasi-convex under other distance metrics such as the $L_1$ and $L_\infty$ metrics.

  7. Marcotte and Suri consider graphs where the nodes are the vertices of a simple polygon and the cost function is equal to the length of the shortest connecting path inside the polygon. The nodes of such a polygon form a quasi-convex tour. The prior algorithm and the algorithm of this paper are $O(N\log^2 N)$ time for this example, since the cost function requires $O(\log N)$ time to compute.

  8. In string matching algorithms, the cost of shifting a character's position is specified as a function of the distance shifted. The authors have worked in the past on string matching algorithms [26,27] in which the cost function is a linear function of distance. These prior algorithms have been quite successfully used in commercial applications, especially natural language search, and we expect that the use of a concave down distance function will significantly improve the matching quality. As we discuss in section 5, the setting of example 1 above is precisely what is needed to allow (near) linear time string matching algorithms with concave-down cost functions. A number of authors, including [8], have studied concave down cost functions for string matching; their string matching algorithms are based on least-edit-distance and, in this regard, are quite different from ours. Least-edit-distance string matching algorithms are widely used because they provide rich and flexible string comparison functions; on the other hand, the best general algorithms for computing least-edit-distance require $O(N^2)$ time (see [21]). Our string matching algorithms are not as flexible, but can be tailored to work well for many applications: they have the advantage of being linear time computable.
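
To see why the nodes in example 1 form a quasi-convex tour, we sketch the argument (a standard concavity computation, supplied here for completeness). Let $x_i,x_j,x_k,x_\ell$ be in tour order with $x_i,x_j$ of one color and $x_k,x_\ell$ of the opposite color, at positions $r_i\le r_j\le r_k\le r_\ell$ on the line. Put $u = r_k-r_j$, $v = r_k-r_i$ and $t = r_\ell-r_k$, so that $0\le u\le v$ and $t\ge 0$. Concavity of $f$ means that its increments do not grow with the base point, so $f(v+t)-f(v)\le f(u+t)-f(u)$; substituting back gives

\begin{displaymath}
c(x_i,x_\ell)-c(x_i,x_k) ~\le~ c(x_j,x_\ell)-c(x_j,x_k),
\end{displaymath}

which is exactly inequality (1.1).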

Main Theorem  
\begin{romannum}
\item There is an $O(N\log N)$ time algorithm which solves the minimum-cost matching problem for line-like quasi-convex tours.
\item There is an $O(N\log N)$ time algorithm which solves the minimum-cost matching problem for balanced quasi-convex tours.
\end{romannum}

Remark:  The running times of the algorithms are given in terms of the number $N$ of nodes, even though the input size may in some cases need to be $\Omega(N^2)$ to fully specify the values of the cost function. However, in all the examples above, the input size is $O(N)$ since the cost function is specified by the nodes' positions on a line, on a curve, or in the plane. In any event, our runtime analysis assumes that any value $c(x_i,x_j)$ of the cost function can be computed in constant time. If this is not the case, then the runtimes are to be multiplied by the time needed to compute a value of the cost function; this is the situation in example 7 above.

We next define a ``weak analyticity'' condition which will allow yet faster algorithms.


\begin{definition}
Suppose that $x_i$ and $x_j$ are red (respectively, blue) nodes, that $x_i,x_j$ are in tour order, and that $\delta$ is a real number. The {\em $\delta$-crossover point} of $x_i$ and $x_j$ is the first blue (respectively, red) node $x_k$ such that $x_i,x_j,x_k$ are in tour order and
\begin{displaymath}
c(x_i,x_k)-c(x_j,x_k) ~<~ \delta .
\end{displaymath}
If no such $x_k$ exists, then the $\delta$-crossover point does not exist.
\end{definition}

It is not hard to see that the property of quasi-convexity implies that, if the $\delta$-crossover point $x_k$ exists, then $c(x_i,x_\ell)-c(x_j,x_\ell)\ge \delta$ whenever $x_i,x_j,x_\ell,x_k$ are in tour order and $c(x_i,x_\ell)-c(x_j,x_\ell)<\delta$ whenever $x_i, x_j, x_k, x_\ell$ are in tour order. Thus binary search provides an $O(\log N)$-time procedure which, given $x_i$, $x_j$ and $\delta$, will determine if $x_k$ exists and, if so, which node $x_k$ is. This is the approach taken in the algorithms of Theorem 1.4, and is the source of the $\log N$ factor in the runtime. However, in some cases, $x_k$ can be found in constant time and we define:
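
Concretely, the binary search can be organized as in the following sketch (our own illustration; the \verb|cand| array, listing in tour order the nodes of the color opposite to $x_i$ and $x_j$ that follow $x_j$, and the \verb|cost| callback are assumptions of this fragment, not part of the implementation of [5]).

\begin{verbatim}
#include <stddef.h>

/* Sketch: locate the delta-crossover point of nodes i and j by binary
 * search.  Quasi-convexity makes g(k) = cost(i,k) - cost(j,k)
 * non-increasing along the tour, so the first listed node with
 * g(k) < delta is found with O(log N) cost evaluations.  Returns its
 * position in cand[0..m-1], or -1 if no crossover point exists. */
typedef double (*cost_fn)(size_t a, size_t b);

long delta_crossover(size_t i, size_t j, double delta,
                     const size_t *cand, size_t m, cost_fn cost)
{
    size_t lo = 0, hi = m;       /* invariant: answer lies in [lo,hi] */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (cost(i, cand[mid]) - cost(j, cand[mid]) < delta)
            hi = mid;            /* crossover is at or before mid */
        else
            lo = mid + 1;        /* crossover is strictly after mid */
    }
    return lo < m ? (long)lo : -1;
}
\end{verbatim}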


\begin{definition}
A quasi-convex tour satisfies the {\em analyticity condition} provided there is a constant-time procedure which answers questions of the form:
\begin{quote}
``Given nodes $x_i$ and $x_j$ and a real number $\delta$, which node (if any) is the $\delta$-crossover point of $x_i$ and $x_j$?''
\end{quote}
\end{definition}

Even the analyticity condition is too strong to be satisfied in many situations, so we also define a `weak analyticity condition' as follows.


\begin{definition}
Let $x$, $y$ and $z$ be nodes of the same color, in tour order. A {\em relative crossover procedure} is a procedure $\Omega $ which, given such $x$, $y$ and $z$, outputs ``Yes'' or ``No'' and satisfies: (i) if $\chi[x,y]$ strictly precedes $\chi[y,z]$ (in tour order starting at $y$), then $\Omega $ outputs ``Yes''; (ii) if $\chi[y,z]$ strictly precedes $\chi[x,y]$, then $\Omega $ outputs ``No''; (iii) neither (i) nor (ii) applies. Here $\chi[u,v]$ denotes the relative crossover point of $u$ and $v$, as discussed in section 2.2;
when condition~(iii) holds, $\Omega$~may output either answer.
\end{definition}


\begin{definition}
The {\em weak analyticity condition} is said to hold provided there is a
constant-time relative crossover procedure.
\end{definition}

Clearly the strong analyticity condition implies the analyticity condition, which in turn implies the weak analyticity condition. In most applications, we do not have the analyticity or strong analyticity conditions, but the weak analyticity condition does hold in many natural situations. In particular, examples 1, 2, 3 and 4 do satisfy the weak analyticity condition, provided that the concave-down function is sufficiently natural. Consider, for instance, example 1 with the concave-down function $f(x)=x$, $f(x)=\sqrt x$, or $f(x)=\log
x$, etc. For example 1, the input nodes $x_1,\ldots,x_N$ are given with a sequence of real numbers $r_1\le r_2 \le \cdots \le r_N$ which are the positions of the nodes on the real line. Given nodes $x_i$, $x_j$ and $\delta>0$, the first possible position for the $\delta$-crossover of $x_i$ and $x_j$ can be found by solving the equation $f(y-r_i) = \delta + f(y-r_j)$ for $y$; since we assume that arithmetic operations take constant time, the solution $y$ can be found in constant time. Note that $y$ is only the theoretical crossover point; the actual crossover is the first node $x_k$ such that $y\le r_k$. Unfortunately, even after $y$ is known, it will not be possible to determine $x_k$ in constant time, unless some additional information is given about the distribution of the nodes on the real line. Thus, the analyticity condition and strong analyticity conditions do not hold in general for example 1. The reason the analyticity condition does not hold is that, if the theoretical $\epsilon$-crossover point occurs after the theoretical $\delta$-crossover point, then the analyticity algorithm must output `No' if there is a node after the theoretical $\delta$-crossover point and before or at the theoretical $\epsilon$-crossover point, and must output `Yes' otherwise (because in the latter case the two actual crossover points coincide). Unfortunately, there is no general way to decide this in constant time, so the analyticity condition is false. However, the weak analyticity condition does hold, since the function $\Omega $ may operate by computing the theoretical $\delta$-crossover of $x_i$ and $x_j$ and the theoretical $\epsilon$-crossover of $x_j$ and $x_k$ and outputting ``Yes'' if the former is less than the latter.
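
For concreteness, here is this computation carried out for $f(x)=\sqrt x$ (our own worked instance of the equation above). With $r_i<r_j$ and $\delta>0$, the theoretical crossover solves $\sqrt{y-r_i} = \delta + \sqrt{y-r_j}$; writing $s=\sqrt{y-r_j}$ and squaring both sides,

\begin{displaymath}
y-r_i ~=~ \delta^2 + 2\delta s + (y-r_j)
\qquad\Longrightarrow\qquad
s ~=~ \frac{(r_j-r_i)-\delta^2}{2\delta},
\end{displaymath}

so $y = r_j + s^2$ is obtained with a constant number of arithmetic operations (a solution exists only when $s\ge 0$, i.e., $\delta\le\sqrt{r_j-r_i}$; otherwise the difference $f(y-r_i)-f(y-r_j)$ is already below $\delta$ at every node past $x_j$).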

For similar reasons, example 3 satisfies the weak analyticity condition: in this case, since the nodes lie on a circle and the cost function is Euclidean distance, the theoretical crossover position is computed (in constant time) as the intersection of a hyperbola and the circle. Likewise, the weak analyticity condition also holds for example 2 if the concave-down function is sufficiently nice, and it holds for example 6, where nodes lie on a circle under the $L_1$ and $L_\infty$ metrics. Example 4, where the nodes form the vertices of a convex polygon, does not seem to satisfy the weak analyticity condition in general; however, some important special cases do. For example, if the vertices of the convex polygon are known to lie on a polygon with a bounded number of sides, on an oval, or on a branch of a hyperbola, then the weak analyticity condition does hold.

The analyticity condition has been implicitly used by Hirschberg-Larmore [12] who defined a Bridge function which is similar to our $\Omega $ function: they give a special case in which Bridge is constant-time computable and thus the analyticity condition holds. Later, Galil-Giancarlo [8] defined a ``closest zero property'' which is equivalent to our strong analyticity condition. As we illustrated above, the analyticity and strong analyticity conditions rarely hold. Thus it is interesting to note that the algorithms of Hirschberg-Larmore and of Galil-Giancarlo will still work, with only minor modifications, if only the weak analyticity condition holds.

Our second main theorem implies that these examples which satisfy the weak analyticity condition have linear time algorithms for minimum-cost matching:

Main Theorem  
\begin{romannum}
\item There is an $O(N)$ time algorithm which solves the minimum-cost matching problem for line-like quasi-convex tours which satisfy the weak analyticity condition.
\item There is an $O(N)$ time algorithm which solves the minimum-cost matching problem for balanced quasi-convex tours which satisfy the weak analyticity condition.
\end{romannum}

Remark:  In order to achieve the linear time algorithms, it is necessary that the nodes of the graph be input in their tour order. This assumption cannot be dropped: without it, sorting can be reduced in linear time to the matching problem for line-like tours.

Our main theorems also apply to minimum-cost matchings for some non-bipartite quasi-convex tours. If a non-bipartite graph $G$ has $N$ nodes and has cost function $c$, then a matching for $G$ is a set of $\lfloor {\textstyle\frac{1}{2}} N \rfloor $ edges with all endpoints distinct. Parts (i) of Main Theorems 1.4 and 1.9 hold also for non-bipartite graphs which are line-like quasi-convex tours. And parts (ii) of Main Theorems 1.4 and 1.9 hold also for non-bipartite graphs which are quasi-convex tours with an even number of nodes. The non-bipartite cases are discussed in section 4; the algorithms are simple modifications of the algorithms for the bipartite tours.

It is apparent that our algorithms can be parallelized, but we have not investigated the precise runtime and processor count needed for a parallel implementation. He [11] has given a PRAM implementation of Marcotte and Suri's algorithm which uses $N$ processors and $O(\log^2 N)$ time, and it is clear that our algorithm can be computed with the same number of processors and the same time bounds using He's methods.

Our algorithms apply to unbalanced tours only if the tours are line-like. This is because, in the line-like case, the leveling process described in the next section, induced by choosing the first node as starting point, is guaranteed to decompose the problem into alternating-color sub-problems, which may be independently solved and reassembled to produce an overall solution. Some of these sub-problems may be unbalanced but, again using the line-like property, we are able to force balance by adding a dummy node when necessary. These, then, are two different uses of the line-like property.

In balanced, unimodal tours, such as the circle, the leveling concept of section 2 holds in a weaker form. However, we have been unable to extend our results to the unbalanced unimodal case. As an example of the difficulty of this, consider the highly eccentric ellipse of Figure 1.3; the bipartite tour containing its four nodes is unbalanced and is neither line-like nor unimodal. Notice that no starting point induces a leveling which places $R_2$ and $B_1$ at the same level, despite the fact that the minimum-cost matching consists of an edge between them. The path to extending our methods to such cases is therefore less clear.

Figure 1.3: A bipartite, unbalanced, unimodal tour for which no leveling process works.
\scalebox{0.7}{\includegraphics{oval.ps}}

Reductions and Lemmas

Reduction to tours of alternating colors

The first step to giving our minimum-cost matching algorithms is to reduce to the special case of tours in which the colors of the nodes alternate. In other words, we will be able to assume w.l.o.g. that $x_1,x_3,x_5,\ldots$ are red and that $x_2,x_4,x_6,\ldots$ are blue.


\begin{definition}
Let $x_i$\ and $x_j$\ be nodes. We write $[x_i,x_j]$\ to deno...
...ightarrow x_\ell$\ with $x_i, x_j, x_k, x_\ell$\ in
tour order.
\end{definition}


\begin{lemma}
Let $G$ be either a line-like quasi-convex tour or a balanced quasi-convex tour. Then a minimum-cost matching for $G$ may be obtained by taking the union of minimum-cost matchings on the \hbox{$\sim$-equivalence} classes of~$G$.
\end{lemma}
To prove Lemma 2.2 we use:
\begin{lemma}
$G$~has a minimum-cost matching in which no jumpers cross.
\end{lemma}

proof: (Sketch) If a minimum-cost matching does have a pair of jumpers which cross, the quasi-convexity property allows them to be `uncrossed' without increasing the total cost. Repeatedly uncrossing jumpers will eventually yield a minimum-cost matching with no crossing jumpers. (See Lemma 1 of [1] for a detailed proof of this.)

Lemma 2.2 is proved by noting that a minimum-cost matching with no crossing jumpers must respect the $\sim$-equivalence classes. This is because, if a jumper $x_i\leftrightarrow x_j$ is in a crossing-free matching with $i<j$, then the nodes in the interval $(x_i,x_j)$ must be matched with each other and thus $(x_i,x_j)$ must have equal numbers of red and blue nodes. In the unbalanced, line-like case, this also depends on the fact that, w.l.o.g., there is no jumper which crosses an unmatched node (this is an immediate consequence of the line-like condition).$\;\Box$

By Lemma 2.2, in order to find a minimum-cost matching, it suffices to extract the $\sim$-equivalence classes and find minimum-cost matchings for each equivalence class independently. It is an easy matter to extract the $\sim$-equivalence classes in linear time by using straightforward counting. Each equivalence class consists of an alternating color subtour: in the balanced case, there are an even number of nodes in each equivalence class, and in the line-like case, there may be an even or odd number of nodes. Thus, to give (near) linear time algorithms for finding matchings, it will suffice to restrict our attention to tours in which the nodes are of alternating colors.

In view of Lemma 2.3, we may restrict our attention to matchings which contain no crossing jumpers. Such a matching will be called crossing-free.

Finally, we can assume w.l.o.g. that the tour is balanced. To see why we can assume this, suppose that $x_1,\ldots,x_N$ is an unbalanced, line-like tour of alternating colors. This means that $x_1$ and $x_N$ are the same color, say red. We can add a new node $x_{N+1}$ to the end of the tour, label it blue, and let $c(x_i,x_{N+1})=0$ for all red $x_i$. These $N+1$ nodes no longer form a line-like tour; however, they do form a balanced quasi-convex tour. Solving the matching problem for the $N+1$ nodes immediately gives a solution to the matching problem on the original $N$ nodes.
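
As a small illustration of this reduction (our own sketch; the wrapper below is not from [5]), the dummy node can be realized by extending the cost function with zero-cost edges:

\begin{verbatim}
#include <stddef.h>

/* Sketch: append a dummy blue node, at index n_dummy, to an unbalanced
 * line-like alternating-color tour whose first and last nodes are red.
 * All edges incident to the dummy node cost zero, so a minimum-cost
 * matching of the balanced N+1 node tour restricts to one of the
 * original N nodes. */
typedef struct {
    double (*cost)(size_t, size_t);  /* original cost function */
    size_t  n_dummy;                 /* index of the dummy blue node */
} balanced_cost;

double cost_balanced(const balanced_cost *bc, size_t i, size_t j)
{
    if (i == bc->n_dummy || j == bc->n_dummy)
        return 0.0;                  /* dummy edges are free */
    return bc->cost(i, j);
}
\end{verbatim}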

Some important lemmas

Since we are now working only with balanced quasi-convex tours of alternating colors, we shall often change the names of the nodes to $R_1,B_1,\ldots,R_M,B_M$; so $R_i$ and $B_j$ refer to the $i$-th red node and the $j$-th blue node in the tour, respectively. (So $x_{2i-1}$ is the same as $R_i$ and $x_{2i}$ is the same as $B_i$.) Note that this means $N=2M$. To simplify notation, we define

\begin{displaymath}
\mbox{\rm$c_i~=~c(R_i,B_i)$\ \qquad and \qquad $c_i^\prime ~=~ c(B_i,R_{i+1})$.}
\end{displaymath}

A greedy matching is a matching which contains no jumpers, i.e., every node is matched to an adjacent node. There are two greedy matchings, namely, the one containing all edges $R_i\leftrightarrow B_i$ and the one containing all edges $B_{i-1}\leftrightarrow R_i$ and the edge $B_M\leftrightarrow R_1$. For $x_i$ and $x_j$ nodes of opposite color, a matching $\sigma $ is said to be greedy on $(x_i,x_j)$ provided it contains as a submatching the unique matching of adjacent nodes contained in the interval $(x_i,x_j)$. We similarly define the notion of $\sigma $ being greedy on a balanced interval $I$, where $I$ is one of the intervals $[x_i,x_j)$, $[x_i,x_j]$ or $(x_i,x_j]$, but with the additional provisos that $x_i\leftrightarrow x_{i+1}$ is in $\sigma $ in the first two cases and that $x_{j-1}\leftrightarrow x_j$ is in $\sigma $ in the second two cases.

The notation $[R_i,B_j]$ has already been defined. In addition, the notation $[i,j]$ denotes the interval of integers $i,i+1,\ldots,j$ if $i<j$, or the (circular) interval $i,i+1,\ldots,M,1,2,\ldots,j$ if $j<i\le M$. We also use the notations $(i,j]$, $[i,j)$ and $(i,j)$ for the intervals with one or both of the endpoints omitted.


\begin{definition}
Let $R_i$ and $B_j$ be nodes. We say that $R_i\rightarrow B_j$ is a {\em candidate} provided that
\begin{displaymath}
c(R_i,B_j) + \sum_{\ell\in[i,j)} c_\ell^\prime ~<~ \sum_{\ell\in[i,j]} c_\ell .
\end{displaymath}
Similarly, $B_i\rightarrow R_j$ is a candidate provided that
\begin{displaymath}
c(B_i,R_j) + \sum_{\ell\in(i,j)} c_\ell ~<~ \sum_{\ell \in [i,j)} c_\ell^\prime .
\end{displaymath}
\end{definition}
Candidates always have endpoints of opposite colors and are directed. It is possible to have both $R_i\rightarrow B_j$ and $B_j\rightarrow R_i$ be (distinct) candidates, or to have one or neither of them candidates.

It is an easy observation that if there are no candidates, then the greedy assignment(s) are minimum-cost matchings. To prove this, suppose $\sigma $ is a minimum-cost matching which contains a jumper: by Lemma 2.3, $\sigma $ may be picked to contain no crossing jumpers. Since there are no crossing jumpers, $\sigma $ must contain a jumper $x_i\leftrightarrow x_j$ such that $\sigma $ is greedy on $(x_i,x_j)$ (namely, pick the jumper so as to minimize the tour-order distance from $x_i$ to $x_j$). Let $\sigma ^\prime $ be the matching which is the same as $\sigma $, except greedy on $[x_i,x_j]$. Clearly $\sigma ^\prime $ has one fewer jumper than $\sigma $, and since $x_i\rightarrow
x_j$ is not a candidate, $\sigma ^\prime $ has cost no greater than $\sigma $. Iterating this construction shows that at least one of the jumper-less greedy matchings must be minimum-cost. To show they are both minimum-cost, let $\sigma_0$ and $\sigma_1$ be the greedy matchings which contain the edges $x_1\leftrightarrow x_2$ and $x_1\leftrightarrow x_N$, respectively. Then $\sigma_0$ can not have cost lower than (respectively, higher than) the cost of $\sigma_1$ since otherwise, $x_2\rightarrow x_1$ ( $x_1\rightarrow x_N$, respectively) would be a candidate.


\begin{definition}
A candidate $x_i\rightarrow x_j$ is a {\em minimal} candidate provided there is no candidate $x_k\rightarrow x_\ell$ with $[x_k,x_\ell]$ a proper subset of $[x_i,x_j]$.
\end{definition}


\begin{lemma}
Consider a balanced quasi-convex tour of alternating colors.
\begin{romannum}
\item If $R_a\rightarrow B_b$ is a minimal candidate, then every minimum-cost, crossing-free matching contains the edges $B_\ell \leftrightarrow R_{\ell+1}$ for all $\ell \in [a,b)$.
\item If $B_a\rightarrow R_b$ is a minimal candidate, then every minimum-cost, crossing-free matching contains the edges $R_{\ell} \leftrightarrow B_\ell$ for all $\ell \in (a,b)$.
\end{romannum}
\end{lemma}

Note that Lemma 2.6 says only that the edges connecting adjacent nodes in the interior of the minimal candidate are in every minimum-cost matching; it does not say that the minimal candidate itself is a jumper in any minimum-cost matching. The proof of Lemma 2.6 is fairly involved and we postpone it until section 2.3. Lemma 2.6 also holds for line-like tours with alternating colors for candidates $x\rightarrow
y$ with $x,y$ in input order.

Lemma 2.6 suggests an algorithm for finding a minimum-cost matching. Namely, if there is a minimal candidate, greedily assign edges in its interior according to Lemma 2.6. This induces a matching problem on the remaining unassigned nodes, and it is clear that any minimum-cost matching on this smaller problem will lead to a minimum-cost matching for the original problem. Iterating this, one can continue removing nodes in the interiors of minimal candidates and reducing the problem size. Eventually a matching problem with no candidates will be reached; in this case, it suffices to greedily match the remaining nodes.

Unfortunately, this algorithm suggested by Lemma 2.6 is not linear time (yet); thus we need to refine Lemma 2.6 somewhat:
\begin{definition}
We define:
\begin{eqnarray*}
\mbox{\sf Bnft}[R_a,B_b] & = & \sum_{\ell\in[a,b]} c_\ell \;-\; \sum_{\ell\in[a,b)} c_\ell^\prime \;-\; c(R_a,B_b) \\
\mbox{\sf Bnft}[B_a,R_b] & = & \sum_{\ell\in[a,b)} c_\ell^\prime \;-\; \sum_{\ell\in(a,b)} c_\ell \;-\; c(B_a,R_b)
\end{eqnarray*}
For $x$ and $y$ the same color, $\mbox{\sf Bnft}[x,y]=-\infty$.
\end{definition}
It is immediate that $\mbox{\sf Bnft}[x,y]>0$ iff $x\rightarrow
y$ is a candidate; in fact, $\mbox{\sf Bnft}[x,y]$ measures the benefit (i.e., the reduction in cost), of using $x\leftrightarrow y$ as a minimal jumper instead of the greedy matching on $[x,y]$.
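
As a concrete check (with our own illustrative numbers), place four nodes on the real line with cost $c(x,y)=\sqrt{\vert\vert x-y\vert\vert}$: $R_1$ at position $0$, $B_1$ at $9$, $R_2$ at $10$ and $B_2$ at $19$. Then $c_1=c_2=3$ and $c_1^\prime=1$, so

\begin{displaymath}
\mbox{\sf Bnft}[R_1,B_2] ~=~ (c_1+c_2) - c_1^\prime - c(R_1,B_2)
~=~ 6 - 1 - \sqrt{19} ~\approx~ 0.64 ~>~ 0 ;
\end{displaymath}

indeed, replacing the greedy matching on $[R_1,B_2]$ (cost $6$) by the jumper $R_1\leftrightarrow B_2$ together with the edge $B_1\leftrightarrow R_2$ (cost $\sqrt{19}+1\approx 5.36$) reduces the cost by exactly this amount, so $R_1\rightarrow B_2$ is a candidate.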

The next lemma forms the basis for the correctness of the matching algorithm given in section 3. The general idea is that the algorithm will scan the nodes in tour order until at least one candidate is found and then, according to Lemma 2.8, the algorithm will choose an interval $(x_\ell,x_k)$ to greedily match. Once the interval $(x_\ell,x_k)$ has been greedily matched, the algorithm need only solve the induced matching problem on the remaining nodes.
\begin{lemma}
Let $G$ be a balanced quasi-convex tour matching problem. Let $1<k\le N$ be such that some candidate ends at $x_k$ but no candidate $x_i\rightarrow x_j$ has $j<k$. Let $m = \max\{\mbox{\sf Bnft}[x_i,x_k] : i<k\}$ and let $\ell = \max\{i<k : \mbox{\sf Bnft}[x_i,x_k] = m\}$. Then every minimum-cost, crossing-free matching is greedy on $(x_\ell,x_k)$.
\end{lemma}

proof: The proof is, in essence, an iteration of Lemma 2.6. We argue by induction on $k$. Let $G$, $k$, $m$ and $\ell$ satisfy the hypothesis of the lemma. Let $s
= \max\{i<k:\mbox{\sf Bnft}[x_i,x_k]>0\}$, so $ x_s\rightarrow x_k$ is a minimal candidate. By Lemma 2.6, any minimum-cost, crossing-free solution for $G$ is greedy on the interval $(x_s,x_k)$. Hence, it will suffice to let $G^\prime$ be the matching problem obtained from $G$ by discarding the nodes $x_{s+1},\ldots,x_{k-1}$ and prove that any minimum-cost, crossing-free solution for $G^\prime$ is greedy on $(x_\ell,x_s]$. If $\ell = s$, there is nothing to prove, so we assume $\ell<s$. Note that $x_k$ is now the $(s+1)$-st node in the $G^\prime$ tour order. We use $\mbox{\sf Bnft}^\prime$ to denote the $\mbox{\sf Bnft}$ function for $G^\prime$.

We claim:
\begin{romannum}
\item If $1\le i<j\le s$, then $\mbox{\sf Bnft}^\prime[x_i,x_j] = \mbox{\sf Bnft}[x_i,x_j]$.
\item If $1\le i<s$, then $\mbox{\sf Bnft}^\prime[x_i,x_k] = \mbox{\sf Bnft}[x_i,x_k]-\mbox{\sf Bnft}[x_s,x_k]$.
\end{romannum}

Claim (i) is immediate from the definition of $\mbox{\sf Bnft}$. The intuitive meaning of (ii) is that the benefit of using the jumper $x_i\leftrightarrow x_k$ is reduced by the benefit already obtained from the jumper $x_s\leftrightarrow x_k$. We formally prove (ii) for the case that $x_i$ and $x_s$ are red and $x_k$ is blue; the opposite colored case has a similar proof. Assume $x_i=R_a$, $x_s=R_b$ and $x_k = B_c$. Then

\begin{eqnarray*}
\mbox{\sf Bnft}^\prime[R_a,B_c] &= & \sum_{\ell\in[a,b)}c_\ell + c(R_b,B_c)
- \sum_{\ell\in[a,b)}c_\ell^\prime - c(R_a,B_c) \\
\mbox{\sf Bnft}[R_a,B_c] &= & \sum_{\ell\in[a,c]}c_\ell
- \sum_{\ell\in[a,c)}c_\ell^\prime - c(R_a,B_c) \\
\mbox{\sf Bnft}[R_b,B_c] &= & \sum_{\ell\in[b,c]}c_\ell
- \sum_{\ell\in[b,c)}c_\ell^\prime - c(R_b,B_c)
\end{eqnarray*}



From these three equations, Claim (ii) follows immediately.

Now let $m^\prime = \max\{\mbox{\sf Bnft}^\prime[x_i,x_k]: i<s\}$. By Claim (ii), $m^\prime = m-\mbox{\sf Bnft}[x_s,x_k]$; since $\ell<s$, $m^\prime>0$. Likewise, $\ell = \max\{i<s:\mbox{\sf Bnft}^\prime[x_i,x_k] =
m^\prime\}$. Thus, by the induction hypothesis, any minimum-cost solution for $G^\prime$ is greedy on $(x_\ell,x_s]$ and Lemma 2.8 is proved.$\;\Box$


\begin{definition}
The $\Delta$ function is defined by:
\begin{eqnarray*}
\Delta[R_a,R_b] & = & \sum_{\ell\in[a,b)}c_\ell - \sum_{\ell\in[a,b)}c_\ell^\prime \\
\Delta[B_a,B_b] & = & \sum_{\ell\in[a,b)}c_\ell^\prime - \sum_{\ell\in(a,b]}c_\ell
\end{eqnarray*}
\end{definition}

\begin{lemma}
\mbox{\relax}
\begin{romannum}
\item For $R_a,R_b,B_c$ in tour order, $\mbox{\sf Bnft}[R_a,B_c]>\mbox{\sf Bnft}[R_b,B_c]$ if and only if $c(R_a,B_c)-c(R_b,B_c)<\Delta[R_a,R_b]$.
\item For $B_a,B_b,R_c$ in tour order, $\mbox{\sf Bnft}[B_a,R_c]>\mbox{\sf Bnft}[B_b,R_c]$ if and only if $c(B_a,R_c)-c(B_b,R_c)<\Delta[B_a,B_b]$.
\end{romannum}
\end{lemma}
Lemma 2.10 follows immediately from the definitions.
\begin{lemma}
Let $u,v,x,y$ be in tour order with nodes $u$ and $v$ of one color and nodes $x$ and $y$ of the opposite color. Then
\begin{displaymath}
\mbox{\sf Bnft}[u,x]>\mbox{\sf Bnft}[v,x] ~~\Longrightarrow~~ \mbox{\sf Bnft}[u,y]>\mbox{\sf Bnft}[v,y]
\end{displaymath}
\end{lemma}

proof: By Lemma 2.10, $\mbox{\sf Bnft}[u,x]>\mbox{\sf Bnft}[v,x]$ is equivalent to $c(u,x)-c(v,x)<\Delta[u,v]$, and $\mbox{\sf Bnft}[u,y]>\mbox{\sf Bnft}[v,y]$ is equivalent to $c(u,y)-c(v,y)<\Delta[u,v]$. Now, by quasi-convexity, $c(u,x)-c(v,x)\ge c(u,y)-c(v,y)$, which suffices to prove the lemma.$\;\Box$

Let $R_a$ and $R_b$ be distinct red nodes. The previous two lemmas show that if there is any node $B_c$ (with $R_a$, $R_b$ and $B_c$ in tour order) such that $\mbox{\sf Bnft}[R_a,B_c]$ is greater than $\mbox{\sf Bnft}[R_b,B_c]$, then the first such $B_c$ is the $\Delta[R_a,R_b]$-crossover point of $R_a$ and $R_b$. We shall denote this first $B_c$, if it exists, by $\chi[R_a,R_b]$; if it does not exist, then $\chi[R_a,R_b]$ is said to be undefined. Similarly, $\chi[B_a,B_b]$ is defined to be the $\Delta[B_a,B_b]$-crossover point of $B_a$ and $B_b$, and, if defined, is the first $R_c$ where $\mbox{\sf Bnft}[B_a,R_c]$ is greater than $\mbox{\sf Bnft}[B_b,R_c]$.

We now assume that we have a procedure $\Omega(x,y,z)$, which given nodes $x$, $y$, $z$ in tour order returns ``True'' if $\chi[x,y]\prec_y \chi[y,z]$ and returns ``False'' if $\chi[y,z]\prec_y\chi[x,y]$. (If neither condition holds, then $\Omega(x,y,z)$ may return an arbitrary truth value.) If the weak analyticity condition holds, then $\Omega $ is constant time computable. Without this assumption, $\Omega $ is $O(\log N)$ time computable since Lemma 2.11 allows $\chi[-,-]$ to be computable by binary search.

The general idea of the algorithm given in section 3 below is that it will scan the nodes in tour order searching for candidates. Whenever a node is reached that is the head of a candidate, the algorithm will take the candidate specified in Lemma 2.8 (the one that was denoted $x_\ell\rightarrow x_k$) and greedily match the nodes in its interior. The greedily matched nodes are then dropped from consideration and the algorithm resumes its search for a candidate. Suppose that $u$ and $v$ are two nodes already scanned in this process that are being remembered as potential endpoints of candidates. Lemma 2.11 tells us that if a node $x$ is found where $\mbox{\sf Bnft}[u,x]>\mbox{\sf Bnft}[v,x]$, then at all succeeding nodes $y$, $\mbox{\sf Bnft}[u,y]>\mbox{\sf Bnft}[v,y]$. By the criterion of Lemma 2.8, this means that after the node $x$ is found, there is no further reason to consider candidates that begin at node $v$, since any candidate $v\rightarrow y$ would be subsumed by the better candidate $u\rightarrow y$.

To conclude this section we describe the algorithm in very general terms; in section 3 we give the precise specification of the algorithm. The algorithm scans nodes (starting with node $x_1$, say) and maintains three lists. The first list, $\cal M$, contains the nodes in tour order which have been examined so far. The second list, $\mbox{\AMSeusm L}^{\mbox{-}1}$, contains all the red nodes that need to be considered as potential endpoints of candidates (so $\mbox{\AMSeusm L}^{\mbox{-}1}$ is guaranteed to contain all the nodes satisfying the criterion of Lemma 2.8). The third list, $\mbox{\AMSeusm L}^1$, similarly contains all the blue nodes that need to be considered as potential endpoints of candidates. At any point during the scan, the lists will be of the form:

\begin{eqnarray*}
{\cal M} & = & x_1,\ldots,x_{r-1} \\ [1ex]
{\mbox{\AMSeusm L}^{\mbox{-}1}} & = & R_{a_1},\ldots,R_{a_p} \\ [1ex]
{\mbox{\AMSeusm L}^1} & = & B_{b_1},\ldots,B_{b_q}
\end{eqnarray*}



with $\mbox{\AMSeusm L}^{\mbox{-}1}$ and $\mbox{\AMSeusm L}^1$ subsequences of $\cal M$. The following five conditions will be maintained during execution:


\begin{romannum}
% latex2html id marker 330\item
$x_1,\ldots,x_{r-1}$\ are t...
...}1}$\ or $\mbox{\AMSeusm L}^1$\ (depending on which color it is).
\end{romannum}
When scanning the next node $x_r$, the algorithm must do the following (we assume $x_r$ is blue; similar actions are taken for red nodes):

($\beta$)
While $p\ge 2$ and $\mbox{\sf Bnft}[R_{a_{p-1}},x_r] >\mbox{\sf Bnft}[R_{a_{p}},x_r]$, pop $R_{a_p}$ from $\mbox{\AMSeusm L}^{\mbox{-}1}$ and decrement $p$.
($\gamma$)
If $\mbox{\sf Bnft}[R_{a_p},x_r]>0$, greedily match nodes in the interval $(R_{a_p},x_r)$. The matched nodes are discarded from the lists $\cal M$, $\mbox{\AMSeusm L}^{\mbox{-}1}$ and $\mbox{\AMSeusm L}^1$ (the remaining nodes are to be implicitly renumbered at this point).
($\delta$)
While $q\ge 2$ and $\Omega(B_{b_{q-1}},B_{b_{q}},x_r)$, pop $B_{b_q}$ from $\mbox{\AMSeusm L}^1$ and decrement $q$.
Then push $x_r$ onto the end of $\mbox{\AMSeusm L}^1$ (and increment $q$).

Step ($\beta$) is justified by recalling that if $x_r$ is past $\chi[R_{a_{p-1}},R_{a_p}]$, then $R_{a_p}$ may be removed from consideration as an endpoint of a candidate (by Lemma 2.8).

Step ($\delta$) is justified as follows: suppose $R_i =\chi[B_{b_{q-1}},B_{b_{q}}]$ equals or precedes $R_j = \chi[B_{b_{q}},x_r]$ (using tour order, beginning at $B_{b_{q}}$). Then at any future candidate endpoint $x_k$, either $x_k$ follows or equals $R_i$, in which case $\mbox{\sf Bnft}[B_{b_{q-1}},x_k]$ is greater than $\mbox{\sf Bnft}[B_{b_{q}},x_k]$, or $x_k$ precedes $R_j$, in which case $\mbox{\sf Bnft}[x_r,x_k]$ is greater than $\mbox{\sf Bnft}[B_{b_{q}},x_k]$. Thus $B_{b_{q}}$ will never be the starting endpoint of a candidate satisfying the criteria of Lemma 2.8, and we may drop it from consideration.

To justify Step ($\gamma$) we must show that the candidate $R_{a_p}\rightarrow
x_r$ satisfies the criteria from Lemma 2.8: in view of the correctness of the rest of the algorithm, for this it will suffice to show that $\mbox{\sf Bnft}[R_{a_i},x_r]\le \mbox{\sf Bnft}[R_{a_p},x_r]$ for all $1\le i<p$. For this, note that Step ($\beta$) and condition (iii) above ensure that $x_r$ precedes $\chi[R_{a_i},R_{a_{i+1}}]$ for all $1\le i<p$. This, in turn, implies $\mbox{\sf Bnft}[R_{a_i},x_r]\le\mbox{\sf Bnft}[R_{a_{i+1}},x_r]$ for all $i$, which proves the desired inequality.

After the algorithm has scanned all the nodes once, it will have found and processed all candidates $x_i\rightarrow x_j$ where $i<j$. However, since the tour is circular, it is necessary to process candidates $x_i\rightarrow x_j$ with $i>j$. At the end of the first scan, the list $\cal M$ consists of all nodes $x_1,\ldots,x_n$ which have not yet been matched, and $\mbox{\AMSeusm L}^{\mbox{-}1}$ and  $\mbox{\AMSeusm L}^1$ contain nodes $R_{a_1},\ldots,R_{a_p}$ and $B_{b_1},\ldots,B_{b_q}$, as usual. During the second scan, the algorithm is searching for any candidates of the form $R_{a_i}\rightarrow B_j$ with $j<a_i$ or of the form $B_{b_i}\rightarrow R_j$ with $j\le b_i$ (and only for such candidates). To process a node during the second scan, the algorithm pops $x_1$ off the left end of $\cal M$, implicitly renames $x_1$ to $x_n$ and the rest of the nodes $x_i$ to $x_{i-1}$, sets $r=n$, and does Step ($\alpha$) (still assuming $x_r$ is blue):

($\alpha$)
If $x_r$ equals $B_{b_1}$, then pop $B_{b_1}$ from the list  $\mbox{\AMSeusm L}^1$ and implicitly renumber  $\mbox{\AMSeusm L}^1$, decrementing $q$,
and then does steps ($\beta$)-($\delta$), except that in Step ($\delta$), the node $x_r$ is not added to the end of  $\mbox{\AMSeusm L}^1$. The reason for Step ($\alpha$) is that once a node $B_{b_i}$ is encountered on the second scan, $B_{b_i}$ is no longer a possible starting endpoint for a candidate. The reason for not adding $x_r$ to the end of  $\mbox{\AMSeusm L}^1$ in Step ($\delta$) is that $x_r$ can not be the starting endpoint of a candidate, because any such candidate would have already been found earlier.

The second scan will stop as soon as both $\cal L$ lists become empty. At this point no candidates remain and a greedy matching may be used for the remaining nodes in the $\cal M$ list.

The actual description of the algorithm with an efficient implementation is given in section 3, and it is there proved that the algorithm is linear time with the weak analyticity condition and $O(N \log N)$ time otherwise. Although we described steps ($\alpha$)-($\delta$) only for blue $x_r$ above, the algorithm in section 3 uses a toggle $\psi$ to handle both colors with the same code. Finally, one more important feature of the algorithm is the way in which it computes the values of the $\mbox{\sf Bnft}$ function and of the $\Delta[x,y]$ function: it uses intermediate values $I[x]$ which are defined as follows.


\begin{definition}
The $I[x]$ function is defined by
\begin{eqnarray*}
I[R_a] & = & \sum_{\ell\in[1,a)} c_\ell - \sum_{\ell\in[1,a)} c_\ell^\prime \\
I[B_a] & = & \sum_{\ell\in[1,a]} c_\ell - \sum_{\ell\in[1,a)} c_\ell^\prime
\end{eqnarray*}
Note that $I[B_a] = I[R_a]+c(R_a,B_a)$ and $I[R_{a+1}] = I[B_a]-c(B_a,R_{a+1})$.
\end{definition}
It is immediate from the definitions that, if $x,y$ are in tour order (starting from $x_1$), then

\begin{eqnarray*}
\Delta[x,y] &=& I[y]-I[x] \qquad\qquad\qquad \mbox{\rm for $x$, $y$ both red,}\\
\Delta[x,y] &=& I[x]-I[y] \qquad\qquad\qquad \mbox{\rm for $x$, $y$ both blue,}\\
\mbox{\sf Bnft}[x,y] &=& I[y]-I[x]-c(x,y) \qquad \mbox{\rm for $x$\ red, $y$\ blue,}\\
\mbox{\sf Bnft}[x,y] &=& I[x]-I[y]-c(x,y) \qquad \mbox{\rm for $x$\ blue, $y$\ red.}\\
\end{eqnarray*}



These equalities permit the values of $\Delta$ and $\mbox{\sf Bnft}$ to be computed in constant time from the values of $I[-]$. Also, it is important to note that only the relative $I[-]$ values are needed; in other words, it does no harm if the $I[-]$ values are all shifted by an additive constant, since we always use the difference between two $I[-]$ values.
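
As an illustration of this bookkeeping, the following sketch (ours, not the code of [5]) maintains the $I[-]$ values incrementally and evaluates $\Delta$ and $\mbox{\sf Bnft}$ from them in constant time; the node representation and the square-root cost are placeholder assumptions.

\begin{verbatim}
#include <math.h>

/* Sketch: constant-time Delta and Bnft from stored I[-] values.
 * psi is +1 for a blue node and -1 for a red node. */
typedef struct { double pos; int psi; double I; } node;

double cost(const node *x, const node *y)    /* placeholder cost */
{
    return sqrt(fabs(x->pos - y->pos));
}

/* I[x] for the node x that follows prev in tour order: blue nodes add
 * c(prev,x), red nodes subtract it; I of the very first node is 0. */
double I_next(const node *prev, const node *x)
{
    return prev ? prev->I + x->psi * cost(prev, x) : 0.0;
}

/* Delta[x,y] for x,y of one color, in tour order. */
double Delta(const node *x, const node *y)
{
    return x->psi == -1 ? y->I - x->I    /* both red  */
                        : x->I - y->I;   /* both blue */
}

/* Bnft[x,y] for x,y of opposite colors, in tour order. */
double Bnft(const node *x, const node *y)
{
    return x->psi == -1 ? y->I - x->I - cost(x, y)   /* x red, y blue */
                        : x->I - y->I - cost(x, y);  /* x blue, y red */
}
\end{verbatim}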

The $I[-]$ function is not only easy to compute, but also provides an intuitive graphical means of understanding the above lemmas and algorithm description. For example, in Figure 2.1, $R_1\rightarrow B_3$ is a (minimal) candidate whereas $R_1\rightarrow B_1$ and $R_1\rightarrow B_2$ are not candidates. In Figure 2.2(a), the node $B_3$ is the relative crossover, $\chi [R_1,R_2]$, of $R_1$ and $R_2$; on the other hand, in Figure 2.2(b), the relative crossover does not exist. Figure 2.3(a) shows an example where $\Omega (R_1,R_2,R_3)$ is true and Figure 2.3(b) shows an example where $\Omega (R_1,R_2,R_3)$ is false.

Figure 2.1: $R_1\rightarrow B_3$ is a candidate as $I[R_1]+c(R_1,B_3)<I[B_3]$, which is equivalent to $\mbox{\sf Bnft}[R_1,B_3]>0$.
\scalebox{0.7}{\includegraphics{samsfig1.ps}}

Figure 2.2: Illustrations of the relative crossover, $\chi [R_1,R_2]$, of $R_1$ and $R_2$. In (a), $B_3$ is $\chi [R_1,R_2]$, since it is the first node $x$ to satisfy $I[R_2]+c(R_2,x)>I[R_1]+c(R_1,x)$. In (b), the relative crossover does not exist.
\scalebox{0.7}{\includegraphics{samsfig2a.ps}}


\scalebox{0.7}{\includegraphics{samsfig2b.ps}}

Figure 2.3: $\Omega (R_1,R_2,R_3)$ is true in (a) and false in (b).
\scalebox{0.7}{\includegraphics{samsfig3a.ps}}




\scalebox{0.7}{\includegraphics{samsfig3b.ps}}

Proof of Lemma 2.6

We now prove Lemma 2.6.

proof: By symmetry, it will suffice to prove part (i). Since the lemma is trivial in case $a=b$, we assume $a\not= b$. Let $\sigma $ be a crossing-free minimum-cost matching: we must prove that $\sigma $ is greedy on $(R_a,B_b)$. By the crossing-freeness of $\sigma $ and by the fact that $R_a\rightarrow B_b$ is a minimal candidate, $\sigma $ does not contain any jumper with both endpoints in $[R_a,R_b]$, except possibly $R_a\leftrightarrow B_b$ itself. If $R_a\leftrightarrow B_b$ is in $\sigma $, then the same reasoning shows that $\sigma $ is greedy on $(R_a,B_b)$; so we suppose that $R_a\leftrightarrow B_b$ is not in $\sigma $. Since we are dealing (with no loss of generality) with balanced tours, we may assume that $b=N$, by renumbering nodes if necessary.

Claim (i): $R_a\leftrightarrow B_a$ is not in $\sigma $.
Suppose, for a contradiction, that $R_a\leftrightarrow B_a$ is in $\sigma $. Let $v$ be the least value such that $R_v\leftrightarrow B_q$ is in $\sigma $ for some $q<a<v$. Note that such a $v$, $a<v\le N$, must exist since there are no jumpers in $[R_a,B_N]$ and since $\sigma $ is not greedy on $[R_a,B_N]$ (it can not be greedy on $[R_a,B_N]$, since $R_a\rightarrow B_N$ is a candidate). By choice of $v$, $\sigma $ is greedy on $[R_a,R_v)$. These edges in the matching $\sigma $ are represented by edges drawn above the line in Figure 3.1(a). Since $R_a\rightarrow B_N$ is a minimal candidate, $\mbox{\sf Bnft}[R_a,B_N]>\mbox{\sf Bnft}[R_v,B_N]$, so Lemma 2.10 implies

\begin{displaymath}
\sum_{i\in[a,v)}c_i - \sum_{i\in[a,v)}c_i^\prime >
c(R_a,B_N) - c(R_v,B_N).
\end{displaymath}

Since $R_a$, $R_v$, $B_N$ and $B_q$ are in tour order, quasi-convexity implies

\begin{displaymath}
c(R_a,B_N) - c(R_v,B_N) \ge c(R_a,B_q) - c(R_v,B_q).
\end{displaymath}

Combining these inequalities yields
\begin{displaymath}
\sum_{i\in[a,v)}c_i + c(R_v,B_q) >
\sum_{i\in[a,v)}c_i^\prime + c(R_a,B_q) .
\end{displaymath} (2.1)

Let $\sigma ^\prime $ be the matching obtained from $\sigma $ by replacing the jumper $B_q\leftrightarrow R_v$ and the greedy matching on $[R_a,R_v)$ with the edge $B_q\leftrightarrow R_a$ and the greedy matching on $(R_a,R_v]$. The new edges in $\sigma ^\prime $ are drawn below the line in Figure 3.1(a). By (2.1), $\sigma ^\prime $ has cost strictly less than the cost of $\sigma $, which is a contradiction.

Claim (ii): $R_N\leftrightarrow B_N$ is not in $\sigma $.
Claim (ii) is proved by an argument similar to Claim (i). Alternatively, reverse the colors and the tour order and Claim (ii) is a version of Claim (i).

Claim (iii): The matching $\sigma $ is greedy on $(R_a,B_b)$.
Suppose, for a contradiction, that $\sigma $ is not greedy on $(R_a,B_b)$. In view of Claims (i) and (ii) and since $\sigma $ has no jumpers in $[R_a,B_N]$, this means that there exist $u$ and $v$ such that $u$ is the least value such that $\sigma $ contains $B_u\leftrightarrow R_r$ with $r<a\le u$ and $v$ is the least value such that $\sigma $ contains $R_v\leftrightarrow B_q$ with $q<a\le v$. Namely, let $u$ be the least value $\ge a$ such that $B_u\leftrightarrow R_{u+1}$ is not in $\sigma $ and $v$ be the least value $>u$ such that $R_v\leftrightarrow B_v$ is not in $\sigma $. For these choices of $u$ and $v$, it must be that $q<r<a \le u \le v\le N$ and that $\sigma $ is greedy on $[B_a,R_u]$ and on $[R_{u+1},B_{v-1}]$. These edges in the matching $\sigma $ are represented by edges drawn above the line in Figure 3.1(b).

Since $R_a\rightarrow B_N$ is a candidate,

\begin{displaymath}
\sum_{i\in[a,N]} c_i > c(R_a,B_N) + \sum_{i\in[a,N)} c_i^\prime.
\end{displaymath}

And since it is minimal, neither $R_a\rightarrow B_u$ nor $R_v\rightarrow B_N$ is a candidate; i.e.,

\begin{displaymath}
\sum_{i\in[a,u]} c_i \le c(R_a,B_u) + \sum_{i\in[a,u)} c_i^\prime
\end{displaymath}


\begin{displaymath}
\sum_{i\in[v,N]} c_i \le c(R_v,B_N) + \sum_{i\in[v,N)} c_i^\prime.
\end{displaymath}

Combining these three inequalities gives
\begin{displaymath}
\sum_{i\in(u,v)} c_i + c(R_a,B_u)+c(R_v,B_N) >
\sum_{i\in[u,v)} c_i^\prime + c(R_a,B_N).
\end{displaymath} (2.2)

Since $R_a,R_v,B_N,B_q$ and $R_r,R_a,B_u,B_q$ are in tour order, quasi-convexity implies the two inequalities

\begin{displaymath}
c(R_a,B_q)+c(R_v,B_N) \le c(R_a,B_N)+c(R_v,B_q) \end{displaymath}


\begin{displaymath}
c(R_r,B_q)+c(R_a,B_u) \le c(R_r,B_u)+c(R_a,B_q)
\end{displaymath}

which combine to yield:
$\displaystyle c(R_a,B_N) - c(R_a,B_u) - c(R_v,B_N) \ge$     (2.3)
$\displaystyle \qquad\qquad c(R_r,B_q) - c(R_r,B_u) - c(R_v,B_q).$      

Using (2.2) and (2.3) gives the inequality

\begin{displaymath}
\sum_{i\in(u,v)} c_i + c(R_r,B_u)+c(R_v,B_q) >
\sum_{i\in[u,v)} c_i^\prime + c(R_r,B_q).
\end{displaymath}

Let $\sigma ^\prime $ be the matching obtained from $\sigma $ by replacing the jumpers $R_r\leftrightarrow B_u$ and $R_v\leftrightarrow B_q$ and the greedy matching on $(B_u,R_v)$ with the edge $R_r\leftrightarrow B_q$ and the greedy matching on $[B_u,R_v]$. The new edges in $\sigma ^\prime $ are drawn below the line in Figure 3.1(b). The last inequality above says that $\sigma ^\prime $ has cost strictly less than the cost of $\sigma $, which is a contradiction.$\;\Box$

The Algorithm

In this section, we give the actual algorithm for the Main Theorems. The correctness of the algorithm follows from the development in section 2.2. With D. Robinson and K. Kanzelberger, we have developed efficient implementations in ANSI C of all the algorithms described below [5].

Preliminaries

As mentioned above, the algorithm maintains three lists of nodes called deques (for ``double ended queues'', since we will have to access both ends of the lists). The three deques are the ``main'' deque ${\cal M}$, and two ``left'' deques $\mbox{\AMSeusm L}^1$ and $\mbox{\AMSeusm L}^{\mbox{-}1}$. The latter two are called ``left deques'' since they contain possible left endpoints for candidates. The deques will be updated by push-right operations which add a new node to the right end, by pop-right operations which pop the rightmost node off the deque, and by pop-left operations. However, push-left operations are never required. Deque operations can be efficiently implemented by using contiguous memory locations to store the deque elements and maintaining pointers to the left and right endpoints; each deque operation can then be performed in constant time. For our algorithm, it will suffice to reserve enough space for $2N$ deque elements (with no possibility that a deque will grow leftward since push-left's are not used).
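
The following sketch (our own layout, not the code of [5]) records this deque representation: one contiguous block with left and right cursors, giving constant-time push-right, pop-right and pop-left; since push-left is never used, the $2N$ reserved slots mentioned below can be laid out starting from slot $0$.

\begin{verbatim}
#include <stdlib.h>

/* Sketch: deque of (node, I[node]) pairs in contiguous storage. */
typedef struct { int node; double I; } elem;

typedef struct {
    elem  *slot;    /* block of 2N reserved slots */
    size_t left;    /* index of the leftmost element */
    size_t right;   /* one past the rightmost element */
} deque;

deque deque_new(size_t n)
{
    deque d = { malloc(2 * n * sizeof(elem)), 0, 0 };
    return d;
}

int  deque_empty(const deque *d)   { return d->left == d->right; }
void push_right(deque *d, elem e)  { d->slot[d->right++] = e; }
elem pop_right(deque *d)           { return d->slot[--d->right]; }
elem pop_left(deque *d)            { return d->slot[d->left++]; }
\end{verbatim}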

Figure 3.1: Illustrations of Claims 1 and 3 from the proof of Lemma 2.6. The edges above the lines represent edges in the presumed minimum-cost matching $\sigma $; these are replaced by the edges below the line in the lower-cost matching $\sigma ^\prime $.
\scalebox{0.5}{\includegraphics{claim13.ps}}

Subscripts $R$, $L$, and $R-1$ are used to select the rightmost item, leftmost item, and the item preceding the rightmost, respectively. So $\mbox{\AMSeusm L}^{\mbox{-}1}_L$ refers to the leftmost element of $\mbox{\AMSeusm L}^{\mbox{-}1}$, ${\cal M}_{R-1}$ refers to the item just before the rightmost member of ${\cal M}$, etc. Each deque element is actually a pair, for example, ${\cal M}_R=
(X,I)$; the first entry $X$ of the pair is a node and the second entry $I$ is a numerical value, namely $I=I[X]$ as defined in section 2.2. To simplify notation, we shall use the same notation for a deque element as for the node which is its first component. Thus, ${\cal M}_L$ also denotes the node which is its first component. We write $I[{\cal M}_L]$ to denote its second (numerical) component. Similar conventions apply to the $\mbox{\AMSeusm L}^{\pm 1}$ deques. To simplify our presentation of the algorithm we deal with boundary effects by augmenting the definition of primitive operations as necessary. For example, accessing a non-existent deque element will return an undefined indicator $\emptyset$ and, in general, functions of undefined operands are false or zero (in particular, the cost function $c(-,-)$ and the $I[-]$ functions return zero if they have $\emptyset$ as an argument).

Function Input() returns the next vertex from an imagined input tape which moves in the forward direction only and is assumed to hold a balanced, alternating-color tour. When the tape's end is reached, `undefined' is returned. Procedure Output() is used to write an individual matched pair to an imagined output tape. Pairs are written as discovered, but can easily be output in tour order (with only an extra $O(N)$-time computation).

To use the same code for red nodes and blue nodes, a variable $\psi$ tracks vertex color by toggling between $-1$ and $1$. Our convention is that $\psi=1$ corresponds to blue, and $\psi=-1$ to red.

Narrative Description of the Algorithm

Initialization consists of setting the three deques to be empty and setting the color toggle $\psi := -1$.

The algorithm first reads nodes from the input and pushes them onto the right end of the ${\cal M}$-deque, and then twice scans the nodes in tour order. During the two scans, nodes are popped from the left end of ${\cal M}$ and then pushed onto its right end.8 In addition, while processing a node, some nodes may be popped off the right end of ${\cal M}$ to be matched. It will always be the case that ${\cal M}$ contains a sequence of contiguous nodes in tour order and that the node currently being scanned immediately follows the (formerly) rightmost element of ${\cal M}$.

The variable $\psi$ will be maintained as a color toggle, so that $\psi$ is equal to $-1$ if the node currently being processed is red and to $1$ if the current node is blue. The algorithm used for pushing an element onto the right end of ${\cal M}$ is:

Algorithm 3.1   This procedure pushes a vertex $X$ onto the right of the ${\cal M}$ deque, and computes the corresponding $I[X]$ value, which is pushed along with $X$.


proc Push_Main($X$)
    $I := I[{\cal M}_R] - \psi\cdot c({\cal M}_R,X)$
    push-right $(X,I)$ onto ${\cal M}$
    return()

Algorithm 3.1 merely computes the $I[-]$ value for a node $X$ and pushes the node and its $I[-]$ value on the right end of ${\cal M}$. To justify the computation of the value of $I[X]$, note that if $X$ is blue, then $\psi=1$ and $I[X]$ was defined to equal $I[{\cal M}_R] - c({\cal M}_R,X)$; whereas, if $X$ is red, then $\psi=-1$ and $I[X]$ equals $I[{\cal M}_R]+c({\cal M}_R,X)$. (Unless ${\cal M}$ is empty, in which case $I[X]=0$.)
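
In the same sketched C setting, Push_Main might read as follows; here cost() stands for the cost function $c(-,-)$, and M and psi are the main deque and the color toggle (all names are the illustrative ones from the sketch above):

    extern Deque M;                    /* the main deque */
    extern int psi;                    /* +1 while a blue node is processed, -1 for red */
    extern double cost(int x, int y);  /* the symmetric cost function c(x,y) */

    /* Push X onto the right of M together with I[X] = I[M_R] - psi*c(M_R,X);
       when M is empty, I[X] = 0 by the undefined-operand convention. */
    void Push_Main(int X)
    {
        Elt r = dq_R(&M);              /* UNDEF, with r.I == 0, if M is empty */
        double I = r.I - (dq_size(&M) > 0 ? psi * cost(r.node, X) : 0.0);
        Elt e = { X, I };
        dq_push_right(&M, e);
    }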

Once the current node has been pushed onto the right end of ${\cal M}$, the following code implements Step ($\beta$) from section 2.2:


while $c(\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1},{\cal M}_R) - c(\mbox{\AMSeusm L}^{\mbox{-}\psi}_R,{\cal M}_R) \le \psi\cdot(I[\mbox{\AMSeusm L}^{\mbox{-}\psi}_R]-I[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1}])$
    pop-right $\mbox{\AMSeusm L}^{\mbox{-}\psi}$

To justify the correctness of the while condition, suppose that the currently scanned node is red, so $\psi=-1$. By Lemma 2.10, $\mbox{\sf Bnft}[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1},{\cal M}_R]>\mbox{\sf Bnft}[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R},{\cal M}_R]$ iff $c(\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1},{\cal M}_R)-c(\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R},{\cal M}_R) < \Delta[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1},\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R}]$. Furthermore, $\Delta[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1},\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R}]$ is equal to $\psi\cdot(I[\mbox{\AMSeusm L}^{\mbox{-}\psi}_R] - I[\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1}])$ since $\mbox{\AMSeusm L}^{\mbox{-}\psi}$ contains blue nodes and $\psi=-1$ (by the equalities at the end of section 2.2). In this case, ${\cal M}_R$ is past the crossover point of $\mbox{\AMSeusm L}^{\mbox{-}\psi}_{R-1}$ and  $\mbox{\AMSeusm L}^{\mbox{-}\psi}_R$, so  $\mbox{\AMSeusm L}^{\mbox{-}\psi}_R$ may be discarded from consideration as a left endpoint of a candidate. A similar calculation justifies the case when the current node is blue.

To implement Step ($\gamma$), the following code is used:


if $c({\cal M}_R,\mbox{\AMSeusm L}^{\mbox{-}\psi}_R) < \psi\cdot(I[{\cal M}_R]-I[\mbox{\AMSeusm L}^{\mbox{-}\psi}_R])$ then
    $X$ := pop-right ${\cal M}$
    while ${\cal M}_R \ne \mbox{\AMSeusm L}^{\mbox{-}\psi}_R$
        Match_Pair()
    Push_Main($X$)

where Match_Pair is defined below. The above if statement checks whether $\mbox{\AMSeusm L}^{\mbox{-}\psi}_R\rightarrow {\cal M}_R$ is a candidate; if so, the algorithm greedily assigns edges to the nodes in the interior of the candidate (where `greedily' means with respect to the nodes that have not already been assigned). Before the greedy assignment is started, the rightmost entry is popped from ${\cal M}$ and saved as $X$, to be pushed back on the right end afterwards. There are two reasons for this: first, it gets the current node $X$ out of the way of Match_Pair's operation; second, and more importantly, when $X$ is pushed back onto ${\cal M}$, the $I[-]$ value for the current node is recomputed so as to be correct for the reduced matching problem in which the greedily matched nodes are no longer present. Match_Pair is the following procedure:


proc Match_Pair()
    Output(``${\cal M}_{R-1}\leftrightarrow {\cal M}_R$'')
    pop-right ${\cal M}$
    if ${\cal M}_R = \mbox{\AMSeusm L}^\psi _R$ then pop-right $\mbox{\AMSeusm L}^\psi $
    pop-right ${\cal M}$
    return()

The procedure Match_Pair assigns a jumper ${\cal M}_{R-1}\leftrightarrow {\cal M}_R$ and discards a matched node from the deque $\mbox{\AMSeusm L}^\psi $ if it appears there. Because of the while condition controlling calls to Match_Pair, it is not possible for a matched node to occur in $\mbox{\AMSeusm L}^{\mbox{-}\psi}$, so we do not check for this condition.

To implement Step ($\delta$), the following code is used:


while $\Omega(\mbox{\AMSeusm L}^\psi _{R-1}, \mbox{\AMSeusm L}^\psi _R, {\cal M}_R)$
    pop-right $\mbox{\AMSeusm L}^\psi $
push-right ${\cal M}_R$ onto $\mbox{\AMSeusm L}^\psi $    (without popping ${\cal M}_R$)

That completes the description of how nodes are processed during the first scan. As mentioned earlier, the last instruction (the push-right) is omitted from Step ($\delta$) during the second scan. Other than this, the processing for Steps ($\beta$)-($\delta$) is identical in the two scans.

One potentially confusing aspect of the second scan is that the $I[-]$ values are no longer actually the correct $I[-]$ values: for example, it is no longer the case that $I[{\cal M}_L]$ is necessarily equal to zero. Strictly speaking, the $I[-]$ values all shift by an additive constant when an entry is popped from the left end of ${\cal M}$; however, it is not necessary to implement this shift, since the algorithm only uses differences between $I[-]$ values. The end result is that nothing special needs to be done to the $I$ values when we pop-left ${\cal M}$.

After both scans are completed, any remaining nodes may be greedily matched. As discussed above, there are two possible greedy matchings and both have the same (optimal) cost. Thus either one may be used: the algorithm below just calls Match_Pair repeatedly to assign one of these greedy matchings.

Algorithm 3.2   The complete matching algorithm: initialization (empty deques, $\psi := -1$), the two scans (each node being handled by the Process_Node steps described above), and the windup processing, which repeatedly calls Match_Pair to greedily match any remaining nodes. [Pseudocode figure not reproduced in this rendering.]

The complete matching algorithm is shown as Algorithm 3.2. When interpreting Algorithm 3.2, it is necessary to recall our conventions that any predicate of undefined arguments is to be false. This situation can occur in the four while and if conditions of Process_Node. If $\mbox{\AMSeusm L}^{\mbox{-}\psi}$ is empty, then $\mbox{\AMSeusm L}^{\mbox{-}\psi}_R$ is undefined and the two while conditions and the first if condition are to be false. Similarly, if $\mbox{\AMSeusm L}^{\mbox{-}\psi}$ has $<2$ elements, then the first while condition is to be false; and if $\mbox{\AMSeusm L}^\psi $ has $<2$ elements, then the final while condition is to be false.
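
For concreteness, the control flow just described might be assembled as in the following C schematic. This is our reconstruction from the narrative, not a transcription of Algorithm 3.2: all names are the illustrative ones from the earlier sketches, the Step ($\beta$) and ($\gamma$) tests transcribe our reconstructed pseudocode, and cost() and Omega() must be supplied by the caller (see section 3.3 for $\Omega $):

    #include <stdio.h>

    Deque M, Ldq[2];                      /* Ldq[0] is L^{-1}, Ldq[1] is L^{+1} */
    int psi = -1;                         /* color toggle */

    extern int Omega(Elt Lr1, Elt Lr, Elt Mr);   /* see section 3.3 */

    static Deque *Lcur(void) { return &Ldq[(1 + psi) / 2]; }   /* L^{psi}  */
    static Deque *Lopp(void) { return &Ldq[(1 - psi) / 2]; }   /* L^{-psi} */

    static void Match_Pair(void)
    {
        printf("%d <-> %d\n", dq_Rm1(&M).node, dq_R(&M).node);  /* Output() */
        dq_pop_right(&M);
        if (dq_size(Lcur()) > 0 && dq_R(Lcur()).node == dq_R(&M).node)
            dq_pop_right(Lcur());         /* the matched node was on L^{psi} */
        dq_pop_right(&M);
    }

    static void Process_Node(int X, int second_scan)
    {
        psi = -psi;                       /* colors alternate along the tour */
        Push_Main(X);
        /* Step (beta): discard dominated left endpoints from L^{-psi} */
        while (dq_size(Lopp()) >= 2 &&
               cost(dq_Rm1(Lopp()).node, dq_R(&M).node)
                 - cost(dq_R(Lopp()).node, dq_R(&M).node)
                 <= psi * (dq_R(Lopp()).I - dq_Rm1(Lopp()).I))
            dq_pop_right(Lopp());
        /* Step (gamma): if L^{-psi}_R -> M_R is a candidate, greedily match
           the interior, then re-push X so that I[X] is recomputed */
        if (dq_size(Lopp()) > 0 &&
            cost(dq_R(&M).node, dq_R(Lopp()).node)
              < psi * (dq_R(&M).I - dq_R(Lopp()).I)) {
            Elt x = dq_pop_right(&M);
            while (dq_R(&M).node != dq_R(Lopp()).node)
                Match_Pair();
            Push_Main(x.node);
        }
        /* Step (delta): maintain L^{psi}; the push is omitted on scan two */
        while (dq_size(Lcur()) >= 2 &&
               Omega(dq_Rm1(Lcur()), dq_R(Lcur()), dq_R(&M)))
            dq_pop_right(Lcur());
        if (!second_scan)
            dq_push_right(Lcur(), dq_R(&M));
    }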

The runtime of Algorithm 3.2 is either $O(N)$ or $O(N \log N)$, depending on whether the weak analyticity condition holds. To see this, note that the initialization and the windup processing both take $O(N)$ time. The loops for each of the two scans are executed $\le N$ times. Except for the while loops, each call to Process_Node takes constant time. The second while loop (which calls Match_Pair) is executed more than once only when edges are being output. If the first or third while loop is executed more than once, then vertices are being popped from the $\AMSeusm L$ deques. Since $\lfloor {\textstyle\frac{1}{2}} N \rfloor $ edges are output and since $O(N)$ vertices are pushed onto the $\AMSeusm L$ deques, each of these while loops is executed only $O(N)$ times during the entire execution of the algorithm. An iteration of the first or second while loop takes constant time, while an iteration of the third while loop takes either constant time or $O(\log N)$ time, depending on whether the weak analyticity property holds.

When the weak analyticity condition holds, the $\Omega $ predicate typically operates in constant time by computing two theoretical relative crossovers and comparing their positions. This happens, for example, when the tour consists of points lying on a circle, with the cost function equal to Euclidean distance; section 3.3 outlines a constant-time algorithm for this example. Without the weak analyticity condition, the $\Omega $-predicate runs in logarithmic time, by using a binary search of the ${\cal M}$-deque. This general (not weakly analytic) case is handled by the generic $\Omega $ algorithm discussed in section 3.3.

There are a couple of improvements that can be made to the algorithm which will increase execution speed by a constant factor. Firstly, the calls to Match_Pair made during the ``Windup Processing'' do not need to check if ${\cal M}_R=\mbox{\AMSeusm L}^\psi _R$, since $\mbox{\AMSeusm L}^\psi $ is empty at this time. Secondly, if computing the cost function $c(-,-)$ is more costly than simple addition, then it is possible for Push_Main() to use an alternative method during the two scans to compute the cost $c({\cal M}_R,X)$ for nodes $X$ which have just been popped from the left of ${\cal M}$ (except for the first one popped from the left in the first scan). Namely, the algorithm can save the old $I[X]$ value for the node $X$ as it is left-popped off the deque ${\cal M}$. Then the cost function can be computed by computing the difference between the $I[-]$ value of $X$ and the $I[-]$ of the previous node left-popped from ${\cal M}$. This second improvement applies only to the first Push_Main call in Process_Node.

Algorithms for $\Omega $

During the first scan, the procedure $\Omega $ is called (repeatedly) by Process_Node to determine whether $\mbox{\AMSeusm L}^\psi _R$ should be popped before the current node ${\cal M}_R$ is pushed onto the right end of the $\mbox{\AMSeusm L}^\psi $-deque. During the second scan, ${\cal M}_R$ is never pushed onto the $\mbox{\AMSeusm L}^\psi $-deque; however, using the procedure $\Omega $ allows the $\mbox{\AMSeusm L}^\psi $-deque to be emptied more quickly, thus speeding up the algorithm's execution. (The use of $\Omega $ during the second scan could be omitted entirely without affecting the correctness of the algorithm.)

When $\Omega $ is called, $\mbox{\AMSeusm L}^\psi _{R-1}$, $\mbox{\AMSeusm L}^\psi _R$ and ${\cal M}_R$ are distinct nodes, in tour order, and of the same color. Let $\delta = (I[\mbox{\AMSeusm L}^\psi _{R-1}]-I[\mbox{\AMSeusm L}^\psi _R])\cdot \psi$ and $\epsilon = (I[\mbox{\AMSeusm L}^\psi _R]-I[{\cal M}_R])\cdot \psi$. Let $Y$ denote the $\delta$-crossover point of $\mbox{\AMSeusm L}^\psi _{R-1}$ and $\mbox{\AMSeusm L}^\psi _R$, and let $Z$ denote the $\epsilon$-crossover point of $\mbox{\AMSeusm L}^\psi _R$ and ${\cal M}_R$ (note that $Y$ and/or $Z$ may not exist). By definition, $Y$ and $Z$ are both opposite in color from the other three nodes. The procedure $\Omega $ must return True if $Y$ exists and $Z$ does not, and must return False if $Y$ does not exist. If both exist, it must return True if $\mbox{\AMSeusm L}^\psi _R,Y,Z$ are in tour order, must return False if $\mbox{\AMSeusm L}^\psi _R,Z,Y$ are in tour order, and may return either value if $Y=Z$.

In this section, we discuss two algorithms for $\Omega $. We first discuss a ``generic'' algorithm that works for any cost function, regardless of whether the weak analyticity condition holds. This generic $\Omega $ procedure is shown as Algorithm 3.3 below. The generic $\Omega $ executes a binary search for a node which is at or past one of the crossover points, but is not at or past the other. Obviously, if the crossover $Y$ exists, then it exists in the range $(\mbox{\AMSeusm L}^\psi _R,\mbox{\AMSeusm L}^\psi _{R-1})$, and if the crossover $Z$ exists, it is in the range $({\cal M}_R,\mbox{\AMSeusm L}^\psi _R)$. Furthermore, $Y$ cannot exist in the range $(\mbox{\AMSeusm L}^\psi _R,{\cal M}_R)$, since otherwise $\mbox{\AMSeusm L}^\psi _R$ would have been popped when $Y$ was reached (by the first while loop, Step ($\beta$), in an earlier call to Process_Node). Hence, the binary search may be confined to the range $({\cal M}_R,\mbox{\AMSeusm L}^\psi _{R-1})$, provided that True is returned in the event that the binary search is unsuccessful.

In Algorithm 3.3, a new notation ${\cal M}_k$ is used; this presumes that the ${\cal M}$ deque is implemented as an array, with the elements of ${\cal M}$ filling a contiguous block of array entries. When we write ${\cal M}_k$, we mean the $k$-th entry of the array. The $\AMSeusm L$ deques can contain pointers to ${\cal M}$ deque entries; in fact, in our preferred implementation, the $\AMSeusm L$ deque entries contain only an index of an ${\cal M}$ deque entry. Thus the value $h$ can be found in constant time for Algorithm 3.3.

The generic $\Omega $ algorithm shown in Algorithm 3.3 takes $O(\log N)$ time since it uses a binary search.

Algorithm 3.3   The generic $\Omega $ procedure: a binary search over a contiguous block of the ${\cal M}$ array for a node which is at or past one of the two crossover points but not at or past the other, with True returned if the search is unsuccessful. [Pseudocode figure not reproduced in this rendering.]
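
The core of the generic procedure is the following $O(\log N)$ primitive: by quasi-convexity, as $w$ advances through the search range the test ``$w$ is at or past the $\delta$-crossover of $(a,b)$'' switches from false to true at most once, so the first such $w$ can be located by binary search. The C fragment below sketches this primitive in isolation (our illustration, not Algorithm 3.3 itself, which combines the two crossover tests into a single search; idx[] is a hypothetical array holding the eligible nodes in tour order):

    extern double cost(int x, int y);

    /* Return the least position in idx[lo..hi] whose node w satisfies
       c(a,w) - c(b,w) <= delta, i.e., is at or past the delta-crossover
       of (a,b); return -1 if no node in the range does. */
    int crossover_search(int a, int b, double delta,
                         const int *idx, int lo, int hi)
    {
        int ans = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (cost(a, idx[mid]) - cost(b, idx[mid]) <= delta) {
                ans = mid;               /* witness found */
                hi = mid - 1;            /* look for an earlier one */
            } else {
                lo = mid + 1;
            }
        }
        return ans;
    }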

Next we describe an example of a constant-time algorithm for $\Omega $ where the weak analyticity condition holds. For this example, we assume that the nodes of the quasi-convex tour lie on the unit circle in the $xy$ plane, the cost function is equal to straight-line Euclidean distance, and the tour proceeds in counterclockwise order around the circle. The $\Omega $ algorithm either is given, or computes, the $xy$ coordinates of the three nodes $\mbox{\AMSeusm L}^\psi _{R-1}$, $\mbox{\AMSeusm L}^\psi _R$ and ${\cal M}_R$. It then uses a routine Circle_Crossover to find the theoretical $\delta$-crossover $Y$ of $\mbox{\AMSeusm L}^\psi _{R-1}$ and $\mbox{\AMSeusm L}^\psi _R$ and the theoretical $\epsilon$-crossover $Z$ of $\mbox{\AMSeusm L}^\psi _R$ and ${\cal M}_R$. If the theoretical crossover $Y$ exists, it will be in the interval $[\mbox{\AMSeusm L}^\psi _R,\mbox{\AMSeusm L}^\psi _{R-1}]$, and if $Z$ exists, $Z$ will be in the interval $[{\cal M}_R,\mbox{\AMSeusm L}^\psi _R]$. When $Y$ and $Z$ both exist, the $\Omega $ procedure returns True if $\mbox{\AMSeusm L}^\psi _R,Y,Z$ are in tour order or any two of these nodes are equal; and otherwise returns False.

The $\Omega $ procedure for points on the unit circle: the theoretical crossovers $Y$ and $Z$ are computed with Circle_Crossover and their positions compared, True being returned if $\mbox{\AMSeusm L}^\psi _R,Y,Z$ are in tour order or any two of these nodes are equal. [Pseudocode figure not reproduced in this rendering.]

Of course, the crucial implementation difficulty for the procedure $\Omega $ is the algorithm for Circle_Crossover: this is shown as Algorithm 3.4. Circle_Crossover takes two points $(x_1,y_1)$ and $(x_2,y_2)$ lying on the unit circle in the $xy$-plane and a real value $\delta$. The theoretical $\delta$-crossover point of $(x_1,y_1)$ and $(x_2,y_2)$ is found as an intersection point of the unit circle and the hyperbola consisting of those points whose distance from $(x_1,y_1)$ equals $\delta$ plus their distance from $(x_2,y_2)$ (namely, the intersection which is not between $(x_1,y_1)$ and $(x_2,y_2)$ in tour order). Letting the `half inner product', $hip$, equal $(x_1 x_2 + y_1 y_2)/2$, the distance between the two points is equal to $2c$, where $c= \sqrt{\frac12-hip}$. And the midpoint of the line segment between the two points is distance  $\sqrt{\frac12+hip}$ from the origin. To conveniently express the equation for the hyperbola, we set up $uv$-axes as a rigid motion of the $xy$ axes, positioned so that the points $(x_1,y_1)$ and $(x_2,y_2)$ have $uv$-coordinates $(0,-c)$ and $(0,c)$, respectively. This makes the origin have $uv$-coordinates $(d,0)$, where $d=\pm \sqrt{\frac 12 +hip}$ with the sign being $+$ iff the angle from $(x_1,y_1)$ to $(x_2,y_2)$ is $\le 180^\circ$. In the $uv$-plane, the hyperbola has equation

\begin{displaymath}
\frac{v^2}{a^2} - \frac{u^2}{c^2-a^2} = 1
\end{displaymath}

where $a=\delta/2$, and the unit circle has equation

\begin{displaymath}
(u+d)^2 + v^2 = 1.
\end{displaymath}

Eliminating $v^2$ from these equations and solving for $u$ and then for $v$, shows that the desired intersection point of the circle and the hyperbola has $uv$-coordinates

\begin{eqnarray*}
u & = & (1-d)\left(1-\frac{a^2}{c^2}\right) \\ \nopagebreak
v & = & a \sqrt{ 1 + \frac{u^2}{c^2-a^2}}.
\end{eqnarray*}



Given the $uv$-coordinates, it is an easy matter to find values $\alpha,\beta$ which allow the corresponding $xy$-coordinates to be computed. Algorithm 3.4 shows two equivalent calculations of $\alpha,\beta$; the algorithm chooses the one which avoids division by zero, or division by a number close to zero. Algorithm 3.4, as shown, also checks for some error conditions that can arise from roundoff errors. In particular, it makes sure that $\vert hip\vert\le\frac12$ and that $a^2<c^2$. Amazingly enough, we found during extensive testing with randomly generated tours of points on the unit circle that roundoff error occasionally caused these conditions to be violated, even for points on the unit circle and for $\vert a\vert<\vert c\vert$.
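
The following C sketch records our reading of this computation. It follows the equations above and includes the clamps just mentioned; the sign of $d$ and the mapping back to $xy$-coordinates are our own derivation, chosen so that the limiting cases $\delta \rightarrow 0$ and $\delta \rightarrow \pm 2c$ land on the correct arc, and the near-degenerate configurations that Algorithm 3.4 handles with an alternative $\alpha,\beta$ computation are simply rejected here:

    #include <math.h>

    typedef struct { double x, y; } Pt;

    /* Theoretical delta-crossover of points p1, p2 on the unit circle with
       Euclidean cost, p1 preceding p2 in (counterclockwise) tour order.
       A simplified sketch of Algorithm 3.4; returns 0 if the crossover
       does not exist or the configuration is degenerate. */
    int Circle_Crossover(Pt p1, Pt p2, double delta, Pt *out)
    {
        double hip = (p1.x * p2.x + p1.y * p2.y) / 2;  /* half inner product */
        if (hip >  0.5) hip =  0.5;   /* clamp: roundoff can push |hip| past 1/2 */
        if (hip < -0.5) hip = -0.5;
        double c = sqrt(0.5 - hip);   /* the points are distance 2c apart */
        double a = delta / 2;
        if (a * a >= c * c) return 0; /* hyperbola misses the circle */

        double cross = p1.x * p2.y - p1.y * p2.x;
        double d = (cross <= 0 ? 1.0 : -1.0) * sqrt(0.5 + hip);
        if (fabs(d) < 1e-12 || c < 1e-12) return 0;   /* degenerate; Algorithm
                                                 3.4 switches formulas here */

        /* intersection of the circle and the hyperbola, in uv-coordinates */
        double u = (1 - d) * (1 - (a * a) / (c * c));
        double v = a * sqrt(1 + (u * u) / (c * c - a * a));

        /* map back: P = alpha*(p1+p2)/2 + beta*(p2-p1), our derivation of
           one of the two equivalent alpha, beta computations in the text */
        double alpha = 1 + u / d, beta = v / (2 * c);
        out->x = alpha * (p1.x + p2.x) / 2 + beta * (p2.x - p1.x);
        out->y = alpha * (p1.y + p2.y) / 2 + beta * (p2.y - p1.y);
        return 1;
    }

As a check, $\delta=0$ gives $u=1-d$, $v=0$, and the returned point is the point equidistant from $p_1$ and $p_2$ on the arc not between them in tour order; and as $\delta\rightarrow 2c$ the returned point tends to $p_2$, as it should.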


An ANSI-C Implementation

An efficient and highly portable ANSI-C implementation of our algorithms is described in [5], which includes complete source code, test programs for several interesting cases, benchmark results, and software to produce PostScript graphical representations of the matchings found. To help ensure the correctness of our implementation, a straightforward $O(n^3)$ dynamic programming solution was also implemented, and the results compared for 4,000,000 pseudo-randomly drawn problems. Figure 1.1 shows an example of a matching produced by our software.

Benchmarks on a variety of RISC processors produced nearly identical results when normalized by clock rate, so timing results in [5] are given in units of RISC cycles. Graphs of up to $20,000$ nodes are included in this study.

Recall that $O(\log N)$ time is a worst-case bound for the generic $\Omega $. One interesting experimental result is that, over the range of graph sizes considered, for the specific settings implemented in the test programs, and given the uniform pseudo-random manner in which problem instances were generated, the implementation with the generic $\Omega $ exhibits very nearly linear-time performance; that is, the experimentally observed runtime of $\Omega $ was nearly constant. We suspect that this is primarily a consequence of the uniform random distribution from which problems were drawn, and that it should be possible to demonstrate expected-time results better than $O(\log N)$ for more structured settings.

The benchmarks included one line-like and two circular settings. Solving pseudo-randomly drawn matching problems of size $n$ required on average between $2000$ and $16000$ RISC cycles per node, depending on the setting and on whether a constant-time or generic $\Omega $ was employed. It is interesting to note that in all cases the constant-time $\Omega $ performed better, by a factor ranging from roughly 1.5 to slightly over 3. Thus, for some problems, the linear-time result of this paper may be of practical interest.

Despite our focus on efficiency, further code improvements -- and cost-function evaluation by table lookup -- may yield significant additional performance gains.

Non-bipartite, quasi-convex tours

In this section we show how the earlier algorithms can be applied to non-bipartite, quasi-convex tours. The principal observation is that non-bipartite tours may be made bipartite by the simple construction of making the nodes alternate in color. This was already observed by Marcotte and Suri [20] in a more restrictive setting; we repeat the construction here for the sake of completeness.

First, it is apparent that the proof of Lemma 2.3 still works in the non-bipartite case, and thus any non-bipartite, quasi-convex tour has a minimum-cost matching in which no jumpers cross. This fact implies the following two lemmas:
Lemma 4.1   Let $x_1,\ldots,x_N$ be a non-bipartite, quasi-convex tour with $N$ even. Then there is a minimum-cost matching in which every edge is of the form $x_i\leftrightarrow x_j$ with $i$ even and $j$ odd.

proof: It will suffice to show that any crossing-free matching has this property. Suppose $x_i\leftrightarrow x_j$ is a jumper in a crossing-free matching, with $i<j$. Since $N$ is even, the matching is complete in that every node is matched. The crossing-free property thus implies that the nodes in $(x_i,x_j)$ are matched with each other; so there are an even number of such nodes, i.e., one of $i$ and $j$ is even and the other is odd. $\;\Box$


Lemma 4.2   Let $x_1,\ldots,x_N$ be a non-bipartite, line-like, quasi-convex tour. Then there is a minimum-cost maximum matching in which every edge is of the form $x_i\leftrightarrow x_j$ with $i$ even and $j$ odd.

proof: If $N$ is even, then this lemma is just a special case of Lemma 4.1. If $N$ is odd, then add an additional node $x_{N+1}$ to the end of the tour, with $c(x_i,x_{N+1})=0$ for all $i$. The resulting tour is again quasi-convex and of even length; so the lemma again follows immediately from Lemma 4.1. $\;\Box$

When Lemmas 4.1 and 4.2 apply, we may color the even nodes red and the odd nodes blue and reduce the non-bipartite matching problem to a bipartite matching problem. As an immediate corollary, we have that the two Main Theorems also apply in the non-bipartite setting; namely, for non-bipartite, quasi-convex tours of even length and for non-bipartite, line-like, quasi-convex tours, the matching problem can always be solved in $O(N \log N)$ time and it can be solved in $O(N)$ time if the weak analyticity condition holds.

We do not know whether similar algorithms exist for the case of general (i.e., non-line-like) quasi-convex tours of odd length. Similarly, we do not know any linear or near-linear time algorithms for bipartite, quasi-convex tours which are neither balanced nor line-like.


We conclude this section by mentioning a tantalizing connection between our work and the work of F. F. Yao [25]. Yao gave a quadratic-runtime algorithm for solving the dynamic programming problem

\begin{displaymath}
d(i,j) ~ = ~ c(i,j) + \min\{ d(i,k-1)+d(k,j) : i < k \le j\}
\end{displaymath}

for line-like quasi-convex tours with cost function $c$ (improving on the obvious cubic-time algorithm). Our non-bipartite matching problem can be stated as a similar dynamic programming problem; namely, the minimum-cost, $MC(i,j)$, of a complete matching on the nodes in $[x_i,x_j]$ can be recursively defined to equal

\begin{displaymath}
\min\{ c(i,k) + MC(i+1,k-1) + MC(k+1,j) : i<k\le j \}.
\end{displaymath}

(A similar dynamic programming algorithm can be given for the bipartite matching problem.) The obvious naive algorithm for computing $MC(-,-)$ is cubic-time; however, our main results give (near) linear-time algorithms for line-like quasi-convex tours. This raises the possibility that the dynamic programming problem considered by Yao may also have a near-linear time solution.
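
Although cubic, the recurrence is straightforward to implement bottom-up over intervals, and such an implementation is useful as a cross-check against the fast algorithms (in the spirit of the dynamic-programming check used in our implementation effort [5]). A sketch, with names of our own choosing:

    #include <float.h>

    enum { NMAX = 512 };
    static double MCtab[NMAX + 2][NMAX + 2];   /* MCtab[i][j] = MC(i,j) */

    /* Fill MCtab for a tour of N nodes (N even) with cost function c; the
       answer for the whole tour is MCtab[1][N].  Only even-length intervals
       are considered, since an odd interval admits no complete matching;
       accordingly k ranges over k = i+1, i+3, ... only. */
    void compute_MC(int N, double (*c)(int, int))
    {
        for (int i = 1; i <= N + 1; i++)       /* empty intervals cost 0 */
            for (int j = 0; j < i; j++)
                MCtab[i][j] = 0.0;
        for (int len = 2; len <= N; len += 2)
            for (int i = 1; i + len - 1 <= N; i++) {
                int j = i + len - 1;
                double best = DBL_MAX;
                for (int k = i + 1; k <= j; k += 2) {
                    double t = c(i, k) + MCtab[i + 1][k - 1] + MCtab[k + 1][j];
                    if (t < best) best = t;
                }
                MCtab[i][j] = best;
            }
    }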

Applications to String Matching

As a final topic we briefly discuss the application of our matching results to string comparison. A full treatment is beyond the scope of this paper, but additional details and related algorithms may be found in [6]. Given two symbol strings $v = a_1a_2\cdots a_n$ and $w=b_1b_2\cdots b_n$, our goal is to measure a particular notion of distance between them. Intuitively, distance acts as a measure of similarity, i.e., strings that are highly similar (highly dissimilar) are to have a small (large) distance between them. The purpose of such formulations is usually to approximate human similarity judgments within a pattern classification or information retrieval system.

Suppose $f(x)$ is a monotonically increasing, concave-down function with $f(0)=0$. Let the symbols $a_1,\ldots,a_n$ in $v$ be a graph's red nodes, let $b_1,\ldots,b_n$ in $w$ be its blue nodes, and consider bipartite matchings of these $2n$ symbols. In the simplest formulation we define the cost of an edge $a_i\leftrightarrow b_j$ as $f(\vert j-i\vert)$ if $a_i$ and $b_j$ are the same symbol, and as $f(n)$ if $a_i$ and $b_j$ are distinct symbols. (The cost of matching unequal characters can also be set to any other fixed value instead of $f(n)$.) Our distance, $\sigma(v,w)$, between strings $v$ and $w$ is then the minimum cost of any such bipartite matching.

As an example, consider the two strings ``delve'' and ``level'' and let $f(x)=\sqrt x$. Then the distance between these two strings is $\sqrt5+\sqrt0+\sqrt2+\sqrt1+\sqrt1 \approx 5.65$.

As we have set up our problem above, the computation of $\sigma(v,w)$ is not directly an instance of the quasi-convex matching problem. However we can compute the $\sigma $ function by considering each alphabet symbol $\alpha$ separately, and solving the quasi-convex matching problem $\sigma_\alpha$ which results from restricting attention to occurrences of a single alphabet symbol at a time. To make this clear, we introduce a special symbol ``-'' which indicates the absence of an alphabet symbol. The value of $\sigma(\mbox{\lq\lq delve''},\mbox{\lq\lq level''})$ can be expressed as the sum

\begin{displaymath}
\sigma_d(\mbox{``d----''},\mbox{``-----''})
+ \sigma_e(\mbox{``-e--e''},\mbox{``-e-e-''})
+ \sigma_l(\mbox{``--l--''},\mbox{``l---l''})
+ \sigma_v(\mbox{``---v-''},\mbox{``--v--''})
\end{displaymath}



To make the summed $\sigma_\alpha$ terms equal $\sigma $ as originally defined, each $\sigma_\alpha$ is defined to be the subproblem's minimum matching cost, plus $f(n)/2$ times the number of unmatched symbols.
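
To make the decomposition concrete, the following C sketch computes $\sigma(v,w)$ for equal-length strings with $f(x)=\sqrt x$ by brute force. Each $\sigma_\alpha$ subproblem is solved with a small order-preserving dynamic program over the positions of $\alpha$ (justified because, on the line, some optimal matching is non-crossing), charging $f(n)/2$ for each unmatched symbol. This is an illustration of the definition only -- the algorithms of this paper solve the same subproblems in $O(n)$ or $O(n\log n)$ total time:

    #include <math.h>
    #include <string.h>

    /* sigma-distance with f(x) = sqrt(x) for strings of equal length n.
       Assumes each symbol occurs at most 64 times (illustrative bound). */
    double sigma_distance(const char *v, const char *w)
    {
        int n = (int)strlen(v);               /* assume strlen(w) == n */
        double pen = sqrt((double)n) / 2;     /* f(n)/2 per unmatched symbol */
        double total = 0.0;
        for (int s = 0; s < 256; s++) {
            int A[64], B[64], p = 0, q = 0;   /* positions of symbol s */
            for (int i = 0; i < n; i++) {
                if ((unsigned char)v[i] == s) A[p++] = i;
                if ((unsigned char)w[i] == s) B[q++] = i;
            }
            if (p == 0 && q == 0) continue;
            double D[65][65];     /* D[i][j]: first i of A vs first j of B */
            for (int i = 0; i <= p; i++)
                for (int j = 0; j <= q; j++) {
                    if (i == 0 && j == 0) { D[0][0] = 0.0; continue; }
                    double best = 1e30, t;
                    if (i > 0 && j > 0) {     /* match A[i-1] with B[j-1] */
                        t = D[i-1][j-1] + sqrt(fabs((double)(A[i-1] - B[j-1])));
                        if (t < best) best = t;
                    }
                    if (i > 0 && (t = D[i-1][j] + pen) < best) best = t;
                    if (j > 0 && (t = D[i][j-1] + pen) < best) best = t;
                    D[i][j] = best;
                }
            total += D[p][q];
        }
        return total;
    }

Applied to ``delve'' and ``level'', this returns $\sqrt5+\sqrt2+2 \approx 5.65$, in agreement with the example above.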

We will loosely refer to distance functions that result from this kind of formulation as $\sigma $-distances. Assuming that $f(x)$ satisfies the weak analyticity condition, it is not too difficult to show that it is possible to compute $\sigma(v,w)$ in linear time. If the weak analyticity condition does not hold, then our results give an $O(n\log n)$ time algorithm.

A novel feature of our $\sigma $-distances is that distinct alphabet symbols are treated independently. This is in contrast to most prior work, which has used `least edit distance' for string comparison (see [21] for a survey). As an illustration of the difference between our distance measure and the `edit distance' approach, consider comparing the word ``abcde'' with its mirror image ``edcba''. Our approach recognizes some similarity between these two forms, while the standard `edit distance' approach sees only that the two strings have ``c'' in common -- in essence substituting the first two and last two symbols of each string without noticing the additional occurrences of the same symbols at the opposite end of the other string.

A special form of our $\sigma $-distance measure in which $f(x)=x$, and in which the optimal matching is only approximated, was introduced earlier by the authors and shown to have a simple linear time algorithm [26,27]. Its relationship to $\sigma $-distances is described in [6]. This earlier algorithm has been successfully used in commercial applications, especially for spelling correction in word processing software, typewriters, and hand-held dictionary devices (we estimate that over 15,000,000 such software/hardware units have been sold by Proximity Technology, Franklin Electronic Publishers and their licensees). Other less prominent commercial applications include database field search (e.g., looking up a name or address), and the analysis of multi-field records such as mailing addresses, in order to eliminate near-duplicates. In both of these applications, the strict global left-right ordering imposed by $O(n^2)$ time `edit distance' methods can be problematic. On the other hand, very local left-right order preservation seems to be an important part of similarity perception in humans. One simple adaptation of our $\sigma $-distance methods which goes a long way towards capturing this characteristic consists of extending the alphabet beyond single symbols to include digraphs or multi-graphs. The result is increased sensitivity to local permutation. Another effective alphabet extension technique involves the addition of feature symbols to the alphabet to mark events such as likely phonetic transitions. We expect that the use of general concave-down distance functions (as opposed to $f(x)=x$) will improve the quality of the similarity judgments possible within the $\sigma $-distance framework.

The development above considers strings of equal length only. The unequal length case is not a difficult generalization, but considering it does highlight the issue of embedding. By this we mean that it is implicit in our formulation that the two strings are in a sense embedded into the real line. The particular, rather natural embedding we have assumed so far maps $a_i$ and $b_i$ to value $i$ on the real line -- but others are possible.

A detailed comparison of our methods with `edit distance' approaches is beyond the scope of this paper. But we must point out that the `edit distance' formulation is in several senses richer than ours. First, the cost of matching different alphabet members need not be fixed. Also, our distance formulation depends on a designated embedding, while the `edit distance' method requires no such specification. Finally, for some problems, left-right order preservation may be desirable. On the other hand, even the simplest `edit distance' approach is $O(n^2)$, compared with the $O(n)$ or $O(n\log n)$ complexity of our method. We therefore feel that additional work is needed to better understand the applications of our approach -- and perhaps extend it.

Acknowledgments

We wish to thank Dina Kravets, Dave Robinson, and Warren Smith for helpful discussions -- and Dave Robinson and Kirk Kanzelberger for implementing and testing the algorithms described above.

Bibliography

1
A. AGGARWAL, A. BAR-NOY, S. KHULLER, D. KRAVETS, AND B. SCHIEBER, Efficient minimum cost matching using quadrangle inequality, in Proceedings of the 33rd Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1992, pp. 583-592.

2
A. AGGARWAL AND M. KLAWE, Applications of generalized matrix searching to geometric algorithms, Discrete Applied Mathematics, 27 (1990), pp. 3-23.

3
A. AGGARWAL, M. KLAWE, S. MORAN, P. SHOR, AND R. WILBER, Geometric applications of a matrix-searching algorithm, Algorithmica, 2 (1987), pp. 195-208.

4
A. AGGARWAL AND J. PARK, Notes on searching in multidimensional monotone arrays, in Proceedings of the 29th Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1988, pp. 497-512.

5
S. R. BUSS, K. G. KANZELBERGER, D. ROBINSON, AND P. N. YIANILOS, Solving the minimum-cost matching problem for quasi-convex tours: An efficient ANSI-C implementation, Tech. Rep. CS94-370, U.C. San Diego, 1994.

6
S. R. BUSS AND P. N. YIANILOS, A bipartite matching approach to approximate string comparison and search, Tech. Rep., NEC Research Institute, Princeton, NJ, 1995. In preparation.

7
D. EPPSTEIN, Sequence comparison with mixed convex and concave costs, Journal of Algorithms, 11 (1990), pp. 85-101.

8
Z. GALIL AND R. GIANCARLO, Speeding up dynamic programming with applications to molecular biology, Theoretical Computer Science, 64 (1989), pp. 107-118.

9
Z. GALIL AND K. PARK, A linear-time algorithm for concave one-dimensional dynamic programming, Inf. Process. Lett., 33 (1990), pp. 309-311.

10
P. GILMORE AND R. GOMORY, Sequencing a one state-variable machine: A solvable case of the traveling salesman problem, Operations Research, 12 (1964), pp. 655-679.

11
X. HE, An efficient parallel algorithm for finding minimum weight matching for points on a convex polygon, Inf. Process. Lett., 37 (1991), pp. 111-116.

12
D. S. HIRSCHBERG AND L. L. LARMORE, The least weight subsequence problem, SIAM J. Comput., 16 (1987), pp. 628-638.

13
A. J. HOFFMAN, On simple linear programming problems, in Convexity: Proceedings of the Seventh Symposium in Pure Mathematics of the AMS, V. Klee, ed., American Mathematical Society, 1963, pp. 317-327.

14
R. M. KARP AND S.-Y. R. LI, Two special cases of the assignment problem, Discrete Mathematics, 13 (1975), pp. 129-142.

15
M. M. KLAWE AND D. J. KLEITMAN, An almost linear time algorithm for generalized matrix searching, SIAM J. Disc. Math., 3 (1990), pp. 81-97.

16
D. KRAVETS AND J. K. PARK, Selection and sorting in totally monotone arrays, Mathematical Systems Theory, 24 (1991), pp. 201-220.

17
L. L. LARMORE AND B. SCHIEBER, On-line dynamic programming with applications to the prediction of RNA secondary structure, Journal of Algorithms, 12 (1991), pp. 490-515.

18
E. LAWLER, Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, 1976.

19
Y. MANSOUR, J. K. PARK, B. SCHIEBER, AND S. SEN, Improved selection in totally monotone arrays, in Proceedings, 11th Conf. on Foundations of Software Technology and Theoretical Computer Science, Springer-Verlag, 1991, pp. 347-359.

20
O. MARCOTTE AND S. SURI, Fast matching algorithms for points on a polygon, SIAM J. Comput., 20 (1991), pp. 405-422.

21
D. SANKOFF AND J. B. KRUSKAL, Time Warps, String Edits and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, 1983.

22
P. M. VAIDYA, Geometry helps in matching, SIAM J. Comput., 18 (1989), pp. 1201-1225.

23
M. WERMAN, S. PELEG, R. MELTER, AND T. KONG, Bipartite graph matching for points on a line or a circle, Journal of Algorithms, 7 (1986), pp. 277-284.

24
R. WILBER, The concave least-weight subsequence problem revisited, Journal of Algorithms, 9 (1988), pp. 418-425.

25
F. F. YAO, Speed-up in dynamic programming, SIAM J. Alg. Disc. Meth., 3 (1982), pp. 523-540.

26
P. N. YIANILOS, The definition, computation and application of symbol string similarity functions, Master's thesis, Emory University, 1978.

27
P. N. YIANILOS AND S. R. BUSS, Associative memory circuit system and method, U.S. Patent 4,490,811 (continuation-in-part), December 1984.



Footnotes

... Tours1
A preliminary version of this paper appeared in the Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, January 1994, pp. 65-76.
... Buss2
Supported in part by NSF grants DMS-9205181 and DMS-9503247. Department of Mathematics, University of California, San Diego, La Jolla, California 92092-0112 (sbuss@ucsd.edu)
... Yianilos3
NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540; and Department of Computer Science, Princeton University, 35 Olden Street, Princeton, New Jersey 08544-2087 (pny@research.nj.nec.com)
... condition.4
The definition of the ``closest zero property'' is misstated in [8]: it should be defined as saying that it is possible to find the first $r$ such that $w(l,r)-w(k,r)-a\le 0$ (note their $w$ corresponds to our cost function $c$, and $a$ is a real). However, their algorithm explicitly uses the correct definition of ``closest zero property'' (see their Fact 2).
... tours5
In unimodal tours, the cost function from any node rises and then falls as the tour is traversed.
... cases.6
Note that, of the two greedy matchings for $G$, one is greedy on $[x_1,x_N]$ and the other is greedy on $[x_2,x_1]$.
... [5].7
These C implementations are also available electronically from the authors, or can currently be obtained by anonymous ftp from math.ucsd.edu, or ftp.nj.nec.com
... end.8
For line-like tours, only the first scan is needed; however, we treat only the more general (circular) case.

