
**Chapter 3: Some Subclasses of McNaughton Games**

There are two interrelated algorithmic problems in the investigation of games played on finite graphs. The first is concerned with finding algorithms that determine the winner of the game. The second is concerned with extracting a winning strategy for the winner. These two problems have been investigated by McNaughton [75, 76], Nerode, Yakhnis, and Remmel [84, 85, 86, 118, 119], Zielonka [125], and Thomas [109, 110, 111, 112]. In this chapter we investigate these two problems for a subclass of McNaughton games called update games. In these games Player 0 aims to visit every node of the game graph infinitely often. Update games can be viewed as models of various types of networks in which a message needs to be passed among all members of the network. We pursue several goals. The first is to provide a procedure that generates all the update games in which Player 0 is the winner; it is based on structural properties of the underlying game graphs. The second is to design an algorithm that, given an update game, explicitly constructs a winning strategy for the winner. We provide such an algorithm that runs in quadratic time in the worst case; it is based on the contraction operator first introduced in [26]. The third goal is to find an algorithm that detects the winner of an update game in linear time, provided that the number of nondeterministic nodes of Player 1 is bounded by a constant. This requires a new technique, since the algorithm based on the contraction operator makes no use of the nondeterministic nodes of Player 1. Finally, using the results proved for update games, we extract winning strategies for another subclass of McNaughton games called fully separated games.

**Contraction and Unfolding Operators**

Our goal is to introduce an operator that reduces the sizes of update games. This will be important for building finite state winning strategies in update games. Our definition follows [26].

**Update Games and Their Basic Properties**

Update games form a subclass of McNaughton games. Dinneen and Khoussainov defined update games in [26] and studied their basic properties. Our focus will be to give a refined analysis of the winning strategies in update games. We begin with the definition of update games.

**Lemma **

Let Γ be an update game with a forced cycle C. Then Γ is an update network if and only if Γ(C) is an update network.

**Proof**

Let V′ be the set of nodes of Γ(C). Define the surjection h : V → V′ such that h(u) = x (resp. h(u) = y) if u ∈ C ∩ V_0 (resp. u ∈ C ∩ V_1), and h(u) = u if u ∉ C.

Assume Player 0 wins the game Γ(C) with winning strategy g. Suppose also that the players start to play the game Γ from a node v_0 ∈ V. Player 0 plays Γ while simulating a play in Γ(C) that is consistent with g and starts at h(v_0). When the play in Γ reaches a node v ∈ V_0, Player 0 does the following:

Assume v ∈ C. Player 0 will first dictate the play to visit every node in C at least once before coming back to v. Then, if in this round the simulated play moves to y (as instructed by g), Player 0 will move to the next node in C; if the simulated play moves to a node u ≠ y in Γ(C), Player 0 will dictate the play in Γ to a node w ∈ V_0 ∩ C such that u ∈ wE and then move to u.

Assume v ∈ V \ C. If in the next round the simulated play moves to a node u ≠ y (as instructed by g), then in Γ Player 0 also moves to u. If the simulated play moves to y, then in Γ Player 0 will move to any node in C ∩ vE.

In the next round, when Player 1 moves the game in Γ to a node v ∈ V_0, the simulated play in Γ(C) moves to the node h(v) ∈ V′. It is clear that when the simulated play visits x, the play in Γ visits all nodes in C, and when the simulated play visits a node v ∈ V′ \ {x, y}, the play in Γ also visits v. Since g is a winning strategy, the simulated play visits every node in V′ infinitely often, and thus Player 0 wins the play in Γ.

Hence Γ is an update network.

Conversely, assume Player 0 wins the game Γ with the winning strategy f.

Suppose also that the players start to play the game Γ(C) from a node v_0 ∈ V′. Then Player 0 can win any play in Γ(C) by simulating a play in Γ consistent with f in the same way as above. The nodes that Player 0 chooses to move to in Γ(C) are given by the h-images of the nodes that Player 0 chooses to move to in Γ.

Suppose Γ = (V_0, V_1, E) is an update game. By iteratively applying the contraction operator on forced cycles that are not spikes, we obtain a sequence of games Γ = Γ_0, Γ_1, ..., Γ_k.

The resulting game is not uniquely determined and depends on the sets M^0_v, M^1_v and M_v. For notational convenience we suppress the parameters C, M^0_v, M^1_v and M_v and denote the resulting game by Γ(x ↔ y). We call it an unfolded game of Γ.

The following lemma shows that the unfolding operator inverts the contraction operator. The proof follows from the definitions:

Theorem 3.3.3 together with Lemma 3.3.5 now allows us to construct update networks. Namely, start with a star network and consecutively apply the unfolding operation. All update games obtained in this way are update networks. Conversely, for every update network Γ there exists a sequence of unfolded games that starts from a star network and produces Γ. We single out this observation as the following theorem:

**Theorem 3.3.6 (Building Update Networks) **

All update networks can be obtained by consecutively applying the unfolding operation to star networks.

By Lemma 3.3.5, each Γ_{i+1} is an unfolding of Γ_i through a spike x_i ↔ y_i in the game Γ_i. So we will treat this sequence both as the sequence of unfolded games and as the sequence of contracted games. Assume that the game Γ_k is a t-star network. Our goal is to explicitly construct a finite state winning strategy for Player 0 by using this sequence.

Let S = (Q, q_0, δ) be a finite state strategy in the game Γ. We say that the spike x ↔ y is dependent on S if for every state q ∈ Q we have δ(q, x) = (q′, y) for some q′ ∈ Q. Otherwise, we say that the spike is independent of S. For instance, when Γ is a t-star network, Player 0 has an obvious t-state winning strategy that visits all the Player 1 nodes in cyclic order, and every spike in Γ is dependent on this strategy. For simplicity, when the strategy S is clear, we drop the reference to S and simply say the spike is dependent or independent. In the following, we write (p, u) →_S (q, v) to mean that δ(p, u) = (q, w) and in the current play Player 1 moves to v from w, where u, v ∈ V and w ∈ V_1. When it is clear which S is being used we omit S in the notation.
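The dependence condition above is easy to test mechanically. The following sketch checks it, under our own hypothetical encoding of the transition function δ as a dictionary mapping (state, node) pairs to (next state, next node) pairs; the state and node names are made up for illustration:

```python
def spike_is_dependent(states, delta, x, y):
    """A spike x <-> y is dependent on a strategy S = (Q, q0, delta)
    if for every state q, delta(q, x) sends the play to y.
    `delta` maps (state, node) -> (next_state, next_node)."""
    return all(delta[(q, x)][1] == y for q in states)

# Tiny illustration with made-up states and nodes.
states = ["q0", "q1"]
delta = {
    ("q0", "x"):  ("q1", "y"),   # from x, both states move to y ...
    ("q1", "x"):  ("q0", "y"),
    ("q0", "x2"): ("q0", "z"),   # ... but from x2, state q0 moves to z
    ("q1", "x2"): ("q1", "y"),
}
print(spike_is_dependent(states, delta, "x", "y"))   # True
print(spike_is_dependent(states, delta, "x2", "y"))  # False
```

The spike x ↔ y is dependent here because every state maps x to y; x2 ↔ y is independent because state q0 sends x2 elsewhere.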

**Lemma 3.4.1 **Assume that Player 0 has a t-state winning strategy S in the update game Γ. The following hold for the unfolded game Γ(x ↔ y):

(1) If the spike x ↔ y is dependent on S, then Player 0 has a t-state winning strategy in Γ(x ↔ y).

(2) If the spike x ↔ y is independent of S, then Player 0 has a (t + 1)-state winning strategy in Γ(x ↔ y).

**Proof **To prove (1), assume that x ↔ y is dependent on S and that the t states of S are q_0, ..., q_{t-1}. We define a t-state winning strategy S′ = (Q′, q′_0, δ′) for Player 0 in the unfolded game Γ(x ↔ y) as follows:

**Proof **(1): In Γ and in any of its unfolded games the node v never has an incoming edge. This is because in all unfolded games the node v stays intact. Therefore v will not be visited infinitely often with respect to any memoryless strategy of Player 1.

(2): As in Lemma 3.2.7, consider a memoryless strategy g_v for Player 1 that, for every w with (w, v) ∈ E and w ∉ Forced(v), selects a node v_w for which v_w ≠ v and (w, v_w) ∈ E. This is a winning strategy for Player 1 in Γ. Let Γ(x ↔ y) be an unfolded game obtained from Γ. The set Forced(v) is empty in Γ(x ↔ y). If all the nodes v_w have remained intact in Γ(x ↔ y), then g_v is still a winning strategy. Assume that some v_w is part of the spike x ↔ y (so v_w = x). Then we slightly alter g_v by selecting a fixed Player 0 node in the forced cycle C that replaced the spike x ↔ y: at w we set g_v(w) = u, where u is any fixed Player 0 node in C, and at all other nodes z ≠ w the strategy remains the same. The altered g_v is a memoryless winning strategy for Player 1 in Γ(x ↔ y).

(3): Assume that there exists a w ∉ Forced(v) such that (w, v) ∈ E. Define the strategy g_v as in Part (2). This strategy then satisfies the lemma.

So, we now assume that Forced(v) coincides with the set of all nodes with edges into v. In the game Γ any memoryless strategy for Player 1 is then a winning strategy. Let Γ(x ↔ y) be an unfolded game obtained from Γ. If v remains intact in the unfolding, then any memoryless strategy for Player 1 is still winning. If v belongs to the spike x ↔ y (so x = v), then y ∈ Forced(v). Since the spike x ↔ y is replaced by a forced cycle, by the definition of the unfolding operation we again see that any memoryless strategy is a winning strategy for Player 1.

Note that the winning strategies extracted by the procedure above depend on the number of contractions of forced cycles, and the number of states in the resulting strategy may not be minimal. We remark that the problem of computing minimal-state winning strategies in update networks is NP-complete. For instance, the Hamiltonian cycle problem can be reduced to the problem of finding a memoryless winning strategy in update networks. Indeed, given a directed graph G, subdivide every edge (u, v) into two edges (u, x) and (x, v), where x is a new node. The resulting graph is the underlying graph of an update game Γ(G). In Γ(G), Player 0's nodes are the original nodes of G and Player 1's nodes are the new nodes. Clearly, the graph G has a Hamiltonian cycle if and only if Player 0 has a one-state winning strategy in the game Γ(G).
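The subdivision step of this reduction is purely syntactic. The sketch below builds the game graph of Γ(G) from an edge list; the encoding of the fresh subdivision nodes as tuples is our own choice, made only for illustration:

```python
def update_game_from_digraph(edges):
    """Subdivide every edge (u, v) of a directed graph into
    (u, x_uv) and (x_uv, v), where x_uv is a fresh node.
    Original nodes become Player 0 nodes; the subdivision nodes
    become Player 1 nodes (each has exactly one outgoing edge,
    so Player 1 is deterministic in this game)."""
    player0, player1, game_edges = set(), set(), []
    for (u, v) in edges:
        x = ("mid", u, v)          # fresh Player 1 node for edge (u, v)
        player0.update([u, v])
        player1.add(x)
        game_edges.append((u, x))
        game_edges.append((x, v))
    return player0, player1, game_edges

# A directed 3-cycle, which is its own Hamiltonian cycle:
p0, p1, e = update_game_from_digraph([(1, 2), (2, 3), (3, 1)])
print(len(p0), len(p1), len(e))  # 3 3 6
```

A one-state strategy for Player 0 in this game is a choice of one outgoing game edge per original node, which is exactly a selection of one G-edge per node; such a selection lets the play visit all nodes infinitely often precisely when it traces a Hamiltonian cycle of G.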

**Update Games With a Fixed Number of Nondeterministic Nodes**

A natural question is whether there exists a better algorithm for solving update games than the one described in Theorem 3.3.3. For instance, one would like to know whether update games can be solved in time linear in the size of the graph. In a game Γ, we say a node u is nondeterministic if |uE| > 1; otherwise, the node is deterministic. In this section we provide a linear-time algorithm that solves update games with at most k nondeterministic Player 1 nodes, where k ≥ 1 is fixed. We denote the class of all such games by U_k.
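Membership in U_k is itself easy to test: simply count the Player 1 nodes with more than one outgoing edge. A minimal sketch, with the successor relation encoded as a dictionary from nodes to successor lists (an encoding we choose here for illustration):

```python
def nondeterministic_player1_nodes(player1, succ):
    """Return the Player 1 nodes u with |uE| > 1, i.e. the
    nondeterministic nodes. A game lies in U_k iff this list
    has length at most k."""
    return [u for u in player1 if len(succ.get(u, [])) > 1]

# Example: "a" is deterministic, "b" and "c" are nondeterministic.
succ = {"a": ["b"], "b": ["c", "a"], "c": ["a", "b"]}
print(sorted(nondeterministic_player1_nodes({"b", "c"}, succ)))  # ['b', 'c']
```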

Assume that Γ is an update game from the class U_k. It does not seem that the algorithm based on the contraction procedure can be adapted to provide a more efficient algorithm for solving update games in the class U_k. Here we provide a new algorithm based on a different idea. Applied to games in the class U_k, it runs in linear time. The new algorithm embeds Tarjan's method for detecting strongly connected components [106].

Our algorithm takes a game from the class U_k and reduces it to an equivalent game from the class U_{k-1}. The process may eventually produce a game from the class U_0. The following is an obvious lemma that characterizes all the update networks in the class U_0.
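The lemma itself is not reproduced in this excerpt. In a U_0 game Player 1 never has a choice, so Player 0 effectively steers the entire play; a natural candidate characterization (our assumption, not a statement from the text) is that such a game is an update network if and only if its graph is strongly connected. Strong connectivity can be checked in linear time; the sketch below uses two reachability searches for brevity, whereas Tarjan's algorithm [106] achieves the same bound and additionally yields all strongly connected components:

```python
from collections import deque

def strongly_connected(nodes, succ):
    """Check strong connectivity: every node reaches, and is reached
    from, an arbitrary start node. `succ` maps nodes to successor
    lists; runs in time linear in the graph size."""
    if not nodes:
        return True

    def reachable(start, adj):
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    pred = {}                      # reverse the edge relation
    for u, vs in succ.items():
        for v in vs:
            pred.setdefault(v, []).append(u)
    start = next(iter(nodes))
    return reachable(start, succ) == set(nodes) == reachable(start, pred)

print(strongly_connected({1, 2, 3}, {1: [2], 2: [3], 3: [1]}))  # True
print(strongly_connected({1, 2, 3}, {1: [2], 2: [3], 3: []}))   # False
```

Under the assumed characterization, the first game graph (a 3-cycle) would be an update network in U_0, while the second (a path) would not.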


Special Topics on Automata, Structures and Games