Wednesday, September 30, 2015

Capture Time




Last class, we briefly spoke about capture time when introducing the original cops and robbers game. The capture time of a graph measures how many moves are needed for a cop to capture a robber. To find the length of the game, we count the number of rounds it takes for the cop to capture the robber, with one round being a move by the cop followed by a move by the robber.
The first theorem gives the maximum time it will take for a cop to catch a robber: if G is a cop-win graph of order n≥7, then capt(G) ≤ n-4. This states that it will take no more than n-4 moves for the cop to capture the robber.
In class yesterday, we talked about the theorem that states that G is a cop-win graph if and only if G is dismantlable, meaning its vertices can be deleted one corner at a time until only a single vertex remains. In case anyone has forgotten, a 2-dismantlable graph is a graph that either has fewer than 7 vertices or has two separate corners whose deletion leaves a 2-dismantlable graph. Having 2 separate corners means that there are two corner vertices, each dominated by a different vertex.

There exists another theorem that gives the capture time of any 2-dismantlable (hence cop-win) graph. This theorem is found in both of the readings that I used for this post.

Theorem: If G is 2-dismantlable of order n, then capt(G) ≤ n/2.
Here is a brief version of the proof, which is done by induction.
Proof:
Let a and b be 2 separate corners in a 2-dismantlable graph of order n≥7, dominated by the vertices a' and b' respectively
Form a new graph, H, by deleting the vertices a and b
H is 2-dismantlable, so by the induction hypothesis there is an optimal game on H of length at most (n-2)/2 (H has order n-2, because we removed the two vertices a and b)
The cop plays this game in H, and for x ∈ {a, b}, whenever R is on x, C plays as if R were on x'
After no more than (n-2)/2 moves, either the robber is caught, or R is on x and C is on x', so the cop can capture the robber in just one more move
(n-2)/2 + 1 = n/2 moves in total
So we find that the capture time of any 2-dismantlable graph is ≤ n/2
This theorem does not apply to all cop-win graphs though, only those that are 2-dismantlable.
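For small graphs, the n/2 bound can be checked by computing the capture time exactly with backward induction over all (cop, robber) position pairs. This is only a minimal sketch (the function name and representation are my own), assuming a single cop, that the cop moves first each round, and that either player may stay put:

```python
from itertools import product

def capture_time(adj):
    """Capture time of a graph by backward induction.
    adj: dict mapping each vertex to the set of its neighbors.
    Returns the number of cop moves under optimal play (best cop start
    against worst robber start), or None if the graph is not cop-win."""
    V = list(adj)
    INF = float('inf')
    # cost[(c, r)]: cop moves still needed with cop on c, robber on r.
    cost = {(c, r): (0 if c == r else INF) for c, r in product(V, V)}
    changed = True
    while changed:                          # iterate until values stabilise
        changed = False
        for c, r in product(V, V):
            if c == r:
                continue
            best = INF
            for c2 in set(adj[c]) | {c}:    # cop's move (or pass)
                if c2 == r:
                    best = min(best, 1)     # immediate capture
                else:                       # robber replies to maximise survival
                    worst = max(cost[(c2, r2)]
                                for r2 in set(adj[r]) | {r} if r2 != c2)
                    best = min(best, 1 + worst)
            if best < cost[(c, r)]:
                cost[(c, r)] = best
                changed = True
    value = min(max(cost[(c, r)] for r in V) for c in V)
    return None if value == INF else value
```

On paths, whose two endpoints are separate corners, the computed values stay within the n/2 bound, while on cycles of length at least 4 (which are not cop-win) the function reports no capture.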


 
We can construct an infinite family of graphs of order n whose capture time is exactly n-4. We construct each graph so that every vertex neighbors only the vertices whose indices differ from its own by 1, 3, or 4. We will use this graph to describe a strategy for capturing a robber in n-4 moves. The following strategy only works on this specific type of graph, but this family shows that the n-4 bound from our first theorem is actually attained.

This strategy only works on the graphs H(n), defined for a fixed integer n≥7, which have the following properties:
1)      The graph H(n) is planar
2)      The graph H(n) is cop-win and has a unique corner
3)      capt(H(n)) = n-4

Each vertex x in the graph H(n) with 5 ≤ x ≤ n-4 has neighbors {x+4, x+3, x+1, x-1, x-3, x-4}. This means that the cop and robber may move to vertices whose index is 1, 3, or 4 more or less than their current index. The following strategy can be used in these specific graphs to show that the capture time is n-4.
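The interior adjacency rule can be written out directly. This is only a sketch (the function name is mine) of the rule quoted above for 5 ≤ x ≤ n-4; the boundary vertices of H(n) have their own adjacencies, which I am not reproducing here:

```python
def interior_neighbors(n):
    """Neighbor sets for the interior vertices of H(n): each vertex x with
    5 <= x <= n - 4 is joined to x +/- 1, x +/- 3, x +/- 4 (all of which
    lie in 1..n precisely because 5 <= x <= n - 4)."""
    return {x: {x + d for d in (1, 3, 4, -1, -3, -4)}
            for x in range(5, n - 3)}   # range covers 5 <= x <= n - 4
```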
Strategy:
1)      In the 0th round, place the cop on vertex 1
2)      After the robber places himself on some vertex i > 1 in the 0th round, move the cop to a vertex j ∈ {2, 3, 4, 5} so that i ≡ j (mod 4)
3)      This step is repeated until the cop captures the robber
a.       If the robber moves from i to i+k, then the cop moves from j to j+k, where k = 1, 3, or 4
b.      If the robber moves from i to i-1, then the cop moves from j to j+3
c.       If the robber moves from i to i-3, then the cop moves from j to j+1
d.      If the robber moves from i to i-4, then the cop moves from j to j+4
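Each reply in step 3 preserves the congruence i ≡ j (mod 4) established in step 2, because the robber's move minus the cop's reply is always a multiple of 4, while the cop's index strictly increases every round. A quick sketch of the reply table (the names are mine):

```python
# Cop's reply (steps 3a-3d) to each possible robber move k along H(n).
COP_REPLY = {1: 1, 3: 3, 4: 4, -1: 3, -3: 1, -4: 4}

def cop_reply(k):
    """Return the cop's move in response to a robber move of k."""
    return COP_REPLY[k]
```

Since the reply is always positive, the cop steadily closes in while keeping the mod-4 alignment, which is what forces the capture.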

I will show how this strategy works on the graph H(11) tomorrow in class, but basically, by using this strategy, the cop captures the robber in exactly n-4 moves. So in the case of H(11), it should take 7 moves for the cop to capture the robber.

There are many other graphs that are non-isomorphic to H(n) with capture time n-4. Computing the capture time of a specific cop-win game is in P, which means it is solvable in polynomial time. These are just the basics of capture time in the original cops and robbers game. There are many more interesting things about capture time in variants of cops and robbers, some of which we talked about yesterday in class. If anyone is interested in looking further into capture time, there are many potential ideas that could be used for a final project!

References:
The Game of Cops and Robbers on Graphs by Bonato & Nowakowski, § 8.6
http://www.researchgate.net/publication/220185303_The_capture_time_of_a_graph

Monday, September 21, 2015

Misere Games

Hi Everyone,

In this post, I will be going over the ins and outs of Combinatorial Games that are played under Misere Rules. I will give a little background on recent discoveries in the Misere world, explain what makes misere games so interesting and excruciating, and then show a couple of examples of what really happens when you play with misere rules.

What Does Misere Mean?

As we all know by this point in class, misere refers to a game play option in which the rules are tinkered with to change the outcome of the game. Under normal-play rules in an impartial combinatorial game, the last person to take his or her turn is the winner. This is flipped under misere play, so you don't want to be the person with the last cookie, and it turns 0 into an N-position (more on that later).

Breaking Ground on Misere Games

It is pretty fair to say that the two names we've heard the most regarding combinatorial games are Sprague and Grundy, so it makes sense that we start here when discussing misere discoveries. As we heard recently, Sprague and Grundy each independently solved impartial combinatorial games such as Nim in the 1930s. Through the Sprague-Grundy Theorem, values, and function, we can now analyze any position in such a game to figure out the optimal play and who should win. This is because this theorem is the key to understanding the structure of the equivalence classes of games modulo equality. The issue regarding misere play was described well by Thane Plambeck and Aaron Siegel:

 "The Sprague–Grundy Theorem yields a remarkably simple structure for the normal-play equivalence classes of impartial games. In misere play, however, the situation is vastly more complex. For example, consider just those impartial games born by day 6. In normal play, there are precisely seven of them, up to equivalence: 0, ∗, ∗2, . . . , ∗6. By contrast, Conway has shown that in misere play there are more than 2^4171779"

Since misere games can be a bit of a black hole when it comes to finding a winning strategy, experts have been working to organize them by difficulty. Since misere Nim has been solved, a misere game is known as tame if it can be explained in terms of misere Nim, as in it is tame enough to explain quickly(ish). If not, the game is wild.
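Misere Nim itself has a clean closed-form solution, which is why it can anchor the tame/wild distinction: play exactly as in normal Nim unless your move would leave only heaps of size 1, in which case leave an odd number of them. A sketch of the resulting outcome test (the function name is mine):

```python
def misere_nim_next_player_wins(heaps):
    """Outcome of misere Nim (the player who takes the last counter loses).
    Bouton's analysis: if some heap has size >= 2, the position behaves like
    normal Nim (next player wins iff the nim-sum is nonzero); if every heap
    has size <= 1, the next player wins iff the number of 1-heaps is even."""
    heaps = [h for h in heaps if h > 0]
    if all(h <= 1 for h in heaps):
        return len(heaps) % 2 == 0
    xor = 0
    for h in heaps:
        xor ^= h
    return xor != 0
```

For example, a single heap of 1 is a loss for the player to move (they must take the last counter), while a single heap of 2 is a win (take one, leaving the lone counter for your opponent).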



Misere Quotients

Misere quotients are the essential tool for understanding and solving those misere games we can solve, and they are fairly complicated. A misere quotient is a commutative monoid that encodes the additive structure of a specific misere-play game. Basically, what this means is that play can be localized to the set of all positions that arise in a particular game, associating that game with one commutative monoid, the misere quotient. This monoid is essentially the key to the map: once the monoid is found, you can read off a perfect strategy. Below is a misere quotient for a random octal game computed by Thane Plambeck:
As you can see, it analyzes every heap size until it finds a pattern; once it calculates the period, it creates the misere quotient (Q, P) as shown above.

I wrote above about misere games being classified as either tame or wild, and now that I have explained misere quotients a bit, we can see below the monoids that are standard for a tame game: if a misere quotient is isomorphic to one of these, then the game is tame. Fun fact: T2 is actually the quotient for the games 0.23, 0.31, and 0.52.




Let's Solve One

0.123

To understand how to solve this game, we will follow the steps of the paper in INTEGERS: Electronic Journal of Combinatorial Number Theory 5.

This is the game we will try to analyze under misere play. 0.123 is an octal game (a two-player, impartial take-away game) in which there are heaps of coins, tokens, etc. The basic rules are as follows:
1. Three counters may be removed from any heap;
2. Two counters may be removed from a heap, but only if it has more than two counters; and
3. One counter may be removed only if it is the only counter in that heap.

So we first analyze heaps of each size, and we see the single-heap outcome pattern has period 5.
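The single-heap outcomes can be checked by brute force. Here is a sketch (the function names are mine), assuming the standard misere convention that a player with no legal move wins, since the opponent then made the last move:

```python
from functools import lru_cache

def options(heaps):
    """All positions reachable in one 0.123 move from `heaps`
    (a sorted tuple of heap sizes)."""
    opts = set()
    for i, h in enumerate(heaps):
        rest = heaps[:i] + heaps[i + 1:]
        results = []
        if h >= 3:
            results.append(h - 3)   # rule 1: remove 3 from any heap
        if h > 2:
            results.append(h - 2)   # rule 2: remove 2 only from a heap of > 2
        if h == 1:
            results.append(0)       # rule 3: remove a lone counter
        for m in results:
            opts.add(tuple(sorted(rest + ((m,) if m else ()))))
    return opts

@lru_cache(maxsize=None)
def is_N(heaps):
    """True iff the player to move wins `heaps` under misere play."""
    opts = options(heaps)
    if not opts:
        return True                 # no move: opponent moved last, and loses
    return any(not is_N(o) for o in opts)
```

Running this on single heaps reproduces the eventual period-5 outcome pattern noted above.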

Clearly, the first row shows simple integers, and at those positions the game behaves identically to a misere Nim heap of the same size. So this figure shows the outcome classes for single-heap misere 0.123 positions, but we need to expand it to get information about the best play of all misere 0.123 positions (arbitrary sums of arbitrarily-sized heaps). So we take the picture above and make it abstract, as below:

To make a very long proof short, the paper then proves the theorem that the misere quotient of 0.123 is isomorphic to the following 20-element commutative semigroup:



The 20 elements (second line) can then be broken up into 15 N-positions and 5 P-positions.



So now that we have these positions, wherever we are in the game, we take the heap sizes we have and convert them into the corresponding quotient elements. Then we multiply these together, reduce where we can, and look up our result in the final table to find out what position we have.
These steps are all in the image below with an example:

And thus we have beaten 0.123!!

That's a wrap on this blog post, I will hopefully go over everything in a little more detail tomorrow, but this is one example of a misere game being solved!

-Harry Copeland

Sources:

http://www.emis.ams.org/journals/INTEGERS/papers/fg5/fg5.pdf
http://web.mit.edu/sp.268/www/nim.pdf
https://www.jstage.jst.go.jp/article/jmath1948/32/3/32_3_461/_pdf
http://arxiv.org/pdf/math/0609825v5.pdf










Red-Blue Hackenbush



If you enjoy taking the chainsaw to the shrubbery, then Hack-en-bush is the game for you. Hackenbush is a combinatorial game. The game board is preset, containing red and blue lines connected at vertices, and all of these lines are somehow connected to the ground (the bottom of the paper). There are numerous different playing boards to choose from; an example of one is the Hackenbush girl depicted below in Figure 1. Players alternate removing lines. The Right player is allowed to remove only red lines, and the Left player is allowed to remove only blue lines (Right = Red, Left = bLue). Once a line is removed, every line above it that is no longer connected to the ground is erased. For example, in Figure 1, if the blue arm line were removed, the blue hand on that arm and the red ball would also be erased. The player who removes the last line wins. I would highly recommend trying the game out for yourself: Play Red-Blue Hackenbush! I found after playing a few rounds it made a lot more sense.


Figure 1: Hackenbush Girl
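The cutting-and-erasing rule can be sketched in a few lines (the representation and names are my own): edges are (vertex, vertex, color) triples, and after a cut, any edge no longer connected to the ground is erased.

```python
from collections import deque

def remove_line(edges, edge, ground="ground"):
    """One Hackenbush move: delete `edge`, then erase every remaining
    edge that is no longer connected to `ground`.
    edges: list of (u, v, color) triples. Returns the surviving edges."""
    remaining = [e for e in edges if e != edge]
    # Build adjacency over what is left and find every vertex still
    # reachable from the ground.
    adj = {}
    for u, v, _ in remaining:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    reach, queue = {ground}, deque([ground])
    while queue:
        for w in adj.get(queue.popleft(), ()):
            if w not in reach:
                reach.add(w)
                queue.append(w)
    # An edge survives iff it hangs from the ground; if one endpoint is
    # reachable, so is the other.
    return [e for e in remaining if e[0] in reach]
```

On a simple "stick" of three stacked edges, cutting the middle one erases everything above it, while cutting the top one leaves the rest standing.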

As we have seen, Red-Blue Hackenbush clearly fulfills the requirements for a combinatorial game:
1. There are exactly two players (Left and Right).
2. No chance is involved.
3. Both players have perfect information.
4. Game is finite.
5. Someone must win, and the winner is determined by who makes the last move. 


Who wins?
Under optimal play, we should be able to determine which player will win the game before they even begin. Let's start out with an easy example. Take a look at Figure 2 below. Who will win?

Figure 2: Girl and Boy

As we can see, the Left player has $14$ edges and the Right player has $11$ edges. Left will win because they have $3$ more moves, $(14-11=3)$. So no matter who starts the game, Left will always win. Now, this is an extremely simple game, seeing as Left's moves will not affect Right's moves and vice versa. Let's look at a bit trickier of an example in Figure 3.

Figure 3: Tweedledee and Tweedledum
After close examination, we notice the red edges have been exactly interchanged with the blue edges. Both Left and Right have 19 edges. The second player will always win this game as long as they copy the first player's strategy. (If Left takes its blue center stomach line, then Right should take its red stomach line.) This strategy of copying the moves of the first player is known as the Tweedledee and Tweedledum strategy. No matter who starts, the second player will always win; this is known as a zero position.

Zero Position: A position in which whoever starts the game will lose the game.

Fractional Moves
How do you determine the value of a move? In Figure 2: Girl and Boy, every move was worth an integer value. But some moves have fractional values. Let's take a look at Figure 4.

Figure 4: Half Moves

What is the value of the position in Figure 4(a)? As you can see, no matter who goes first, Left will always win the game in 4(a). But what happens when you add a red move of value $-1$? (Note: the value of a move is expressed in terms of its value to Left, so blue moves are positive, while red moves are negative.) As you can see in Figure 4(b), when a single red edge is added, Right will always win, no matter who goes first. So we know the value of the position in Figure 4(a), let's denote it $a$, must satisfy $0<a<1$. In Figure 4(c), we finally have a zero game: after playing around, you will see that whoever goes first will lose. Now, we know the value of the single red edge is $-1$ and the value of the whole game is $0$, and Figure 4(c) contains two copies of position $a$ plus the red edge, so the value of $a$ is determined by the equation $a+a-1=0$. Hence the value of the position $a$ is $\frac{1}{2}$.

What is the value of the position in Figure 4(d)? (Hint: whoever goes first will lose). That's right, you got it, Figure 4(d) is a zero position.

The value of particular moves can be difficult to determine and takes a lot of playing around with different scenarios. Figure 5 displays a table of already known values of certain positions.

Figure 5: Given Values of Hackenbush Positions


Once you know the values of certain positions it becomes easier to determine the values of other positions. It is even easier if you know the game is a zero game. Let's take a look at Figure 6.

Figure 6: A Zero Game
We know this game is a zero game. We know the value of the middle position is worth $-\frac{1}{2}$ to Left. We also know the value of the position on the far right is worth $-1$ to Left. We also know that all the positions, when added together, must equal $0$, since the game is a zero game. Let us call the position on the far left side (with two blues and a red) position $z$; then $z-\frac{1}{2}-1=0$, thus $z=1\frac{1}{2}$. So the value of the far left position is $1\frac{1}{2}$.

Determining the Value of a Hackenbush Game

Figure 7: Tree and Stump

Examine the Hackenbush game in Figure 7. The tree has value $9$ and the stump has value $\frac{1}{2}$, so the entire position has value $9\frac{1}{2}$. Let's examine the possible moves. Right has only one move, which leads to a position of value $10$. The best move for Left is to remove the blue edge of the stump, which results in a position of value $9$.

The best move by Left is to move to a position of value $9$; the best move by Right is to move to a position of value $10$. We can express this by the equation ${9|10}=9\frac{1}{2}$.

In general we have:
${n|n+1}=n+\frac{1}{2}$
${n| }=n+1$
${ | }=0$
${ |-n}=-(n+1)$

Simplicity Rule
Let $v(G)$ represent the value of any Hackenbush position.
Let $b$ be the largest value of any position Blue (Left) can move to.
Let $r$ be the smallest value of any position Red (Right) can move to.

Then
a.) If there exists an integer $x$ such that $b<x<r$, then $v(G)$ is the integer between them closest to zero.
If no such integer exists, then
b.) $v(G)$ is the rational number $x$ with $b<x<r$ whose denominator is the smallest possible power of two.
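The rule can be sketched directly with exact rational arithmetic (the function name is mine). This handles only finite $b<r$, not the one-sided cases like ${n| }$:

```python
from fractions import Fraction

def simplest_between(b, r):
    """Value of {b | r} for numbers b < r by the simplicity rule:
    the integer closest to zero strictly between b and r if one exists,
    otherwise the fraction with the smallest power-of-two denominator
    strictly between them."""
    b, r = Fraction(b), Fraction(r)
    assert b < r
    if b < 0 < r:
        return Fraction(0)
    if r <= 0:                          # mirror the all-negative case
        return -simplest_between(-r, -b)
    # Now 0 <= b < r: the candidate integer closest to zero is floor(b) + 1.
    n = b.numerator // b.denominator + 1
    if n < r:
        return Fraction(n)
    # No integer fits: scan denominators 2, 4, 8, ... for the first dyadic
    # fraction strictly between b and r.
    k = 1
    while True:
        den = 2 ** k
        scaled = b * den
        j = scaled.numerator // scaled.denominator + 1  # least j with j/den > b
        if Fraction(j, den) < r:
            return Fraction(j, den)
        k += 1
```

For instance, it recovers ${9|10}=9\frac{1}{2}$ from the tree-and-stump example, and ${0|1}=\frac{1}{2}$ from the half-move example.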

Figure 8 depicts possible $b$ and $r$ values. Try to determine the corresponding $x$ value (without looking first).

Figure 8: Practice with Simplicity Rule

Outcomes
Let $G$ represent the value of a Hackenbush game.
If $G>0$ Left wins.
If $G<0$ Right wins.
If $G=0$ the second player wins.

What's your Next Best Move?
When games get complicated, it is not always easy to tell your next best move. Luckily, we can use the binary number tree for that. Once you find the value of your current game position, you can find the value of the position you should move to in order to win. (This will make more sense when I explain it in class.)
Figure 9: Binary Number Tree


Further Explorations
For this post I focused on Red-Blue Hackenbush, but there are numerous Hackenbush variations. In Green Hackenbush, all segments are green, and both players are allowed to cut any green segment. In Red-Blue-Green Hackenbush there are red, green, and blue segments: the Left player can cut any blue or green segment, while the Right player can cut any red or green segment. Although these seem like simple changes to the playing board, the effect on determining the best play is drastically different.

References
https://en.wikipedia.org/wiki/Hackenbush
http://www-math.mit.edu/~rstan/transparencies/games.pdf
http://www.link.cs.cmu.edu/15859-s11/notes/Hackenbush.pdf
http://geometer.org/hackenbush/index.html
Game Theory, Thomas S. Ferguson
Winning Ways For Your Mathematical Plays, Volume 1, E.R. Berlekamp, J. H. Conway, R.K. Guy