Tuesday, September 8, 2015

Cookie Take-Away

POP QUIZ: What's better than eating cookies and playing games? (Trick question) There is NOTHING better.

In this post, I am going to talk about a specific type of take-away game. Lucky for us, it deals with theories and examples from both the game of Nim (where two players take turns removing sticks from a pile, thank you Bayard) and the Sprague-Grundy theory (which allows us to find P-positions and N-positions if you are able to decipher the pattern, thank you Sam).

Cookie Take-Away! (Yumm)

Cookie Take-Away is a combinatorial game where two players take turns removing a single cookie every day for a certain number of days. Some cookies may "go bad" while on the shelf and become unsafe to eat, with each cookie having an expiration date written on it in frosting. A cookie is represented by a heap of n counters, where n is the number of days remaining before the cookie goes bad. So if a cookie has a heap size of 0, then that cookie's time has expired. The winner is not the player who eats the most cookies, but the player who has the joy of eating that last, delicious, mouth-watering cookie.

A turn consists of removing a single, non-empty heap (cookie) and reducing all other heaps by 1 (one day closer to expiration). Once again, the player who removes the last non-expired cookie is the winner. So if we represent each position in non-descending order (the numbers will be called the components of the position), a simple example would be [1 2 4], which signifies a three-heap set with heap sizes of 1, 2, and 4. The game with [1 2 4] would be played out as follows:


Reduction Theorems


A game that starts with n heaps must terminate in a finite number of moves (at most n), so one of the two players must have a winning strategy. If the game were just the set [1], the first player would win no matter what. However, with the same initial set [1 2 4], the second player can guarantee a win no matter what move the first player makes:

As a refresher, a P-position is a position that is a win for the previous player (the one who just moved there), and an N-position is a position that is a win for the next player, who always has a move back to a P-position.

In the game of Cookie Take-Away, the set [1] is an N-type (a win for the next player), and if a set is not an N-position, it is called a P-type (the terminal position, or a win for the "previous" player, i.e. the player who moved to that position), as with the set [1 2 4]. So if a player could determine, as the game progressed, whether the set was an N or a P type, that player would be guaranteed a win. If two arbitrary sets G and H happen to be the same type, then we say "G ~ H," or that they are tantamount. So [1 5 6 7] ~ [1 4 4 4], because the exact size of a heap does not matter once it exceeds the maximum possible number of moves in a game, which is the number of heaps at the start of the game. We can also see that [1 1 2] ~ [1 2], as only k heaps of size at least k will be relevant while developing a strategy, since the others will "tarnish" before they can be taken.
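To make the N-type/P-type distinction concrete, here is a brute-force sketch in Python (my own illustration, not from the papers) that classifies a Cookie Take-Away position by trying every move:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_n_type(heaps):
    """heaps: sorted tuple of days-to-expiration. A move eats one cookie
    and ages the rest by a day; cookies that reach 0 are discarded.
    Returns True for N-type (next player wins), False for P-type."""
    for i in range(len(heaps)):
        rest = tuple(sorted(h - 1 for j, h in enumerate(heaps) if j != i and h > 1))
        if not is_n_type(rest):
            return True   # this move hands the opponent a P-position
    return False          # every move leaves an N-position (or no cookies remain)
```

For example, is_n_type((1,)) returns True while is_n_type((1, 2, 4)) returns False, matching the claims above; and is_n_type((1, 5, 6, 7)) equals is_n_type((1, 4, 4, 4)), illustrating the tantamount relation.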

Depletion Theorem: If G is a set [a1, a2, ..., an], then any component of size greater than n may be depleted down to size n without affecting the type of the set.

Deletion Theorem: Let G be the set [a1, a2, ..., an]. If ak < k for some k, then the smallest component a1 may be deleted without affecting the type of the set. The surplus of G is the smallest non-negative integer s with k − ak ≤ s for all 1 ≤ k ≤ n. So whenever s > 0, the components a1, a2, ..., as may all be deleted. This basically says to delete the s smallest components, where s is the surplus of the position.

So [1 2 3 3 6] ~ [1 2 3 3 5]. The set has surplus 4 − a4 = 4 − 3 = 1, thus making it tantamount to [2 3 3 5] or [2 3 3 4].

Proofs of these two theorems can be found in the journal entries linked in the references below.

(I know this may not make much sense right now, but stay with me)

Using both depletion and deletion, one can convert each set into a reduced form of the same type. You may be asking "why would I want to do this?" - well, hear me out. If we were to replace all heaps of size > n by heaps that are effectively infinite (denoted by "*"), then we would have cookies with an "infinite expiration date," which makes it much clearer why depletion and deletion are helpful. So when going through these steps, we want to DELETE as many times as possible, and then DEPLETE.

So the set [ 1 2 3 3 6] ~ [ 1 2 3 3 *]. Then you delete the smallest component,
which gives us [ 2 3 3 6] ~ [ 2 3 3 *]. Then you deplete the largest component,
which gives us [ 2 3 3 4].
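The delete-then-deplete procedure can be sketched in a few lines of Python (an illustrative helper of my own, not from the paper):

```python
def reduce_position(heaps):
    """Apply the Deletion Theorem until the surplus is 0, then the
    Depletion Theorem: cap every component at n, the number of heaps."""
    a = sorted(heaps)
    while a:
        # surplus: smallest non-negative s with k - a_k <= s for all k (1-based k)
        s = max(k - ak for k, ak in enumerate(a, start=1))
        if s <= 0:
            break
        a = a[s:]                        # delete the s smallest components
    n = len(a)
    return [min(ak, n) for ak in a]      # deplete oversized components

# reduce_position([1, 2, 3, 3, 6]) -> [2, 3, 3, 4], as in the example above
```

It also reproduces the tantamount pairs from earlier: [1 5 6 7] reduces to [1 4 4 4], and [1 1 2] reduces to [1 2].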


So take a look at all of the possible reduced sets with 4 or fewer components. You can see that if a set is in an N-position, then the winning move is the one circled. (Remember that a move is winning if it leads to a P-position.)




Hmmm. This may be an interesting pattern. 


Here is a table that shows all sets of the form [1 2 .... n]:

If you haven't realized it yet, there are numerous interesting patterns that apply to each of these sets (with the exception of n = 1):

1) For n ≡ 0 (mod 3), the set is a P-type.
2) For n ≡ 1 (mod 3), the moves that would win you the game are 1, ..., (2/3)(n − 1).
3) For n ≡ 2 (mod 3), the moves that would win you the game are (2/3)(n + 1), ..., n.
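These claims can be spot-checked by brute force. The sketch below (my own, encoding the game rules directly) finds the winning components of [1 2 ... n] for small n:

```python
from functools import lru_cache

def move(pos, i):
    """Eat cookie i; the rest age a day, and expired cookies are dropped."""
    return tuple(sorted(h - 1 for j, h in enumerate(pos) if j != i and h > 1))

@lru_cache(maxsize=None)
def next_player_wins(pos):
    """N-type test: some move must leave the opponent a P-position."""
    return any(not next_player_wins(move(pos, i)) for i in range(len(pos)))

def winning_components(n):
    """Winning component sizes in [1 2 ... n] (empty for a P-type set)."""
    pos = tuple(range(1, n + 1))
    return sorted({pos[i] for i in range(n) if not next_player_wins(move(pos, i))})
```

For instance, winning_components(3) is empty (a P-type, n ≡ 0 mod 3), winning_components(4) gives [1, 2] (n ≡ 1 mod 3), and winning_components(5) gives [4, 5] (n ≡ 2 mod 3), matching the patterns above.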

So....the table below gives the winning moves for each pair of values a and n. If an entry is blank, that means there is no possible winning move, because that set is a P-type.



**Consistent pattern all the way up to n = 20, stopped as author saw sufficient data**

The line shown in the table divides it into three segments: one containing the set [1], and two infinite ones, determined by whether n > 2a − 2 or n < 2a − 2.

However, you can still follow a simple rule (although it may not be that clear): if the largest move is congruent to 0, 1, or 2 mod 3, then you respectively select indifferently, move with the smallest number possible, or move with the largest number possible. (Explained in greater detail in the proof link.)


Sprague-Grundy Values! 





Above is the graph you just saw in Table 3, but converted into Sprague-Grundy values. Here are some things you should note from this new board for figuring out values beyond it:
  1. Above the drawn-in diagonal line (n > 2a - 2), the values stay consistent along each northeast-to-southwest diagonal.
  2. Also above the drawn-in diagonal line, you will notice the same repeating periodic sequence (shout out to Lauren Q.) of 0,1,2,0,1,3,...,0,1,2,0,1,3.
  3. Below the drawn-in diagonal line, the Grundy values remain consistent as you go down a column.

What do these values tell you?

First, you need to know what strict positions are. Strict positions are sets a1 through an with Grundy values of 0, 1, 2, or 3, in which all components are distinct, excluding components of size n. (This is the case because of the Depletion Theorem explained earlier: the set [ 2 3 4 ] ~ [ 2 3 3 ].)

Winning Conjecture! 

The Winning Conjecture says: if G is a strict position of N-type, then the winning components of G are exactly those whose sizes lie in a specific "winning interval" a_i, a_(i+1), ..., a_j, as long as that list of numbers is uninterrupted. But this ONLY works for strict positions. For example, the non-strict set [ 1 2 2 3 6 6 ] does not satisfy the conjecture, because the winning moves are both the 2's and the 6's.



Final Thoughts?

And that's that! Cookie Take-Away is a great, fun game, as it adds the factor of "expiration" to the already popular game of Nim. If you're able to think ahead to where you can set yourself up to eat the last, healthy cookie, then you could be quite good at this game. But it's interesting to wonder whether the same patterns on the Grundy table continue all the way out to infinity, I mean who knows, right? Anyways, I've been writing this blog about cookies for a solid amount of hours now, so I'm kind of hungry. What are your favorite flavors of cookie? Let me know below!




Best of wishes,

                      Jon D.


References:
http://faculty.uml.edu/jpropp/cookie1.pdf
http://faculty.uml.edu/jpropp/cookie2.pdf


A Very Brief History of Combinatorial Games

          Now that we have learned about the basics of combinatorial game theory, we can learn a little bit about the history of the subject. Although it has roots in 16th and 17th century gambling and probability studies, the majority of our understanding of combinatorial game theory comes from work done in the past century. In its nascency, combinatorial game theory concentrated on impartial games under normal play. It wasn’t until the publication of J.H. Conway’s On Numbers and Games (1976) and E.R. Berlekamp, J.H. Conway and R.K. Guy’s Winning Ways that the depth and breadth of the subject was fully considered.  Since this early work, combinatorial game theory has truly blossomed into a sophisticated field of study.   
          The history of combinatorial game theory can be divided into three main “threads,” as Richard Nowakowski calls them, which include: impartial games, partisan games, and impartial misère games. Let’s start with a glance at the history of impartial games. The study of impartial games likely began with Charles L. Bouton, who published an early solution to the game of Nim in 1902. Bouton’s work soon sparked an interest in Nim among other mathematicians.
Several variations of the game soon followed. For example, Willem A. Wythoff developed a Nim variant which came to be known as Wythoff's Nim. In Wythoff's Nim, there are two heaps of counters, and a player can either take as many counters as they wish from one pile, or they may take an equal number of counters from each pile. Rufus Isaacs created his own spinoff, known as Wythoff's Queens, in which a chess queen is moved closer to one corner of a grid. These games gave rise to elegant mathematical findings, but game theory purists—Nowakowski among them—argue that these variations were detrimental to the theory because the games could not be reduced to components.
Emanuel Lasker, however, sent combinatorial game theory in the “right” direction with the development of Lasker’s Nim—the same as Nim, except a player may elect to split a heap in two, rather than take counters. Lasker set the stage for Roland Sprague and Patrick Grundy to develop the notorious Sprague-Grundy Theorem in 1939. The Sprague-Grundy theorem demonstrated that all impartial games are equivalent to heaps in Nim. This theory has obvious utility in impartial games, but it can also be applied to other types of games.
Now let’s turn our attention to partizan games under normal play. Recall that partizan games are games in which the rules are different for each player. For example, in tic-tac-toe someone designates their moves with Xs while their opponent uses Os, or in Checkers red and black pieces are used (these aren’t combinatorial games, of course).  In 1953, John Milnor published the first theoretical paper on partizan play, which marked the beginning of a burgeoning subject in game theory.  Milnor explored desirable “hot” positions, and tried to explain the “mean value” of a given position. Olof Hanner’s own version of the mean value of a game followed shortly thereafter in 1957, based on his interest in the combinatorial game Go.
Milnor and Hanner’s early work helped John H. Conway to develop the concept of disjunctive sums of games, as well as the surreal number system. Conway published his findings in On Numbers and Games in 1976. It wasn’t long before Berlekamp, Conway and Guy co-authored and published Winning Ways in 1982. These two publications serve as the foundation for combinatorial game theory as we know it today.
This leads us to the last “thread” in the history of combinatorial game theory: impartial misère games. The story behind impartial misère games is actually quite short. Misère games have long been an enigmatic branch of combinatorial game theory. Both On Numbers and Games and Winning Ways touch on the subject and a paltry number of papers followed. The trouble with misère games is that almost all games are unique; this means that misère games seldom share common strategies, unlike games under normal play.
In the last decade or so, Thane Plambeck and Aaron Siegel have shed new light on misère games. The two have published a number of publications on the subject. According to Nowakowski, “the history of misère games is only now being written."


Works Consulted:

Nowakowski, R.J. (2009). The History of Combinatorial Games. 1-13.

N.A. (2015). “Combinatorial Game Theory.”

Monday, September 7, 2015

Periodicity (focus on subtraction games)

What is periodicity?
The tendency of a function to repeat itself in a regular pattern at established intervals. To be periodic, there must be a consistent repetition of values over the entire domain of the function.

Periodicity of Subtraction Games 
I will focus on the periodicity of nim-sequences of subtraction games, denoted SUBTRACTION(S). In order to be a subtraction game, the following conditions must be satisfied:
• a two-player game involving a pile of chips.
• a finite set S of positive integers called the subtraction set. Subtraction sets S = {s1, s2, ..., sk} will be ordered s1 < s2 < ... < sk.
• The two players move alternately, subtracting some s chips such that s ∈ S.
• The game ends when a position is reached from which no moves are possible for the player whose turn it is to move. The last player to move wins.

A finite subtraction game is denoted SUBTRACTION(a1, a2, . . . , ak) if S = {a1, a2, . . . , ak} is finite.

An all-but subtraction game is denoted ALLBUT(a1, a2, . . . , ak) if S = {1, 2, 3, …} \ {a1, a2, . . . , ak} consists of all the positive integers except a finite set.

Every finite subtraction game is periodic, as we will prove later, and ALLBUT() is another name for Nim, which is arithmetic periodic with saltus 1.

Types of periodicity:
1)   periodic – if there are some l ≥ 0 and p > 0 so that G(n + p) = G(n) for all n ≥ l
2)   arithmetic periodic – if there are l ≥ 0, p > 0, and s > 0 so that G(n + p) = G(n) + s for all n ≥ l
3)   sapp regular (split arithmetic periodic/periodic) – if there exist integers l ≥ 0, s > 0, p > 0, and a set S ⊆ {0, 1, 2, . . . , p − 1} such that for all n ≥ l,
            G(n + p) = G(n) if (n mod p) ∈ S,
            G(n + p) = G(n) + s if (n mod p) ∉ S.
Purely periodic, purely arithmetic periodic, or purely sapp regular just means that there is no pre-period, along with the previously stated conditions.

The subsequence G(0), G(1), ..., G(l − 1) is called the pre-period and its elements are the exceptional values; l is the pre-period length, p is the period length, and s is the saltus.

Practice:
Match each sequence on the left one-to-one to a category on the right:
1231451671 . . . periodic
1123123123 . . . purely periodic
1122334455 . . . sapp regular
0123252729 . . . arithmetic periodic
0120120120 . . . purely sapp regular
1112233445 . . . purely arithmetic periodic
In each case, identify the period and pre-period

Finding the pre-period and period of subtraction games:
Ex.1: S = {2,4,7}
The Grundy sequence starts G = 0112203102102102102, which means G(1) = 0, G(2) = 1, G(3) = 1, G(4) = 2, etc.
From the pattern forming we can see G = 0112203$\overline{102}$, so the pre-period is 0112203 and it has a period of length 3, namely 102.

Ex.2: S = {1,9,10}, G = $\overline{1010101012323232320}$. This example has no pre-period and a period of length 19; it can also be written as G = $\overline{(10)^4 1(23)^4 20}$.

Theorem:  The nim-sequences of finite subtraction games are periodic.
Proof: Consider the finite subtraction game subtraction(a1, a2, . . . , ak) and its nim-sequence. From any position there are at most k legal moves. So, using Observation 7.22, G(n) ≤ k for all n. Define a = max{ai}. Since G(n) ≤ k for all n there are only finitely many possible blocks of a consecutive values that can arise in the nim-sequence. So we can find positive integers q and r with a ≤ q < r such that the a values in the nim-sequence immediately preceding q are the same as those immediately preceding r. Then G(q) = G(r) since G(q) = mex{G(q − ai) | 1 ≤ i ≤ k} = mex{G(r − ai) | 1 ≤ i ≤ k} = G(r). In fact, for such q and r and all t ≥ 0, G(q + t) = G(r + t). This is easily shown by induction — we have just seen the base case, and the inductive step is really just an instance of the base case translated t steps forwards. Now set l = q and p = r − q and we see that the above says that for all n ≥ l, G(n + p) = G(n); that is, that the nim-sequence is periodic.

Corollary: Let G = subtraction(a1, a2, . . . , ak) and let a = max{ai}. If l and p are positive integers such that G(n) = G(n + p) for l ≤ n < l + a, then the nim-sequence for G is periodic with period length p and pre-period length l.
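The mex computation and the corollary's periodicity test can both be sketched in Python (my own illustration; note the sequence below is indexed from heap size 0, while the examples above list values starting at heap size 1):

```python
def nim_sequence(S, length):
    """G(0), G(1), ..., G(length-1) for SUBTRACTION(S) via the mex rule."""
    g = []
    for n in range(length):
        follower_values = {g[n - s] for s in S if s <= n}
        v = 0
        while v in follower_values:   # mex: least excluded value
            v += 1
        g.append(v)
    return g

def find_period(g, a):
    """Smallest (l, p) with g[n] == g[n+p] for l <= n < l+a, where a = max(S).
    By the corollary this certifies period length p and pre-period length l."""
    for l in range(len(g) - a):
        for p in range(1, len(g) - a - l + 1):
            if all(g[n] == g[n + p] for n in range(l, l + a)):
                return l, p
    return None
```

On Ex.1 above, find_period(nim_sequence({2, 4, 7}, 60), 7) reports a pre-period of length 8 (counting from heap size 0) and a period of length 3, matching the pre-period 0112203 and period 102 stated there.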

Why it’s important :
Periodicity is important because by representing a game as a periodic nim-sequence we can compute the value of any single heap efficiently. The value of any sum of heaps, and therefore a perfect winning strategy, can be computed using the nim-addition rule. Once the game is known to be periodic, it is solved.




Sums of Combinatorial Games

Now that we know a little bit more about combinatorial games such as subtraction games, we can start looking into some more applications and games using nim-sum and the Sprague-Grundy Theorem. For example, we can start to form new games by combining multiple games. You can have many nim-pile subtraction games going on at once between two players. Players will play the subtraction games as usual, but will continue playing until all subtraction games have reached a terminal position, when there are no more moves to be made in any of the games. This is called the disjunctive sum of combinatorial games. These games will still be played in normal play, so the player to make the last move out of all the games combined will be the winner. The different games combined will not interact as each player must choose only one component to play in per turn.

                Here is an example of sums of subtraction games:
                Say we have three piles.
                One pile would have 10 chips with subtraction set S={1,2}
                The second pile would have 12 chips with subtraction set S = {1, 2, 3}
                The last pile has 13 chips with subtraction set S = {1, 2, 3, 4}

There are still only two players, but the first player might take chips from the first pile while the second player could take chips from the second pile. The players would continue playing like this until each separate game within the disjunctive sum is at a terminal position. In this case, the terminal position for each separate game is 0 chips. In order to analyze sums of games further, we need to learn a bit more about the Sprague-Grundy values of disjunctive sums and how we go about finding them.

The Sprague-Grundy Theorem
The Sprague-Grundy Theorem can also be used to find the Sprague-Grundy function for a disjunctive sum of graph games, as long as the Sprague-Grundy functions are known for the component games. The proof of this theorem is explained by Ferguson on page 22 of the reading. First, the disjunctive sum of games is denoted as the sum of the graphs G1, ..., Gn by G = G1 + G2 + …. + Gn, for the n games or graphs being combined.

The theorem that is used is Theorem 2 of Sprague-Grundy which is pretty similar to Theorem 1 which Sam spoke briefly about the other day when he was presenting on Sprague-Grundy values. Here is a summary of how theorem 2 goes:

Theorem: If g_i is the Sprague-Grundy function of G_i, i = 1, ..., n, then G = G_1 + .... + G_n has Sprague-Grundy function g(x_1, ..., x_n) = g_1(x_1) ⊕ ... ⊕ g_n(x_n).

Proof:
Basically, the proof shows two things for the function g(x_1,...x_n) when we let x=(x_1,...x_n) be an arbitrary point of X and we let b=g_1(x_1) ⊕ ... ⊕ g_n(x_n).
1)      For every non-negative integer a < b, there is a follower of (x_1, ..., x_n) that has a g-value of a.
2)      No follower of (x_1, .... x_n) has a g-value of b
I’ll explain the proof more in class when I present, but it’s important to know that the proof proves the above things. Ferguson ends the explanation of the proof by saying that the theorem implies that every progressively bounded impartial game is equivalent to some nim-pile. This means that it could be replaced by a nim-pile of similar size without changing the outcome of the game.

Applications of Sums
Ferguson listed three applications for sums of combinatorial games, and there are a lot of other ones too. I will only be going into depth of one of them- the sums of subtraction games. This is something that we have focused most on and are most familiar with, so I thought I would give an example of sums of subtraction games.

                Ferguson gave the example that G(m) is the one pile subtraction game with subtraction set S_m = {1,2,…,m} in which from 1 to m chips may be removed. Suppose we summed together three subtraction games. The three games are:
 m=3 and the pile has 9 chips
 m=5 and the pile has 10 chips
m=7 and the pile has 14 chips

The game we are playing is G(3) + G(5) + G(7) and the initial position is (9, 10, 14). In order to find the Sprague-Grundy value of the sum of games, we must first find the Sprague-Grundy value of each individual game. For each game, the SG value is x mod (m+1). So for G(3) with initial position 9, the value is 1. When the initial position is 10 for G(5), the SG value is 4. And when the initial position is 14 for G(7), the SG value is 6. Then, by Theorem 2, the SG value for the sum of the subtraction games is the nim-sum of the individual Sprague-Grundy values. In this case, the Sprague-Grundy value = 1 ⊕ 4 ⊕ 6 = 3. A couple of classes ago we proved that when the nim-sum is equal to 0, the player to move is in a P-position. So, we want to make a move that gives a nim-sum of 0 in order to put our opponent in a P-position. Ferguson gives one optimal move, which he says is removing one chip from the pile of 14, giving that pile a Sprague-Grundy value of 5. He then says there is another optimal move. After playing around with the nim-sums of these values, I found that another optimal move is to change the Sprague-Grundy value of the G(3) pile to 2. We can make this happen by leaving 6 chips in the pile that initially had 9, i.e. by taking away 3 chips. Using Theorem 2, can any of you find the optimal moves to put your opponent in a P-position in the first example that I gave at the beginning of the blog? I think that there are also two of them.
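This computation can be checked mechanically. Below is a Python sketch (mine, not Ferguson's) that finds every move leaving a nim-sum of 0 in a sum of games G(m):

```python
from functools import reduce
from operator import xor

def sg(chips, m):
    """SG value of one pile of G(m), the subtraction game with S = {1, ..., m}."""
    return chips % (m + 1)

def optimal_moves(piles):
    """piles: list of (chips, m) pairs. Returns (pile index, chips to take)
    for every move that leaves the opponent a position with nim-sum 0."""
    total = reduce(xor, (sg(c, m) for c, m in piles), 0)
    return [(i, take)
            for i, (c, m) in enumerate(piles)
            for take in range(1, min(m, c) + 1)
            if total ^ sg(c, m) ^ sg(c - take, m) == 0]
```

On Ferguson's example, optimal_moves([(9, 3), (10, 5), (14, 7)]) returns exactly the two moves described: take 3 chips from the pile of 9, or 1 chip from the pile of 14.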

                There are many other applications that use the second Sprague-Grundy theorem and involve the disjunctive sums of combinatorial games. I think that many of these applications would be interesting to look further into for a potential final project if anyone was interested!

Wednesday, September 2, 2015

Sprague-Grundy

The Sprague-Grundy Theorem

“Every progressively bounded, impartial combinatorial game G with starting position x under normal play is equivalent to a single Nim pile of size g(x) ≥ 0, where g(x) is the Sprague-Grundy function evaluated at the starting position x.”

In English, this means that each of these games has a function called *drumroll* the Sprague-Grundy function, which allows us to find P-positions and N-positions provided we are able to decipher the pattern. It also means that any game with a Sprague-Grundy function can be represented by its function g(x) given a starting position of x. An example that will sound astonishingly familiar is the subtraction game with S = {1, 2, 3}, where two players can take 1, 2 or 3 chips out of a pile per turn until the pile is empty, the winner being the rock star who empties the pile.  Let’s look at it in a table where x represents the number of chips and g(x) the Sprague-Grundy values:

x    | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
g(x) | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0

In this case, the Sprague-Grundy function would be g(x) = x mod 4. Applying the theorem, we see that any such game starting with x chips would be equivalent to a single Nim pile with x (mod 4) chips. The single part in single Nim pile is of critical importance because this will also apply when we are summing games together (we can add Sprague-Grundy functions together too to create a new function for summed games!), a topic which someone else will have the pleasure to explain to us later on.
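The table can be reproduced with a few lines of Python (an illustrative sketch of mine, not from Ferguson):

```python
def mex(values):
    """Minimum excludant: the smallest non-negative integer not in values."""
    v = 0
    while v in values:
        v += 1
    return v

def grundy(S, limit):
    """g(0), ..., g(limit-1) for the subtraction game with subtraction set S."""
    g = []
    for x in range(limit):
        g.append(mex({g[x - s] for s in S if s <= x}))
    return g
```

Here grundy({1, 2, 3}, 9) gives [0, 1, 2, 3, 0, 1, 2, 3, 0], i.e. g(x) = x mod 4 as in the table.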

Sprague-Grundy in Graph Games

In this post, we will look at Sprague-Grundy values and functions with a focus on Graph Games. Graph Games are combinatorial games played on a directed graph. A directed graph, name it G, is a pair (X, F) where X is a nonempty set of vertices (these are the positions) and F is a function that gives a subset of the vertices in X for each x ∈ X. The subset F(x) represents the positions to which a player can move from x, which are defined as the followers of x.

Graph Types

In order for Graph Games to satisfy the finite game condition, the graph must be progressively bounded or progressively finite. A progressively bounded graph is one where every path from x_0 (the initial position) has length less than or equal to some fixed number n. A path is a sequence of positions x_0, x_1, x_2, x_3, …, x_m such that x_i ∈ F(x_(i−1)) for all i = 1, 2, …, m, where m is the length of the path. A progressively finite graph is one where every path is finite in length but the lengths have no upper limit; additionally, the number of possible paths has no upper limit. It is not progressively bounded because the length of a path from the initial position is not bounded. An example of such an instance would be a subtraction game where the initial move is to pick the number of chips to remove. Both these conditions provide the required restriction that there be no cycle. Simply put, a cycle is a path x_0, x_1, x_2, …, x_m with x_m = x_0 (where m ≥ 3) that returns to its starting position, so a game following it would never yield a P-position.

Progressively Bounded Graph Game

A two-player game played on a progressively bounded graph would see the first player move from x_0; then the players take turns, which means that from a position x a player moves to a position y ∈ F(x), until one of them is confronted with a terminal position. A terminal position is one where F(x) is empty, making a next move impossible.

The Sprague-Grundy function of a graph, (X, F), is a function on X taking non-negative integer values. g(x) is the least non-negative integer not present among the Sprague-Grundy values of the followers of x. Where g(x) = z, the Sprague-Grundy value is z. The function can be defined as follows:
g(x) = min{n ≥ 0 : n ≠ g(y) for all y ∈ F(x)},
or
g(x) = mex{g(y) : y ∈ F(x)}.

Here mex is defined as the minimum excludant, or smallest integer absent from the set. Note that in the second function, g(x) is defined in terms of the followers of x, labelled y, in g(y). Terminal vertices x will have a Sprague-Grundy value of 0 (g(x) = 0). For non-terminal positions where the following vertex is terminal, the Sprague-Grundy value will be 1, since the smallest integer not included in the followers {0} is 1. Using the Sprague-Grundy function of a graph game, we see that every position where the Sprague-Grundy value is 0 is a P-position, otherwise it is an N-position. In order to win, a player must select moves putting him in P-positions. At a position x where g(x) = 0, any follower y of x is such that g(y) ≠ 0 and at a position x where g(x) ≠ 0, at least one follower y of x is such that g(y) = 0.

To determine Sprague-Grundy values in a progressively bounded graph, we can start from the terminal values, which are all 0, then work our way up using g(x) = mex{g(y) : y ∈ F(x)}.
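This bottom-up computation can be sketched in Python (my own illustration), with the graph given as a dict mapping each vertex to its followers:

```python
def sg_values(graph):
    """graph: dict mapping each vertex to the list of its followers
    (a progressively bounded, acyclic graph).
    Returns {vertex: SG value} via g(x) = mex{ g(y) : y in F(x) }."""
    memo = {}
    def g(x):
        if x not in memo:
            follower_vals = {g(y) for y in graph[x]}
            v = 0
            while v in follower_vals:   # mex: least excluded value
                v += 1
            memo[x] = v
        return memo[x]
    for x in graph:
        g(x)
    return memo
```

For the small graph a → {b, c}, b → {c}, c → {} (c terminal), this gives g(c) = 0, g(b) = 1, g(a) = 2, so c is the P-position and a and b are N-positions.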
Here is an example where I took a shot at one of Ferguson’s Exercise in section 3.5:


P-position logic

There is another item that will be helpful to prove in order to further our understanding of Sprague-Grundy theory. Let's look at the following statement "Let G be a progressively bounded, impartial combinatorial game with normal play rules. A position x in G is a P-position if and only if the Sprague-Grundy value of x is 0."

Given the SG function g(x) = mex{ g(y) : y in F(x) }, where the y range over the followers of x, we see that the SG value of a position is 0 only if none of its followers has SG value 0. Playing from a position with SG value 0, you must move to a position with some SG value a > 0. From that position, the next player has at least one move to a position with SG value 0. Terminal positions always have SG value 0, because their set of followers is empty. In order to win, you must, by definition, make the move to a terminal position. You cannot win if you move to a position with SG value > 0, because then the next player has the opportunity to move to a position with SG value 0, and eventually that position will be the terminal position. Thus, the only way for a player to win is to always hand the opponent a position with SG value 0, so that the previous player to move wins. Therefore, a position x in a game is a P-position if and only if the SG value of x is 0.


Progressively Finite Graph Game

Sprague-Grundy functions exist for such graphs, but they require allowing transfinite numbers as Sprague-Grundy values, since the number of paths from the initial position and the lengths of these paths have no upper limit. However, all paths must eventually lead to a terminal position. In such games, the starting position may have a Sprague-Grundy value corresponding to the smallest value greater than all the integers appearing along the paths, usually denoted ω (lower-case Greek letter omega). Nim positions may also be defined to have transfinite Sprague-Grundy values.

Graph Games with Cycles

Things start to get a little bumpy if graphs are allowed to have cycles. If you recall the quick overview mentioned above, it is easy to understand why: such graphs do not satisfy the ending condition of combinatorial games. Cycles are such that players would follow the same moves over and over without ever winning or losing the game (hence the name cycle). If a player broke the cycle, he would immediately put himself in an N-Position, a losing position, and therefore would never do such a thing (right?).
Here is an example of a cyclic graph:


In this example, a player confronted with position d would never move to position e, because then the next player could win the game by going to the terminal position f, which is the only position with a Sprague-Grundy value of 0. The game would always follow the cycle abcdabcd… without ever reaching the terminal position. In such an instance, the Sprague-Grundy function does not exist.  Some cyclic graphs do, however, have Sprague-Grundy functions, but they may be complicated to find, and even when the function is known and you are able to move to P-positions, you may find yourself back in a previous position many times before things unfold. You think this is interesting? You want to solve complicated cyclic graph games? I heard we had to do a final project…

Final Thoughts

Sprague-Grundy theory is very useful to establish systematic rules to win combinatorial games. Here we focused on Graph Games, but it can be used for all types of combinatorial jousts. The theorem mentions only progressively bounded games and it would be interesting to see how it fares in instances where the game is only progressively finite, especially when it comes to the property that allows the sum of games and their Sprague-Grundy functions (which, once again, we will discuss later). Furthermore, it is also restricted to normal play. What about misère play? These are questions we can all take interest in and hopefully we will come up with answers!

I leave you with this final challenge, which is an exercise from Thomas S. Ferguson’s Game Theory material: what would be the Sprague-Grundy function of the subtraction game with subtraction set S = {1, 3, 4}?

Cheers,
Sam