Reimplementing Blondie24: Convolutional Version

by Justin Skycak on

Using convolutional layers to create an even better checkers player.

This post is a chapter in the book Introduction to Algorithms and Machine Learning: from Sorting to Strategic Agents. Suggested citation: Skycak, J. (2022). Reimplementing Blondie24: Convolutional Version. Introduction to Algorithms and Machine Learning: from Sorting to Strategic Agents. https://justinmath.com/reimplementing-blondie24-convolutional-version/


Fogel and Chellapilla followed up their 1999 Blondie24 paper with another paper, Evolving an Expert Checkers Playing Program without Using Human Expertise, published in 2001.

Convolutional Layer

This paper was very similar to the 1999 paper, but it had one key difference that improved the performance of the evolved players: they inserted a convolutional layer between the input layer and the first hidden layer of their neural network.

  • Input Layer: $32$ linearly-activated nodes and $1$ bias node (the checkers board has $64$ squares, but only half of them are used)
  • Convolutional Layer: one tanh-activated node for each $N \times N$ subsquare of the checkers board, with $N = 3, 4, 5, 6, 7, 8.$ Each of these nodes also receives input from the bias node in the input layer, and the convolutional layer itself contains $1$ bias node that connects to the next layer. (A sketch of a single convolutional node appears after this list.)
  • First Hidden Layer: $40$ tanh-activated nodes and $1$ bias node
  • Second Hidden Layer: $10$ tanh-activated nodes and $1$ bias node
  • Output Layer: $1$ tanh-activated node. Note that this node also receives input from the piece difference node.
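
To make the convolutional nodes concrete, here is a minimal sketch of how a single one might compute its output. The function and argument names, along with the board representation, are hypothetical illustrations rather than details from the paper: each node applies tanh to a weighted sum over the playable squares in its subsquare, plus a weight from the input bias node.

```python
import numpy as np

def conv_node_output(board_values, square_indices, weights, bias_weight):
    # board_values: length-32 sequence of inputs, one per playable square
    # square_indices: the playable squares covered by this node's subsquare
    # weights: one weight per covered playable square
    # bias_weight: weight on the input layer's bias node (whose value is 1)
    weighted_sum = sum(
        w * board_values[s] for w, s in zip(weights, square_indices)
    )
    return np.tanh(weighted_sum + bias_weight)
```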

Recall that the checkers board has dimensions $8 \times 8,$ so it contains $(9-N)^2$ subsquares of dimensions $N \times N.$ There is a single $8 \times 8$ subsquare (namely, the entire board); likewise, there are four $7 \times 7$ subsquares, nine $6 \times 6$ subsquares, sixteen $5 \times 5$ subsquares, twenty-five $4 \times 4$ subsquares, and thirty-six $3 \times 3$ subsquares. Including the bias node, the total number of nodes in the convolutional layer is

$\begin{align*} 1 + 4 + 9 + 16 + 25 + 36 + 1 = 92. \end{align*}$
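
As a quick sanity check, a couple of lines of Python confirm this count:

```python
# (9 - N)^2 subsquares of size N x N fit on an 8 x 8 board
num_subsquares = sum((9 - N) ** 2 for N in range(3, 9))
print(num_subsquares)      # 91
print(num_subsquares + 1)  # 92, including the convolutional layer's bias node
```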


These nodes receive $945$ weights from the input layer (including weights from the input bias node). On average, an $N \times N$ subsquare contains $N^2/2$ playable squares, so each of its nodes receives an average of $N^2/2 + 1$ weights, where the $+1$ accounts for the input bias node:

$\begin{align*} &1 \left( \dfrac{8^2}{2}+1 \right) + 4 \left( \dfrac{7^2}{2}+1 \right) + 9 \left( \dfrac{6^2}{2}+1 \right) \\ & \quad + 16 \left( \dfrac{5^2}{2}+1 \right) + 25 \left( \dfrac{4^2}{2}+1 \right) + 36 \left( \dfrac{3^2}{2}+1 \right) \\ &= 945 \end{align*}$
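
The fractional terms like $7^2/2$ work out because subsquares with an odd side length contain either $\lceil N^2/2 \rceil$ or $\lfloor N^2/2 \rfloor$ playable squares, and the two cases occur equally often. The sketch below counts the weights directly; which parity of squares counts as playable is an assumption, but either convention gives the same total by symmetry.

```python
def is_playable(row, col):
    # assume the dark squares, where row + col is odd, are playable
    return (row + col) % 2 == 1

num_weights = 0
for N in range(3, 9):
    for i in range(9 - N):  # top-left corner of each N x N subsquare
        for j in range(9 - N):
            playable_squares = sum(
                is_playable(r, c)
                for r in range(i, i + N)
                for c in range(j, j + N)
            )
            num_weights += playable_squares + 1  # +1 for the input bias weight

print(num_weights)  # 945
```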


The convolutional layer is also known as a spatial preprocessing layer because it allows the network to perceive spatial characteristics of the board at different levels of “zoom”. Most modern image classification systems leverage convolutional neural networks.

With the addition of the convolutional layer, the total number of weights in the Blondie24 neural network increases to $5047,$ including the weight from the piece difference node to the output layer. These weights are all variable and are learned through the process of evolution.
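
Tallying the weights layer by layer reproduces this total:

```python
input_to_conv = 945           # computed above
conv_to_hidden1 = 92 * 40     # 91 subsquare nodes + 1 bias, to 40 nodes
hidden1_to_hidden2 = 41 * 10  # 40 nodes + 1 bias, to 10 nodes
hidden2_to_output = 11 * 1    # 10 nodes + 1 bias, to 1 node
piece_difference_to_output = 1

total = (input_to_conv + conv_to_hidden1 + hidden1_to_hidden2
         + hidden2_to_output + piece_difference_to_output)
print(total)  # 5047
```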

King Value Update

There was one more minor difference in the 2001 paper. The king value was updated in a slightly different way:

$\begin{align*} K^\text{child} = K^\text{parent} + \delta \end{align*}$


where $\delta$ is randomly chosen from $\{ -0.1, 0, 0.1 \}.$ The updated value of $K$ is still constrained to the range $[1,3].$
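
A minimal sketch of this update, with a hypothetical function name:

```python
import random

def mutate_king_value(k_parent):
    # delta is drawn uniformly from {-0.1, 0, 0.1}, and the result
    # is clipped back into the allowed range [1, 3]
    delta = random.choice([-0.1, 0, 0.1])
    return min(max(k_parent + delta, 1.0), 3.0)
```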

Performance Curve

Generate a performance curve the same way you did for the previous (non-convolutional) implementation of Blondie24, playing your evolved networks against your heuristic strategy. The curve should look fairly similar but should ideally level off to a slightly higher level of performance.
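
As a reminder of the shape of that computation, here is a minimal plotting sketch. The helper `win_rate_vs_heuristic` and its module are hypothetical stand-ins for whatever evaluation routine you built in the previous chapter.

```python
import matplotlib.pyplot as plt

# Hypothetical import: a routine that plays the fittest network of a
# given generation against the heuristic strategy and returns its win rate.
from your_blondie24_module import win_rate_vs_heuristic

generations = list(range(0, 101, 10))
win_rates = [win_rate_vs_heuristic(g) for g in generations]

plt.plot(generations, win_rates)
plt.xlabel('generation')
plt.ylabel('win rate against heuristic strategy')
plt.show()
```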

