
To play the solo mode, you will need to upgrade to Version 1.9 of the app (or an older beta version):

Download Solo Rules
(version 0.6.5)

  • Players: 1-4*
  • Ages: 13+
  • Time: 60-75 Minutes

* App-Powered Solo Mode

We are excited to announce that we are including a solo mode for The Search for Planet X in the companion app!


2019 February
Alex plays the game at DundraCon and develops proof-of-concept scripts that create a game state and solve it. These scripts calculate that there are 1,138,272 arrangements of objects that follow all the logic rules.

2019 May
We meet at KublaCon to brainstorm and discuss approaches for developing the digital bot and integrating it into a solo mode for the app.

2019 August
We meet at Gen Con to finalize the approach and discuss the design. We officially hire Alex to create the bot.

2019 September
Alex submits the first complete working version of the bot, and we playtest against it.

2019 October
We launch the Kickstarter campaign for The Search for Planet X. Alex revises the bot based on our playtesting feedback, creating numerous configuration parameters we can use to tune the game play and the difficulty of the bot.

2020 February
We design the format that we will use to store the bot’s moves (what actions it will take and what theories it knows) in the maps. We modify Alex’s bot code to generate its output in this format.

2020 March
We add the solo buttons to the app interface. Two new buttons appear on the main action menu (“Bot: Take Turn” and “Bot: Submit Theory”) that perform the appropriate move based on where the bot’s player pawn should be on the time track. We track whether the bot or the player has located Planet X and adjust the buttons accordingly. If the player finds Planet X first, we determine what the bot will do with its final scoring opportunity. We upload Version 1.5.0 with these changes to the stores on March 10 for beta playtesting.

2020 April
We publish a draft of the rules sheet for the solo mode online for open beta testing. Version 1.6.0, uploaded to the stores on April 19, includes a link to this draft rules sheet. We work with Dan King to make sure that the instructions for the solo mode will be included in the How To Play video.

2020 May
We continue to test and tweak the algorithms for the bot’s actions and theory submissions. We add 40 more game codes to the Standard Mode of play. We fulfill rewards to backers in the United States, and many backers begin to playtest the solo mode and send us feedback.


By Alexander Mont

Here are some notes from Alex about the design of the bot and how its output will be integrated with the app:

AI Design

The AI starts out by enumerating all possible sector maps that are consistent with the game’s Rules of Astronomy (there are 1,148,272 such maps for the “expert version” with 18 sectors, and 4,446 such maps for the “normal version” with 12 sectors). It stores this as a “knowledge state” that represents everything it knows so far. Then, each time it receives information (from a conference clue, a scan/target, or prior knowledge), it removes from the knowledge state all sector maps that are not consistent with that information.
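As a minimal sketch of this idea, a knowledge state can be held as a set of candidate maps and shrunk as information arrives. The six-sector toy universe and all names below are illustrative assumptions, not the bot's actual code or the real game's object mix:

```python
from itertools import permutations

# Toy six-sector universe just to show the mechanics: a "map" is a tuple
# assigning one object to each sector.
OBJECTS = ("comet", "asteroid", "asteroid", "gas cloud", "empty", "planet_x")

def all_maps():
    # Enumerate every distinct arrangement; the set removes duplicate
    # permutations caused by the repeated "asteroid".
    return set(permutations(OBJECTS))

def apply_information(knowledge_state, is_consistent):
    # Discard every candidate map that contradicts the new information.
    return {m for m in knowledge_state if is_consistent(m)}

state = all_maps()  # 360 distinct candidate maps in this toy universe
# Example clue: "Planet X is not in sector 0" (0-indexed).
state = apply_information(state, lambda m: m[0] != "planet_x")
```

A real implementation would additionally encode the Rules of Astronomy as consistency predicates applied during the initial enumeration, so that only legal maps enter the starting knowledge state.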

A knowledge state can be “scored” based on how valuable it is to the AI. We use a formula that rewards being able to safely place theories, finding the location of Planet X, and reducing the overall number of possible remaining sector maps. The AI will only place a theory if it is sure that it is correct (i.e., all possible maps in its knowledge state are consistent with the theory).
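The certainty test and a scoring function along these lines might look like the following sketch. The weights are hypothetical (the article names the ingredients of the formula but not its coefficients), and maps are again represented as tuples of objects per sector:

```python
def certain_sectors(knowledge_state):
    # A theory is "safe" for a sector exactly when every remaining
    # candidate map agrees on what that sector contains.
    n = len(next(iter(knowledge_state)))
    result = {}
    for s in range(n):
        seen = {m[s] for m in knowledge_state}
        if len(seen) == 1:
            result[s] = seen.pop()
    return result

def score(knowledge_state, x_bonus=10.0, theory_bonus=2.0, map_penalty=0.01):
    # Hypothetical weights: reward safe theories and locating Planet X,
    # penalize the number of maps still possible.
    certain = certain_sectors(knowledge_state)
    found_x = "planet_x" in certain.values()
    return (theory_bonus * len(certain)
            + (x_bonus if found_x else 0.0)
            - map_penalty * len(knowledge_state))
```

With two candidate maps that agree on sectors 0 and 1 but differ on sector 2, only the first two sectors count as safe theory placements.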

When the AI decides what scan to do, it does the following for every possible scan:

  • Divide the current knowledge state into groups based on what the result of the scan would be if each candidate map were the true sector map. The groups thus represent the possible future knowledge states after the scan is completed.
  • Score each of those possible future knowledge states.
  • Compute the average score of those knowledge states, weighted by the probability that the AI will end up in each one, and determine how much the score is expected to improve over the score before the scan.

Then it selects the scan with the highest ratio of expected score improvement to the time cost of the scan.
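Putting the three steps and the selection rule together gives a sketch like the one below. Function and parameter names are assumptions, and every remaining map is weighted as equally likely to be the true one:

```python
from collections import defaultdict

def choose_scan(knowledge_state, scans, score, time_cost):
    """Pick the scan with the best expected score gain per unit of time.

    scans:     dict mapping scan name -> function(map) -> scan result
    score:     function(knowledge_state) -> float
    time_cost: dict mapping scan name -> time-track cost of that scan
    """
    base = score(knowledge_state)
    best, best_ratio = None, float("-inf")
    for name, result_of in scans.items():
        # Partition the knowledge state by the result this scan would
        # give if each candidate map were the true sector map.
        groups = defaultdict(set)
        for m in knowledge_state:
            groups[result_of(m)].add(m)
        # Expected score over outcomes, each map equally probable.
        total = len(knowledge_state)
        expected = sum(len(g) / total * score(g) for g in groups.values())
        ratio = (expected - base) / time_cost[name]
        if ratio > best_ratio:
            best, best_ratio = name, ratio
    return best
```

For example, with a score function that simply prefers smaller knowledge states, an informative scan (one whose result actually splits the candidate maps) beats a scan whose result is the same for every map.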

Integration With The App

The AI we are currently developing is “non-interactive”: that is, it does not take into account the human player’s decisions. In fact, when we generate the maps to put into the app, we will also run our AI to get the sequence of scans that the AI will send and theories that the AI will place. Note that the AI does not “cheat” – that is, the AI always bases its decisions only on the same information that a human player would have.

Even though the AI is non-interactive in the way it chooses its scans, it might be possible to add interactivity in the way that it chooses which theories to put down. For instance:

  • If the AI knows what is in three or more different sectors and therefore has to choose which two theories to put down, we might have it prioritize sectors where a human player’s theory is on its way toward peer review (so it doesn’t miss out on its chance).
  • If the AI doesn’t know what is in a sector but does have a good guess, and that sector is about to get peer-reviewed, and the AI has a free theory placement, it could make its guess so that it doesn’t lose out on the chance to score that sector.

Implementing the above would be much easier than implementing a full interactive version, because we don’t need to keep track of the full knowledge state and simulate every possible scan (this might be too computationally expensive for a phone app); we just have to pre-compute which theories the AI has figured out and where it has good guesses, and have the user input where he or she has put down theories.
