Creating AI for GameBoy Part 3: Automating (Awful) Gameplay

If you missed out on either of the first two parts, you can find part 1 here and part 2 here.

As always, my GitHub is up to date with the most recent edition of this project.

In the 3rd part of this project, we will be automating the gameplay with random inputs, hence the title.

This gameplay must generate data for us while it plays, so that we have something to train our machine learning model on in the future.

The data we need to train an algorithm comes in three varieties: States, Actions, and Metrics.

The States data refers to the state of the game: where our character is, who is around us, and what our character’s health level is.

The Actions, similarly, refer to the action we take given the state we are in.

States and Actions are connected data points that yield a result, and we need to measure that result to see how good the action was.

There are a few algorithms suited to an endeavor like this (Q-learning, genetic algorithms), and each gives the Metric data a different name, such as Reward, Fitness, or Q(s,a).

I don’t want to pigeonhole this project into one particular algorithm just yet, so I will be calling our measures of success our Metrics.
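As a purely illustrative picture of what gets stored, each turn could contribute one entry to each of three parallel lists; the field names and values below are placeholders I have made up, not the project’s actual format.

# Purely illustrative placeholders for one turn's worth of data
states, actions, metrics = [], [], []

state = {"player_units": 3, "enemy_units": 5, "turn": 2}        # what the game looked like
action = {"unit": "Lyn", "moved_to": (4, 7), "option": "wait"}  # what we did about it
metric = state["enemy_units"] + state["turn"]                   # how well it went (lower is better, as described below)

states.append(state)
actions.append(action)
metrics.append(metric)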

To fully automate gameplay, we will need 2 functions: one to play the game and one to restart the emulator if something goes wrong or the run ends.

The gameplay function will be more complicated, as it will be implementing the controller we constructed in part 1 while generating data using the image processing we coded in part 2.

Each time our unit makes a move or decision, we will capture it and store the data.

Overall, the workflow of the gameplay function for each turn will be:

1. Collect the gamestate, recording the number of enemy and player units and the turn count (State). The gamestate also gives us the value we want to minimize, which is enemy units remaining + turn count (Metric).
2. Find our character, recording the name (Actions).
3. Move our character, recording the coordinates we moved her to (Actions).
4. Select an option, recording whether we attacked, used an item, or waited (Actions).

I will show a truncated version of the gen_data function below to give the overall feel and logic of the ideas outlined above.

In the context of Fire Emblem, there are a few key presses and sleep lines that I omitted for clarity and readability.
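To give a feel for that per-turn loop, here is a rough sketch of a gen_data-style function. The helpers it leans on (run_over, capture_gamestate, find_character, reachable_squares, move_character, select_option) are assumed names standing in for the part 1 controller and part 2 image processing, not the project’s exact functions, and the random choices are only one way to drive the play.

import random
import time

def gen_data_prologue():
    """Play one run with random inputs, recording a (state, action, metric) triple each turn."""
    states, actions, metrics = [], [], []
    while not run_over():                               # assumed helper: chapter cleared or game over?
        enemies, players, turn = capture_gamestate()    # assumed helper built on the part 2 image processing
        states.append((enemies, players, turn))         # State
        metrics.append(enemies + turn)                  # Metric: the value we want to minimize

        name = find_character()                         # assumed helper: put the cursor on our unit
        x, y = random.choice(reachable_squares())       # assumed helper: squares the unit can legally reach
        move_character(x, y)                            # assumed helper built on the part 1 controller
        option = random.choice(["attack", "item", "wait"])
        select_option(option)
        actions.append((name, (x, y), option))          # Action

        time.sleep(0.5)                                 # give the emulator time to animate the move
    return states, actions, metrics

In the real function, each of those helper calls expands into the handful of key presses and sleep lines mentioned above.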

The restart function is a lot simpler in comparison.

In essence, it presses the keys needed to restart the emulator and navigate through the menus to begin the game again.
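A sketch of that restart routine, assuming the part 1 controller exposes a press(key) helper, and with the specific keys, counts, and sleep times as placeholders, could be as short as this:

import time

def reset_to_prologue():
    """Reset the emulator and click back through the menus to the start of the prologue."""
    press("tab")         # placeholder: whatever hotkey the emulator binds to reset
    time.sleep(3)        # wait for the title screen to load
    for _ in range(4):   # placeholder count: advance through the intro and menu screens
        press("x")       # placeholder: the key bound to the GameBoy A button
        time.sleep(1)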

Combining these two functions as I have below generates the data that Part 4 of this series will use to learn to play better.

With a for loop specifying the number of runs to complete, we can leave the game playing overnight to generate data for the learning aspect of this challenge.

# s, a, m are lists containing states, actions and metrics values
reset_to_prologue()
states, actions, metrics = gen_data_prologue()
s.extend(states)
a.extend(actions)
m.extend(metrics)

As a quick note to anyone taking on a similar project, Fire Emblem is different from a lot of other games in that the controls change with every command, complicating this part of the project when compared to other games.

In games like Mario, Sonic, and many others, ‘Right’ always takes the character to the right, ‘A’ is almost always jump, and ‘Start’ will bring up the menu.

In Fire Emblem, ‘Down’ can mean a number of things depending on the screen.

It can move the cursor down a square, move a unit down a square, move a selection from one option to another, and even change units if one is being examined.

Because of this complication, I have written functions to handle it sensibly during gameplay, whereas many other games would need none at all.

If you are trying to code an AI for a different game, it is likely you can omit the functions and use an image of the screen as the state and the controller input as the action.
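For that simpler setup, a minimal sketch might grab a screenshot as the state and press a random button as the action; ImageGrab and pyautogui are one possible pairing here, and the key names and capture region are placeholders, not necessarily what this project uses.

import random
import numpy as np
import pyautogui
from PIL import ImageGrab

BUTTONS = ["left", "right", "up", "down", "x", "z"]   # placeholder emulator key bindings

def random_step(bbox=(0, 0, 480, 432)):
    """Capture the (assumed) emulator region as the state and press one random button as the action."""
    state = np.array(ImageGrab.grab(bbox=bbox))       # raw pixels as the state
    action = random.choice(BUTTONS)
    pyautogui.press(action)                           # send the button press to the emulator
    return state, action

Random play like this is awful at the game, but it is enough to fill the state and action lists that the learning step will need.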
