This is the codebase for John Woods' K-State Spring 2025 CIS 598 project.
It is recommended to run this codebase on Windows. The packages and libraries needed to run the code are listed in the conda environment file setup/environment.yaml. If you do not have conda installed on your machine, download and install it first (e.g., via Miniconda or Anaconda). Once it is installed, run the following command to set up the environment:
conda env create -f setup/environment.yaml
To run the graph algorithm experiments, use the following command:
python run_ga.py
The four graph algorithms currently implemented are Breadth-First Search, Depth-First Search, A*, and Flood Fill. It takes around 20 minutes to run all four algorithms on all 373 mazes. Once complete, the results will be saved to an Excel spreadsheet at results\graph_algorithm_results.xlsx.
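For a quick look at the results outside of Excel, the spreadsheet can be loaded with pandas. This is just a sketch; it assumes pandas (and openpyxl for .xlsx support) are available in the conda environment, and the actual column names are whatever run_ga.py writes.

```python
# Sketch: load the graph algorithm results for quick inspection.
# Assumes pandas and openpyxl are installed in the conda environment.
import pandas as pd

results = pd.read_excel("results/graph_algorithm_results.xlsx")
print(results.head())       # first few rows
print(results.describe())   # summary statistics for the numeric columns
```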
To train the PPO RL agent, use the following command:
python train.py
Once training completes, the trained model will be saved to the saved_models directory.
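The trained model can be reloaded with Stable Baselines 3's PPO.load. The snippet below is only a sketch; the filename saved_models/ppo_micromouse is a placeholder, so check the saved_models directory for the name train.py actually uses.

```python
# Sketch: reload a trained PPO agent with Stable Baselines 3.
# "saved_models/ppo_micromouse" is a hypothetical filename; use the file
# that train.py actually writes to the saved_models directory.
from stable_baselines3 import PPO

model = PPO.load("saved_models/ppo_micromouse")

# The loaded model can then select actions for an environment observation:
# action, _state = model.predict(obs, deterministic=True)
```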
To run an RL algorithm, use the following command:
python run_rl.py
To run a statistical analysis on the Graph Algorithm results, including a one-way ANOVA and Tukey's HSD, use the Jupyter Notebook eval.ipynb.
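If you would rather run the same analysis as a script, the sketch below shows the general approach with SciPy and statsmodels. The column names ("algorithm" and "steps") are placeholders, not necessarily the actual headers in the spreadsheet.

```python
# Sketch: one-way ANOVA and Tukey's HSD over the graph algorithm results.
# The column names "algorithm" and "steps" are hypothetical placeholders;
# substitute the columns that run_ga.py actually writes.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_excel("results/graph_algorithm_results.xlsx")

# One-way ANOVA across the four algorithms.
groups = [g["steps"].values for _, g in df.groupby("algorithm")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# Tukey's HSD post-hoc test for pairwise differences between algorithms.
tukey = pairwise_tukeyhsd(endog=df["steps"], groups=df["algorithm"])
print(tukey.summary())
```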
To plot the training results of the Reinforcement Learning algorithms, use the following command:
python plot.py
The plots will be saved to the plots directory.
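If you want a custom plot instead of the ones plot.py produces, one option is to read the TensorBoard event files directly. The run directory "logs/PPO_1" and the scalar tag "rollout/ep_rew_mean" are assumptions based on Stable Baselines 3 defaults; adjust them to match the contents of the logs directory.

```python
# Sketch: pull a mean-episode-reward curve out of a TensorBoard log and plot it.
# "logs/PPO_1" and the scalar tag are assumed Stable Baselines 3 defaults.
import matplotlib.pyplot as plt
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("logs/PPO_1")
acc.Reload()

events = acc.Scalars("rollout/ep_rew_mean")
steps = [e.step for e in events]
rewards = [e.value for e in events]

plt.plot(steps, rewards)
plt.xlabel("Timesteps")
plt.ylabel("Mean episode reward")
plt.title("PPO training reward")
plt.savefig("plots/ppo_reward_curve.png")
```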
Here is an overview of the file structure for this codebase:
- graph_algorithms - A package containing Stable Baselines 3 compatible implementations of four graph algorithms: Breadth-First Search, Depth-First Search, A*, and Flood Fill.
- logs - TensorBoard logs from training Reinforcement Learning agents.
- mazefiles - Maze files from the micromouseonline/micromouse_maze_tool repository, filtered to include only competition-legal mazes.
- micromouse_maze_tool - A Git submodule for the micromouseonline/micromouse_maze_tool repository; contains all of the mazes.
- plots - Plots of Reinforcement Learning training performance.
- presentation - The PowerPoint slides for my final presentation.
- results - The numerical results for the graph algorithm experiments.
- rl - A package containing custom Stable Baselines 3 modules I created.
- saved_models - The models saved after training Reinforcement Learning agents.
- setup - Setup files for the project.
- src - Implementation of the MicroMouse Gymnasium environment and PyGame simulator (a generic usage sketch follows this list).
- tools - Various tools for working with mazes.
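For reference, the environment in src is used like any other Gymnasium environment. The loop below is a generic sketch with a random policy; the MicroMouseEnv class name and import path are hypothetical, so substitute the actual class or registered id from src.

```python
# Sketch: a standard Gymnasium interaction loop with a random policy.
# "MicroMouseEnv" and its import path are hypothetical; replace them with the
# actual environment class or registered id defined in src.
import gymnasium as gym
# from src.micromouse_env import MicroMouseEnv  # hypothetical import
# env = MicroMouseEnv()
env = gym.make("CartPole-v1")  # built-in stand-in so the sketch runs as written

obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```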