self-driving MarioKart with TensorFlow
Driving a new (untrained) section of the Royal Raceway:
Driving Luigi Raceway:
The model was trained with:
With even a small training set the model is sometimes able to generalize to a new track (Royal Raceway seen above).
`pip install -r requirements.txt`
`mupen64plus` (install via apt-get)
Start your emulator (`mupen64plus`) and run Mario Kart 64.
Make sure you have a joystick connected and that `mupen64plus` is using the SDL input plugin.
Run `python utils.py viewer samples/luigi_raceway` to view the samples.
Run `python utils.py prepare samples/*` with an array of sample directories to build an `X` and `y` matrix for training. (zsh will expand `samples/*` to all the directories; passing a glob directly also works.)
`X` is a 3-dimensional array of images.
`y` is the expected joystick output as an array:
```
[0] joystick x axis
[1] joystick y axis
[2] button a
[3] button b
[4] button rb
```
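As a sketch of what the prepare step produces, assuming each sample directory pairs frame images with rows of joystick readings (the record layout and the `load_image` helper here are illustrative assumptions, not the project's actual API):

```python
import numpy as np

def prepare(sample_rows, load_image):
    """Build X (stacked images) and y (joystick vectors) from records.

    sample_rows: iterable of (image_path, stick_x, stick_y, a, b, rb)
    tuples, e.g. parsed from a per-sample data file; load_image maps a
    path to an HxWxC array. Both are assumptions for illustration.
    """
    X, y = [], []
    for path, *controls in sample_rows:
        X.append(load_image(path))
        y.append([float(c) for c in controls])  # 5 values per frame
    return np.stack(X), np.array(y, dtype=np.float32)

# Two fake 4x4 grayscale frames with their joystick readings.
rows = [("f0.png", 0.1, 0.0, 1, 0, 0),
        ("f1.png", -0.2, 0.0, 1, 0, 0)]
X, y = prepare(rows, load_image=lambda path: np.zeros((4, 4, 1)))
print(X.shape, y.shape)  # (2, 4, 4, 1) (2, 5)
```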
The `train.py` program will train a model using Google's TensorFlow framework and cuDNN for GPU acceleration. Training can take a while (~1 hour) depending on how much data you are training with and your system specs. The program will save the model to disk when it is done.
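Training is plain supervised regression from frames to the 5-value joystick vector. As a framework-free illustration of that input/output contract only (the real project trains a TensorFlow convolutional network, not a least-squares linear map, and this toy dataset is random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dataset shaped like the output of the prepare step:
# 32 tiny 8x8x1 "frames" and one 5-element joystick vector each.
frames = rng.normal(size=(32, 8, 8, 1))
true_w = rng.normal(size=(8 * 8 * 1, 5))
labels = frames.reshape(32, -1) @ true_w

# Fit a linear map frame -> joystick vector by least squares.
w, *_ = np.linalg.lstsq(frames.reshape(32, -1), labels, rcond=None)

pred = frames.reshape(32, -1) @ w
print(pred.shape)  # (32, 5): one predicted joystick vector per frame
```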
The `play.py` program will use the `gym-mupen64plus` environment to run the trained agent against the MarioKart environment. The environment provides screenshots of the emulator; these images are sent to the model to get the joystick command to send. The AI's joystick commands can be overridden by holding the 'LB' button on the controller.
The environment provides a reward of `-1` per time-step, which gives the AI agent a metric for its performance during each race (episode); the goal is to maximize reward and therefore minimize overall race duration.
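Putting the play loop and the reward together, a gym-style sketch (the function names, override hook, and stub environment below are assumptions for illustration; the real agent runs against `gym-mupen64plus`):

```python
def run_episode(env, model, override_active=lambda: False,
                read_controller=None, max_steps=1000):
    """Screenshot -> model -> joystick action, until the episode ends.

    env follows the classic gym API (reset/step). override_active and
    read_controller stand in for the 'hold LB to take over' behaviour;
    both are illustrative assumptions.
    """
    obs = env.reset()
    total_reward, done, steps = 0.0, False, 0
    while not done and steps < max_steps:
        action = read_controller() if override_active() else model(obs)
        obs, reward, done, _info = env.step(action)
        total_reward += reward
        steps += 1
    return total_reward

class StubEnv:
    """Minimal stand-in: -1 reward per step, done after `length` steps."""
    def __init__(self, length=5):
        self.length, self.t = length, 0
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, -1.0, self.t >= self.length, {}

total = run_episode(StubEnv(), model=lambda obs: [0.0, 0.0, 1, 0, 0])
print(total)  # -5.0: total reward is minus the step count
```

Because total reward is just minus the number of time-steps, maximizing it is the same as finishing the race in fewer steps.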
Open a PR! I promise I am friendly :)