For my Computer Science course we’re making AIs that play a simple game called “The Robot Game” (a two-player strategy game — you can look it up, there’s a whole subreddit dedicated to it). Once we’ve finished our AIs we’ll pit them all against each other and see whose is the smartest.
Right before we were assigned this, I had discovered a page about machine learning in games (http://satirist.org/learn-game/). There was an interesting section (http://satirist.org/learn-game/systems/starcraft/) discussing the problem of making AIs for StarCraft. Turns out, there are competitions where people make bots for StarCraft (the first one) and pit them against each other. (On a side note, I also found out people make AIs that speedrun games — thought that was cool.) There are plenty of challenges in making these AIs, as StarCraft is a really complex game, but what caught my attention is that no bot has yet been made that can challenge a decently skilled human player.
This reminded me of Google’s DeepMind, whose AI is currently about to take on one of the world’s top Go players (Go is an ancient board game from China, often compared to chess, if you don’t know what that is). It recently beat a professional Go player — the first time a computer had ever beaten a professional at Go — and it was trained using machine learning and neural networks.
I’m curious to see how machine learning will be used in games in the future. Right now, it’s not really necessary: AI is pretty basic in most games, with simple state machines providing fun bots. But in the future, games as complex as StarCraft might become more common, and developers will have to start experimenting with machine learning. I’m really looking forward to seeing where that goes.
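To give a sense of what “simple state machine” means here, this is a minimal sketch of how a lot of game bots are structured — the states, distances, and thresholds are made up for illustration, not taken from any real game:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class Bot:
    """A toy finite state machine bot.

    Every frame, the bot looks at one piece of world state (here,
    distance to the player) and transitions between hand-written
    states. No learning involved -- the behavior is fully authored.
    """
    def __init__(self):
        self.state = State.PATROL

    def update(self, distance_to_player: float) -> State:
        # Hand-tuned transition rules (thresholds are illustrative).
        if distance_to_player <= 1:
            self.state = State.ATTACK
        elif distance_to_player <= 5:
            self.state = State.CHASE
        else:
            self.state = State.PATROL
        return self.state

bot = Bot()
print(bot.update(10))  # far away  -> State.PATROL
print(bot.update(3))   # nearby    -> State.CHASE
print(bot.update(1))   # adjacent  -> State.ATTACK
```

The whole “brain” fits in a handful of if-statements, which is exactly why it works for most games today and why it breaks down for something like StarCraft, where the space of situations is far too big to hand-author.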