You must have heard the news of AlphaGo defeating Lee Sedol by now, and Starcraft may be the next game that DeepMind (AlphaGo’s creator) wants to challenge. This is great news for Starcraft.
Disclaimer
I must declare my “proficiency” in artificial intelligence (AI), go, and Starcraft before I discuss further.
I have only a layman’s understanding of AI, so correct me if I have misunderstood anything. Also, let me know if there is any important information that the general public should know about.
My skill in go is around amateur 1 dan. Although I cannot give you a good analysis of the moves made in the games, I know what I am talking about when it comes to go in general.
This blog defines my understanding of Starcraft.
Significance of AlphaGo’s victory
Computers beat humans at many things, including chess. Deep Blue famously defeated Garry Kasparov in 1997, so it seems natural to expect AI to excel at go too. However, this milestone was only achieved recently by DeepMind, and it is significant to both the discipline of AI and the go community. Perhaps it is more significant to the development of AI than to the meaning of go.
It is difficult not to compare go and chess, as they are both mainstream board games with a long history and a modern professional system. While there are many similarities between the two games, go remained a holy grail for AI development. There wasn’t a program good enough to defeat a professional go player, at least not until recently, when Fan Hui was defeated by AlphaGo. To be honest, I had never heard of Fan Hui before the news, so it is understandable that the go community was sceptical of AlphaGo’s strength before the match against Lee Sedol. Given that AlphaGo has now defeated Lee Sedol, the question is no longer whether AI can defeat a top go professional, but the other way round (as of today, the score is 3-0 in AlphaGo’s favor).
Lee Sedol can be compared to Roger Federer, who is arguably the most accomplished tennis player of the last decade (people still argue whether he is the greatest of all time). Thus, it is fair to say that AI has truly defeated humans on the go board after Lee Sedol’s losses. Of course, the current world number one is Ke Jie, whom you can compare to Novak Djokovic in tennis, so you could argue that AI has yet to defeat the current best. However, the difference between these two players is marginal relative to the significance of AI’s victory over a top human player. Further, Ke Jie is likely to be the next challenger if AlphaGo is to compete against another top go player. In my opinion, and perhaps the opinion of many top professionals, AlphaGo should win against Ke Jie. That should put to rest the question of whether AI has conquered the go board, and it leads us to the next likely target of DeepMind: Starcraft.
Challenging Starcraft
Demis Hassabis, co-founder of DeepMind, has signalled his intention to challenge Starcraft. Before we get all hyped and debate who should represent humanity against DeepMind, note that it is uncertain whether it will even happen.
“Maybe. We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting though that is.” – Demis Hassabis
I think there is no doubt that a computer does better at macro and micro in Starcraft, so there is little to discuss mechanically. The two examples below may be extreme and a little too situation-specific, but such inhuman control should definitely be taken into account.
The key discussion should instead revolve around how AI reacts in a game of imperfect information, which is one of the two key differences that set Starcraft apart from go (the other being that it is a real-time strategy game instead of a turn-based one). This has been brought up by both Demis Hassabis and Flash.
“Strategy games require a high level of strategic capability in an imperfect information world — “partially observed,” it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.” – Demis Hassabis
Imperfect information is what makes Starcraft interesting. You have to gather information and make use of it to make choices. Even with only a layman’s understanding of AI, I can tell that this is an extremely difficult challenge. Imperfect information has many implications, for example hiding information, making blind counters, positioning units, and so on.
However, time after time, computers have proven to be more capable than we believed at the time. One of the key breakthroughs behind AlphaGo is the application of deep learning, which allows it to grasp seemingly abstract information and sort out what is important and what isn’t. This partly explains why chess is relatively easy for AI: the value of the pieces and moves is more “concrete” and “quantifiable”, whereas in go the value of each stone changes after every move. Top go professionals will often give you a vague answer about the rationale behind certain moves; to them, a move just “feels” right. AlphaGo’s victory over Lee Sedol suggests that AI now performs better than humans at understanding these abstract patterns and making the best of them (at least in go). Therefore, it seems naive to dismiss entirely the possibility that AI can defeat a human in Starcraft strategically in the near future.
That being said, BoxeR appears to be very confident in humans’ ability to defeat AI in Starcraft. While I understand his optimism, I think he is showing the “unwarranted” confidence commonly displayed by experts who are challenged by computers. Experts in different fields believe there are factors that computers cannot account for, and they have been proven wrong time after time. Prediction based on statistical modelling is the best illustration of this attribution error, and Billy Beane’s moneyball case is the most celebrated example: results have shown convincingly that computers can predict better than humans even when experts insist that computers cannot account for many factors. I recommend reading Super Crunchers by Ian Ayres if you are interested in this topic.
In fact, such confidence could be observed in the go community before Lee Sedol’s match. I have to admit that I was one of those who thought Lee Sedol would defeat AlphaGo (not that I am an expert), and the thought that Lee Sedol (he is godly in the hearts of go players) could be defeated by a computer seemed unthinkable. This is partly because even a player of my level does not find typical go AI a challenge, but AlphaGo has proven to be something else.
Interestingly, some may even argue that Lee Sedol made too many sub-optimal moves in game one because he was overconfident. It is unfair to attribute these moves to overconfidence, but there is little doubt that he was shocked in game one. The video below shows Lee Sedol’s reaction to AlphaGo’s move 102.
It is obvious that Lee Sedol was shaken by it. While it is relatively normal to see a Starcraft player show such signs, it is rare for go players, who are famous for keeping a poker face in game, so this is a clear sign of shock. There is a famous “ear-reddening move” in go history. Long story short: a very strong move was played in a game between two of the top players of the time. To the surprise of the other top players observing, a doctor who knew nothing about go commented that it was a great move. The doctor later explained that the opponent’s ears had turned red, even though he maintained his poker face.
“Yesterday I was surprised but today it’s more than that — I am speechless” – Lee Sedol in the post game 2 interview
The video below shows Lee Sedol’s reaction right before he conceded the first game, and I have never seen him like this before.
What does this challenge mean to Starcraft?
This is huge even if the challenge doesn’t happen.
Let’s put things in perspective. How many of you knew about go before this AlphaGo versus Lee Sedol match? You may have known the basic rules, or perhaps only that the game exists. I bet you have a better understanding of go after the news you have read, and your interest in the game may have increased. The mainstream media has been discussing this match, which has given go unprecedented publicity. Many outlets have tried to briefly explain go and discuss the significance of the milestone, but few did a good job in my opinion. For example, The Verge actually mistook gomoku (a connect-five game usually played on a go board) for go by inserting an image of gomoku in this article. They have since quietly corrected the image.
Many people who have no idea what go is have also jumped on the bandwagon to discuss the moves in the games. As a marketing research student who works on sharing behavior, I find this both pleasing and annoying at the same time. There is little doubt how pivotal word-of-mouth is to the success of a product or service, and its impact can be observed beyond just “sales”. The number of followers on Ke Jie’s Weibo (the Chinese Twitter) increased by 140,000 during this event.
The fact that opinion leaders in different fields are talking about go may just revive the popularity of the game. Hikaru no Go, a Japanese anime about go, was actually considered a savior of the game, as it helped promote go to a young generation more interested in video games than in go.
Simply by suggesting that Starcraft may be the next project, DeepMind is already contributing to the popularity of Starcraft. If DeepMind formally challenges Starcraft one day, Starcraft will be talked about everywhere. I actually saw someone in my office watching AlphaGo’s match live yesterday, and I talked to him about it. He stared back at me awkwardly and said, “I don’t know what go is, but this is interesting.” Go is not a good spectator game, and yet people who don’t even know the rules say they find it interesting. Of course, they may not really find it interesting and may just say so to conform to social norms. However, there is no doubt that go has benefited immensely from this event. This also means that a spectator game like Starcraft may benefit even more.
This is an incredible opportunity for Blizzard to co-brand with Google, even if DeepMind never ends up challenging Starcraft. Recently, many big platforms have been investing in eSports, but they treat Starcraft as a secondary game. For example, neither Yahoo nor ESPN has Starcraft on the topic menu at the top of its site. A key reason is the popularity of the game itself, and DeepMind can help change this.
Now, let’s move away from the benefit of exposure and think about how DeepMind may improve our understanding of the game. I think there is a good chance that DeepMind will challenge our current understanding of Starcraft the same way it has with go. For example, in game 2, AlphaGo played an unorthodox move 37, which got many pros discussing its underlying meaning. While the intention of the move is clear, it contradicts the general understanding of go. The move is a “shoulder hit”, which is usually played on the fourth line against a stone on the third line; it is rarely played on the fifth line against a fourth-line stone, as it was in the actual game. I will not go further into why this challenges the current understanding of go, but the key point is that the AI learns by itself, without being restricted by the assumed consensus. Therefore, it seems extremely plausible that AI may bring something new to Starcraft and force us to question our assumptions. For example, maybe scouting is overrated, or perhaps we misjudge the value of upgrades.
The mere mention of Starcraft has already boosted the popularity of the game. If DeepMind really challenges Starcraft, we should embrace the opportunity to improve both the game’s popularity and our understanding of its in-game assumptions.
If you enjoyed this article, I’d love you to share it with one friend. You can follow me on Twitter and Facebook. If you really like my work, you can help to sustain the site by contributing via PayPal and Patreon. See you in the next article!
Thank you for the insightful articles as always.
To me the critical question is what kind of constraints will be put on the AI when it plays against humans. If there are no restrictions whatsoever, then I believe AI can already beat humans by doing perfect micro and macro, as you suggest. Perfect blink stalkers, perfect marine splitting, etc. would be incredibly powerful without having to do any fancy AI stuff.
One format that I would personally find interesting is to have an AI “assistant” that does nothing but watch the screen and whisper in your ear, helping you decide what to do. That would completely remove the factor of mechanical superiority and make the contest all about superior decision-making. Everyone *already* knows that computers can do superior micro and macro, and AI has little to do with that. But if a lower-ranked player could beat a higher-ranked player by making better decisions, that would be pretty damn interesting.
Indeed, it is really hard to pinpoint the boundary of what the AI should be allowed to do for fairness. Your idea of an AI assistant is quite interesting, but it is a bit like co-op mode without a second person clicking. Maybe setting an APM limit and requiring the AI to operate through a mouse cursor would balance things out for a mechanically fairer match.
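For what it’s worth, an APM cap like the one suggested above could be enforced with something as simple as a token bucket: each action spends a token, and tokens refill at the allowed rate. Here is a minimal sketch; the 300 APM cap and burst size of 10 are arbitrary illustrative numbers, not anything DeepMind has proposed:

```python
# Minimal token-bucket limiter for capping an agent's actions per minute (APM).
# The apm_cap and burst values below are illustrative assumptions.

class ApmLimiter:
    def __init__(self, apm_cap=300, burst=10):
        self.rate = apm_cap / 60.0   # tokens (actions) regained per second
        self.burst = burst           # max actions allowed in a quick burst
        self.tokens = float(burst)
        self.last_time = 0.0

    def try_act(self, now):
        """Return True if the agent may act at time `now` (in seconds)."""
        # Refill tokens for the time elapsed since the last check, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


limiter = ApmLimiter(apm_cap=300, burst=10)
# 10 burst tokens allow 10 immediate actions at t=0; the 11th is denied.
allowed = sum(limiter.try_act(0.0) for _ in range(11))
print(allowed)  # 10
```

Over a long game this averages out to at most 300 actions per minute, while the small burst allowance still permits short flurries of clicks, roughly like a human.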
An update (probably old news for go players). AlphaGo has defeated Ke Jie : http://arstechnica.com/information-technology/2017/01/alphago-is-back-and-secretly-crushing-the-worlds-best-human-players/
There are many things said about this topic, but one interesting point was brought up by Stephano during an interview with Scarlett and Major (I think during WESG). He mentioned that a computer would have no trouble deducing what its opponent could have built, based on the amount of gas and minerals that were mined. If the AI was trained in that direction, namely trying to get as much information as possible, you could see ‘strange’ tactics from the AI, at least in the eyes of normal human beings. For example, scanning the opponent’s main or flying an overlord in every 40 seconds. You might get games where a human would probably attack, while the AI would instead try to raise its calculated win rate from, let’s say, 80% to 95% by taking an extra base. Units like scouting overlords would be sitting ducks, since the AI could calculate, after seeing one, where it could be after x seconds (this is how navies keep ‘track’ of submarines: a circle of possible positions that grows as time progresses). Pylon/cannon, spore crawler, and turret placement would be perfect. The question remains whether the AI would think it needs static defense at all. I actually think Starcraft will be easier to crack than go. The biggest problem will be finding acceptable restrictions for the AI; the SC community is so used to shouting ‘imba’ and ‘op’.
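The submarine analogy above is easy to make concrete: once a unit has been spotted and then lost, every position it could now occupy lies inside a circle around the last sighting, with a radius that grows at the unit’s top speed. A toy sketch of that reachability check; the coordinates and the speed of 1 unit per second are made-up illustrative values, not actual in-game stats:

```python
import math

# After a unit is last seen at a position at some time, it can only be
# within (top_speed * elapsed_time) of that position. The speed and
# coordinates used below are illustrative, not real game values.

def could_be_at(last_seen_pos, last_seen_time, now, top_speed, query_pos):
    """Return True if the unit could have reached query_pos by time `now`."""
    radius = max(0.0, now - last_seen_time) * top_speed
    dx = query_pos[0] - last_seen_pos[0]
    dy = query_pos[1] - last_seen_pos[1]
    return math.hypot(dx, dy) <= radius

# Overlord last seen at (100, 100) at t=0, assumed top speed 1 unit/sec.
print(could_be_at((100, 100), 0.0, 40.0, 1.0, (120, 120)))  # True: ~28.3 <= 40
print(could_be_at((100, 100), 0.0, 10.0, 1.0, (120, 120)))  # False: ~28.3 > 10
```

An AI that maintains such a growing circle for every unit it has ever seen never really “forgets” a scout, which is exactly why a spotted overlord would be a sitting duck.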
Was Ke Jie defeated by “Master”, the updated AlphaGo? Anyway, I think many underestimate how well AI learns, and I have a feeling it will play like a human after it has “mastered” StarCraft. Those weird moves may appear in the process, but eventually I believe AI will get the better of humans. The mechanics are definitely a tough issue from a balance point of view.
Well: “It played 51 games in total against some of the world’s best players, including Ke Jie, Gu Li, and Lee Sedol—and didn’t lose a single one”, and the Facebook link in the article says he lost at least twice in a row.