
Gaming Successes Mark AI Landmarks


There has been plenty of information and misinformation about AI in the US press over recent weeks. The story of AI lying to a human to achieve a goal has led many to believe that AI is getting a little too smart a little too fast.

That's an understandable view, and it is certainly a topic that deserves attention. But most would agree that the ability to deceive is not the best yardstick for intelligence. Place it in front of a chessboard or at a poker table, on the other hand, and we are ready to be impressed, not horrified, by what mankind's created intelligence can do.

Over the years, it is prowess in different types of games that has marked distinct steps in just how intelligent artificial intelligence is becoming. But beyond that, when AI shows it can play us - and even beat us - at a game, it makes us pause for thought much more than, for example, seeing it recognize a human from a mugshot - something any of us can do.

Chess - from Deep Blue to Stockfish

February 10th, 1996 was a red letter day for AI. IBM's famous chess-playing computer Deep Blue defeated Garry Kasparov in the opening game of their match, becoming the first AI to defeat a reigning world chess champion under tournament conditions.

Deep Blue and Kasparov played two six-game matches, in 1996 and again in 1997, and the games were closely fought. Across the two matches, the AI won three games, Kasparov won four, and five were drawn. The world looked on in awe. Suddenly, people who had no interest in chess and who had only vaguely heard of Garry Kasparov were beguiled by this battle of man versus machine.

Today, Deep Blue, or what is left of it, is just another exhibit in the Computer History Museum in Mountain View. Better chess computers have come along since. Deep Blue could reportedly calculate dozens of moves ahead in some lines, while Kasparov said he could think 10-15 moves ahead, so we can see that he had to use all his wits. But Stockfish, one of today's strongest chess engines, searches deeper still, and it would be practically impossible for any human to beat it.
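As a rough illustration (not from the article), "thinking N moves ahead" in engine terms means a depth-limited minimax search: the engine explores the game tree, assuming it picks the best move and the opponent picks the worst one for it. The toy tree below is just nested lists of scores; real engines add alpha-beta pruning and evaluate millions of positions per second.

```python
def minimax(node, maximizing):
    """Score a toy game tree: ints are leaf evaluations,
    lists are positions with one sub-tree per legal move."""
    if not isinstance(node, list):  # leaf: static evaluation of the position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Our turn: take the best score; opponent's turn: assume the worst for us.
    return max(scores) if maximizing else min(scores)

# Two plies ahead: our move (maximize), then the opponent's reply (minimize).
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3
```

The first move looks tempting (a leaf worth 9 is reachable), but minimax correctly assumes the opponent will steer toward 2; the best guaranteed outcome is 3. Searching more moves ahead simply means a deeper tree.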

Watson, the undisputed king of Jeopardy!

Still flushed, perhaps, from Deep Blue's performance against Kasparov, two top IBM executives hit on the idea for Watson in the mid-2000s while watching a TV gameshow one lunchtime. Named after IBM founder Thomas J. Watson, who led the company until his death in 1956, Watson was designed to take on Jeopardy! prodigy Ken Jennings. This was during Jennings' record run of 74 wins.

By the time Watson was ready in 2011, Jennings' run had come to an end, but after Watson proved itself in practice matches against former champions, the showdown that everybody wanted came about. Watson didn't just beat Jennings, it also brushed aside another of the game's best ever players, Brad Rutter. At the conclusion of the contest, Watson had won more than $77,000, while the two human players each won less than $25,000.

Pluribus takes on the poker pros

Poker is a more complex game: players must weigh not only the cards they are dealt and how the hand probabilities shift as the game progresses, but also how their holding compares with their opponents' likely hands. There are also the questions of how much to bet, whether to fold or continue, and so on.
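As a hypothetical illustration (not taken from the article), the kind of hand-probability arithmetic a player, or a bot, is constantly weighing can be sketched in a few lines of Python, here for the common case of a flush draw after the flop:

```python
from math import comb

# Assumed scenario: after the flop a player holds four cards of one suit.
# 9 of the 47 unseen cards complete the flush, with 2 cards still to come.
outs, unseen, to_come = 9, 47, 2

# P(flush) = 1 - P(neither the turn nor the river is one of the 9 outs)
p_miss = comb(unseen - outs, to_come) / comb(unseen, to_come)
p_flush = 1 - p_miss

print(f"Chance of completing the flush: {p_flush:.1%}")  # → 35.0%
```

A bot like Pluribus goes far beyond this static arithmetic, of course, but odds like these are the raw material that bet-sizing and fold-or-continue decisions are built on.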

In 2019, a poker-playing bot called Pluribus took on five professional poker players at once in a 12-day trial of six-player no-limit Texas hold'em. Pluribus could not attempt to "read" the other players in the conventional sense, but it learned the timing of larger bets and executed a style of play, combining unpredictability with occasional bold moves, that the human players found difficult to counter. Pluribus sometimes lost, but over 10,000 hands it won an average of $1,000 per hour, which the researchers described as a decisive margin of victory.
