Artificial Intelligence, better known as AI, is the area of study involved in developing complex software designed to mimic human intelligence. AI has many applications, including manufacturing, driving, problem-solving, and data analysis. In the last few years, AI technology has improved dramatically, and modern AI systems are far more capable of solving complex problems than ever before.
AI has previously been programmed to play games such as Go and chess. While these games are complex, each involves a single opponent and no element of luck. Poker is far more challenging for an AI to excel at because it involves many more variables, multiple opponents, and an element of chance.
Playing poker well requires understanding the mathematics and theory behind the game while also demanding strategy, intuition, and reasoning based on hidden information. AI programs have been built to play online poker before, but for these reasons they struggled to beat top players: it's difficult to create software capable of balancing all of these factors. However, new progress was made in 2019 with Pluribus.
Pluribus: A Poker Playing AI
In 2019, it was announced that a computer program had finally done what many believed was impossible. Pluribus, created by researchers at Carnegie Mellon University in Pennsylvania in collaboration with Facebook AI, beat several top-ranked poker players in a six-player game of Texas hold 'em. This was one of the first projects in which an AI competed against multiple players at once, and in which simply understanding the strategy of the game wasn't enough to win.
Going from two players to six might not seem like a big deal, but it has a huge impact on how the game is played and on the number of calculations the AI has to make. Pluribus was an updated version of an earlier AI known as Libratus, which had beaten poker professionals in two-player games. Although it was performing a more complex task, Pluribus was designed to use less computing power and to make its calculations more efficiently.
The results were outstanding. Pluribus competed in 10,000 hands of poker against five opponents at a time, drawn from a group of top-ranked poker professionals. It won an average of $480 from its human opponents per 100 hands, a win rate on par with what professional players aim for.
How Does Pluribus Work?
The research team developed Pluribus by expanding on what it had learned from Libratus, completely revising the search algorithm. In strategic games against a single opponent, it is typical for an AI to search the game's decision tree all the way to the end of the game before choosing a move.
However, this technique was impractical in a multiplayer game due to the amount of hidden information and the sheer number of possible game states. The answer for Pluribus was to consider only the next few moves when deciding what action to take, rather than every move up until the end of the game.
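The idea of stopping the search after a few moves can be sketched in a few lines of Python. Everything here is illustrative, not Pluribus's actual code: the toy state, the payoffs, and the stand-in `evaluate` function are all invented. The key point is that the recursion bottoms out at a fixed depth and scores the position there, instead of expanding the tree to the end of the hand.

```python
# Toy sketch of depth-limited search. All names and numbers here are
# hypothetical; real poker search must also handle hidden cards and
# opponents' responses, which this sketch ignores.

def evaluate(state):
    """Stand-in leaf evaluation: expected share of the pot minus money invested."""
    return state["pot"] * state["win_prob"] - state["invested"]

def legal_actions(state):
    return ["fold", "call", "raise"]

def apply_action(state, action):
    new = dict(state)
    if action == "fold":
        new["win_prob"] = 0.0          # give up any claim to the pot
    elif action == "call":
        new["invested"] += 10
        new["pot"] += 20               # our 10 plus an opponent's 10
    elif action == "raise":
        new["invested"] += 30
        new["pot"] += 60
    return new

def depth_limited_value(state, depth):
    # Stop after a few moves and evaluate, rather than searching to the
    # end of the hand.
    if depth == 0 or state["win_prob"] == 0.0:
        return evaluate(state)
    return max(depth_limited_value(apply_action(state, a), depth - 1)
               for a in legal_actions(state))

def best_action(state, depth=3):
    return max(legal_actions(state),
               key=lambda a: depth_limited_value(apply_action(state, a), depth - 1))

state = {"pot": 100, "invested": 20, "win_prob": 0.4}
print(best_action(state))  # prints "call"
```

Capping the depth keeps the number of explored states polynomial in the lookahead horizon instead of exponential in the remaining length of the hand, which is what makes search tractable with six players.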
Similar to DeepMind's AlphaZero, Pluribus uses self-play to teach itself from scratch. It begins by playing poker at random and improves as it learns which decisions make the most profit. Each hand is reviewed afterward to determine whether alternative moves, such as raising rather than calling, would have resulted in a greater profit. If these alternatives would have produced better results, they're more likely to be chosen in the future.
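That review loop, asking "would the alternative have earned more than I expected?", can be illustrated with regret matching on a single invented decision point. This is only a sketch of the idea: the action payoffs are made up, and Pluribus's actual training works over full game trees, not one isolated choice.

```python
# Toy regret-matching loop: track how much better each alternative action
# would have been than the current strategy's expected payoff, and shift
# probability toward actions with positive accumulated regret.
# The payoff numbers are hypothetical.

actions = ["fold", "call", "raise"]
true_value = {"fold": 0.0, "call": 1.0, "raise": 2.5}  # invented payoffs
regret = {a: 0.0 for a in actions}

def strategy():
    """Play actions in proportion to their positive regret."""
    positive = {a: max(r, 0.0) for a, r in regret.items()}
    total = sum(positive.values())
    if total == 0:
        return {a: 1 / len(actions) for a in actions}  # start by playing at random
    return {a: p / total for a, p in positive.items()}

for _ in range(1000):
    probs = strategy()
    expected = sum(probs[a] * true_value[a] for a in actions)
    # For each alternative: how much more would it have earned than what
    # the current strategy expected to earn?
    for a in actions:
        regret[a] += true_value[a] - expected

final = strategy()
print(max(final, key=final.get))  # prints "raise", the most profitable action
```

Starting from a uniform random strategy and converging toward the profitable action mirrors the article's description: random play at first, then increasingly favoring the decisions that hindsight shows would have paid more.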
Pluribus developed a core "blueprint" poker strategy by playing countless hands against itself, and it uses this strategy in live matches. At each decision point, it compares the current state of the game to its blueprint and searches a few moves ahead to estimate the likely outcomes, then determines whether it can improve on the blueprint's recommendation. Because it learns without human input, it can make decisions a human might not make, which also makes it harder to predict.
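The interplay between the precomputed strategy and the real-time search can be sketched as a lookup followed by an optional override. Both tables below are invented for illustration; a real blueprint covers a vast abstraction of poker situations, and the lookahead would be an actual search rather than a value table.

```python
# Illustrative sketch: start from a self-play "blueprint" strategy, then
# let a short real-time search override it when the search finds a
# clearly better line. All situations and values are hypothetical.

blueprint = {
    # abstracted situation -> action probabilities learned from self-play
    ("strong_hand", "facing_bet"): {"fold": 0.0, "call": 0.3, "raise": 0.7},
    ("weak_hand", "facing_bet"):   {"fold": 0.6, "call": 0.3, "raise": 0.1},
}

def lookahead_value(situation, action):
    """Stand-in for a few-moves-ahead search that scores each action."""
    values = {
        ("strong_hand", "facing_bet"): {"fold": 0.0, "call": 1.2, "raise": 2.0},
        ("weak_hand", "facing_bet"):   {"fold": 0.0, "call": -0.8, "raise": 0.4},
    }
    return values[situation][action]

def decide(situation):
    base = blueprint[situation]
    blueprint_pick = max(base, key=base.get)
    searched = {a: lookahead_value(situation, a) for a in base}
    search_pick = max(searched, key=searched.get)
    # Keep the blueprint's choice unless the search scores another action higher.
    if searched[search_pick] > searched[blueprint_pick]:
        return search_pick
    return blueprint_pick

print(decide(("strong_hand", "facing_bet")))  # prints "raise" (agrees with blueprint)
print(decide(("weak_hand", "facing_bet")))    # prints "raise" (search overrides a fold)
```

The second case shows the mechanism the article describes: the blueprint alone would fold the weak hand, but the search finds a line (here, a bluff raise) it estimates to be more profitable, the kind of unpredictable play a human might not expect.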