

It turns out that tech has a pretty good poker face, after one AI model scored a brutal victory over five poker professionals, winning $1,760,000 in the process.
It's no surprise to see artificial intelligence improving dramatically against certain key benchmarks, and while some models have exhibited alarming behavior in risky scenarios, the tech has reached the point of mastery in many games.
It might still not be enough to beat a chess grandmaster - one recently shared his astounding victory over ChatGPT on X - but a study published in Science has revealed how an AI model named 'Pluribus' brutally dominated seasoned poker professionals.
As reported by Newsweek, the Pluribus AI came out on top in two different scenarios, both played under six-player no-limit Texas Hold'em rules.
In the first, the researchers pitted one professional player against five separate copies of Pluribus, while the second saw a single Pluribus model face five human professionals.
The research was conducted across 10,000 hands of poker, over which the researchers found that Pluribus performed significantly better than its human counterparts on average.
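To give a sense of what 'better on average' means over that many hands, poker results are commonly summarised as average big blinds won per 100 hands. Below is a minimal sketch of that calculation in Python, using made-up per-hand numbers rather than anything reported in the study:

```python
import statistics

# Hypothetical per-hand results for a player, in big blinds won or lost.
# (Illustrative values only; the study's figures are averaged over roughly
# 10,000 real hands, not this toy list.)
hand_results_bb = [1.5, -0.5, 0.0, 3.0, -1.0, 0.5, -0.5, 2.0]

def average_win_rate(results_bb, hands_per_unit=100):
    """Average winnings expressed in big blinds per `hands_per_unit` hands,
    a common way to summarise poker performance over a long session."""
    mean_per_hand = statistics.mean(results_bb)
    return mean_per_hand * hands_per_unit

print(f"Win rate: {average_win_rate(hand_results_bb):.1f} bb/100 hands")
```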
While poker is a game partly defined by luck, as the cards you're dealt and the cards you face are random, there is still skill in understanding the odds, knowing which hands are worth playing, and weighing the risk of betting certain amounts.
Many would argue that poker is won and lost in how you interact with other players, in your ability to bluff your opponents and read their hands, and you might imagine that AI would struggle with that.
However, that human element proved not to be much of a factor in either scenario, as the model worked from an adaptable strategy that shifts with the state of the game.
"Pluribus' self-play produces a strategy for the entire game offline, which we can refer to as the blueprint strategy," the study details. "Then during actual play against opponents, Pluribus improves upon the blueprint strategy by searching for a better strategy in real time for the situations in which it finds itself during the game."
That real-time search also means Pluribus keeps refining its play as a hand unfolds, improving on the blueprint for each new situation it encounters rather than sticking to a fixed script.
"Pluribus' success shows that despite the lack of known strong theoretical guarantees on performance in multiplayer games, there are large-scale, complex multiplayer imperfect-information settings in which a carefully constructed self-play-with-search algorithm can produce superhuman strategies," the study concludes.
It's doubtful that this tech will make its way into Las Vegas casinos any time soon, although it's almost guaranteed that the house would still find a way to walk away with the money, no matter how well the professionals play.