In previous columns, I’ve explained that there’s a lot of hype surrounding the incremental improvements of the decades-old programming techniques collectively identified under the marketing buzzword “Artificial Intelligence” aka “AI.”
What’s NOT hype is that those programming techniques (pattern recognition, neural nets, etc.) have gotten incrementally more effective than they were in the past at playing games and performing speech recognition, automated translation, and so forth.
What IS hype are the all-too-common and all-too-visible claims that AI will soon be able to perform complex tasks that involve anything resembling common sense, such as negotiating business deals, providing customer support, and selling products.
Don’t believe me? Well, maybe you’ll believe a team of AI experts at Stanford University that is measuring the progress of AI with an annual index. The press release issued last month announcing the index makes the following startling (but not to me) admission:
“Computers continue to lag considerably in the ability to generalize specific information into deeper meaning, [while] AI has made truly amazing strides in the past decade… computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”
As you’re probably aware, AI is very good at playing games like poker, Go, and (most famously) chess. Chess programs now play the game at a level that could reasonably be described as “superhuman.”
When it comes to anything that requires common sense, however, AI is almost helpless. To illustrate this, examine these three chess pieces carefully:
An AI program might be able to figure out (by image comparison) that the piece on the left is a knight and the piece in the middle is a queen. I say “might” because the AI program might also think that they’re simply statues or knick-knacks.
However, even if the AI identified the two objects as chess pieces and correctly identified their rank, it could never figure out what’s immediately obvious to anyone who plays chess: that the piece on the right combines the moves of the knight and the queen.
Furthermore, without being reprogrammed by a human, no chess program could play and win a game using that piece. By contrast, a human chess player could immediately adapt to playing with it.
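To make the contrast concrete, here is a minimal sketch (the function names and the “amazon” label for the combined piece are my own illustrative choices, not from any chess engine) of how a human programmer would encode the new piece: its move set is simply the union of the knight’s and the queen’s. A one-line rule change for a human is an insurmountable wall for a program trained only on standard chess.

```python
# Hypothetical sketch: a combined knight+queen piece ("amazon").
# Squares are (row, col) pairs on an 8x8 board, assumed empty.

def knight_moves(r, c):
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {(r + dr, c + dc) for dr, dc in deltas
            if 0 <= r + dr < 8 and 0 <= c + dc < 8}

def queen_moves(r, c):
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    moves = set()
    for dr, dc in dirs:
        nr, nc = r + dr, c + dc
        while 0 <= nr < 8 and 0 <= nc < 8:  # slide until the board edge
            moves.add((nr, nc))
            nr, nc = nr + dr, nc + dc
    return moves

def amazon_moves(r, c):
    # The rule that is "immediately obvious" to a chess player:
    # just the union of the two existing move sets.
    return knight_moves(r, c) | queen_moves(r, c)
```

The change a human makes is one line; a statistical model trained on millions of standard games has no such line to change.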
Here’s another example. Carnegie Mellon has a poker program, Libratus, that can play Texas Hold ’em at a tournament level and win against human opponents. This is impressive because, unlike chess or Go, poker involves unknowns.
More precisely, it contains “known unknowns” in the sense that the number of cards and their values are known but their specific position within the deck is unknown. Also, while a specific wager is unknown, the nature of the wager is within known bounds.
But what happens if we introduce unknowns that are not known to the program? If the players decided, for example, to make suicide kings wild or to play with a Tarot deck rather than a standard deck, Libratus wouldn’t even be able to identify a winning hand.
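The failure mode is easy to picture. Below is a minimal, hypothetical sketch (not Libratus’s actual code; the function and names are mine) of a hand evaluator hard-coded for the standard deck. Hand it a card outside the rules it was built on and it doesn’t play badly, it simply breaks.

```python
# Hypothetical sketch: an evaluator that only knows the standard 13 ranks.
RANKS = {r: i for i, r in enumerate("23456789TJQKA")}

def best_pair(hand):
    """Return the rank index of the highest pair, or None if no pair."""
    counts = {}
    for card in hand:
        # Raises KeyError on any card outside the coded rules.
        counts[RANKS[card]] = counts.get(RANKS[card], 0) + 1
    pairs = [rank for rank, n in counts.items() if n >= 2]
    return max(pairs) if pairs else None

best_pair(["A", "A", "7", "4", "2"])      # finds the pair of aces
# best_pair(["The Fool", "A", "A", "7", "4"])  # a Tarot card -> KeyError
```

A human player told “The Fool is wild tonight” adapts in seconds; the program has no concept of a card it wasn’t built for.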
This is an important point because many of the wilder claims surrounding AI conflate games like chess and poker with human behaviors and institutions that are infinitely more complex.
Put simply, playing a game with pre-defined rules never requires common sense. Playing in real life always requires common sense.
For example, the co-creator of Libratus founded a firm that will apply the technology to “business strategy, negotiation, cybersecurity, physical security, military applications, strategic pricing, finance, auctions, political campaigns and medical treatment planning.”
Some of those applications, such as business strategy and negotiation, are unbounded human behaviors that have flexible rules that constantly change. They require common sense.
Consider: the rules for poker and Texas Hold ’em can be printed on three sheets of paper using standard fonts. By contrast, Amazon currently lists 32,163 books on “business negotiation.” That’s a lot of complexity!
While poker seems like a good metaphor for business negotiations, such negotiations are far more complex and involve numerous “unknown unknowns.”
For example, I heard a rumor that during the negotiations for IBM’s acquisition of Lotus Development Corporation in the mid-1990s, an IBM executive displayed a loaded gun during a meeting in a conference room.
The story may be apocryphal, but I’ve encountered behaviors equally weird and emotion-laden, if perhaps not quite as dramatic. Unlike games, functioning in the real world requires “the ability to generalize specific information into deeper meaning.”
That is something AI still can’t do, and an area where there has been no real progress or breakthrough.
By the way, many of the systems and applications advertised as “AI” in fact use humans, sometimes hundreds of them, as backups, according to a recent, aptly-titled article in the Wall Street Journal, “Without Humans, Artificial Intelligence Is Still Pretty Stupid.”
In short, there’s a huge amount of hype surrounding AI, most of it coming from AI experts and executives who stand to profit if the business world, in general, believes that AI is a huge leap forward rather than just the repackaging of well-established tech.