Artificial Intelligence Will Crush Humans, “It’s Not Even Close”
From the Dept. of People You Should Really, Really Listen To.
Endgame, Set, Match
It’s common knowledge, at this point, that artificial intelligence will soon be capable of outworking humans — if not entirely outmoding them — in plenty of areas. How much we’ll be outworked and outmoded, and on what scale, is still up for debate. But in a new interview published by The Guardian over the weekend, Nobel Prize winner Daniel Kahneman offered a fairly hot take on the matter: in the battle between AI and humans, he said, it’s going to be an absolute blowout — and humans are going to get creamed.
“Clearly AI is going to win [against human intelligence]. It’s not even close,” Kahneman told the paper. “How people are going to adjust to this is a fascinating problem.”
Why listen to Daniel Kahneman? His 2011 book, “Thinking, Fast and Slow” — over two million copies sold — is one of the most influential tomes in the field of behavioral economics, exploring how and why humans think the way they think (the “fast” thinking of the title being intuitive; the “slow” thinking being rational), and what leaves us prepared (or unprepared) to make decisions about our future. Moreover, he won his 2002 Nobel Prize for pioneering “prospect theory,” which describes how people weigh gains and losses differently, and how their thresholds for risk aversion and risk appetite work.
And why, according to Kahneman, are we so unprepared for the forthcoming takeover of artificial intelligence? Speaking to the way the pandemic overtook an unprepared world, Kahneman cited the exponential growth of the virus, and the way human minds are essentially unequipped to do the basic math behind how something like that can spiral out of control.
“Exponential phenomena are almost impossible for us to grasp,” he told The Guardian. “We are very experienced in a more or less linear world. And if things are accelerating, they’re usually accelerating within reason. Exponential change [as with the spread of a virus] is really something else. We’re not equipped for it. It takes a long time to educate intuition.”
Gird Your Maladaptive Loins
Turning to AI, Kahneman notes the issue with human minds: “There is going to be massive disruption. The technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they’re not going to be able to adapt to that very easily.” Kahneman cites medicine as one place humans are going to be replaced, “certainly in terms of diagnosis.” And elsewhere, he issues a stark message to the boardrooms of the world: “There are rather frightening scenarios when you’re talking about leadership. Once it’s demonstrably true that you can have an AI that has far better business judgment, say, what will that do to human leadership?”
If nothing else, Kahneman’s quotables feel canny — like maybe if the people in the C-suite are scared for their jobs, someone who can do something about any of this might actually listen.
READ MORE: Daniel Kahneman: ‘Clearly AI is going to win. How people are going to adjust is a fascinating problem’ [The Guardian]
Richard Feynman on Artificial General Intelligence
Do you think there will ever be a machine that will think like human beings and be more intelligent than human beings?
Below is a structured transcript of Feynman’s verbatim response. With the advent of machine learning via artificial neural nets, it’s fascinating to hear Feynman’s thoughts on the subject and just how close he gets, even 35 years ago.
Estimated reading time is 8 minutes. Happy reading!
Richard Feynman’s Answer
First of all, do they think like human beings? I would say no and I’ll explain in a minute why I say no. Second, for “whether they be more intelligent than human beings” to be a question, intelligence must first be defined. If you were to ask me are they better chess players than any human being? Possibly can be, yes, “I’ll get you, some day”.
In 1985, of course, human chess grandmasters were still stronger than machines. Not until the legendary six-game matches between world chess champion GM Garry Kasparov and the IBM supercomputer Deep Blue in 1996 and 1997 did a computer beat a reigning world champion — and only in the 1997 rematch. Even then, the score was 3 1/2 to 2 1/2, and Kasparov ended up disputing the loss, claiming the IBM team had somehow intervened on behalf of the machine between games.
The AI Effect
“As soon as it works, no one calls it AI anymore” — John McCarthy
Feynman next addresses the so-called “AI effect”: when a programmed machine is instructed to perform a task and actually performs it, onlookers tend to discount the achievement, arguing that what the AI accomplished is not “real” intelligence:
They’re better chess players than most human beings right now! One of the things, by the way we always do is we want the darn machine to be better than ANYBODY, not just better than us. If we find a machine that can play chess better than us it doesn’t impress us much. We keep saying “and what happens when it comes up against the masters?”. We imagine that we human beings are equivalent to the masters in everything, right? The machine has to be better in everything that the best person does at the best level. Okay, but that’s hard on the machine.
On Building Artificial Machines
Feynman next addresses the question of mental models by analogy to the differences between a naturally evolved mode of locomotion (e.g. the running gait of a mammal with ligaments, tendons, joints and muscle) and mechanically designed modes of locomotion (using wheels, wings and/or propellers):
With regard to the question of whether we can make it to think like [human beings], my opinion is based on the following idea: That we try to make these things work as efficiently as we can with the materials that we have. Materials are different than nerves, and so on. If we would like to make something that runs rapidly over the ground, then we could watch a cheetah running, and we could try to make a machine that runs like a cheetah. But, it’s easier to make a machine with wheels. With fast wheels or something that flies just above the ground in the air. When we make a bird, the airplanes don’t fly like a bird, they fly but they don’t fly like a bird, okay? They don’t flap their wings exactly, they have in front, another gadget that goes around, or the more modern airplane has a tube that you heat the air and squirt it out the back, a jet propulsion, a jet engine, has internal rotating fans and so on, and uses gasoline. It’s different, right?
So, there’s no question that the later machines are not going to think like people think, in that sense.
With regard to intelligence, I think it’s exactly the same way, for example they’re not going to do arithmetic the same way as we do arithmetic, but they’ll do it better.
Superhuman Narrow AI
As an example of the superiority in performance of a mental task by a designed mechanical machinery versus a naturally evolved organ, Feynman next describes the differences between a superhuman narrow AI (such as e.g. a calculator) and the human brain:
Let’s take mathematics, very elementary mathematics. Arithmetic. They do arithmetic better than anybody. Much faster and differently, but it’s fundamentally the same because in the end, the numbers are equivalent, right? So that’s a good example of.. We’re never going to change how they do arithmetic, to make it more like humans. That would be going backwards. Because, the arithmetic done by humans is slow, cumbersome, confused and full of errors. Where, these guys (machines) are fast.
If one compares what computers can do, to the human beings, we find the following rather interesting comparisons. First of all, if I give you, a human being, a problem like this: I’m going to ask you for these numbers back, every other one, in reverse order, please. Right? I’ve got a series of numbers, and I want you to give them to me back, in reverse order, every other one. I’ll tell you, I’ll make it easy for you. Just give me the numbers back the way I gave them to you. You ready?
1, 7, 3, 9, 2, 6, 5, 8, 3, 1, 7, 2, 6, 3
Anybody gonna be able to do that? No. And that’s not more than twenty or thirty numbers, but you can give a computer 50,000 numbers like that and ask it for them in reverse order, the sum of them all, do different things with them, and so on. And it doesn’t forget them for a long time.
So there are some things a computer does much better than a human, and you’d better remember that if you’re trying to compare machines to humans.
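The memory challenge Feynman poses — store a sequence of digits, then return them reversed, every other one, or summed — is indeed trivial for a machine. A minimal sketch, using his own fourteen digits:

```python
# Feynman's memory challenge: a task nearly impossible for human working
# memory, but a few list operations for a computer.
digits = [1, 7, 3, 9, 2, 6, 5, 8, 3, 1, 7, 2, 6, 3]

reversed_all = digits[::-1]               # the whole sequence, in reverse order
every_other_reversed = digits[::-1][::2]  # every other one, in reverse order
total = sum(digits)                       # the sum of them all

print(reversed_all)
print(every_other_reversed)
print(total)
```

The same three operations work unchanged on a list of 50,000 numbers, and, as Feynman says, the machine “doesn’t forget them for a long time.”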
The Problem of Pattern Recognition
But, what a human has to do for his own.. Always, they always do this. They always try to find one thing, darn-it that they can do better than the computer. So, we now know many, many things that humans can do better than a computer.
She’s walking down the street and she’s got a certain kind of a wiggle, and you know that’s Jane, right? Or, this guy is going and you see his hair flip just a little bit, it’s hard to see, it’s at a distance but the particular funny way that the back of his head looks, that’s Jack, okay?
To recognize things, to recognize patterns, seems to be something we have not been able to put into a definite procedure. You would say, “I have a good procedure for recognizing Jack. Just take lots of pictures of Jack” — by the way, a picture can be put into the computer by this method here, if this were very much finer I could tell whether it’s black and white at different spots. You know, you in fact get pictures in a newspaper by black and white dots and if you do it fine enough you can’t see the dots. So, with enough information I can load pictures in, so you put all the pictures of Jack under different circumstances, and there is a machine to compare it.
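The naive “filing system” Feynman describes — store reference pictures as grids of black-and-white dots, then label a new picture by whichever stored picture it differs from in the fewest dots — can be sketched in a few lines. The tiny 3×3 “pictures” below are hypothetical toy data, not anything from the talk:

```python
def dot_difference(a, b):
    """Count the dots (pixels) where two same-sized black/white grids disagree."""
    return sum(pa != pb
               for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def recognize(picture, gallery):
    """Return the label of the stored picture closest to `picture`."""
    return min(gallery, key=lambda label: dot_difference(picture, gallery[label]))

# Toy 3x3 "pictures" made of 0 (white) and 1 (black) dots.
jack = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
jane = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
gallery = {"Jack": jack, "Jane": jane}

# A slightly noisy picture of Jack (one dot flipped) still matches him.
noisy_jack = [[1, 0, 1], [0, 1, 1], [1, 0, 1]]
print(recognize(noisy_jack, gallery))  # -> Jack
```

This works only when the new picture is nearly identical to a stored one — which is precisely the failure mode Feynman goes on to describe: change the lighting, distance, or tilt, and raw dot-by-dot comparison falls apart.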
The Bias–Variance Tradeoff
Feynman moves on to essentially address the problem of variance in training data sets, and so implicitly also address the so-called bias–variance tradeoff. In statistics and machine learning, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa. The bias–variance dilemma describes the optimization problem whereby one tries to simultaneously minimize bias errors from erroneous assumptions in a learning algorithm and the variance from sensitivity to small fluctuations in the training set.
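Before returning to Feynman’s words, the tradeoff just defined can be made concrete with a toy example (an illustrative sketch, not anything from the talk): estimating the mean of a distribution from small samples, comparing the unbiased sample mean against a “shrunk” estimator that deliberately introduces bias in exchange for lower variance.

```python
import random
import statistics

random.seed(0)
mu = 10.0  # the true mean we are trying to estimate

def draw_sample(n=5):
    """Draw a small noisy sample from a Gaussian centered at mu."""
    return [random.gauss(mu, 8.0) for _ in range(n)]

plain, shrunk = [], []
for _ in range(2000):
    m = statistics.mean(draw_sample())
    plain.append(m)         # unbiased estimator, higher variance
    shrunk.append(0.5 * m)  # biased toward 0, but a quarter of the variance

print("plain : bias ~ %.2f, variance ~ %.2f"
      % (statistics.mean(plain) - mu, statistics.variance(plain)))
print("shrunk: bias ~ %.2f, variance ~ %.2f"
      % (statistics.mean(shrunk) - mu, statistics.variance(shrunk)))
```

Neither estimator dominates: the shrunk one is systematically wrong (bias near −5) but far more stable across samples, which is the dilemma in miniature.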
The trouble is that the actual new circumstance is different. The lighting is different, the distance is different, the tilt of the head is different and you have to figure out how to allow for all that. It’s so complicated and elaborate that even with the large machines with the amount of storage that’s available and the speed that they go, we can’t figure out how to make a definite procedure that works at all, or at least works anywhere within a reasonable speed.
So, recognizing things is difficult for the machines at the present time, and some of those things are done in a snap by a person. So, there are things that humans can do that we don’t know how to do in a filing system. It is recognition, and that brings me back to something I left earlier: what about a file clerk that has some special skill, one which requires recognition of a complicated kind?
For instance a clerk in the fingerprint department, who looks at the fingerprints and then makes a careful comparison to see if the fingerprints match, has not been.. It’s just about ready to be.. It’s hard to do, but almost possible to do by a computer.
The Current State of Artificial Intelligence (1985)
In his last comment, Feynman discusses the difficulties humans at the time still had with designing machines for the purposes of fingerprint matching:
You’d think there’s nothing to it, I look at the two fingerprints and see if all the dots are the same, but of course, it’s not the case. The finger was dirty, the print was made at a different angle, the pressure was different, the ridges are not exactly in the same place. If you were trying to match exactly the same picture it would be easy, but where the center of the print is, which way the finger is turned, where it’s been squashed a little more, a little bit less, where there’s some dirt on the finger, whether in the meantime you got a wart on this thumb, and so forth, are all complications. These little complications make the comparison so much more difficult for the machine, for the “blind filing clerk system”, that it is too much. Too slow, certainly, almost to the point of being utterly impractical at the present time.
I don’t know where they stand but they’re going fast trying to do it. Whereas a human can go across all of that somehow, just like they do in the chess game. They seem to be able to catch on to patterns rapidly and we don’t know how to do that rapidly and automatically.
Video of Feynman’s full response is available at the link below:
This essay is part of a series of stories on math-related topics, published in Cantor’s Paradise, a weekly Medium publication. Thank you for reading!