A Conversation with Gary Marcus – Gigaom


About this Episode

Episode 96 of Voices in AI features Byron speaking with author and psychologist Gary Marcus about the nature of intelligence and what the mind really means in relation to AI.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Gary Marcus. He is a scientist, author, and entrepreneur. He’s a professor in the Department of Psychology at NYU. He was the founder and CEO of Geometric Intelligence, a machine learning company later acquired by Uber. He has a new company called Robust.AI and a new book called Rebooting AI, so we should have a great chat. Welcome to the show, Gary.

Gary Marcus: Thanks very much for having me.

Why is intelligence such a hard thing to define, and why is artificial intelligence ‘artificial’? Is it really intelligence, is it just something that can mimic intelligence, or is there no difference between those two things?

I think different people have different views about that. I’m not doctrinaire about vocabulary. I think that intelligence itself is a multidimensional variable. People want to stuff it into a single number and say your IQ is 110, or 160, or 92, or whatever it is, but there are really many things that go into natural intelligence, such as the ability to solve problems you haven’t seen before, or the ability to recognize objects, or the ability to speak and be very verbal about it. There are many, many different dimensions to intelligence. When we talk about artificial intelligence, we’re basically talking about whether machines can do some of those things.

You’re a provocative guy with all kinds of ideas in all different areas. Talk a little bit about the mind and how you think it comes about, in 30 seconds or less, please. And will artificial intelligence need to have a mind to do a lot of the things we want it to do?

The best thing I ever heard about that, short version, is Steven Pinker was on Stephen Colbert. Colbert asked him to explain the brain in five words, and he said, “Brain cells fire in patterns.” That’s how our brains work: there’s a lot of neural firing, and minds emerge from the activity of those brains. We still don’t really understand what all that means. We don’t have a very good grip on what the neural processes are that give rise to basic things like speaking sentences. We have a long way to go in understanding it in those terms.

I tend to take a psychologist’s perspective more than a neuroscience perspective and say the mind is all of our cognitive functions. It’s how we think, how we reason, how we understand our place in the world. Machines, if we want to get to the point where they’re trustworthy, are going to have to do many of the things that human minds do, not necessarily in identical ways. They have to be able to capture, for example, the flexibility that human minds have, such that when they encounter something they haven’t seen before, they can cope with it and not just break down.

I know you said you don’t usually approach it from neurology, but I’m fascinated by the nematode worm, which has just a handful of neurons. People have spent so long, 20 years in the OpenWorm project, trying to model those 302 neurons to reproduce its behavior. They’re not even sure it’s possible to do that.


Do you think we are going to have to crack that code and understand something about how the brain works before we can build truly intelligent machines, or is it like the old saw about airplanes and birds [flying differently]? Are they going to think in a way that’s alien to the way we think?

I think it’s somewhere in between, but I’m also pushing towards the psychology side. I don’t think that understanding the connectome of the human brain, all those connections, is going to really help us with AI anytime soon. I do think that understanding psychology better, like how people reason about everyday objects as they navigate the world, might actually help us.

Psychology isn’t as much of a prestige discipline, so to speak, as neuroscience. Neuroscience gets more money, gets more attention. Neuroscience will probably tell us a lot about the nature of intelligence in the long term. That could be a long term of 50 or 100 years. Meanwhile, thinking about psychology has actually led to some AI that I think really works. None of it’s what we call artificial general intelligence. Most of the AI we have doesn’t owe that much to neuroscience, and if anything, it owes something to psychology and people trying to figure out how human beings or other animals solve problems.

Yeah, I agree completely with that. I think AI tries to glom onto things like neural nets and all of that to try to give them some biological tie, but I think it’s more marketing than anything.

I was about to say exactly that. I think it’s more marketing than anything. Neural networks are very, very loosely modeled on the brain. I’m trying to think of a metaphor. It’d be like comparing a child’s first drawing to some incredibly elaborate work of art. Okay, they’re both drawings, but they’re really not the same thing. Neural networks, for example, essentially have only one kind of neuron, which either fires or doesn’t. Biology, first of all, separates the excitatory neurons from the inhibitory neurons, the positives from the negatives, and then there are probably 1,000 different kinds of neurons in the brain with many different properties. The so-called neural networks that people are using don’t have any of that. We don’t really understand how the biology works, so people just ignore it. They wind up with something that is only superficially related to how the brain actually functions.
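To make that contrast concrete, here is a minimal editorial sketch in Python with NumPy (not code from the conversation; the function name `artificial_neuron` is invented for illustration) of the single uniform unit type that standard networks repeat everywhere: a weighted sum passed through one fixed nonlinearity, where inhibition is just a negative weight rather than a distinct cell type.

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """The single 'cell type' of a standard neural network: a weighted
    sum passed through one fixed nonlinearity (here, ReLU). Inhibition
    is just the sign of a weight, not a different kind of neuron."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

# Every unit in every layer is this same template, repeated at scale.
x = np.array([0.5, -1.2, 3.0])   # incoming activations
w = np.array([0.8, -0.4, 0.1])   # a negative weight plays 'inhibitor'
print(artificial_neuron(x, w, bias=0.1))  # roughly 1.28
```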

Let’s talk about consciousness. Consciousness is the experience of being you, obviously. A computer can measure temperature, but we can feel warmth. I’ve heard consciousness described as the last great scientific question: we know neither how to pose it scientifically nor what an answer would look like. Do you think that’s a fair description of the problem of consciousness?

The only part I’m going to give you grief about is calling it the last great scientific question. I mean, as you yourself said later in your question, it’s not a well-formed question. Great scientific questions are well formed: we know what an answer would look like and what a methodology for answering them would be. Maybe we lack some instrument and can’t do it yet, maybe we need a bigger collider or something like that, but we understand in principle how you could get data to address the question. With consciousness, we don’t really know that at this point.


We don’t even know what a ‘consciousness meter’ would look like. If we had one, we’d go around and do a bunch of experiments and say, “Well, does this worm that you’re talking about have consciousness? Does my cat? What if I’m asleep? What if I’m in a coma?” You could start to collect data. You could build a theory around that. We don’t even know how we would collect the data.

My view is: there is something there that needs to be answered. Obviously, there is a feeling of experiencing red, or experiencing orgasm, or whatever we would describe as consciousness. We don’t have any, I think, real scientific purchase on what it is that we’re even asking. Maybe it will turn out to be the last great scientific question, but if it is, it’ll be somehow refined relative to what it is that we’re asking right now.

Do you believe that we can create a general intelligence on some time scale, even one measured in centuries? Do you believe it’s possible to do that?

I do, absolutely. I’m widely known as a critic of AI, but I’m only a critic of what people are doing now, which I think is misguided in certain ways. I certainly think it’s possible to build a general intelligence. You could argue on the margins. Could a machine be conscious? I would say, “Well, it depends what you mean by conscious, and I don’t know what the answer is.”

Could you build a machine that could be a much more flexible thinker than current machines? Yes. I don’t see a principled reason why you couldn’t have a machine that was as smart as MacGyver and could figure out how to get out of a locked room using twist ties and rubber bands or something like that, which a current machine can’t do at all. I don’t see a principled reason why computers can’t do that, and I see at least some notion of how we might move more in that direction.

The problem right now is that people are very attracted to using large databases. We’re in the era of big data, and almost all of the research is around what you can do with big data. That leads to solutions to certain kinds of problems: how do I recognize a picture and label it if I have a lot of labels from other people who have taken similar pictures? It doesn’t necessarily lead you to the question of what you would do if you had only a small amount of data and were addressing a problem that nobody had ever seen before. That’s what humans are good at, and that’s what’s lacking from machines. This doesn’t mean it’s an unsolvable problem in principle. It means that people are chasing research dollars and salary and stuff like that for a certain set of problems that are popular right now. My view is that AI is misguided right now, but not that it’s impossible.
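As a hypothetical illustration of the big-data recipe described above (again an editorial sketch in Python with NumPy, not from the interview; the function name and toy data are invented), a nearest-neighbor labeler makes the dependence explicit: it can only label a new item because other people have already labeled similar ones, and it has nothing principled to say about a genuinely novel problem.

```python
import numpy as np

def nearest_neighbor_label(query: np.ndarray,
                           examples: np.ndarray,
                           labels: list) -> str:
    """Label a new item by copying the label of the closest previously
    labeled example. No pile of labels, no answer."""
    distances = np.linalg.norm(examples - query, axis=1)
    return labels[int(np.argmin(distances))]

# Works only because lots of similar items were labeled in advance...
examples = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
labels = ["cat", "cat", "dog"]
print(nearest_neighbor_label(np.array([0.85, 0.15]), examples, labels))  # cat

# ...and it fails silently on anything truly novel: a query unlike any
# stored example still gets assigned the nearest label, however wrong.
```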

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.




