Voices in AI – Episode 98 – A Conversation with Jerome Glenn


About this Episode

On this episode of Voices in AI, Byron speaks with Jerome Glenn, futurist and CEO of the Millennium Project, about the direction and perception of AI, as well as the driving philosophical questions behind it.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today my guest is Jerome Glenn. He has for 23 years been the Director and CEO of the Millennium Project. It’s a global participatory think tank with 63 nodes around the world, producing the annual State of the Future since 1997, and it’s a whole lot more than that as well. I think we’re going to have a great half hour. Welcome to the show, Jerome.

Jerome Glenn: Thank you. It’s nice to be here.

Well, I don’t know that I gave you the full and proper introduction to the Millennium Project, which I’m sure many of my listeners are familiar with, but just bring us all up to date so we all know how you spend your days.

Well, I spend my days reading all day long, of course, and responding. The Millennium Project was actually conceived back in 1988. And the idea was: the year 2000 is coming up, and we ought to do something serious as serious futurists. So what we did was get interviewed by guys like you, and I had too much to drink the night before and said something stupid. So we decided to create a global system to produce global futures research reports relatively quickly. We now have 64 nodes around the world. These nodes are groups of individuals and institutions that help find the people with the best ideas in their countries, bring all those ideas together, and assess them; we do various reports on the future of ethics, the future of AI, the future of all kinds of different stuff.

We’re not proprietary research, by the way. We do some specific research contracts (for example, we designed the collective intelligence system for the Kuwait Oil Company), but that was so that we would get experience doing it. We’re not a regular proprietary consulting company as such. It’s more like a public utility: we’re looking at the whole game as much as possible, and then people draw from our stuff, our methods and content, the way they like.

And so, since the millennium has come and gone (or maybe it’s now the third millennium), is there a timeline focus for the group? I mean, there’s a difference between asking what life is going to be like in 5 years versus 500 years, right?

Right. Sure, and we don’t tend to do 500 years too often. Although in 1999 we did write five alternative thousand-year scenarios. The idea was that since everyone was looking back a thousand years [at] the Vikings and the rest of it, we figured we ought to at least look out a thousand years and see what… Those scenarios are actually on our website, and you can take a look at them. But it normally depends on the issue. If you’re looking at financial stuff, you’re looking short range. Obviously, if you’re looking at environmental stuff, you’re looking at a longer range. So we don’t have a set timeline.


What we do have is a set of 15 global challenges that we update on an ongoing basis, hopefully improving our insights into them, and that’s much of the annual report in the State of the Future. But it’s also the framework for understanding global change that we have in our online collective intelligence system.

So when you write these materials, are you writing them for the general public, for policymakers, for… is there any group in particular that’s like your intended audience for your materials?

Yeah. Well, like any writer, we’re happy if anybody’s reading our stuff. But it’s more for people who are actually engaged in decision making: the thought leaders, the advisers to policy people. A lot of the corporate strategic planning folks read this stuff and use it regularly. It’s also used by government foresight folks around the world, and an increasing number of libraries get our stuff.

University libraries do, because people in many universities are teaching sort of global strategic stuff and long-range future technology sort of stuff. And so universities are using, let’s say, the State of the Future in their courses; it’s like a textbook. And then those that teach methods use our Futures Research Methodology series, with its 37 different methods of looking at the future. It’s the largest collection of methodology around. And so increasingly the college audience is getting involved in this, but initially it was your U.N. long-term folks, your government policy people, and mostly advisors to the policy decision makers, and the corporate players.

So let’s pick an issue. Our show is Voices in AI, so tell me: what do you think about artificial intelligence? What are some of the open questions? What are some of the things we think might be obvious at this point?

I think one thing we’re trying to get the world to get clarity on is that there’s tremendous confusion between artificial narrow, general, and super AI. And it’s extremely annoying. Let me give you an example. In Korea, as you know, AlphaGo beat the Go champion. And many people in Korea (I go there a lot) were going nuts, saying, “Oh my God, all these things that Elon Musk and [Stephen] Hawking [warned about]… it’s here now!” And you go, “No no no no no no.”

It’s something different, right? I’m with you. So let’s split that up into three separate things. I’m sure all of my listeners are familiar with the distinction, but of course narrow AI is a technology designed to do one thing. The only fears people typically have around it are related to automation, not any of these dire scenarios. Then there’s general intelligence, which is a technology we don’t know how to build, and experts disagree [about] when we’ll get it, [with estimates ranging] from 5 to 500 years. And then there’s superintelligence, which is a highly theoretical kind of general intelligence that has evolved so quickly, and to such a state, that it is as incomprehensible to us as we are to an ant.


Yeah. Well I would add two things to that.

Go ahead.

One is, I think it’s proper for Elon Musk and the rest of the folks to start raising red flags right now, because as you point out, we don’t know how long it will take to get to a general intelligence. It might be 10 years, but it may be longer. We don’t know. But if we do get it, we also don’t know a more important thing, and that is how long it’ll take to go from general to super: the point where a superintelligence sets its own goals without our understanding. That might never happen. It might happen almost immediately. We don’t know. So it’s better to panic early on this.

Well, let’s talk about that for a minute. So let’s go with general intelligence first; we’ll start in the middle. I’ve had almost 100 people on this show, and they’re for the most part practitioners in AI. And I ask them all, “Do you believe we can build a general intelligence?” And then I ask, “Are people machines?” Let me just ask you that to begin with: is our brain a machine, do you believe?

Well my early education was philosophy. So in the philosophy world we always say: “well it depends on your definition.”

Well, let me ask it differently then. Do you believe, with regard to the brain and by extension the mind, that there is nothing that goes on in them that cannot fundamentally be explained by the rules of physics?

I think that’s a useful… one of the additions of the United States to philosophy was what’s called pragmatic philosophy. It said: “I don’t know what the truth is, but I know what works and what doesn’t work.” I think taking what you said as a working hypothesis and pursuing it that way will produce more value than just guessing. So I’m sympathetic to that, but I don’t know enough truth to know the answer to that.

The idea though is that all the people who believe we can build a general intelligence — to a person — hold that belief not because they know how to do it, but because they begin with that simple assumption that people are machines.

And that’s why I’m saying: begin with that. It’s sort of like the Gaia hypothesis. If you begin with a hypothesis, you get better science. So yes, I think that’s a good rational approach.

But the interesting thing is that even though 95% of my guests say ‘yes’ to that (they believe people are machines), when I put that same question to the general public on my website, 85% of the people say, ‘No, of course not. That’s ridiculous.’ So there’s this huge disconnect between what AI practitioners think they are and what the general public believes they are. And you say it’s a useful assumption. It’s not a useful assumption if it’s wrong. I mean, because what it is…

Au contraire: Copernicus had wrong assumptions, but he produced some useful insights.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.




