A Conversation with Hilary Mason – Gigaom


About this Episode

This episode of Voices in AI features Byron speaking with Hilary Mason, an acclaimed data and research scientist, about the mechanics and philosophy behind designing and building AI.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by Gigaom and I am Byron Reese. Today, our guest is Hilary Mason. She is the GM of Machine Learning at Cloudera, and the founder and CEO of Fast Forward Labs, and the Data Scientist in residence at Accel Partners, and a member of the Board of Directors at the Anita Borg Institute for Women in Technology, and the co-founder of hackNY.org. That’s as far down as it would let me read in her LinkedIn profile, but I’ve a feeling if I’d clicked that ‘More’ button, there would be a lot more.

Welcome to the show, amazing Hilary Mason!

Hilary Mason: Thank you very much. Thank you for having me.

I always like to start with the question I ask everybody, because I've never had the same answer twice, but I'm going to change it up: why is it so hard to define what intelligence is? And are we going to build computers that actually are intelligent, or can they only emulate intelligence, or are those two things exactly the same thing?

This is a fun way to get started! I think it's difficult to define intelligence because it's not always clear what we want out of the definition. Are we looking for something that distinguishes human intelligence from other forms of intelligence? There's that joke, which is a little bit too true, that goes around in the community: AI, or artificial intelligence, is whatever computers can't do today. We keep moving the bar just so that we can feel like there's something that is still uniquely within the bounds of human thought.

Let’s move to the second part of your discussion which is really asking, ‘Can computers ever be indistinguishable from human thought?’ I think it’s really useful to put a timeframe on that thought experiment and to say that in the short term, ‘no.’ I do love science fiction, though, and I do believe that it is worth dreaming about and working towards a world in which we could create intelligences that are indistinguishable from human intelligences. Though I actually, personally, think that it is more likely we will build computational systems to augment and extend human intelligence. For example, I don’t know about you but my memory is horrible. I’m routinely absentminded. I do use technology to augment my capabilities there, and I would love to have it more integrated into my own self and my intelligence.


Yeah, did you know that ancient people, not even that far back, like in Roman times, had vastly better memories than we have? We know of one Roman general who knew the names of all 25,000 of his troops and the names of all their families. Plato wasn't a big fan of writing for that very reason. He said that with writing, you've invented a system for reminding yourself but not for remembering anything. He predicted that once literacy was widespread, our memories would go to pot, and he was right. Like you, I can't remember my PIN half the time!

That’s incredible!

I guess my real question, though, is that when you ask people, "When will we have a general intelligence?" you get a range of answers. You have five years, which is the timeline Elon Musk used, all the way up to five hundred. Andrew Ng says worrying about it is like worrying about overpopulation on Mars. The reason the range is so wide is that nobody knows how to build a general intelligence. Would you agree with that?

Yes, I would agree, and I would firmly state that I do not believe there is a technical path from where we are today to that form of general intelligence.

You know that’s a fantastic observation because machine learning, our trick du jour, is an idea that says: ‘let’s take information about the past, study it, look for patterns, and project them into the future.’ That may not be a path to general intelligence. Is that what you’re saying?

That is what I'm saying. We know how to build systems that look at data and make predictions or forecasts, that infer things we can't even directly observe, which is remarkable. We do not know how to make systems that mimic intelligence in ways that would make them indistinguishable from humans.
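The "look at data and make forecasts" idea the two are discussing can be illustrated with a minimal sketch: fit a model to historical observations, then project it forward. The data, the linear-trend assumption, and scikit-learn's LinearRegression below are purely illustrative, not anything discussed on the show.

```python
# Minimal sketch: "study the past, look for patterns, project them into the future."
# Synthetic data and a simple linear trend are assumed for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past observations: 24 time steps with a noisy upward trend.
past_t = np.arange(24).reshape(-1, 1)
past_y = 3.0 * past_t.ravel() + np.random.default_rng(0).normal(0, 2, 24)

# "Look for patterns" = fit the model to historical data.
model = LinearRegression().fit(past_t, past_y)

# "Project them into the future" = predict values for time steps not yet observed.
future_t = np.arange(24, 30).reshape(-1, 1)
print(model.predict(future_t))
```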


I've had 100 guests on this show, and they virtually all believe we can, with your caveat about the timeframe, create a general intelligence, even though they all agree we don't know how to do it. The reason those two things are compatible is that they share a simple assumption: humans are machines, and specifically our brains are machines. You know how the thought experiment goes: if you could take what a neuron does and model that, then do that a hundred billion times, and figure out what the glial cells do and all that other stuff, there's no reason you can't build a general intelligence.

Do you believe people are machines, that our brains are purely mechanistic in the sense that there's nothing about them that cannot be described with physics?

So I do believe that, with the caveat that we don't necessarily understand all of that physics today. I do think there is a biological and physical basis for human intelligence, and that should we understand it well enough, we could possibly construct something that's indistinguishable. But we certainly don't understand it yet, and we may need to invent entire new fields of physics before we do.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.





