Stephen Hawking’s view on AI

[ Update May 2017:
I’ve just realised I could have written most of this in a haiku:

Philosophy’s Dead
Says Steve. But its kid, A.I.,
Is now our worst threat?!?!?

]

Stephen Hawking’s in the press again, on the subject of AI. Surely it’s always good to hear the views of an expert scientist?

Before going on to the dangers of AI, first, my view on him, starting with his comment of 2012 (and probably other times) that “philosophy is dead”.

As someone very much into AI, I knew…

1: that AI was very important, especially nowadays, and…
2: that it was intimately connected to philosophy wherever you want to look:

Is a robot responsible for itself? Does the responsibility belong to its inventor? Or its owner? Or its manufacturer? Or the researcher who enabled its technology? Or the body that chose to allocate the research funds? Or those who originally provided the money for it? That’s just a part of moral philosophy.

Does the robot/AI have free will? That’s a part of philosophy.

Does it have consciousness, to the extent that we should treat it as a sentient being, give it its freedom, and ask for its explanations of any deaths it causes? Would it be safer to give it free will and consciousness, or to make certain it never acquires them?

Whether questions about AI are philosophical or not, good answers will certainly be informed by philosophy. It offers readily available groundwork that took centuries to build, and which could save decades in finding the solutions we need.

Hawking has taken a poke at philosophy, declaring it so useless as to be dead, and then he tells us that AI, which sits so close to philosophy, is perhaps the most important issue in the world.

What is this guy up to? How can someone who knows so little about the nature of the issues underlying AI suddenly be interested in the area? (We needn’t ask why the media go to him to solve the issues; they don’t. They go to him to enhance their clickability.)

Near the start of my book I highlighted some scientists who had stupidly spouted the same childish slogan that scientists need philosophy like birds need ornithology: Steve Jones, Brian Cox and Simon Singh. I’ve also blogged that this is merely a group (think: gang) attacking something they see as a rival, to enhance their group identity and perhaps promote their group at the expense of the target. This doesn’t even have to be conscious: people do it automatically unless they understand morality (think “social wisdom”) well enough to stop themselves. It’s always there just below the surface for any shameless rabble-rouser to exploit. I see this as part of what lies behind Hawking’s attack on AI.

Is Hawking as famous as he is because he’s so much wiser than other scientists or even other cosmologists?

Try this thought experiment: ask 1000 randomly selected people if they’ve heard of Stephen Hawking. Now ask another 1000 people if they’ve heard of Roger Penrose.

Would you expect the number who’ve heard of Hawking to be about 1.5 orders of magnitude greater than the number who’ve heard of Penrose? If so, is it because Hawking is 30 times better as a scientist? Let’s say Hawking is twice as good as Penrose. That still leaves him 15 times more famous than his due, which means over 90 percent of his fame is due to his drive for self-projection, his wheelchair and his special voice. And trust me, there are a great many other scientists out there who think the same but just don’t say it.
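To spell the arithmetic out (a rough sketch using my own estimates above, not measured data):

\[
10^{1.5} \approx 31.6 \approx 30, \qquad \frac{30}{2} = 15, \qquad \frac{15 - 1}{15} = \frac{14}{15} \approx 93\%.
\]

That is, 1.5 orders of magnitude is roughly a factor of 30 in fame; crediting a factor of 2 to scientific merit leaves a factor of 15, of which only one part in fifteen is earned, so roughly 93 percent (hence “over 90 percent”) is down to something other than the science.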

Yes, he’s raised the profile of science while he’s been raising his own profile. But what have people learned from him? Ask the average person what Hawking has done, and they’ll say: “Er… Black Holes?”

Now, let’s look at his contribution to the AI thing.

“It could be really dangerous.” I think that just about covers it, don’t you? He also managed to bring the Centre for the Study of Existential Risk at Cambridge, where he is based, to the attention of people who hadn’t heard of it (I hadn’t).

Here’s my contribution:

1: As we’ve known for over fifty years, it could be really dangerous, and now we can see from computer viruses (imagine if they went feral) and from the capabilities AI is actually realising that some threats are real and getting nearer.

2: Back in the 1970s it was decided (the Lighthill report) not to fund AI much in the UK, since it would probably never come to anything. (Lighthill was a respected mathematician who seems to have had no understanding of AI, so we needn’t think a cosmologist inevitably has some useful angle on the nature of intelligence, consciousness and so on. Roger Penrose certainly doesn’t.) I learned my AI in the shadow of that report, and I can tell you that even when combined with the similar pressure on neural-net research, work on AI was not slowed much. Certainly not in my university, where Hinton started his work.

Then came microcomputers, and the capability to program anything in your own bedroom. Recall the havoc the inventor of Facebook caused from his own bedroom? Before we knew it, the chaos had spread to millions of other bedrooms… and then a billion. In no time he’d crawled into people’s heads all over the world and infected their habit centres. Just by analysing search terms, Google has massive power to predict the future and to benefit from it. They’re going to put AI components into that if they haven’t already. You might stop the big funding agencies, but you’re never going to stop AI research now. And if you stopped it in your country, what about Russia, China or North Korea, for example?

3: We’ll need to study AI just to protect ourselves against rogue AI agents, never mind the singularity (though I do understand that Hawking was not trying to strangle AI research, just to cast it in a bad light and gain attention). We should try to imagine all the ways a spontaneous AI agent could “hit the singularity”, or whatever, and plan how to control it.

4: We won’t be able to. The only reason the genie isn’t already out of the bottle is that the genie doesn’t exist yet, but when it does, no bottle will be able to contain it. It’s hard enough to contain the greenhouse effect, even though propagating it actually takes a great effort. What are you going to do? Set up an emergency parallel internet to migrate to when the time arises? Just like we’re all going to go to Mars when it all goes wrong down here? We can’t even start writing big computer systems for the BBC, the health service and so on with any confidence that we’ll be able to finish them. Our financial systems are seriously corrupted, to the point where they’re just playthings for psychopaths; whatever useful work they do is a front and a lucky side-effect. We’re going to need AI just to help us do the job and watch over the bad guys.

5: It seems like all we can do is hope it doesn’t happen. But if it does, we certainly won’t need some showman taking all the attention away from, and insulting, those who might just be able to do something about it.
