Who ordered this future? 🙂
For many years I liked to think I was keeping up with AI, keeping my own ideas ticking over, and very occasionally catching a glimpse over other people’s shoulders. But there was always the possibility that behind my back, things were racing so far ahead I’d never catch up (as indeed they have done in most other technical areas). Worse, “my idea” might have been pinched, and a billion made off it by someone else!
But what in fact seems to have happened is this:
* AI’s notoriously sluggish advancement has continued.
* Somebody did discover my idea… or rather it turned out to be an idea whose time had come.
* People are being signed up to the idea… but very slowly. Indeed, so slowly that far from feeling “my idea’s” been pinched, I’m resenting the resistance to it.
* I’d seen my conception as a single idea but people have only started to take up one half. The basic Blackboard architecture I was taught at uni (when it was new), and now developed into something known as Global Workspace Theory, is recognised, has a following, and is considered to be tied up with consciousness (which I leave to others). The other half they’ve left on the side of the plate. That’s the idea of epiprogramming, combined with “total flexibility”, perhaps in the sense of Cruyff’s vision of “total football”: every player being able to play in any position. (Specifically, any module – and it’s a neural net, not code – potentially able to take up any place in the hierarchies in perception, logic or goal pursuit. Also: total simplicity, partial democracy instead of A-brain/B-brain, etc.) I’m pretty sure people would have ignored it even if I had done more to shout it from the rooftops. But this is where AI borrows a virtue from (or exploits its identity as) engineering: No matter what people think of an idea, if it works, it works. (I was signed up to make it work, in an unusually tempting project to weave it in with experimentally very plausible linguistic mechanisms at the Gobet lab at Brunel, but didn’t get a grant. At my age and with the grants I’ve had, I suppose I had a cheek even to ask for one!)
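The blackboard idea above can be sketched very minimally: independent modules read a shared, globally visible workspace and post contributions whenever their conditions are met, with no fixed pipeline between them. All the names and the toy modules below are illustrative inventions, not taken from any actual Global Workspace implementation:

```python
# Minimal blackboard sketch: independent modules ("knowledge sources")
# read a shared workspace and post to it; a simple control loop lets
# any module act whenever its trigger condition holds. All names are
# illustrative, not from any real GWT system.

class Blackboard:
    def __init__(self):
        self.entries = {}          # shared, globally visible state

    def post(self, key, value):
        self.entries[key] = value

    def get(self, key):
        return self.entries.get(key)

def segmenter(board):
    """Low-level module: breaks raw input into tokens."""
    if board.get("raw") and not board.get("tokens"):
        board.post("tokens", board.get("raw").split())

def counter(board):
    """Higher-level module: summarises what the segmenter posted."""
    if board.get("tokens") and not board.get("summary"):
        board.post("summary", f"{len(board.get('tokens'))} tokens")

def run(board, modules):
    # Control loop: keep letting modules fire until nothing changes.
    changed = True
    while changed:
        before = dict(board.entries)
        for m in modules:
            m(board)
        changed = board.entries != before

board = Blackboard()
board.post("raw", "the quick brown fox")
run(board, [counter, segmenter])   # listing order doesn't matter
print(board.get("summary"))        # -> 4 tokens
```

Because modules communicate only through the board, any module can in principle slot in anywhere in the processing hierarchy, which is the “total flexibility” flavour of the idea, if only in miniature.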
We got a more detailed angle on the nature of current AI the other day. Prof. Pat Langley, head of this and director of that, now at Arizona State, via Stanford, got his PhD from Carnegie Mellon (where GWT star Stan Franklin came from, as did Fernand Gobet), and he complains that the original visions of early AI are being abandoned. I realise I sympathise. He tells us of the now largely but sadly relinquished original assumptions of AI:
The first assumption of the early AI’ers was that it involved high-level cognition. He says this still applies in some fields but only in the isolated high levels on their own: e.g. planning and automated reasoning.
The second assumption had been that structured knowledge plays an important role in cognition: symbolic rather than just numeric, e.g. expert systems. But in the last 20 years, this has been taken over by exercises in stats and probability. (I have to say, it has always been a part of AI’s character that as soon as it gave birth to a new technology, that technology stopped being part of AI and became maths or computing or something.) He says knowledge representation and constraint satisfaction still use symbols but now limit their scope.
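The symbolic, structured-knowledge style he means is easy to show in miniature: knowledge lives in explicit rules that chain together, not in learned weights. The rules and facts here are invented purely for illustration:

```python
# Toy forward-chaining inference in the expert-system style:
# knowledge is explicit symbolic rules, inspectable and editable,
# rather than numeric parameters. Rules and facts are invented.

rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "striped"}, "tiger"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                  # fire rules until a fixpoint
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat", "striped"}, rules)
print(sorted(derived))   # "tiger" appears among the derived facts
```

The point of the symbolic approach is visible even at this scale: every conclusion can be traced back through the rules that produced it, which a purely statistical model does not offer for free.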
The third assumption is a system-level theme: seeing at least part of the trick as that of relating a number of different aspects together, along with the possibility of building such systems or their components. Langley cites Newell’s Cognitive Architecture. (One problem that impinged on this was the emphasis on conference publications! Time enough to cover small components, but not for system-level accounts! 🙂 )
The fourth assumption was the central rôle of heuristic search. “The popularity of statistical approaches has resulted largely from the belief, often mistaken, that techniques with mathematical formulations provide guarantees about their behaviour.” Right on, there! Yes, you can have too much maths. I well remember putting forward an interesting idea to a professor of mine who said “Can you prove that?” to which I should have given the answer: “Can you prove that you need to prove it?!”. I’ve always considered it a serious strategic error to hustle and force the condensing of conceptual nebulae towards a mathematical form, especially a form you’re already familiar with. The components of what you’re inventing should be allowed to remain uninstantiated for as long as possible. With luck they may condense down to some new mathematical concept (if you’re a mathematician); anyway, with luck, to some subtly novel concept. Rushing towards the familiar shackles creativity. Unfortunately, pinging straight to mathematical formalism does give such an impression of being sensible, responsible, scientific and grown up, that it blinds people to the threat to creativity, general and mathematical.
I believe that is one thing that’s held AI back somewhat. Above a certain level of generality, heuristics do become unprovable, which I think ties in with what Pat is saying. And, as he says, an unprovable heuristic can often work better than a provable algorithm.
Finally, it was assumed that AI was closely connected with human cognition. He says we need to get back to psychology more, and he’s right.
Left pic from Prof. Langley’s site: https://webapp4.asu.edu/directory/person/976696
Right pic from In The Open Space
 AISB Quarterly No. 133 Jan 2012 pp1-4