Chomsky and the Google man Norvig, On The Nature Of AI and Science

Recently a grand pow-wow of the great and the good hashed out the essence not just of AI but, eventually, of science itself. Or at least tried to.

It was at the 150th anniversary of MIT – the Massachusetts Institute of Technology – where Marvin Minsky and his sometime side-kick Seymour Papert had been big players back at the start of AI. However, Minsky is a minnow compared with fellow MIT man Noam Chomsky, at one time more cited than anyone else, and reckoned as much as anyone to have helped define cognitive science.

Sitting amongst the attendees were dark shadows. AI has failed, at least in realising the hopes people had for it, decade after decade. Minsky’s criticisms effectively steered the field away from the neural net line for a decade, yet it turned out to be one of the more fruitful avenues. Chomsky told the world there had to be a genetically inbuilt language structure inside children, because if they had to learn language relying on the stimulus-response methods suggested by behaviourist B. F. Skinner, it would take far too long.

That was very controversial, but it did succeed in taking behaviourism down a step or two, though some, such as Terrence Deacon, have claimed that many language rules can be learned almost simultaneously, and others have suggested that learning over categories rather than nitty-gritty details can speed things up. Actually, the main lesson of Chomsky’s controversies should have been that views often need not clash, so long as the concepts they deal with are properly refined and agreed: for example, since humans are the only animals with language there must be some inbuilt genetic component, but exactly what that component is remains moot. The importance of getting the basic concepts right before arguing at cross-purposes has not stopped people blaming Chomsky for getting everything wrong. That was a vital issue at the conference – though who knows if anyone took account of it. Another lesson we learn from Chomsky: if you write papers read by lots of people, particularly lots of clever ones, and then you stay on the scene… like a science machine… for 50 years, prepare for unparalleled criticism!

Chomsky has lived long enough to take part in MIT’s version of the Gangnam Style video, but also to cross swords with Google’s director of research, Peter Norvig. I have found that research managers aren’t like other managers. They at least have to be, if not apparently endearingly mad, then very open-minded. Norvig’s open mind has allowed him to range far and wide around linguistics and information theory – as you might expect with his job – and that job means he has to know lots of statistical tricks to get Google to translate text, and of course to index and find it, without the text being well understood. He knows that it works, indeed works better than other approaches (he used to be heavily into LISP), and he has the figures to back it up.
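The flavour of those statistical tricks can be shown with a toy sketch – purely illustrative, and nothing like Google’s production systems: a bigram language model that learns word-pair counts from a tiny corpus and then scores how plausible a sentence is. Ranking candidate word sequences this way, with no “understanding” involved, is a core ingredient of statistical translation and search.

```python
from collections import Counter

def train_bigrams(corpus):
    """Count single words and adjacent word pairs in a list of sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        words = ["<s>"] + sentence.lower().split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def sentence_probability(sentence, unigrams, bigrams, vocab_size):
    """Score a sentence with add-one (Laplace) smoothed bigram probabilities."""
    words = ["<s>"] + sentence.lower().split() + ["</s>"]
    prob = 1.0
    for prev, cur in zip(words, words[1:]):
        prob *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
    return prob

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
uni, bi = train_bigrams(corpus)
vocab = len(uni)
# A word order resembling the training data scores higher than a scrambled one.
likely = sentence_probability("the cat sat", uni, bi, vocab)
unlikely = sentence_probability("sat the cat", uni, bi, vocab)
```

No grammar, no meaning – just counts; yet the model reliably prefers well-formed English, which is exactly the property Norvig exploits at scale.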

The Question Of Science

But does it give us proper scientific understanding? This is the heart of the matter. Norvig resents Chomsky’s suggestion that it’s second-rate understanding, perhaps second-rate science. Is one right and the other less so? To answer this we need to refer to the central issue of the nature of science. This is endlessly necessary for all sorts of situations in many disciplines, not just science. And that question that answers all the other questions to life, science, AI and everything… is…

…mysteriously absent from both Chomsky’s and Norvig’s narratives – yet the answer is readily available. Without a central idea to settle these issues you’re handwaving all the time. It is precisely because of situations like this that people took the trouble to clarify what science is. The following definition works well in general, and is a description of Popper’s view:

Science is the generation, judging and honing of theories which model (i.e. explain or predict) best.

Although designed for science, not engineering techniques, it stresses that science is all about models; it’s precisely because models are only there to guide action that their use fairly extends to the inherent modelling arising from applications such as Norvig’s cheap and cheerful machine translation, or internet search. His perhaps arbitrary statistical models are fine for this. Not only is this type of machine translation the only one we’ve got so far that works well, but the Google stats approach would be justified and admirable even if it were merely cheaper and inferior to something else on offer, because it’s absolutely fine to have more than one model of the same thing.

Someone who understands cars well has a model of a car that’s different from a driver who merely knows the effects of the steering wheel and pedals on the car’s motion. A carrion crow dodging traffic to peck at roadkill also has a model of cars adequate for its purposes, and those Japanese crows using cars at crossroads to crack nuts have their own slightly more enhanced car models. Models are essentially for guiding behaviour. Google’s engineering models, generated simply by looking at data, are – as Chomsky rightly sees, much as he dislikes them – not the final models of any phenomenon. Really good models of human language would combine to describe the parts of the brain that do various tasks, how the capability develops in the child, what machinery the child has when it is born, and how it all evolved. Also, the linguistic structural concepts not embodied physically in the brain but which Chomsky favours (and takes to be different from those structures arising from Google-type stats analyses) would need to be defined. Norvig can claim to be only doing engineering in order to maximise profit, and doesn’t need to see himself as ‘doing science’ (though I think he does, and is right to do so), but linguistic scientists should be falling over themselves to see the details of his results, which can’t help but be terribly revealing.

Same Thing With Modern Genetics

Guessing what goes on in the ‘Thought Machine’ is a bit like understanding the ‘Gene Machine’. Another old campaigner, Nobel prizewinning geneticist Sydney Brenner, like Chomsky, dislikes the information approach that modern genomic researchers are able to use thanks to their high-powered computers. He says they’re not really solving the central problem, and neither are those who think mapping out all the connections in the brain will be useful(!).

As Yarden Katz says in his Atlantic article,

“Brenner called this [connection mapping] a “form of insanity.” Brenner’s catch-phrase bite at systems biology and related techniques in neuroscience is not far off from Chomsky’s criticism of AI. An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery. Yet, ever-improving technologies yield massive data related to the system, only a fraction of which might be relevant. Do we rely on powerful computing and statistical approaches to tease apart signal from noise, or do we look for the more basic principles that underlie the system and explain its essence? The urge to gather more data is irresistible, though it’s not always clear what theoretical framework these data might fit into. These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?”

Seems to me it’s pretty simple: the data-rich approach solves immediate problems, like throwing light on the histories of populations and evolution, but those solutions are not necessarily answers to the question of modelling the whole genetic machine (just as they aren’t for the language machine). They are by no means useless for those problems, and Chomsky is right to seek more insightful models, but I don’t think people were about to stop looking for them.

Recursive Paradox

There is a recursive paradox in our habit of using stats models to investigate the mind, and the cells that make it and the rest of the body: the mind’s own task is to seek out statistical features, making its way around the world by learning how to make use of regularities.

Further Comments on Chomsky’s Comments

Marr

We think of Chomsky as an influencer, but he himself follows the strategy of David Marr, who published the classic book Vision shortly before his death.

Marr tells us to separate the wood from the trees: look at how the individual nerve cells work together, yes. But also describe their overall task in terms of the algorithm that connects their input to their output. From that algorithm, work out what the nerve cells must be doing. He is doing us the favour of relating the inputs to the outputs.

But there are problems with his advice. First, his recommendation to compare the input with the output, then work out the maths to go in between: he didn’t actually quite… really… do that when he studied the visual system. Marr was thinking in terms of Hubel and Wiesel’s Nobel prize-winning experiments. They had already puzzled out the sequence between the retina and the back of the head (the primary visual cortex) by recording the electrical behaviour of cells along the route. They hadn’t really used much maths. Marr defined the algorithm required, but after the fact. We’re still waiting for this ‘reductio ad mathematicum’ to bear fruit in significant areas. He was trying to make the whole thing a mathematical exercise: describing brain problems in some convenient mathematical form (maybe not the first form you think of, but one of the earliest), and then smugly telling everyone you’ve reduced the whole problem to its mathematical essence. People are frightened to question that, since only the brave argue against a mathematical solution, even when all they want to say is that the real solution might be a different formula – or even some messier, biological-style operation not so maths-friendly. Try winning that kind of argument in an engineering institution!
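To see what an algorithmic-level description looks like in practice, consider Marr’s own flagship example from Vision: early vision finds edges at the zero-crossings of a Laplacian-of-Gaussian filter, which a difference of two Gaussian blurs approximates. The sketch below is a minimal illustration (NumPy only; the image, sigma and ratio values are my own illustrative choices, not Marr’s):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, sigma, radius=4):
    """Separable Gaussian blur: convolve each row, then each column."""
    k = gaussian_kernel(sigma, radius)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_response(image, sigma=1.0, ratio=1.6):
    """Difference of Gaussians, approximating the Laplacian of Gaussian.

    Edges sit where the response crosses zero (Marr's zero-crossings)."""
    return blur(image, sigma) - blur(image, sigma * ratio)

# Synthetic input: dark left half, bright right half -> one vertical edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
response = dog_response(img)
# The response is negative just left of the boundary and positive just
# right of it, so the edge shows up as a sign change (a zero-crossing).
```

The point of stating the task this way is exactly Marr’s: the algorithm is specified independently of whether neurons, silicon, or NumPy carry it out.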

Also, the ‘output’ from that system was only the end part of the visual input chain from the eye to the visual cortex; the real ‘output’ challenge for brain science is comparing the behavioural output with the total input, including remembered, high-level concepts and motives, and working out what the hell is going on in higher-level cognition.

Simple Physics

Echoing what I said in my book concerning the nature of physics, Noam Chomsky (NC) says in the Yarden Katz (YK) interview:

“If you take a look at the progress of science, the sciences are kind of a continuum, but they’re broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics — greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.”

Physics ends up being complex because you can do complex things with it; that in turn is because its essential components, though simple, are sharply defined, not to mention readily experimented on. But as practitioners of other disciplines may well agree, those components are annoyingly simple. (Once you’ve discovered them, of course! The discovery needn’t be simple.)

His View On The Evolution Of Language

YK [Yarden Katz]: “You’ve criticized a very interesting position you’ve called “phylogenetic empiricism.” You’ve criticized this position for not having explanatory power. It simply states that: well, the mind is the way it [is] because of adaptations to the environment that were selected for. And these were selected for by natural selection. You’ve argued that this doesn’t explain anything because you can always appeal to these two principles of mutation and selection.”
NCh: “What it strongly suggests is that in the evolution of language, a computational system developed, and later on it was externalized. And if you think about how a language might have evolved, you’re almost driven to that position. At some point in human evolution, and it’s apparently pretty recent given the archeological record — maybe last hundred thousand years, which is nothing — at some point a computational system emerged which had new properties, that other organisms don’t have, that has kind of arithmetical type properties…”

YK: “It enabled better thought before externalization?”

NCh: “It gives you thought. Some rewiring of the brain, that happens in a single person, not in a group. So that person had the capacity for thought — the group didn’t. So there isn’t any point in externalization. Later on, if this genetic change proliferates, maybe a lot of people have it, okay then there’s a point in figuring out a way to map it to the sensory-motor system and that’s externalization but it’s a secondary process.”

Crumbs. Lots to critique there! To criticise any system that relies on mutation and natural selection as not explaining anything – if YK is right in what he suggests, and NC doesn’t seem to deny it – is surely to use the old attack on evolution that it isn’t testable (I remember Maynard Smith telling me personally that a potential refutation would be to find a rabbit in the Precambrian). And evolution certainly counts as a model, albeit a high-level one.

Yes, I would agree that language grew out of a computational system, obviously internal at the start, which then became externalised.

Bear in mind, though, that many animals have the same lobes and many other finer details in common with us, and the basis of that setup won’t be too recent. Much more blatant, though, is the notion that language developed 100,000 years ago. Unfortunately for that idea, the San people of Southern Africa appear to have split off from everyone else more than 100,000 years ago. Nelson Mandela’s mother is from that source, and his father is not. However, I think he would be surprised to be told his mother’s family either could not use language, or evolved it independently, or learned it from others very recently. I don’t know what archaeological evidence NC might have had in mind, but it’s trumped by that. To a first approximation, the hardware support for language stopped significant evolution prior to 100,000ya. I’d guess most of it was completed by 400,000ya, since there is no reason to believe Neanderthals and Denisovans had significantly different language capabilities from us. Archaeological evidence of “cultural niceties” that might imply language is not very common prior to 100,000ya, so its absence does not refute language.

If, when he says above, “It gives you thought”, he means we couldn’t think before we had that “it” which allows the internal processing that preceded language, and even that internal form only appeared 100,000 years ago, then he’s way, way off. First, thought doesn’t imply language, since it’s possible to think about and even dream about anything you can consciously perceive, and conscious or not that includes things that mammals, and even other animals very much lower down the scale, would be capable of perceiving. And the notion that we couldn’t think until 100,000 years ago, which I’m sorry to say is implied by what NC says, I find very alarming.

Chomsky On The Philosophy Of Science

YK: “You mentioned that you taught a philosophy of science course at MIT and people would read, say, Willard van Orman Quine, and it would go in one ear out the other, and people would go back doing the same kind of science that they were doing.”

NCh: “Philosophy of science is a very interesting field, but I don’t think it really contributes to science. It learns from science; it tries to understand what the sciences do, why do they achieve things, what are the wrong paths, see if we can codify that and come to understand. What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don’t know what we’re looking for any more than Galileo did, and there’s a lot to learn from that. So for example one striking fact about early science, not just Galileo, but the Galilean breakthrough, was the recognition that simple things are puzzling.”

Well yes, Noam, philosophy of science is a very interesting field, but it must also be used not just as entertainment, or even commentary, but as an operational manual to guide us out of difficulties. If he’d properly used an understanding of the nature and value of a model, he would have seen the stats approach he criticised as a different kind of model, with its own special value. Admittedly, he was right to imply that, given those models, cognitive scientists shouldn’t fold up their field and walk away considering their task done and dusted, but should progress to more satisfying models – but maybe he shouldn’t have been quite so critical.

More importantly, if he started students on philosophy of science with Quine, then not only should he not expect them to change their behaviour, but he should be replaced as a philosophy teacher. You have no better place to start Phil. of Sci. than Popper; he goes back to first principles and makes the usefulness of his ideas clear. Quine is, I suspect, great, but if someone as interested in philosophy as me isn’t ready for him, don’t teach him to anyone not majoring in philosophy.

Yup; the cognitive sciences might well be considered to be pre-Galilean.
Perhaps more later.

This entry was posted in Artificial Intelligence, Philosophy of science.

1 Response to Chomsky and the Google man Norvig, On The Nature Of AI and Science

  1. rrameez says:

    Well, if you don’t like what Chomsky says about the philosophy of science, which by the way is pretty mild as compared to many other scientists, then you would really not like what Steven Weinberg says 🙂 Anyway, here’s a humorous critique of Norvig’s approach.
    http://scensci.wordpress.com/2012/12/14/big-data-or-pig-data/
