
The Lighthill Debate on AI from 1973: An Introduction and Transcript


Introduction (by Jeffrey Emanuel)

In 1973, the British scientist James Lighthill published a report on the state of artificial intelligence research in the UK. The report was highly critical of the field and led to a significant reduction in UK government funding for AI research. It was also controversial, and it prompted a debate between Lighthill and several leading AI researchers, including John McCarthy, the creator of the LISP programming language, and Donald Michie, a pioneer in machine learning. The debate took place at the Royal Institution in London on May 9, 1973. You can watch the full video of the debate on YouTube here.

In this article, we will provide an introduction to the debate, and a transcript of the debate itself, generated using the Whisper transcription model via a custom script and formatted as markdown with speaker identification (done manually while consulting the video). Afterwards, we will attempt to review how the various points made by the speakers have held up over the intervening 50 years.

The debate is a fascinating historical "time capsule" and provides great insights into the early history of AI research and the controversies that surrounded the field at the time. Essentially, Lighthill's report argued that AI research had failed to deliver on its promises, and that the field was unlikely to make significant progress in the near future. His report was based on a review of the state of AI research in the UK, and concluded that the field had made little progress in solving the fundamental problems of AI, such as understanding natural language and reasoning about the world.

Lighthill's report was highly influential, and was used as ammunition by the many critics of the field to cut off government funding and otherwise cast a pall over the entire field. Lighthill emphasized the "curse of dimensionality" and the "combinatorial explosion" as fundamental problems that AI research had failed to solve, and argued that these problems made it unlikely that AI systems would ever be able to reason about the world in the way that humans do. Indeed, Lighthill argued that the fundamental problems of AI were so difficult that they were unlikely to be solved in the near future, and might even be unsolvable in principle.

What I find most fascinating about all this is that Lighthill was unquestionably a genius who made very important contributions to the field of fluid dynamics. He was a Fellow of the Royal Society, and was the Lucasian Professor of Mathematics at Cambridge University, a position once held by Isaac Newton. You can see a fuller recounting of his many impressive accomplishments in his obituary in the Times. As an aside, Freeman Dyson tells a wonderful story about how he and Lighthill and two others would sit together in the Winchester College library every day and slowly work through all of the problems in Camille Jordan's very challenging analysis textbook, the Cours d'analyse, which they stumbled upon in the stacks, despite it being written in French and intended for much older students (they were around age 15 at the time!). Dyson later concluded that the book had been mysteriously placed in the library by none other than G. H. Hardy, who had attended Winchester years earlier, in the hopes that it would inspire some young student to take up the challenge.

That is all to say, Lighthill was no fool. And yet, he was very confidently and persuasively wrong about the potential for AI, and his negativity towards the field helped contribute to the AI winter that lasted from the late 70s to the mid 90s. To be fair, the symbolic AI that was then in vogue was indeed a dead end; it was only with the rise of connectionism and the development of the backpropagation algorithm in the 80s that AI research began to make significant progress, and not until the 2010s that AI began to have a significant impact on the world.

But Lighthill's report was not just a critique of the then-current state of AI research; it was a prediction that the field was unlikely to make significant progress in the future because of what seemed to him to be insurmountable "mathematical" problems (which, as it turns out, were not insurmountable at all, but rather just required a different approach).

I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause. It suggests, to me at least, that there is real value not only in being open-minded, but also in being an optimist and always seeking to ask "Well, what could work then?" rather than just "This won't work because of X, Y, and Z, and I think it will never work."

Lighthill's final remarks in the debate were:

"I think that in practical terms, it's a mirage, in the sense that if it's something that we think we can see on the horizon, in the sense that on our deathbeds it may be announced or our children will see it, that it's really there on the horizon, then I disagree with such a view."

Sadly, Lighthill died in 1997 from a heart valve rupture while swimming around the island of Sark, so he didn't live to see it. But his son and four daughters are very much still alive, and they have indeed lived to see what I suspect even Lighthill would admit is a compelling indication that the "mirage" of AI just might be real after all.

Profile Photos of the Speakers
The Speakers (left to right): Don Michie; John McCarthy; James Lighthill; Richard Gregory

The Full Transcript of the Debate

Host: Good evening and welcome to the Royal Institution. Tonight we're going to enter a world where some of the oldest visions that have stirred man's imagination blend into the latest achievements of his science. Tonight we're going to enter the world of robots. Robots like Shakey, developed by the Stanford Research Institute. Shakey is controlled by a large computer. He's directed through a radio antenna. Through a television camera he gets visual feedback from his environment. The box appears on the monitor screen. The computer analyses the traces which appear on the visual display until it can interpret them as an object it recognises. Shakey gets tactile feedback through his feelers. He's able to move boxes with his push bar. He's programmed to solve certain problems that can be contrived in his environment. To choose, say, an alternative route to a certain point when his way has been blocked. Shakey is unquestionably an ingenious product of computer science and engineering. But is he anything more? Is he the forerunner of startling developments which will endow machines with artificial intelligence and enable them to compete with and even outstrip the human brain? Robots like this one in Stanford University's computer science department are able to perform certain tasks. But will robots ever be able to perform a wide variety of tasks? To learn from their experience? To use what they've learned to solve these new problems beyond those envisaged by their human programmers? Or will their so-called intelligence, their performance, remain forever at the level of a three-year-old child at its first games?

One man who's pessimistic about the long-term prospects of artificial intelligence is our speaker tonight, Sir James Lighthill, one of Britain's most distinguished scientists. He's a Lucasian Professor of Applied Mathematics at Cambridge and has worked in many fields of applied mathematics. He's a former director of the Royal Aircraft Establishment at Farnborough. Last year, he compiled a report for the Science Research Council which condemned work on general purpose robots. Not surprisingly, scientists who've been working on such robots have reacted strongly in defence of their field. Three of them are here tonight to challenge Sir James' findings. After they've had their say, the discussion will be opened to bring in members of the audience here, many mathematicians and engineers, computer scientists and psychologists among them; their contribution will be particularly welcome.

Before Sir James Lighthill makes his opening speech, I should like to introduce the men who will lead the debate against him. Donald Michie is Professor of Machine Intelligence in the University of Edinburgh. His laboratory is the only one in this country engaged on large-scale robot research. John McCarthy is Professor of Computer Science at Stanford University in the United States, another great centre of robot research. He's the director of Stanford's Artificial Intelligence Laboratory and has flown over especially for this programme. And Richard Gregory is a Professor in the Department of Anatomy at Bristol University. His concern with artificial intelligence arises out of his work on perception. Before going to Bristol, he was a founder member of the Edinburgh team and helped launch its robot project.

And now the moment has come to meet our Principal Speaker, Professor Sir James Lighthill.

Professor Sir James Lighthill: I'll begin by making a few distinctions. Between automation, which is replacing human beings by machines for specific purposes, that has made great progress in the 20th century and where the replacements have been put into effect humanely has led to general benefit, improved productivity, creating higher standards of living. And a general purpose robot, an idea that has often been described involving an automatic device that could substitute for a human being over a wide range of human activities. That's what I shall argue is a mirage. Automation is the province of the control engineer. He designs feedback control systems that act to reduce any change in some quantity from its desired value. For example, in this automatic aircraft landing, the throttles move so as to reduce changes in speed of the aircraft, and other controls reduce deviations from the desired glide path.

Increasingly, an important role in automation is played by computers. A computer is an extremely fast, reliable, and biddable device for manipulating numbers and similar symbols according to rules clearly prescribed in a program. Computers are tools of the greatest value in a very wide range of human activities, including all branches of scientific research, and we'll see examples of that, and all branches of engineering. The control engineers have made excellent use of computers in automation, for example, in numerically controlled machine tools that will cut metal parts to geometrical shapes defined by equations in a program. One of the important new branches of science is computer science. Workers in computer science constantly improve our repertoire of things that can be done with computers, not just arithmetic and geometry, also algebra and calculus and logic.

Logic by computer means manipulating symbols, symbols representing different statements, in accordance with a program, to find out what can be deduced from what, and so on. Advanced automation can mean automation making use of the computer's full logical potentialities developed by computer scientists. For example, in a modern computer-aided design for an electrical printed circuit, the computer's role in identifying all these current paths is still very specialized, replacing human beings for a very specific purpose, but most effective. Other automatic devices exploiting the logical capabilities of computers are used to organize scientific data through data banking and retrieval. For example, in a data bank of different properties, boiling point, latent heat, etc., of hundreds of thousands of chemical compounds.

I've been talking about computers and their benefits to us in a lot of fields, but I must come to the other side of the case. Computers have been oversold, understandably enough, as they are very big business indeed. It's common knowledge that some firms bought computers in the expectation of benefits which failed to materialize. My concern tonight, however, is with the overselling of the longer-term future of computers. The scientific community has a heavy responsibility to put forward its carefully considered view of the facts to avoid the public being seriously misled. Just as the U.S. National Academy of Sciences did in 1966 when it reported that enormous sums of money had been spent on the aim of language translation by a computer with very little useful result, a conclusion not subsequently shaken.

Failures continually occurred also in computer recognition of human speech or handwritten letters and in automatic proving of theorems in higher mathematics. But our subject tonight is robots, and we must identify what they are. Several groups of able computer scientists have for many years been adopting a particular point of view regarding their work and given it a name, artificial intelligence or machine intelligence. The idea is to operate from a sort of bridge between studies of how brains of living creatures work and studies of how computer programs and automatic devices based on them work. This interesting point of view has been current for over 20 years. Please notice that this view does not just mean that people who study brains, psychologists and neurobiologists should use computers. Those, as I mentioned earlier, are used effectively in all branches of science now. Nor does it mean another obvious fact that in writing programs for computers we are influenced by introspectively considering how a human brain would carry out the logical processes required.

The idea of artificial intelligence means that besides doing these two things, we engage in a definite bridge activity between advanced automation and computer science on the one hand and studies of brains of central nervous systems using computers on the other. The bridge activity proposed is building robots. I use robot not to mean an automatic device aimed at replacing human beings for a specific purpose in an economical way. A robot, rather, is an automatic device designed to mimic a certain range of human functions without seeking in any useful sphere of human activities to replace human beings. This robot, which you've already seen from the Stanford Research Institute, is one of the most sophisticated in current operation. I'll ask, why should robots be built and studied? There are at least two serious answers to this question.

First, that generalized information on automatic devices may result, which can be of use in a wide range of specialized automation problems. Second, that a device which mimics a human function, such as how we avoid an obstacle, may assist in making a scientific study of that function. I shall argue that these were eminently good and sufficient reasons for embarking on the work 20 or 10 years ago. In practice, however, the line of approach has led to somewhat disappointing results in these respects. We have acquired rather little generalized information applicable to a wide range of automation problems. Instead, we find that specialized problems are best treated by specialized methods, and I shall try to explain why that is. Similarly, the sciences of psychology and neurobiology have benefited not from robot work in general, but from those computer models that take into account really extensive bodies of experimental data on psychological behavior or on nerve cell networks in the brain. Before I expand on these reasons for a certain disenchantment with robot research, I shall predict that people nevertheless will go on building them. At all periods of history, the human imagination has been captivated by the idea that the mysterious arts, whether of the sorcerer's cell in earlier times or the scientist's laboratory today, might be used for a process of, as it were, artificially giving birth. Whether for this reason or not, a large section of the public finds the very idea of robots thrilling. It wants robots. It is prepared to pay for robots, if only as entertainment.

Money was being made at the end of the 18th century not only from mechanical dolls of great ingenuity, but also from exhibiting large, apparently automatic chess-playing robots. Their capabilities actually arose, like those of the Daleks today, from the presence of a man skillfully concealed inside. Science fiction in all the media has helped to intensify this old fascination with robots as artificial beings artificially given birth. Modern robots certainly seem to imitate children in some respects. They play games. They do puzzles. They build towers of bricks. They recognize pictures in drawing books. Scientists may well find building them attractive either because the very idea exercises its old fascination on them or because the public, as represented in funding bodies, still feels that fascination enough to be prepared to pay. On this, I'll say no more. The last thing I want to do is to argue against the entertainment industry. What I have said, however, explains my description of the general purpose robot as a mirage, meaning an illusion of something that may be strongly desired. Now I must speak of the fundamental obstacles to developments on those lines.

Every existing robot operates in an extremely restricted world, a sort of playpen. That limited set of objects which are to be processed by the robot's computer program is often referred to as the program's limited universe of discourse. Such a limited universe of discourse may be a so-called tabletop world where block-stacking jobs and other eye-hand operations may be carried out, or it may be a drawing book for visual recognition jobs in two dimensions, or a board for chess or some other game or puzzle. Whether or not there are psychological motives for a choice of an extremely limited playpen universe within which the robot operates, there are certainly practical reasons. The whole of a very large computer is being used to organize the sequence of operation of one of these robots. If the universe of discourse within which it operates were made a lot bigger, the size of computer required would increase astronomically. This is often referred to as the combinatorial explosion. The combinatorial explosion means an explosive increase in the computer power required to deal with moderate increases in the so-called knowledge base which the computer has to keep organized. It's not the movements of the robot that require these huge computer powers. It's organizing the logical analysis needed to decide its sequence of operations.

A so-called self-organizing program is a program that can organize the sequence of robot operations without clues fed in from the fruits of human intelligence. Experience indicates that any self-organizing program must continually cause long searches to be made through the computer's store of data regarding the universe of discourse. A typical search might be for that combination of items and their associated attributes which satisfies some relationship necessary in solving a problem of what to do next. The combinatorial explosion means that the length of search grows explosively with an increase in the universe of discourse, essentially because that length of search depends on the number of ways in which items in the store of data can be grouped according to particular rules and that number of ways becomes enormously large extremely fast. Doubling the universe of discourse may make the searches thousands of times longer. All this means that any big increase in computer power that will come in the future will allow these self-organizing programs to handle only a moderately increased size of universe of discourse.
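Lighthill's claim about search length can be made concrete. Below is a minimal sketch in Python, with hypothetical numbers: a blind search of the kind he describes enumerates groupings of items in the data store until one satisfies some relation, so the worst-case number of relation tests is the binomial coefficient C(n, k), which grows explosively with the size n of the universe of discourse.

```python
from itertools import combinations
from math import comb

def blind_search(store, k, relation):
    """Enumerate every k-item grouping of the store until one
    satisfies the relation -- the long search Lighthill describes."""
    for group in combinations(store, k):
        if relation(group):
            return group
    return None

# Worst-case work is C(n, k) relation tests. Doubling the universe of
# discourse from n=20 to n=40 (with k=5) multiplies the search ~42x:
for n in (20, 40, 80):
    print(n, comb(n, 5))   # 20 -> 15,504; 40 -> 658,008; 80 -> 24,040,016
```

And the effect worsens as the groupings themselves grow: with k = 10, doubling the store from 20 to 40 items multiplies the count by more than 4,000, which is the sense in which "doubling the universe of discourse may make the searches thousands of times longer."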

All attempts at general problem-solving programs, whether concerned with theorem-proving or with the so-called common-sense problems that arise in most robot situations, have been and must continue to be severely limited by the combinatorial explosion in the size of problem which they can tackle. Repeated failure to get round these difficulties led to programmers being forced to adopt an expedient known as the heuristic. This is a method of constantly guiding the search by, as it were, telling the robot when it is warm, and when it's getting warmer, and so on, a procedure that we all know shortens any search. The heuristic is a numerical measure of how warm the computer has got, that is, of how favorable to the aims of the program is the current configuration within the computer store. It is purely human intelligence and human experience that assigns this heuristic, this evaluation function.

For example, a specialized program for playing chess involves such a heuristic based entirely on human knowledge and experience of how to evaluate a chess position. This numerical evaluation includes basic elements like an estimate of the advantage of any difference in the white and black forces with the usual weightings attached to the values of different pieces like a knight having about three times the weight of a pawn, and attaches also suitable weights to space control elements like the added up number of squares under attack from each one's pieces with extra weight for any center squares, and similar numerical estimates of the extent of development of one's pieces, the degree of exposure of one's king, and so on and so forth. Because of widespread interest in finding out how good a computer might be in a complicated game like chess, devoid of any chance element, a great deal of effort by chess grandmasters, including the former world champion Botvinnik, has been expended on getting these evaluation functions better and better. Then the computer conducts at each move a long search to find a sequence that will give it the best achievable position three or four moves ahead, assuming that its opponent makes its best replies, where best, of course, means only best from the point of view of the evaluation function.
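As a concrete rendering of the kind of evaluation function Lighthill is describing, here is a minimal, hypothetical sketch in Python. The weights follow the conventional values he cites (a knight worth about three pawns) plus small bonuses for attacked squares, with extra weight for the centre; the `Position` interface and all the numbers are assumptions for illustration, not a reconstruction of any actual 1970s chess program.

```python
# Illustrative material values: pawn = 1, knight ~ 3 pawns, etc.
PIECE_VALUE = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}
CENTER = {"d4", "e4", "d5", "e5"}

def evaluate(position):
    """Static score from White's point of view.

    Assumes a hypothetical `position` object exposing:
      position.piece_counts(side)    -> {"P": 8, "N": 2, ...}
      position.attacked_squares(side) -> {"e4", "f6", ...}
    """
    score = 0.0
    # Material balance, with the usual weightings.
    for piece, value in PIECE_VALUE.items():
        score += value * (position.piece_counts("white").get(piece, 0)
                          - position.piece_counts("black").get(piece, 0))
    # Space control: a small bonus per attacked square,
    # doubled for the four centre squares.
    for side, sign in (("white", +1), ("black", -1)):
        for square in position.attacked_squares(side):
            score += sign * (0.2 if square in CENTER else 0.1)
    return score
```

Every term here encodes human chess judgment, which is exactly Lighthill's point: the machine searches, but the sense of what is "warm" is supplied by people.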

This line of research was pursued actively for over 20 years, so the results give a good indication of what can be achieved with special purpose automation when a very large amount of human knowledge and experience about the problem domain or universe of discourse, still quite a modest one in size, has been incorporated into the program. The programs play quite good chess of experienced amateur standard, characteristic of county club players in England, although chess masters like our own David Levy beat them easily. This story is typical of the whole range of advanced automation in general, which has made reasonable progress when directed towards some specialized purpose concerning which a very large amount of human knowledge can be incorporated into the program. On the other hand, general purpose programs cannot be designed in this way, and in any large, variegated universe of discourse they fail by enormous margins, owing to the combinatorial explosion. The general purpose robot, then, is a mirage.

The science fiction writers, possibly others, will try to keep it shimmering or appearing to shimmer there on the horizon in front of us, and there's something in most of our minds that wants to believe it's there. So many people may feel disappointed to hear it's not, although really they should feel encouraged by evidence for the uniqueness of man, the uniqueness of the human race and of human brains. The many unique features of human beings include emotional drives and remarkable gifts for relating effectively with other human beings, as well as powerful abilities for reasoning over an extraordinarily wide universe of discourse. There is no reason why any of these features should be realizable in a computer of relatively simple organization, driven by even a very complicated program that has been read into its store. No reason why such a combination can begin to approach what the vastly more intricate networks of nerve cells inside human skulls can do. Neurobiological research on the visual cortex has shown the extraordinary efficiency with which specialized networks of specialized neurons play their part in analyzing visual fields.

It's probable that the extraordinary self-organizing capability of the cerebral cortex has resulted from the evolution of specialized neural networks of extreme complexity, which there is no question of imitating with a programmed robot. Research on many different aspects of brain structure and function will continue and will increasingly be helped by computer-based theories adapted to the actual neurobiological data and problems and to the results of experimental psychology. At the same time, advanced automation in various specialized problem domains will forge ahead. However, the gap between these two fields will remain too great for the attempts at building a bridge between them to be effective. Always there may be some people who try to make us think we can see that old general-purpose robot shimmering there on the horizon, but he's a mirage.

Host: Thank you very much, Sir James. I suspect that a number of people will be rather sad to hear that robots are a mirage, and we have here at least three people who have good reason to believe they're not. For instance, Professor Michie runs a laboratory in Edinburgh which is one of the world's leading centers for robot research. Don Michie, would you like to start the discussion off?

Professor Don Michie: I'm certainly not going to take you up, Sir James, on the term mirage, and I think to do so might be presumptuous in this company. Professor Gregory is one of the world's leading authorities in optical illusions of all kinds, and presumably that includes mirages. But I am going to take you up quite sharply on the term general purpose, because I have the feeling that this is very near the crux of the matter, and I have a suspicion that under your term, general purpose, it's possible that there are two quite distinct and two quite important notions snuggling under the same blanket, one notion being the notion of an experimental device, a research prototype, which one might more properly call a research purpose device, and I would be happy to talk about research purpose robots, by which one means devices which have no other purpose but to be used by scientists to advance knowledge in a particular new domain to test feasibility and to investigate principles. It's very close to the idea of the experimental prototype. I would say that the primitive flying machine of the Wright brothers would be a good example of such a device, certainly general purpose. And certainly the work done with computer-controlled robots in the various artificial intelligence research laboratories around the world in the last five or six years, I think could fairly be described in those terms, research purpose robots.

The other concept which I think comes under the same term of general purpose, and may be confused with it, is the notion of versatility, by which one means the ability to re-instruct, re-educate almost, a device rather quickly and rather easily and rather conveniently from the point of view of the human user. And this property of versatility is of extreme interest to workers in the field of artificial intelligence, and it's not entirely without relevance in the future, perhaps in the industrial applications of robotics to assembly line operations and similar tasks. One of the problems in the industrial context is the problem of short runs, where a given product has its specifications changed every few weeks or every few months, requiring radical retooling and writing off of assembly equipment. And there's a good deal of interest at present in industry in how to incorporate versatility in such devices. Research on versatility in programming systems of a complex kind, which have to deal with fragments of the real world, is one of the studies which may lead towards that end, quite apart from its own intrinsic interest. Now, in Edinburgh, we have made some identifiable steps in that direction.

The film that I'm going to show now is reasonably described, I think, as the Edinburgh research purpose, versatile assembly robot, locally described as Freddy. That is Pat Ambler on the left, Rod Burstall, who supervised this work in the last years, putting a heap of parts on the platform. This is the beginning of the execution phase of the program. The previous teaching phase, which occupied altogether two days, isn't shown here. And the parts can be dumped at random and have been. And in fact, an extra part, which doesn't belong to that assembly kit at all, but belongs to a ship assembly kit, this one has been thrown in for good measure. That is the oblique camera that is responsible for building up an internal model of the overview of the whole platform. When it comes to detailed examination, the overhead camera is used, as here.

In a moment, that outline will be replaced by an approximation in terms of line segments. And there are elaborate data structures in the machine memory, which is a large part of the research interest, which form compact and convenient descriptions of the messy images seen by the television camera. Having identified that as a car body, it's being picked up and put in the assembly station in a stereotyped position, ready for the second phase of the program, which we'll see later, and which has less of interest in it from an artificial intelligence point of view. But the points of interest in this first part of the program are to do with constructing internal descriptions of messy and complex external phenomena, including, for example, jumbled heaps of that sort, as the basis for identification where possible, and where not possible, appropriate action. In the case of a heap, there's a whole repertoire of strategies available, of which the first will be to attempt to identify a protuberant part, and then an attempt will be made to pick it up.

These are potential hand positions, there's a little bit of internal planning going on, to try and find positions for placing the two hands which will be suitable for grabbing that protuberant part without fouling any of the other objects. And now, having selected a pair of positions, we're doing a pick-up, the system will now look around the platform, that little twitch was to shake off anything which might have been associated, picked up by mistake, with the rest. Having found an empty place that's being put down, it will now be examined by the overhead camera, a caricature, a simple description made, and this will be matched through what the computer memory holds in the way of descriptions, and identified as a wheel, hopefully, and then picked up and put in a suitable place preparatory for the final assembly.
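The identification step Michie narrates, reducing the camera image to a compact description and then matching it against stored model descriptions, can be sketched very simply. Here is a minimal, hypothetical version in Python that matches by nearest feature vector; the features chosen (segment count, perimeter, compactness) are illustrative, not Edinburgh's actual data structures.

```python
def match_description(features, models):
    """Identify an object by the nearest stored model description.

    features : dict of feature name -> number for the seen object
    models   : dict of model name -> feature dict
    Returns the best-matching (model name, squared distance).
    """
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)

    return min(((name, distance(features, proto))
                for name, proto in models.items()),
               key=lambda pair: pair[1])

# Hypothetical model library and a freshly seen outline:
models = {
    "wheel":    {"segments": 12, "perimeter": 9.0,  "compactness": 0.95},
    "car body": {"segments": 8,  "perimeter": 22.0, "compactness": 0.55},
}
seen = {"segments": 11, "perimeter": 9.4, "compactness": 0.91}
print(match_description(seen, models))   # ('wheel', ...)
```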

We've jumped a bit and we're now nearing the end of this part of the program, this is the last wheel to be identified, and now all the parts are laid out, and the second part is just about to begin. The second part perhaps looks a little smoother, it's very much less interesting, it's done blind, no use of vision at all, quite considerable use of touch. For this purpose, the system has a primitive workbench, and that's a little vice, it's about to close the jaw on the wheel in order to clamp it in a more or less standard position. The only really high-level routines available as commands to the programmer in instructing the putting together part are two, which we'll see illustrated in a moment. These little feeling operations are simply updating its internal model to correct the dead reckoning.

In fitting the axle into the hub, one of the high-level routines was used, a spiral search in which it goes round in a widening spiral, prodding, and when it meets no resistance, pushes it home. The next operation is to insert the axle into one of the holes through the car body. And this is spiral search again, which has been successful. The difference between the two parts of the program: the second part is fairly conventional programming, simply taking advantage, perhaps, of a reasonably various and well-documented library of robot support software, but the difference between that and the earlier part, the vision and the layout, can be illustrated by the effects of interference. If things go wrong during the assembly phase, the one that I'm saying is far the less intelligent of the two, insofar as that word should be introduced at all, if you at this stage were to knock the partially completed assembly onto the floor, the program would never recover from it. But if you do things like that in the earlier stage, it has a sufficiently elaborate model of its world and a sufficiently broad repertoire of strategies, in general, to be able to recover and push the job through to completion. So in that sense, one derives a valuable degree of robustness from the employment of some of these techniques.
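The spiral search Michie describes is simple to sketch. A minimal, hypothetical version in Python: the arm probes at points along an Archimedean spiral around its dead-reckoned target, and pushes home at the first probe that meets no resistance. The `probe` callback and the step sizes are assumptions for illustration.

```python
import math

def spiral_search(x0, y0, probe, step=0.5, turns=8, points_per_turn=16):
    """Probe along a widening (Archimedean) spiral centred on the
    dead-reckoned hole position (x0, y0).

    `probe(x, y)` is assumed to return True when the peg meets no
    resistance at (x, y), i.e. it has found the hole.
    Returns the successful (x, y), or None if the spiral is exhausted.
    """
    if probe(x0, y0):                      # try dead reckoning first
        return (x0, y0)
    for i in range(1, turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = step * theta / (2 * math.pi)   # radius grows each turn
        x = x0 + r * math.cos(theta)
        y = y0 + r * math.sin(theta)
        if probe(x, y):
            return (x, y)                  # no resistance: push home here
    return None
```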

Well, this is the final assembly test. Since then, a slightly more advanced tour de force has been attempted and, in fact, succeeded, which was to teach the system to approach a jumbled heap consisting both of car parts and ship parts, completely mixed up, and successfully disentangle them, identify them, and construct one finished car and one finished ship. There are, in that innocent, deceptively simple-looking program, a number of techniques which we regard as artificial intelligence techniques. We have a number of fairly concrete ideas about how some of the crudities should be improved, and more intelligent features and considerable shortening of the instruction time introduced. My main thesis is that work in this area is work in an area of science which has an existence in its own right, that artificial intelligence is indeed a subject with its own purposes, its own criteria, and its own professional standards, and it is not to be identified with specific application areas.

Host: Well, Sir James, what about the versatile research purpose robot? Is that a mirage?

Professor Sir James Lighthill: Well, I thought that, in many respects, Professor Michie's film was a good illustration of the description that I'd given of the robot as a device designed to mimic a certain range of human functions without seeking in any useful sphere of human activities to replace human beings, operating in a playpen world with its toy car and its toy ship, and a small universe of discourse, and therefore able to solve the logical problems that were involved in organizing the program. I would certainly agree with Professor Michie's implication that, in certain factory jobs, one can create an artificially small enough universe of discourse so that one can think in terms of carrying out this type of logical organization of the task. Of course, against that is the fact that those who are involved in industrial automation are already doing this by their own methods. It's not the people who work in laboratories called AI laboratories that have a monopoly of thinking about how to organize tasks in factories, or how to carry out these operations in small universes of discourse.

Professor Don Michie: Just a brief interruption, Sir James. Industrial robots are becoming quite common in factories, but they do have one thing in common, and that is that up to the present date, no use of visual sensing has yet been achieved on a practicable scale, and very little tactile sensing. I thought for a moment you were implying that there was not likely to be a chain of beneficial influence between research studies of this kind and the factory robotics of a few years hence. We still have to incorporate, and there are members here of industrial automation groups who can confirm it, some of these facilities in the industrial environment. But there are firms that are doing very good visual pattern recognition and analysis by what I would call relatively conventional data processing methods, firms like Image Analysing Computers Limited.

Host: Professor McCarthy, one of the things I'm finding difficult to understand is this distinction between advanced automation and what you call artificial intelligence. Can you define for us what this distinction is? What is artificial intelligence?

Professor John McCarthy: Okay. Artificial intelligence is a science, namely it's the study of problem-solving and goal-achieving processes in complex situations. It's a basic science, like mathematics or physics, and has problems distinct from applications and distinct from the study of how human and animal brains work. It requires experiment to carry it out. Now it has a very large number of parts to it, of which I will mention precisely four. One of them is these processes of search, which are dealing with a combinatorial explosion. Now it seemed from what you said that you had just discovered that as a problem, but in fact the very first work in artificial intelligence, namely Turing's, already treated the problem of combinatorial explosions, and a very large part of the work in artificial intelligence, especially game playing, has dealt with that. The next problem is the representation of information internally in the machine, both information about particular situations that the machine has to deal with, the representation of procedures, and the representation of general laws of motion, which determine the future as a function of the past. A third problem is advice-giving, how we are going to instruct the computer or communicate with it. At present, programming, that is, influencing a computer program, is as though we did education by brain surgery. This is inconvenient with children and is also inconvenient with computer programs. Progress is being made on this.

The fourth that I want to mention can be called compiling, or now the word used is automatic programming, but in an extended sense beyond the way it's used normally in the computer industry, and that is going from information that determines how something should be done to a rapid machine procedure for efficiently carrying this out. This is one of the major topics. Now I should remark with regard to all of these topics that they can be treated independently of applications and independently of how the brain works, and I would be perfectly glad to treat any one of these that you choose. On general purpose robots, I'd like to remark that in the strong sense of a general purpose robot, one that would exhibit human quality intelligence, if not, so to speak, quantity, but would be able to deal with a wide variety of situations, the situation is even in worse shape than you think, namely, even the general formulation of what the world is like has not been accomplished, so that even if you are prepared to lead the machine by hand through the combinatorial explosion, that is, to tell it which things to do next, you still cannot, with the present formulations, make it decide how to solve a complex problem.

Now this, in fact, has turned out to be the difficulty, not the combinatorial explosion. The common sense programs have occupied relatively little computer time in the areas in which they were capable of operating, or at least many of them have anyway, but simply have too limited a formulation. Now part of this is due to a defect in current systems of mathematical logic where the systems are designed to be reasoned about rather than to be reasoned in. Now I want to ask you a question, or maybe it's a rhetorical one, which is: in the documents that you received before you wrote your report and in the comments that you received after you wrote the report, almost everyone made the point that AI was a separate subject with goals of its own and not intended to be a bridge between the other things.

Professor Sir James Lighthill: I would like to answer that question, I think it's a very important question. You see, in this country there are a large number of first-rate computing science laboratories which have preferred not to call themselves AI laboratories, but have concerned themselves with what Professor McCarthy calls the central area of his field, namely the study of problem-solving and goal-achieving programs. And these have been tackled in their own right as fundamental computer science. Many of the points that he's mentioned come in, for example, search in the whole field of information retrieval, compiling, where our fine computer science laboratories have been much involved in producing advanced programming languages. I was grouping this with advanced automation for a very good reason, because extremely often one finds that the stimulus of a really important practical problem in automation is the thing that causes solutions, new solutions, to be found to these questions. And these add to the repertoire of what computer scientists can do. Now, what are the arguments for not calling this computer science, as I did in my talk and in my report, and calling it artificial intelligence? It's because one wants to make some sort of analogy. One wants to bring in what one can gain by a study of how the brains of living creatures operate. This is the only possible reason for calling it artificial intelligence instead.

Professor John McCarthy: Let's see. Excuse me. I invented the term artificial intelligence. I invented it because we had to do something when we were trying to get money for a summer study in 1956, and I had a previous bad experience. The previous bad experience occurred in 1952, when Claude Shannon and I decided to collect a batch of studies, which we hoped would contribute to launching this field. And Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice, and so we agreed to call it automata studies. I was terribly disappointed when the papers we received were about automata, and very few of them had anything to do with the goal that at least I was interested in. I decided not to fly any false flags anymore, but to say that this is a study aimed at the long-term goal of achieving human-level intelligence. Since that time, many people have quarreled with the term, but have ended up using it. Newell and Simon, the group at Carnegie Mellon University, tried to use complex information processing, which is certainly a very neutral term, but the trouble was that it didn't identify their field, because everyone would say, well, my information is complex. I don't see what's special about you.

Professor Sir James Lighthill: Yes. Well, Newell and Simon, I think, are a good example of people who have moved a little bit towards the problem of trying to do psychology. They've been actually saying, how do human beings solve simple problems, and we all try to do a theory of this, and this is obviously a very desirable thing to do also. But I'm trying to suggest that we would be much clearer in what we attempt if, when we are trying to do psychology and neurobiology, we think of ourselves as psychologists and neurobiologists, and work with all the other guys in the field. And when we're trying to do advanced automation and computer science, we work with people like control engineers, who developed an awful lot of experience of how to do advanced automation well, and also with the key problem of how to actually get it into practice, all that business of humanely introducing it, and so on, that I mentioned at the beginning of the talk.

Host: I think this is where we must ask Richard Gregory to come and help us, because his research is in perception. To what extent have the robot studies of Michie and McCarthy helped in this work, Richard?

Professor Richard Gregory: Well, I want to say that it's the general concepts which are coming out of artificial intelligence which are having an impact in psychology, rather than the specific programs, for example on neural nets, which you mentioned. And I'd go a little further than that. I may be exaggerating a little, but let me put the point. I think since behaviorism started in about 1900, and then Skinner, and the stimulus-response paradigm in psychology, experimental psychology on the whole was regarding human beings as examples of advanced automation, where you have a stimulus coming in, a response coming out, and there's a black box in the middle, and that's it. Now what I think is becoming very apparent is that human beings are not at all like that: we have a vast amount of data store inside us, we have extremely noisy, often not directly relevant information available to us, and yet we make, on the whole, rather good decisions. We act extremely reliably with poor input. Now what's happened, I think, with the robot studies, and I think Donald was getting at this, is that we were all shocked and amazed how difficult it was, because we were misled by the total inadequacy of psychological theory and the emphasis that was put on stimulus-response. Now what it's turning out, I think, is that the stimulus is not directly controlling behavior; it's rather calling up, generally speaking and in normal situations, an appropriate internal model, map, hypothesis of the external world, and it's this that we act upon.

So it's not stimulus-response, it's rather a certain amount of rather grotty information, a hopefully adequate internal model, and then the response based on that, much as a hypothesis in science enables one to make a decision, but the hypothesis is the result of a great deal of past information and generalization, which has been logically organized, and then the decision is made far more on what is internally represented than on what is available at that time, either available to the eye or to the telescope or to the microscope or to the electrical instrument of an engineer. I think this is what's happened. So the emphasis on internal data and how it's organized logically is what's happening from the robot research.

So I think to say that this is a business of models and neural nets is what we thought ten years ago, and it's what we've very much moved away from. Now, to sort of finish this a little bit, the point about intelligence is this: it exists, because we're intelligent. We have 10^10 components in a box about the size of a football on top of our shoulders. Ten to the ten components is a lot, but an awful lot of that is used up in vegetative functions like my tongue having to waggle about and this kind of thing in order to communicate. The actual number of neurons responsible for intelligence may be very, very much smaller than that. I see no reason why we cannot, in fact, make brains because they exist physically. When you say the brain is unique, you at one time said it's unique because it's big. You then made remarks that there are certain circuits in the brain which we can't replicate. Now, I would like to have an argument to show why this pessimism is justified. It seems to me pure pessimism or metaphysics.

Professor John McCarthy: It's worse. It contradicts a mathematical theorem. I didn't really expect that I would ever...

(Audience laughter)

Host: Let Sir James answer.

Professor Sir James Lighthill: Well, Professor Gregory seemed to make two almost contradictory points. First, he said that neural nets don't matter, and finally, he said that in the end, they're the thing we ought to be researching on most. I think he's been a bit unfair to experimental psychologists because I think they have been working on internal stores of information. They've been working on short-term memory and long-term memory and these things, and they've used computer models to find out the relation between these different things. But of course, I do agree with his statement that we have learned a lot from the research in artificial intelligence, essentially finding out how difficult it was. We were all shocked and amazed to find out how difficult it was, namely, to extract information from noisy pictures, from sense impressions of the real complex world, and the evidence that we in fact do it by comparing with some sort of internal model, and this is a key feature that has come out of the work. Of course, it's arguing against the hopes for general-purpose robots because they would have to have such a very complicated internal model, such a large internal universe of discourse that they'd be working with in order to identify what they were seeing in the real world.

But now I'll come to his last point where he comes back to neural networks. I mean, I pointed out the difference between a current computer architecture, which by comparison with the cerebral cortex is a very simple architecture, and all the complication is built into the program. I say that there's no reason to suppose that that type of architecture plus program can begin to approach what the vastly more intricate networks of nerve cells inside human skulls can do. And of course, there's a theorem that says it can. But I don't think there's a theorem that says it can in a way that can be realized in a time that is acceptable, because of the difficulties of the combinatorial explosion. I mean, the theorems for which Professor McCarthy is justly famous, and Professor Robinson and others, have pointed out that problems can be solved by algorithms, but the algorithms all involve enormous lengths of time with any reasonably sized universe of discourse because of the combinatorial explosion.

Professor John McCarthy: No, no, you're confused.

(Talking over each other)

Host: But could we have Richard Gregory first of all to see whether this point has been answered about whether it is possible to construct an artificial brain.

Professor John McCarthy: But there's a mathematical question we might succeed in answering it...

Professor Richard Gregory: Well, first of all, I'd like to know in what sense you felt what I was saying was self-contradictory. I'm not saying one shouldn't study neural nets, but I think...

Professor Sir James Lighthill: I mean, you said there was nothing ten years ago, but the actual fact there's been some quite good work on new ideas of how neural nets can achieve specific tasks in the last few years. I mean, it is an active field of research at the moment.

Professor Richard Gregory: Yes, I'd like to submit that perhaps the concept of constraints is important here: if you have a system which is following logical operations, it has to have physical constraints corresponding to the logical steps required to produce the solution, whereas the network studies, such as Beurle's work, were more on whether a net is going to, so to speak, catch fire, run away with itself, this kind of thing, very, very crude work. I mean, it was brilliant at that time, but now it looks terribly crude because it isn't concerned with the state of a net for the specific problem. It seems to me now the emphasis is on the logic of the problem. The next question will be how the physiology carries it out, but we haven't yet even begun to answer that question. Now, what the robot stuff is beginning to do is to show how it can be carried out with electronics, and this is a lead, I think, to how physiological research may go when the cognitive process has become respectable within physiology, which is only just happening. The respectability, I think, is coming with the robot research. It's making the logical cognitive processes scientifically respectable. This is a very great thing it's doing.

Professor Sir James Lighthill: For different functions, I think the answer is different. Where it comes to simpler parts of the brain, like the cerebellum, I think it is already beginning to be possible to identify the function of neural networks, but I did feel that the cerebral cortex is incomparably more difficult.

Professor Richard Gregory: That's why we need the robot research, is my point.

Host: It's extremely difficult. You are not saying that there is any fundamental reason why it's impossible, it's just extremely difficult.

Professor Sir James Lighthill: Well, I do feel that, I mean, my neurophysiologist friends tell me that contemplating the complexity of the extraordinary random appearance of the connection of all the nerve cells in the cerebral cortex makes them feel that it is quite hopeless to attempt an analysis.

Host: Yes, Professor McCarthy, you wanted to say something earlier.

Professor John McCarthy: Right, I would like to make it clear what the theorem is; it's not due to me. There are several theorems, as a matter of fact. The first one is due to McCulloch and Pitts, in 1943, which was that a certain kind of neuron that they were fascinated with could do any logical calculations. Another theorem along the same line was in Minsky's PhD thesis, in which he showed that any element that had essentially what amounted to negative resistance could do it. Now, the simulation theorem would say the following: that the time required to simulate a device with 10 to the 10th components would be simply proportional to the number of components, provided you have a large enough memory, that is a 10 to the 10th element memory, to do the table lookups in. Now, one certainly would not advocate having an intelligent machine that would do this by simulating the neural net in the brain. In the first place, you can't find out what the neural net is, and in the second place, there are almost certainly better ways of doing it, since the neural net in the brain is quite inefficient. But nevertheless, there are some mathematical theorems that say that the time required would really be only linear in the number of components.
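McCarthy's linearity point is easy to make concrete. A minimal sketch, assuming each "neuron" is reduced to a boolean gate whose next output is a table lookup on its inputs: one synchronous time step then costs a fixed amount of work per component, i.e. time linear in the number of components (the network here is a tiny random one, standing in for the 10^10 components he mentions).

```python
import random

def simulate_step(state, wiring, tables):
    """One synchronous update of a boolean network.

    state  : list of 0/1 outputs, one per component
    wiring : wiring[i] = indices of the components feeding component i
    tables : tables[i] = lookup table mapping i's input bits, read as a
             binary number, to i's next output
    Cost is O(total fan-in), i.e. linear in the number of components
    for bounded fan-in -- McCarthy's point.
    """
    next_state = []
    for inputs, table in zip(wiring, tables):
        index = 0
        for j in inputs:
            index = (index << 1) | state[j]
        next_state.append(table[index])
    return next_state

# Tiny random network with n components of fan-in 2 (n would be
# 10**10 in McCarthy's example; we use 8 so it runs instantly).
n, fan_in = 8, 2
wiring = [[random.randrange(n) for _ in range(fan_in)] for _ in range(n)]
tables = [[random.randint(0, 1) for _ in range(2 ** fan_in)] for _ in range(n)]
state = [random.randint(0, 1) for _ in range(n)]
print(simulate_step(state, wiring, tables))
```

As McCarthy immediately concedes, the theorem says nothing about how to obtain the wiring and tables of a real brain; it only bounds the cost of simulating them once known.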

Professor Sir James Lighthill: But that is only if you can unravel this extraordinarily complex network and decide how it does it, and I feel that is not so. Minsky's perceptrons and so on were obviously a very interesting investigation. It's not certain that the neurons are all of this type. As I say, the ones in the visual...

Professor John McCarthy: Minsky's against perceptrons.

Professor Sir James Lighthill: Yes, I know. He's written a book recently which sums up and says that they haven't got us anywhere. Well, that's unfortunate. And on the other hand, the detailed work on the visual cortex has shown that various specialized hardware, as the computer architect would say, is involved in the discrimination of edges, vertical edges, distance, and so on.

Host: Donald Michie?

Professor Don Michie: I wonder if I could follow the combinatorial explosion just a little, and first of all ask whether the problem of programming a computer to play chess is a reasonable instance of the kind of difficulty that you have in mind. I think, in fact, you quoted it as an instance. There's an average of about 20 legal moves at each stage, so that you can work out, and my colleague I. J. Good has worked out, that the total number of possible chess games is of the order of 10 to the 120, which is more than the number of elementary particles in the observable universe. So that if you want combinatorial explosions, that's quite a nice big one. There are two ways, two lines of approach, that artificial intelligence people attempt to use to damp down and combat the combinatorial explosion.

Roughly speaking, they can be grouped into syntactic methods of cutting down this wild branching ratio on the one hand, and on the other hand, and in the long run, much more significant and more promising ways of building semantic information into the program in order to cut whole branches of the search tree. Most of the progress that has been made with computer chess so far, in fact almost all of it, would come under that first and rather primitive category, and in spite of that, I think you said yourself, the present level of computer chess is perhaps a middling club player, you said an experienced amateur. Have you, first of all, read the article in the June issue of the Scientific American, where the first program with semantics built in has been described and looked at the quality of the game that's there cited? Secondly, you mentioned David Levy and you mentioned Botvinnik, the former world champion. Do you know about David Levy's bet, and do you know about Botvinnik's comment on it?
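The arithmetic behind Michie's figure is easy to check. A back-of-envelope sketch in Python, where the branching factor and game length are the usual rough assumptions behind such estimates rather than exact values:

```python
from math import log10

branching = 20   # Michie's ~20 legal moves per position
plies = 80       # a typical 40-move game is 80 half-moves
print(f"20^80 ~ 10^{plies * log10(branching):.0f} games")   # ~10^104

# Shannon's better-known estimate assumes ~10^3 continuations per
# full move (about 30 choices for each side) over 40 moves:
print(f"(10^3)^40 = 10^{3 * 40} games")                     # 10^120
# Either way the game tree dwarfs the ~10^80 elementary particles
# in the observable universe.
```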

Professor Sir James Lighthill: Yes, I do, and of course the interesting thing is that this...

Host: Well, could one of you tell us...

(Audience laughter)

Professor Don Michie: Well, David Levy is an international master, and in 1968 he wagered a thousand pounds against a consortium that no computer program would beat him across the board in a 10-game match before 1979; it had to be done by the end of 1978. I can reveal the identity of the consortium: it consists, in fact, of Seymour Papert, John McCarthy, and myself. Now, it seems to me that if your pessimism is as deep-rooted as you wish us to believe, given the power of the combinatorial explosion and everything that you've stated, you should be ready to double that stake.

(Audience laughter and applause)

Professor Sir James Lighthill: I picked on chess because it's an area where one can put in a maximum amount of human knowledge and experience, of something that human beings have been very active in for centuries, and one can feed in as much as possible through the heuristic. The heuristic has been the main method of reducing the impact of the combinatorial explosion, and even in spite of this, in a fairly modest universe of discourse, the chess board with its 64 squares, one is only able to reach the kind of levels of play that we've described. The program that you saw, in fact, involved not a complete search of every possibility in the tree; there was a rejection of possibilities at an early stage if the position started deteriorating fast, and so a certain selection from the more promising lines is made in this program, and the programs have been constructed with a very great deal of ingenuity. It's been one of the classic problems in computer science, something that everyone would like to solve. Computer firms have tried, because they would like to see the success of computer against a master or grandmaster in this field, but nevertheless this has not been achieved, and David Levy still doesn't seem to think it will be achieved, and I agree with him.

Professor Don Michie: I think that I would like, if I might be allowed to utter a small warning here, dredged up from the remote past nearly a hundred years ago. It is, in fact, a very short excerpt that I want to read from a report. It was a report submitted on November the 15th, 1876, to the president of the United States Telegraph Company, and it goes as follows. Mr. G. G. Hubbard's fanciful predictions, while they sound very rosy, are based upon wild-eyed imagination and a lack of understanding of the technical facts of the situation, and a posture of ignoring the obvious technical limitations of his device, which is hardly more than a toy, a laboratory curiosity. Mr. A. G. Bell, the inventor, is a teacher of the hard of hearing, and this telephone may be of some value for his work, but it has too many shortcomings to be seriously considered as a means of communication.

(Audience laughter and applause)

Professor Sir James Lighthill: If I had given my talk ten years ago, it would have been very reasonable to respond by quoting what was said about Bell's inventions within a year or two after they'd started to be investigated. But the situation is different when one's past the quarter century of a field like artificial intelligence. Then one comes into a period where some of the fundamental difficulties have begun to emerge. I have made it clear that I support a great deal of the work that is done by people calling themselves the artificial intelligentsia.

(Audience laughter)

Professor Sir James Lighthill: But I have also argued that enough of the fundamental objections and difficulties have now emerged so that one can feel dubious to a very high degree about predictions of a general purpose robot.

Host: Well, perhaps this is a good time to leave the artificial intelligentsia and get to the members of our audience, and ask them whether they'd like to make any points to Sir James or to the other principal speakers. There's a question here.

Dr. Larkin: My name is Dr. Larkin. I work at the Department of Computer Science at the University of Warwick. I make robots. In particular, I make a robot called Arthur, who is mobile and blind, unlike Professor Michie's, which is a hand-eye robot, although I must admit that Professor Michie moves the world rather than moving his robot in the world. Now, I call what I do with this robot psychomechanics. I prefer it: it's one word rather than the two of "artificial intelligence", and it probably describes more accurately what I'm trying to do. But this is a subject, as far as I'm concerned, which is firmly embedded in computer science. I'm in the computer science department, and the work I'm doing involves the kind of advanced programming that we've heard about, but the work I'm doing also affects our concepts of what a computer is, or rather, what a psychomechanism is, because a computer is not the beast for doing the job we're talking about.

A computer is designed to do sums, and we have to look into the design of the machine we're using to do the actual thinking part of the task, and see whether we can't redesign that as well. And this is the area I'm, at present, very much concerned in. Now, when we get into the area of psychomechanisms, and that's a psychomechanism. It's fairly small, but if I use that as a computer, which I could do, it's more powerful than the first computer I worked on, a DEUCE computer, and that used to fill a room. Now, psychomechanisms have some form of intelligence. I don't think there's any dispute about that, but there's no spectrum of intelligence for robots the way there is a spectrum of intelligence for animals. For instance, my robot, Arthur, could be described as a literate dog. Now, just how general purpose is a literate dog?

Professor Sir James Lighthill: Well, I think if you really investigate the full range of psychological functions and capabilities of a dog, you'll find it's well ahead of all these robots.

Roger Needham (Cambridge University): I think one root of disagreement between Professor McCarthy and Professor Lighthill is that the one believes, and the other doesn't, that there is a subject of study, of problem-solving and goal-seeking, which is quite independent of any particular sort of problem or any particular sort of goal. Now, it seems to me that it would be a good thing, to help resolve this, this being an issue on which I myself am in some intellectual doubt, if we could have a list of the achievements in problem-solving and goal-seeking without any reference at all to what kind of problem or goal it is.

Professor John McCarthy: One achievement has been what I call the separation of heuristics from epistemology as a subject. Namely, the search processes are the domain of heuristics, and the epistemology is the formalism that you use to represent information and describe the world. Another achievement is the Winograd achievement of showing that carrying semantic information as you go along is the key thing even in parsing a sentence. And as far as I'm concerned, this work basically refutes the Chomsky school of grammar, because it's not merely general semantic information about the meanings of words which is being carried along and used, but information about the particular situation in which the sentence is uttered.

Roger Needham (Cambridge University): I don't think it's necessary to go through the large list. Indeed, of course, I don't deny the accuracy of what's been said, and things like the involvement of semantics. I'm very glad that people who wish to make robots do things have learnt about that, because it's been known for a very long time. All I would like to point out is that these are somewhat in the area of anecdotes, the swallows that might begin to make the summer, rather than beginning to look like the coherent structure of a coherent scientific discipline, which I gather Professor McCarthy claimed that this all was.

Professor John McCarthy: Okay, as to its coherence, well, it's a bit weak now for a number of reasons. One of them is what one might call the look-ma-no-hands school of programming, which says that you take something that no one has ever done before, and you write a program to do it, and you call your friends, and you write a paper, and they admire the fact that it did it, with no effort to connect this into any coherent theory. The other thing which inhibits theory in artificial intelligence is that it can be immediately checked out by whether it really does provide the behavior that it is supposed to. So the apparent existence of theories in fields like psychology is very often a mirage... (Audience Laughter) ...and I would say that the theoretical situation in AI is very tough. And now, I've been in this field, I don't know, maybe 20 years, and I'm not discouraged yet. I don't identify the rise of the science, and its reaching its peak, with my own career. I imagine that the science will continue to grow, even after I am not actively making contributions to it.

Christopher Strachey (Oxford University): Thank you, Professor McCarthy. I've also been in the game about 20 years, nor am I discouraged. But I do not choose to work in the field of artificial intelligence, because I think it is too difficult. I would like to make a comment about Professor Michie's anecdotal method of supporting work in a very difficult field by quoting a totally irrelevant, incorrect prophecy about Bell's telephone a hundred years ago. A more recent incorrect prophecy was made about the ability to translate human languages by machines, and that turned out exactly the opposite way. The general opinion was that it would be possible to have machine translation of human languages efficiently and economically. It turned out, in fact, of course, that it was rather cheaper to use a human being than to use any translation mechanism. I think it's a mistake to confuse the intellectual difficulties of these fields. I think that is to underestimate them. I'm a bit surprised by the way in which the people who seem to work on artificial intelligence come along and say, oh, well, we started off like this, and after quite a short time, we were horrified to find it was all rather difficult.

Professor John McCarthy: And Samuel's program for the same game?

Christopher Strachey (Oxford University): Samuel's program for the same game is an example of advanced automation.

(Audience laughter)

Host: Let's just let Professor Strachey finish.

Christopher Strachey (Oxford University): I think Samuel's program, and so I think would you, is an example of advanced automation, where he's built into the program the properties of the game.

Host: Right, Donald Michie.

Professor Don Michie: I was just going to say that I think Professor Strachey is a little too modest, in that his own work on checkers in the 1950s was, in fact, the launching pad from which Samuel subsequently developed his checker learning program. Furthermore, Samuel's program has played a worthy and useful role, and many people have learned many things from it. One, if I may just conclude, one of the most valuable roles it's played in the general, rather than the technical area, is discrediting the cruder versions of the doctrine that you only get out what you put in, because eventually Samuel's program learned to be a better checkers player than Samuel himself.

Christopher Strachey (Oxford University): Well, not, if I may come back on that, a better player than the people whose games he played into it. Now, I object very strongly to the miscellaneous and irresponsible use of words like learning, which have no very clear meaning. They are emotive terms. I do not believe that Samuel's checkers player is, in any genuine sense, a learning program. It's an optimizing program. I do not call optimizing programs learning programs. I mean, this is not the place to go into technical details, but there's a very great tendency, I find, with people working in the artificial intelligence field, really, to spoil their case by using normal human terms, anthropomorphic terms, about very, very, very simplified objects, things like advice takers. The advice taker, the chess advice taker, is simply a programming system. It's a more specialized advice taker than my ordinary programming language compiler and loader. That will take advice. It isn't an advice taker. It's a way of instructing the computer to do something. Now, I think to use the word advice taker when you mean a program is misleading.

Richard Parkins: Oh, yes. I'm Richard Parkins, and I'm a computer scientist. And it seems to me that Professor Lighthill and the artificial intelligentsia are arguing not about things, but about the names of things. Because whenever the artificial intelligentsia have produced what they consider to be a good example of artificial intelligence, Professor Lighthill has turned around and said, yes, that's a marvelous piece of work, but it's some other field.

Host: Do you want to answer that at this point?

Professor Sir James Lighthill: Well, it's certainly true that I believe in having the minimum amount of philosophical mystification in talking about science. I agree with Professor Strachey that when we're talking about programs, we should call them programs, and when we're talking about brains, we should call them brains.

Host: You've been silent for some time, Richard Gregory. Do you want to say something?

Professor Richard Gregory: I'm a bit worried, so to speak, about the philosophical position here. When you say you can recognize that a problem is beyond science, I think you're saying it's beyond any future science. This is, I think, a bold claim. For example, alchemy, the transmutation of gold. I'm sure I can be checked on my facts here, but I understand it was accepted as possible right through the Middle Ages, then it was damned by science, and then it was done with atomic power. I think I'm correct.

Host: There are many examples all through science. The need for a vital force, for example, in organic chemistry is one of the best. And inevitably, almost, they have been shown to be wrong, these restrictions. But I think...

Professor Sir James Lighthill: There are limits to how far forward in the centuries we can even contribute.

Host: You did in fact, Sir James, eventually say that it was not impossible, but that it was highly improbable. That was the point, wasn't it?

Professor Sir James Lighthill: I think that in practical terms it's a mirage, in the sense that if the claim is that it's something we can see on the horizon, that on our deathbeds it may be announced, or that our children will see it, that it's really there on the horizon, then I disagree with such a view.

Host: Well, we have half a dozen people wanting to talk, but I'm afraid time's running out, and I must wind up the discussion, stimulating as it's been. I'd like to thank all those who've taken part, and especially our principal speakers, Professor Michie, Professor McCarthy, and Professor Gregory, for coming along tonight. Thank you.

Evaluation of the Predictions (With the Benefit of Hindsight)

In this section, we will review the various claims made by the participants in the debate with an eye towards how those claims have held up against the actual developments in the ~50 years since. Our goal here is not to criticize anyone for getting things wrong. Predicting the future so far in advance is perhaps one of the hardest things to do reliably (short of making the discoveries that bring that future about yourself), and perhaps just a handful of people in the world have demonstrated such an ability.

As an aside, I would single out Ray Kurzweil as one particularly far-seeing and accurate prognosticator, holding firm to his predictions even in the face of extreme skepticism and mockery from peers. He grounded his predictions in rather simplistic arguments about the growth of microprocessor transistor counts and comparisons to the number of neurons in the human brain, using these to pinpoint ~2025 as when you would start to see true intelligence manifest in machines. And he wrote about all of this long before the arrival of AlexNet, let alone ChatGPT!
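
To make the flavor of that argument concrete, here is a toy reconstruction, my own sketch rather than Kurzweil's actual figures: start from the Intel 4004's roughly 2,300 transistors in 1971 (historical), assume a steady Moore's-law doubling every two years, and compare against a commonly quoted estimate of roughly 8.6 × 10^10 neurons in a human brain. All three numbers are deliberately crude assumptions.

```python
# Toy, Kurzweil-flavored extrapolation. The 1971 starting point is
# historical (Intel 4004); the constant two-year doubling cadence and
# the neuron comparison are rough assumptions for illustration only.

TRANSISTORS_1971 = 2_300       # Intel 4004 transistor count
DOUBLING_YEARS = 2.0           # assumed Moore's-law cadence
NEURONS = 8.6e10               # approximate neurons in a human brain

for year in (1990, 2010, 2025):
    count = TRANSISTORS_1971 * 2 ** ((year - 1971) / DOUBLING_YEARS)
    print(f"{year}: ~{count:.1e} transistors vs ~{NEURONS:.1e} neurons")
```

On these crude assumptions the transistor count crosses the neuron count in the early 2020s, which conveys the flavor of the reasoning behind the ~2025 date, if not its rigor.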

Anyway, it's easy to point out flaws from the cheap seats and with the benefit of hindsight. Our goal here is rather to help us see how people can conceptualize the future in ways that are limiting and overly skeptical, and to better understand the kinds of biases and preconceived notions that can prevent otherwise smart and well-informed people from even granting the possibility of things that today are commonplace, much like how people dismissed and mocked the Wright Brothers before they provided an existence proof of controlled heavier-than-air flight.


Sir James Lighthill's Key Points and Predictions:

  • General Purpose Robots as a Mirage: Lighthill distinguishes between automation (specific task machines) and the concept of general-purpose robots, arguing the latter is "a mirage." He posits that the idea of a machine being able to substitute for a human across a wide range of activities is not feasible. In hindsight, this has been partially proven incorrect as AI and robotics have advanced to handle a broader array of tasks than Lighthill anticipated, though we have not reached the level of a general-purpose robot capable of performing all human tasks.

  • Limitations Due to Combinatorial Explosion: He discusses the "combinatorial explosion" in the context of AI problem-solving, suggesting that as the universe of discourse expands, the computational power needed grows exponentially, severely limiting AI's capabilities. While this issue remains relevant in computational complexity, advances in algorithmic efficiency, machine learning techniques, and hardware have allowed AI to tackle much larger and more complex problem spaces than envisioned (see the sketch after this list for a feel of the numbers).

  • Heuristic Methods as a Human Crutch: Lighthill critiques the use of heuristics in AI as relying too heavily on human intelligence and experience, suggesting that this limits the potential for AI to operate independently or surpass human capabilities. Today, machine learning and deep learning have enabled AI systems to "learn" heuristics from vast datasets, often discovering patterns and strategies that humans do not explicitly program.

  • AI's Impact on Specialized Domains: He acknowledges that AI and automation have made progress in specialized domains but doubts the utility and impact of general AI research. Current developments have shown that AI's impact on specialized domains has been profound, and the lines between specialized and general AI capabilities are increasingly blurred thanks to transfer learning and other methods.

  • Skepticism Towards AI Contributing to Understanding Human Functions: Lighthill expresses skepticism that robotics or AI could significantly contribute to understanding human psychological functions or the brain's workings. Contrary to this, AI and computational models have become invaluable tools in neuroscience and psychology, helping to simulate and understand complex cognitive processes and brain functions.
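
As promised above, here is a minimal sketch of the scale of the combinatorial explosion, using the rough figure quoted in the debate of about 20 legal moves per chess position; the depths chosen are my own illustration:

```python
# Rough illustration of the combinatorial explosion. Assumes the figure
# quoted in the debate of ~20 legal moves per chess position.

BRANCHING_FACTOR = 20

for depth in (2, 4, 6, 8, 10):
    positions = BRANCHING_FACTOR ** depth
    print(f"exhaustive search to depth {depth:2d}: ~{positions:.1e} positions")

# Grows from ~4.0e+02 at depth 2 to ~1.0e+13 at depth 10: every extra
# pair of moves multiplies the work by 400, which is why brute force
# alone cannot tame the game, whatever the hardware.
```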

Professor Don Michie's Comments and Predictions:

  • Distinction Between General Purpose and Research Purpose Robots: Michie introduces a nuanced view by distinguishing between "general purpose" robots and "research purpose" robots. He argues that the work being done in AI labs, like his in Edinburgh, is aimed at advancing knowledge and investigating principles, rather than creating robots that can replace humans in a wide range of activities. In hindsight, this distinction remains relevant, as AI research has indeed focused on both developing specialized applications and exploring foundational AI principles. The progress in robotics and AI has shown significant advancements in both research and application domains, validating Michie's perspective on the importance of research-focused development.

  • Versatility and Industrial Application: Michie also talks about the concept of versatility in robots, emphasizing the importance of being able to reprogram robots quickly and easily for different tasks, particularly in industrial settings. Today, this vision has been realized to a considerable extent with robots being more adaptable and reprogrammable, supporting varied tasks across industries, from manufacturing to logistics.

Professor Sir James Lighthill's Response:

  • Skepticism Towards Versatile Robots: Lighthill acknowledges Michie's points but maintains his skepticism, describing the demonstrated robot as operating within a "playpen world" with limited complexity. While Lighthill's critique was grounded in the technological limitations of the time, the subsequent decades have seen robots operating in increasingly complex environments, leveraging advancements in machine learning, computer vision, and sensor technology.

Professor John McCarthy's Contributions:

  • Artificial Intelligence as a Separate Science: McCarthy emphasizes that AI should be considered a distinct scientific field, focusing on problem-solving and achieving goals in complex environments. He outlines key areas of AI research, including dealing with combinatorial explosions, internal representation of information, advice-giving, and automatic programming. McCarthy's delineation of AI as a basic science has proven prescient, as these areas remain central to AI research, with significant progress made in each. The development of AI as a distinct discipline has indeed led to breakthroughs that go beyond mere applications or mimicry of human intelligence.

  • The Challenge of General Purpose Robots: McCarthy also points out the difficulty in creating general-purpose robots due to the lack of a comprehensive understanding of the world, a challenge that extends beyond computational limitations to the formulation of knowledge itself. This remains a challenge in AI, though progress in machine learning, natural language processing, and robotics has allowed for more sophisticated interactions with the world than was possible at the time of the debate. Yet, the creation of a truly general-purpose robot, as McCarthy suggested, remains elusive and a topic of ongoing research.

In Hindsight:

  • Research Purpose and Versatility in Robotics: Michie's arguments about research-purpose robots and the quest for versatility have borne fruit, with robotics now a critical component in various sectors. Robots today are indeed versatile, capable of being reprogrammed for different tasks, which aligns with Michie's vision. His anticipation of the significance of versatility in robotics, particularly for industrial applications, has been validated by the proliferation of robots in manufacturing, assembly lines, and beyond, where they are regularly retooled for different tasks.

  • AI as a Distinct Field of Study: McCarthy's assertion that AI is a distinct scientific field has been thoroughly validated. AI has established itself as a foundational field of study, distinct from computer science though closely related, with its own set of problems, methodologies, and applications. The areas McCarthy outlined—dealing with combinatorial explosions, representation of knowledge, interaction with machines, and automatic programming—have all seen substantial research and development, underscoring the foresight in his delineation of AI as a basic science.

  • Skepticism and Realistic Expectations: Lighthill's skepticism, particularly regarding the capabilities of robots and AI to mimic a wide range of human functions, serves as a reminder of the limitations and challenges that have persisted in AI. While significant progress has been made, the vision of creating robots that operate outside of "playpen worlds" and handle the complexity of real-world environments with human-like adaptability and understanding remains a work in progress.

Professor John McCarthy's Insights:

  • Origination of the Term "Artificial Intelligence": McCarthy explains his choice of the term "artificial intelligence" to describe the field focused on achieving human-level intelligence through machines. This term was chosen to clearly define the ambitious goals of the field, distinguishing it from broader or more neutral terms like "complex information processing."

    In Hindsight: The term "artificial intelligence" has indeed become the standard descriptor for the field, embodying both the aspirations and the interdisciplinary nature of the research aimed at understanding and emulating human cognitive processes. McCarthy's insistence on a term that reflects the field's ambitious goal was prescient, capturing the essence of AI research's long-term objectives.

  • Distinction Between AI and Psychology/Neurobiology: McCarthy emphasizes the distinction between AI as a field aimed at engineering intelligent systems and the disciplines of psychology and neurobiology, which study natural intelligence. He suggests that clarity in these distinctions would benefit the progress in each area.

    In Hindsight: The boundaries between AI research and cognitive sciences have become increasingly porous, with insights from one field often enriching the other. However, McCarthy's call for clear distinctions has ensured that AI has developed its own methodologies and theoretical foundations, even as it draws inspiration from and contributes to understanding human intelligence.

Professor Richard Gregory's Commentary:

  • Impact of AI on Understanding Human Cognition: Gregory argues that AI and robotics research has shifted the understanding of human cognition away from a simplistic stimulus-response model to recognizing the importance of internal models and data processing in human decision-making and perception.

    In Hindsight: Gregory's insights have proven to be remarkably forward-looking. Modern cognitive science and AI heavily emphasize the role of internal representations, models, and the processing of "noisy" information. This shift has led to significant advancements in areas like machine learning, where algorithms are designed to infer patterns and make decisions based on incomplete or uncertain data, mirroring aspects of human cognitive processes.

  • Feasibility of Constructing Artificial Brains: Gregory challenges the pessimism around replicating the human brain's capabilities, arguing for the potential to construct artificial brains given the physical existence of natural brains.

    In Hindsight: While the construction of an artificial brain identical to a human's remains elusive, advancements in neural networks and computational neuroscience have made significant strides in emulating aspects of brain function. Gregory's optimism reflects the ongoing efforts and achievements in creating complex AI systems that, while not replicating the human brain in totality, demonstrate capabilities that were once thought to be exclusive to biological systems.

Professor Sir James Lighthill's Skepticism:

  • Complexity of Achieving General-Purpose Robots: Lighthill reiterates his skepticism about the feasibility of creating general-purpose robots, citing the enormous complexity of the real world and the limitations imposed by the need for a vast internal model or universe of discourse for such robots to operate effectively.

    In Hindsight: Lighthill's skepticism remains partly valid. While AI and robotics have advanced significantly, creating a truly general-purpose robot with human-like adaptability across a broad range of real-world environments and tasks remains a challenge. However, progress in AI, especially in machine learning, computer vision, and natural language processing, has led to the development of systems that can perform a wide array of tasks with increasing proficiency.

Professor Richard Gregory's Comments:

  • On Neural Networks and Cognitive Processes: Gregory mentions the progress in neural networks and suggests that robot research could lead to advances in understanding physiological processes. He emphasizes the importance of logic in problem-solving and suggests that AI and robotics are making cognitive processes scientifically respectable.

    In Hindsight: Gregory's optimism about neural networks was prescient. The resurgence of neural networks in the form of deep learning has revolutionized AI, enabling breakthroughs in image recognition, natural language processing, and many other areas. His view that AI research could inform physiological understanding aligns with current interdisciplinary approaches where AI models contribute to neuroscience and cognitive science.

Professor Sir James Lighthill's Skepticism:

  • Complexity of the Cerebral Cortex: Lighthill expresses skepticism about analyzing the cerebral cortex due to its complexity, suggesting it may be "hopeless" to attempt an analysis.

    In Hindsight: While the cerebral cortex remains a complex and not fully understood structure, advances in neuroimaging, computational neuroscience, and AI have allowed for better models and understanding of neural processes. Although we're far from completely understanding the cortex, the situation is not as bleak as Lighthill suggested.

Professor John McCarthy's Clarifications:

  • On Simulating Neural Networks: McCarthy discusses theorems related to simulating neural activity, indicating that simulating a neural network's logical capabilities is theoretically possible but acknowledging practical challenges.

    In Hindsight: McCarthy's discussion highlights the foundational theoretical work that underpins much of modern AI, especially the capability of neural networks to approximate a wide variety of functions. His acknowledgment of practical challenges in simulating neural networks foreshadows the computational and algorithmic developments that have enabled the deep learning revolution.
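
For readers who want the 1943 result in miniature, the sketch below is a toy illustration of my own, not McCulloch and Pitts' formal model: a single threshold unit computing AND, OR, and NOT. Since those three gates are universal for Boolean logic, networks of such units can compute any Boolean function, which is the substance of the claim that they can "do any logical calculations."

```python
# Toy McCulloch-Pitts-style threshold unit: fires (returns 1) when the
# weighted sum of binary inputs reaches the threshold. A simplification
# for illustration, not the full 1943 formalism.

def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return neuron([a], [-1], threshold=0)

# AND, OR, and NOT are jointly universal, so any Boolean function can
# be built from units like these.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```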

Minsky's Skepticism on Perceptrons:

  • Critique of Perceptrons: Both Lighthill and McCarthy reference Marvin Minsky's critique of perceptrons, suggesting they haven't advanced AI towards understanding complex visual processing or general intelligence.

    In Hindsight: Minsky's critique, particularly his and Seymour Papert's analysis in "Perceptrons," was accurate regarding the limitations of early perceptron models, especially their inability to solve problems like XOR or recognize patterns with variations. However, the development of multi-layer perceptrons (deep learning) has vastly extended the capabilities of neural networks beyond what was imagined, addressing many limitations identified in the 1960s and 1970s.
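
The single-layer limitation is easy to see concretely: XOR is not linearly separable, so no single threshold unit of the kind sketched above can compute it, while two layers suffice. The following is a minimal toy construction of my own, not Minsky and Papert's actual argument:

```python
# XOR from two layers of threshold units. No single such unit can
# compute XOR (it is not linearly separable); composing them can.

def unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_two_layer(a, b):
    h_or = unit([a, b], [1, 1], threshold=1)    # hidden unit: a OR b
    h_and = unit([a, b], [1, 1], threshold=2)   # hidden unit: a AND b
    return unit([h_or, h_and], [1, -1], threshold=1)  # OR, and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```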

Chess as an Example of Combinatorial Explosion:

  • AI and Chess: The discussion on chess and AI's ability to handle the combinatorial explosion in game-playing contexts provides insights into early AI strategies for tackling complex problems. Michie points out the astronomical number of possible chess games and mentions efforts to incorporate semantic information into chess programs to improve performance.

    In Hindsight: This focus on chess as a benchmark for AI capabilities foreshadowed the significant milestones AI would achieve in game-playing. The eventual defeat of world champion Garry Kasparov by IBM's Deep Blue in 1997 and later developments in AI game-playing, such as AlphaGo's victory over Go world champion Lee Sedol, illustrate the progress in handling combinatorial explosions through advanced algorithms, machine learning techniques, and hardware capabilities. Michie's anticipation of incorporating semantic information aligns with the evolution of AI systems that learn strategies and patterns from vast datasets, beyond mere syntactic analysis.
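
The "syntactic" pruning Michie describes is essentially what alpha-beta pruning later standardized: branches that provably cannot change the final minimax value are cut without being searched. Below is a minimal generic sketch; the `moves`, `apply_move`, and `evaluate` arguments are hypothetical placeholders that a real game implementation would have to supply:

```python
# Minimal alpha-beta minimax over an abstract game. `moves`, `apply_move`,
# and `evaluate` are placeholder callables supplied by the caller; the
# point here is only the pruning logic itself.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            child = apply_move(state, m)
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent will never allow this line,
                break           # so the remaining siblings can be pruned
        return best
    best = float("inf")
    for m in legal:
        child = apply_move(state, m)
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

With good move ordering, pruning of this kind lets a program search roughly twice as deep for the same budget, which is exactly the sort of damping of the explosion the debate turns on; the "semantic" route Michie favored corresponds to what evaluation functions, and later learned models, supply.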

David Levy's Bet and Chess AI Predictions:

  • David Levy's Bet: Professor Don Michie mentions a bet made by David Levy, an international master, that no computer program would beat him in a 10-game match before 1979. In the near term, Levy's skepticism was vindicated: he did in fact win the bet, as no program of the 1970s came close to beating him. Over a longer horizon, however, the skeptics were overtaken by events: in 1997, IBM's Deep Blue defeated world champion Garry Kasparov in a six-game match, nearly two decades after the bet's deadline, marking a significant milestone in AI's capabilities in chess. AI's progress in chess was slower than the optimists hoped, but it eventually exceeded the expectations of many skeptics from the 1970s.

  • Professor Sir James Lighthill's Agreement with Levy: Lighthill agrees with Levy's skepticism, emphasizing the limitations of AI in chess despite the inclusion of vast amounts of human knowledge and heuristics to combat the combinatorial explosion. Lighthill's stance was proven to be overly pessimistic in the long term, as AI has not only mastered chess but also other complex games like Go and poker, often surpassing the best human players.

Misestimations of AI's Potential:

  • Michie's Warning from the Past: Michie's anecdote about skepticism towards Alexander Graham Bell's telephone invention serves as a caution against underestimating technological advancements. He implicitly warns against underestimating AI's future potential based on current limitations. This warning is pertinent, as the advancement in AI since the debate has often outpaced even optimistic predictions, highlighting the difficulty in projecting the trajectory of technological innovation.

  • Lighthill's Reference to AI's Age and Challenges: Lighthill argues that after a quarter-century of AI research, the emergence of fundamental difficulties justifies skepticism about the field's lofty ambitions, like general-purpose robots. While Lighthill correctly identifies persistent challenges in AI, the continuous advancements in the field, especially in machine learning, computer vision, and natural language processing, have shown that early struggles did not preclude significant progress.

Dr. Larkin's Comments on Psychomechanics:

  • Concept of Psychomechanics: Dr. Larkin introduces "psychomechanics" as a term to describe his work with robots, indicating a preference for embedding AI research within computer science and focusing on the practical redesign of computational systems for AI tasks. This perspective aligns with the direction AI research has taken, emphasizing interdisciplinary approaches and the integration of AI with computational hardware innovations.

Roger Needham's Skepticism:

  • Doubt on General Problem-Solving and Goal-Seeking: Needham questions the existence of a field of study dedicated to problem-solving and goal-seeking that is independent of specific problems or goals. Today, AI research has indeed established that general problem-solving and goal-achieving processes can be studied and developed, with advances in algorithms such as reinforcement learning and general-purpose machine learning models demonstrating capabilities across a variety of tasks and domains.
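
Tabular Q-learning is one concrete modern instance of the domain-independent goal-seeking Needham doubted: the update rule below knows nothing about any particular problem, only states, actions, and rewards. This is a minimal sketch of the standard algorithm with the environment left abstract:

```python
# Minimal tabular Q-learning: a domain-independent goal-seeking rule.
# The agent sees only (state, action, reward, next_state) transitions;
# nothing here refers to any particular problem or goal.

import random
from collections import defaultdict

Q = defaultdict(float)              # Q[(state, action)] -> value estimate
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(state, actions):
    if random.random() < EPSILON:   # occasionally explore at random
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```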

John McCarthy's Assertions:

  • Separation of Heuristics from Epistemology: McCarthy discusses the achievement of separating search processes (heuristics) from the representation of knowledge (epistemology), emphasizing their importance in AI. This distinction has become a foundational concept in AI, underpinning the development of more sophisticated systems that can effectively navigate and interpret complex environments.

  • Refutation of Chomsky's School by Semantic Information Processing: McCarthy argues that the incorporation of semantic information in processing, exemplified by Terry Winograd's work, essentially refutes the Chomsky school of grammar. While Winograd's work was influential in demonstrating the importance of semantic information, Chomsky's contributions to linguistics and the study of grammar remain foundational, with ongoing debates about the role of syntax versus semantics in language understanding.

Christopher Strachey's Perspective:

  • AI as Too Difficult: Strachey expresses his decision not to work in AI, citing its difficulty. In hindsight, while AI has proven to be a challenging field, it has also been one of rapid progress and significant breakthroughs, demonstrating that its challenges are not insurmountable.

  • Machine Translation Skepticism: Strachey references a "recent incorrect prophecy" about machine translation, stating it was cheaper to use humans than machines for translation. This view has been dramatically overturned, as current machine translation systems like Google Translate and ChatGPT offer fast, efficient, and increasingly accurate translations across many languages, often free of charge.

  • Misestimation of Intellectual Difficulties: Strachey criticizes the AI community for underestimating the intellectual difficulties of the field. While early optimism in AI may have underestimated some challenges, the field has matured significantly, addressing complex problems with innovative solutions and making substantial advances that were once thought to be far off.

Evaluation of Comments and Predictions:

  • General Problem-Solving and Goal-Seeking: Needham's skepticism about the independence of problem-solving and goal-seeking from specific domains has been countered by the development of AI techniques that demonstrate general applicability across tasks. The emergence of general-purpose machine learning models and multi-task learning frameworks supports the idea that AI research can indeed target general problem-solving mechanisms.

  • Heuristics and Epistemology: McCarthy's emphasis on the separation of heuristics from epistemology has proven to be a crucial concept in AI, with current research often focusing on how to efficiently search through vast spaces of possibilities using informed strategies, and how to represent knowledge in a way that is accessible and useful for AI systems.

  • Semantic Information Processing: While McCarthy's claim about the refutation of Chomsky's school by semantic information processing may have been too strong, the importance of semantic understanding in AI has indeed grown. However, the debate between syntax-driven and semantics-driven approaches to language continues, with modern NLP systems often integrating insights from both perspectives.

  • Machine Translation: Strachey's skepticism regarding machine translation was a common view at the time, given the early difficulties faced by the field. However, advancements in neural machine translation have radically changed the landscape, making machine translation a highly efficient and widely used application, challenging his prediction about the feasibility and economic viability of machine translation.

  • Underestimation of AI's Intellectual Difficulties: Strachey's concern about underestimating AI's challenges was valid, reflecting early overoptimism. However, the continuous progress in AI, despite these difficulties, shows a field that is capable of overcoming its hurdles, often in ways that were not anticipated at the time.

Professor Don Michie's Comments on Learning:

  • Samuel's Program Learning to Play Checkers: Michie argues that Arthur Samuel's checkers program, which improved over time, serves as evidence against the notion that "you only get out what you put in," suggesting the program learned to play checkers better than Samuel himself. In hindsight, Michie's perspective aligns with contemporary understandings of machine learning, where programs indeed 'learn' from data and improve over time, often surpassing the performance of their creators in specific tasks. This is now a foundational principle of AI, with machine learning models achieving superhuman performances in games like Go and poker, validating Michie's argument against the limited view of learning.
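
Samuel's mechanism is recognizably an ancestor of modern reinforcement learning of evaluation functions: score a board as a weighted sum of features, then nudge the weights so that earlier positions are valued more like the positions the game actually reached. The sketch below is my own miniature of that idea, not Samuel's program; the three features and the learning rate are assumptions for illustration:

```python
# Samuel-flavored evaluation learning in miniature: a linear score over
# board features, with weights nudged toward the score of a later
# position in the same game. Feature extraction is left abstract.

weights = [0.0, 0.0, 0.0]   # e.g. material, kings, mobility (assumed)

def evaluate(feats):
    return sum(w * f for w, f in zip(weights, feats))

def learn(feats_now, feats_later, lr=0.01):
    # Move the value of the earlier position toward the later one.
    error = evaluate(feats_later) - evaluate(feats_now)
    for i, f in enumerate(feats_now):
        weights[i] += lr * error * f
```

Whether adjusting weights from experience counts as "learning" or mere "optimizing" is precisely the terminological dispute Strachey raises next.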

Christopher Strachey's Skepticism:

  • Skepticism Towards the Term 'Learning': Strachey criticizes the use of terms like "learning" for what he considers merely optimization processes, arguing for a clear demarcation between human-like learning and what AI programs do. Today, the distinction between learning and optimizing in the context of AI has blurred, with learning algorithms indeed optimizing on a given objective function but doing so through exposure to data and adaptation, which is widely accepted as a form of learning within the AI community.

The Debate on Terminology and Philosophy:

  • Argument Over Terminology: Richard Parkins points out that much of the debate seems to revolve around terminology rather than substance, suggesting that achievements labeled as AI are dismissed by Lighthill as belonging to other fields. This reflects an ongoing issue in AI research, where interdisciplinary contributions are sometimes segmented into traditional fields, though the collaborative nature of AI has increasingly blurred these lines.

  • Philosophical Stance on AI's Limitations: Professor Richard Gregory questions the stance that some problems are beyond science, hinting at a historical pattern where seemingly insurmountable scientific challenges are eventually overcome. Gregory's caution against declaring certain goals as unachievable reflects the trajectory of AI, where tasks once thought impossible for machines, such as image and speech recognition, natural language understanding, and autonomous driving, have seen significant advancements.

Sir James Lighthill's Final Remarks:

  • AI as a Mirage: Lighthill reiterates his view of AI's ambitious goals as a mirage, suggesting that while progress may be made, the grand visions of AI matching or surpassing human capabilities across the board are overly optimistic. While it's true that AI has not achieved the sci-fi vision of fully autonomous, general-purpose intelligence, the progress in AI has exceeded expectations in many specific domains, suggesting that Lighthill's skepticism was more aligned with the challenges of creating AGI (Artificial General Intelligence) rather than denying the potential for AI to achieve remarkable feats within narrower scopes.
