{"id":2747,"date":"2017-04-12T11:23:01","date_gmt":"2017-04-12T18:23:01","guid":{"rendered":"https:\/\/sites.evergreen.edu\/compcog17\/?page_id=2747"},"modified":"2017-04-12T11:23:01","modified_gmt":"2017-04-12T18:23:01","slug":"minsky","status":"publish","type":"page","link":"https:\/\/sites.evergreen.edu\/compcog17\/minsky\/","title":{"rendered":"Minsky"},"content":{"rendered":"<h4>WHY PEOPLE THINK COMPUTERS CAN&#8217;T<\/h4>\n<h4>Marvin Minsky, MIT<\/h4>\n<h4>First published in AI Magazine, vol. 3 no. 4, Fall 1982. Reprinted in<br \/>\nTechnology Review, Nov\/Dec 1983, and in The Computer Culture,<br \/>\n(Donnelly, Ed.) Associated Univ. Presses, Cranbury NJ, 1985<\/h4>\n<h4>Most people think computers will never be able to think. That is, really<br \/>\nthink. Not now or ever. To be sure, most people also agree that computers<br \/>\ncan do many things that a person would have to be thinking to do. Then<br \/>\nhow could a machine seem to think but not actually think? Well, setting<br \/>\naside the question of what thinking actually is, I think that most of us<br \/>\nwould answer that by saying that in these cases, what the computer is<br \/>\ndoing is merely a superficial imitation of human intelligence. It has been<br \/>\ndesigned to obey certain simple commands, and then it has been provided<br \/>\nwith programs composed of those commands. Because of this, the<br \/>\ncomputer has to obey those commands, but without any idea of what&#8217;s<br \/>\nhappening.<\/h4>\n<h4>Indeed, when computers first appeared, most of their designers intended<br \/>\nthem for nothing only to do huge, mindless computations. That&#8217;s why the<br \/>\nthings were called &#8220;computers&#8221;. Yet even then, a few pioneers &#8212;<br \/>\nespecially Alan Turing &#8212; envisioned what&#8217;s now called &#8220;Artificial<br \/>\nIntelligence&#8221; &#8211; or &#8220;AI&#8221;. 
They saw that computers might possibly go<br \/>\nbeyond arithmetic, and maybe imitate the processes that go on inside<br \/>\nhuman brains.<\/h4>\n<h4>Today, with robots everywhere in industry and movie films, most people<br \/>\nthink Al has gone much further than it has. Yet still, &#8220;computer experts&#8221;<br \/>\nsay machines will never really think. If so, how could they be so smart,<br \/>\nand yet so dumb?<\/h4>\n<h4>================== CAN MACHINES BE CREATIVE? ==================<\/h4>\n<h4>We naturally admire our Einsteins and Beethovens, and wonder if<br \/>\ncomputers ever could create such wondrous theories or symphonies. Most<br \/>\npeople think that creativity requires some special, magical &#8220;gift&#8221; that<br \/>\nsimply cannot be explained. If so, then no computer could create &#8211; since<br \/>\nanything machines can do (most people think can be explained.<\/h4>\n<h4>To see what&#8217;s wrong with that, we must avoid one naive trap. We mustn&#8217;t<br \/>\nonly look at works our culture views as very great, until we first get good<br \/>\nideas about how ordinary people do ordinary things. We can&#8217;t expect to<br \/>\nguess, right off, how great composers write great symphonies. I don&#8217;t<br \/>\nbelieve that there&#8217;s much difference between ordinary thought and<br \/>\nhighly creative thought. I don&#8217;t blame anyone for not being able to do<br \/>\neverything the most creative people do. I don&#8217;t blame them for not being<br \/>\nable to explain it, either. I do object to the idea that, just because we can&#8217;t<br \/>\nexplain it now, then no one ever could imagine how creativity works.<\/h4>\n<h4>We shouldn&#8217;t intimidate ourselves by our admiration of our Beethovens<br \/>\nand Einsteins. Instead, we ought to be annoyed by our ignorance of how<br \/>\nwe get ideas &#8211; and not just our &#8220;creative&#8221; ones. 
We're so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don't believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that's common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can't "ordinary, common sense" -- when better balanced and more fiercely motivated -- make anyone a genius?

So still we have to ask, why doesn't everyone acquire such a combination? First, of course, it's sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme -- but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they've come across some better ways to learn! Perhaps such "gifts" are little more than tricks of "higher-order" expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at re-arranging how it learns!

Our cultures don't encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don't more people keep on learning more and better learning skills? Because it's not rewarded right away; its payoff has a long delay. When children play with pails and sand, they're usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to learn better, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn -- and this could magnify itself into an awesome, qualitative change. Thus, first-rank "creativity" could be just the consequence of little childhood accidents.

So why is genius so rare, if each of us has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I'm sure no culture could survive where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.

================== PROBLEM SOLVING ==================

We can hardly expect to be able to make machines do wonders before we find how to make them do ordinary, sensible things. The earliest computer programs were little more than simple lists and loops of commands like "Do this. Do that. Do this and that and this again until that happens". Most people still write programs in such languages (like BASIC or FORTRAN) which force you to imagine everything your program will do from one moment to the next. Let's call this "do now" programming.

Before long, AI researchers found new ways to make programs. In their "General Problem Solver" system, built in the late 1950's, Allen Newell, J. C. Shaw and Herbert A. Simon showed ways to describe processes in terms of statements like "If the difference between what you have and what you want is of kind D, then try to change that difference by using method M." This and other ideas led to what we call "means-ends" and "do if needed" programming methods. Such programs automatically apply rules whenever they're needed, so the programmers don't have to anticipate when that will happen. This started an era of programs that could solve problems in ways their programmers could not anticipate, because the programs could be told what sorts of things to try, without knowing in advance which would work. Everyone knows that if you try enough different things at random, eventually you can do anything. But when that takes a million billion trillion years, like those monkeys hitting random typewriter keys, it's not intelligence -- just Evolution. The new systems didn't do things randomly, but used "advice" about what was likely to work on each kind of problem.
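That "do if needed" style can be sketched in a few lines. This is only a toy illustration with invented names, not the actual General Problem Solver; it just shows the shape of "if the difference is of kind D, try method M":

```python
# Toy "means-ends" loop: repeatedly match the current difference between
# state and goal against rules, and apply the first method that fits.
# All names here are invented for illustration.

def solve(state, goal, rules, limit=10):
    """Reduce the difference between state and goal by rule application."""
    for _ in range(limit):
        if state == goal:
            return state
        difference = goal - state          # here, just a number
        for condition, method in rules:
            if condition(difference):      # "if the difference is of kind D..."
                state = method(state)      # "...try to change it with method M"
                break
        else:
            break                          # no rule matched; give up
    return state

# Two hypothetical rules: big differences call for big steps.
rules = [
    (lambda d: d >= 10, lambda s: s + 10),
    (lambda d: d > 0,   lambda s: s + 1),
]

print(solve(0, 23, rules))  # closes the gap step by step -> 23
```

The programmer never says *when* each rule fires; the program decides that by inspecting the difference, which is the point of "do if needed" as against "do now".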
So, instead of wandering around at random, such programs could sort of feel around, the way you'd climb a hill in the dark by always moving up the slope. The only trouble was a tendency to get stuck on smaller peaks, and never find the real mountain tops.

Since then, much AI research has been aimed at finding more "global" methods, to get past different ways of getting stuck, by making programs take larger views and plan ahead. Still, no one has discovered a "completely general" way to always find the best method -- and no one expects to.

Instead, today, many AI researchers aim toward programs that will match patterns in memory to decide what to do next. I like to think of this as "do something sensible" programming. A few researchers -- too few, I think -- experiment with programs that can learn and reason by analogy. These programs will someday recognize which old experiences in memory are most analogous to new situations, so that they can "remember" which methods worked best on similar problems in the past.

================== CAN COMPUTERS UNDERSTAND? ==================

Can we make computers understand what we tell them? In 1965, Daniel Bobrow wrote one of the first Rule-Based Expert Systems. It was called "STUDENT" and it was able to solve a variety of high-school algebra "word problems", like these:

    The distance from New York to Los Angeles is 3000 miles. If the average speed of a jet plane is 600 miles per hour, find the time it takes to travel from New York to Los Angeles by jet.

    Bill's father's uncle is twice as old as Bill's father. Two years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age.

Most students find these problems much harder than just solving the formal equations of high school algebra. That's just cook-book stuff -- but to solve the informal word problems, you have to figure out what equations to solve and, to do that, you must understand what the words and sentences mean. Did STUDENT understand? It used a lot of tricks. It was programmed to guess that "is" usually means "equals". It didn't even try to figure out what "Bill's father's uncle" means -- it only noticed that this phrase resembles "Bill's father". It didn't know that "age" and "old" refer to time, but it took them to represent numbers to be put in equations. With a couple of hundred such word-trick-facts, STUDENT sometimes managed to get the right answers.

Then dare we say that STUDENT "understands" those words? Why bother? Why fall into the trap of feeling that we must define old words like "mean" and "understand"? It's great when words help us get good ideas, but not when they confuse us. The question should be: does STUDENT avoid the "real meanings" by using tricks?

Or is it that what we call meanings really are just clever bags of tricks? Let's take a classic thought-example, such as what a number means. STUDENT obviously knows some arithmetic, in the sense that it can find such sums as "5 plus 7 is 12". But does it understand numbers in any other sense -- say, what 5 "is" -- or, for that matter, what "plus" or "is" are? What would you say if I asked you, "What is Five"?
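Before turning to that question, it is worth seeing how thin STUDENT's tricks were. Its best-known guess, that "is" usually means "equals", might be sketched like this; the function name is invented and the real 1965 program was far more elaborate:

```python
# Toy version of one STUDENT word-trick: split a sentence at "is" and
# treat the two sides as the two sides of an equation. The program never
# knows what "distance" means; it only manipulates the phrases.

def is_means_equals(sentence):
    """Split a sentence at ' is ' into (left phrase, right phrase)."""
    left, _, right = sentence.partition(" is ")
    return left.strip(), right.strip()

lhs, rhs = is_means_equals(
    "The distance from New York to Los Angeles is 3000 miles")
print(lhs + " = " + rhs)

# Once the quantities are in equations, the arithmetic is trivial:
print(3000 / 600)  # hours for the jet-plane problem -> 5.0
```

A couple of hundred such tricks, stitched together, were enough for STUDENT to sometimes get the right answer without anything a person would call understanding.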
Early in this century, the philosophers Bertrand Russell and Alfred North Whitehead proposed a new way to define numbers. "Five", they said, is "the set of all possible sets with five members". This set includes each set of five ball-point pens, and every litter of five kittens. Unhappily, it also includes such sets as "the five things you'd least expect" and "the five smallest numbers not included in this set" -- and these lead to bizarre inconsistencies and paradoxes. The basic goal was to find perfect definitions for ordinary words and ideas. But even to make the idea work for Mathematics, getting around these inconsistencies made the Russell-Whitehead theory too complicated for practical, common sense use. Educators once actually tried to make children use this theory of sets, in the "New Mathematics" movement of the 1960's; it only further set apart those who liked mathematics from those who dreaded it. I think the trouble was, it tried to get around a basic fact of mind: what something means to me depends to some extent on many other things I know.

What if we built machines that weren't based on rigid definitions? Won't they just drown in paradox, equivocation, inconsistency? Relax! Most of what we people "know" already overflows with contradictions; still we survive. The best we can do is be reasonably careful; let's just make our machines that careful, too. If there remain some chances of mistake, well, that's just life.

================== WEBS OF MEANING ==================

If every meaning in a mind depends on other meanings in that mind, does that make things too ill-defined to make a scientific project work? No, even when things go in circles, there still are scientific things to do! Just make new kinds of theories -- about those circles themselves! The older theories only tried to hide the circularities. But that lost all the richness of our wondrous human meaning-webs; the networks in our human minds are probably more complex than any other structure Science ever contemplated in the past. Accordingly, the detailed theories of Artificial Intelligence will probably need, eventually, some very complicated theories. But that's life, too.

Let's go back to what numbers mean. This time, to make things easier, we'll think about Three. I'm arguing that Three, for us, has no one single, basic definition, but is a web of different processes that each get meaning from the others. Consider all the roles "Three" plays. One way we tell a Three is to recite "One, Two, Three", while pointing to the different things. To do it right, of course, you have to (i) touch each thing once and (ii) not touch any twice. One way is to count out loud while you pick up each object and remove it. Children learn to do such things in their heads or, when that's too hard, to use tricks like finger-pointing. Another way to tell a Three is to use some Standard Set of Three things. Then bring your set of things to the other set, and match them one-to-one: if all are matched and none are left, then there were Three. That "standard Three" need not be things, for words like "one, two, three" work just as well. For Five we have a wider choice. One can think of it as groups of Two and Three, or One and Four. Or, one can think of some familiar shapes -- a pentagon, an X, a Vee, a cross, an aeroplane; they all make Fives.

[Figure: five small circles arranged into such shapes -- a pentagon, an X, a Vee, a cross.]

Because each trick works in different situations, our power stems from being able to shift from one trick to another. To ask which meaning is correct -- to count, or match, or group -- is foolishness. Each has its uses and its ways to support the others. None has much power by itself, but together they make a versatile skill-system. Instead of flimsy links in a chain of definitions in the mind, each word we use can activate big webs of different ways to deal with things, to use them, to remember them, to compare them, and so forth. With multiply-connected knowledge-nets, you can't get stuck. When any sense of meaning fails, you can switch to another. The mathematician's way, once you get into the slightest trouble, you're stuck for good!

Why, then, do mathematicians stick to slender chains, each thing depending on as few things as possible? The answer is ironic: mathematicians want to get stuck! When anything goes wrong, they want to be the first to notice it. The best way to be sure of that is to have everything collapse at once! To them, fragility is not bad, because it helps them find the perfect proof, lest any single thing they think be inconsistent with any other one. That's fine for Mathematics; in fact, that's what much of mathematics is. It's just not good Psychology.
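As an aside, the counting and matching tricks for telling a Three, described above, can be sketched as two small procedures that reach the same answer by different routes; that interchangeability is the point. The function names are invented for illustration:

```python
# Two of the "tricks" in the web of meanings for a number. Neither is the
# "correct" definition; each supports the other.

def count_by_reciting(things):
    """Recite "One, Two, Three..." while touching each thing exactly once."""
    number = 0
    for _ in things:
        number += 1
    return number

def matches_standard_set(things, standard=("one", "two", "three")):
    """Pair the things one-to-one against a Standard Set of Three:
    if all are matched and none are left over, there were Three."""
    pairs = list(zip(things, standard))
    return len(pairs) == len(things) == len(standard)

blocks = ["red", "green", "blue"]
print(count_by_reciting(blocks))     # -> 3
print(matches_standard_set(blocks))  # -> True
```

When one trick fails (no Standard Set at hand, say), a mind -- or a program -- with both can simply switch to the other.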
Let's face it, our minds will always hold some beliefs that turn out wrong.

I think it's bad psychology when teachers shape our children's mathematics into long, thin, fragile definition tower-chains, instead of robust cross-connected webs. Those chains break at their weakest links; those towers topple at the slightest shove. And that's what happens in mathematics class to the mind of a child who takes only a moment to watch a pretty cloud go by. The purposes of ordinary people are not the same as those of mathematicians and philosophers, who want to simplify by having as few connections as can be. In real life, the best ideas are as cross-connected as can be. Perhaps that's why our culture makes most children so afraid of mathematics. We think we help them get things right, by making things go wrong most times! Perhaps, instead, we ought to help them build more robust networks in their heads.

================== CASTLES IN THE AIR ==================

The secret of what something means lies in the ways that it connects to all the other things we know. The more such links, the more a thing will mean to us. The joke comes when someone looks for the "real" meaning of anything. For, if something had just one meaning, that is, if it were only connected to just one other thing, then it would scarcely "mean" at all!

That's why I think we shouldn't program our machines that way, with clear and simple logic definitions. A machine programmed that way might never "really" understand anything -- any more than a person would. Rich, multiply-connected networks provide enough different ways to use knowledge that when one way doesn't work, you can try to figure out why. When there are many meanings in a network, you can turn things around in your mind and look at them from different perspectives; when you get stuck, you can try another view. That's what we mean by thinking!

That's why I dislike logic, and prefer to work with webs of circular definitions. Each gives meaning to the rest. There's nothing wrong with liking several different tunes, each one the more because it contrasts with the others. There's nothing wrong with ropes -- or knots, or woven cloth -- in which each strand helps hold the other strands together -- or apart! There's nothing very wrong, in this strange sense, with having all one's mind a castle in the air!

To summarize: of course no computer could understand anything real -- or even what a number is -- if forced to use only single ways to deal with things. But neither could a child or philosopher. So such concerns are not about computers at all, but about our foolish quest for meanings that stand by themselves, outside any context. Our questions about thinking machines should really be questions about our own minds.

================== ARE HUMANS SELF-AWARE? ==================

Most people assume that computers can't be conscious, or self-aware; at best they can only simulate the appearance of this. Of course, this assumes that we, as humans, are self-aware. But are we? I think not. I know that sounds ridiculous, so let me explain.

If by awareness we mean knowing what is in our minds, then, as every clinical psychologist knows, people are only very slightly self-aware, and most of what they think about themselves is guess-work. We seem to build up networks of theories about what is in our minds, and we mistake these apparent visions for what's really going on. To put it bluntly, most of what our "consciousness" reveals to us is just "made up". Now, I don't mean that we're not aware of sounds and sights, or even of some parts of thoughts. I'm only saying that we're not aware of much of what goes on inside our minds.

When people talk, the physics is quite clear: our voices shake the air; this makes your ear-drums move -- and then computers in your head convert those waves into constituents of words. These somehow then turn into strings of symbols representing words, so now there's something in your head that "represents" a sentence. What happens next?

When light excites your retinas, this causes events in your brain that correspond to texture, edges, color patches, and the like. Then these, in turn, are somehow fused to "represent" a shape or outline of a thing. What happens then?

We all comprehend these simple ideas. But there remains a hard problem, still. What entity or mechanism carries on from there? We're used to saying, simply, that's the "self". What's wrong with that idea? Our standard concept of the self is that deep inside each mind resides a special, central "self" that does the real mental work for us, a little person deep down there to hear and see and understand what's going on. Call this the "Single Agent" theory. It isn't hard to see why every culture gets attached to this idea. No matter how ridiculous it may seem, scientifically, it underlies all principles of law, work, and morality.
Without it, all our canons of responsibility would fall -- of blame or virtue, of right or wrong. What use would solving problems be, without that myth; how could we have societies at all?

The trouble is, we cannot build good theories of the mind that way. In every field, as scientists, we're always forced to recognize that what we see as single things -- like rocks or clouds, or even minds -- must sometimes be described as made of other kinds of things. We'll have to understand that Self, itself, is not a single thing.

============ NEW THEORIES ABOUT MINDS AND MACHINES ============

It is too easy to say things like, "Computers can't do (xxx), because they have no feelings, or thoughts". But here's a way to turn such sayings into foolishness. Change them to read like this: "Computers can't do (xxx), because all they can do is execute incredibly intricate processes, perhaps millions at a time". Now, such objections seem less convincing -- yet all we did was face one simple, complicated fact: we really don't yet know what the limits of computers are. Now let's face the other simple fact: our notions of the human mind are just as primitive.

Why are we so reluctant to admit how little is known about how the mind works? It must come partly from our normal tendency to repress problems that seem discouraging. But there are deeper reasons, too, for wanting to believe in the uniqueness and inexplicability of Self. Perhaps we fear that too much questioning might tear the veils that clothe our mental lives.

To me there is a special irony when people say machines cannot have minds, because I feel we're only now beginning to see how minds possibly could work -- using insights that came directly from attempts to see what complicated machines can do. Of course we're nowhere near a clear and complete theory -- yet. But in retrospect, it now seems strange that anyone could ever have hoped to understand such things before knowing much more about machines. Except, of course, if they believed that minds are not complex at all.

Now, you might ask, if the ordinary concept of Self is so wrong, what would I recommend in its place? To begin with, for social purposes, I don't recommend changing anything -- it's too risky. But for the technical enterprise of making intelligent machines, we need better theories of how to "represent", inside computers, the kinds of webs of knowledge and knowhow that figure in everyone's common-sense knowledge systems. We must develop programs that know, say, what numbers mean, instead of just being able to add and subtract them. We must experiment with all sorts of common sense knowledge, and knowledge about that as well.

Such is the focus of some present-day AI research. True, most of the world of "Computer Science" is involved with building large, useful, but shallow practical systems; still, a few courageous students are trying to make computers use other kinds of thinking, representing different kinds of knowledge, sometimes in several different ways, so that their programs won't get stuck at fixed ideas. Most important of all, perhaps, is making such machines learn from their own experience. Once we know more about such things, we can start to study ways to weave these different schemes together. Finally, we'll get machines that think about themselves and make up theories, good or bad, of how they, themselves, might work. Perhaps, when our machines get to that stage, we'll find it very easy to tell it has happened. For, at that point, they'll probably object to being called machines. To accept that will be difficult, but only by this sacrifice will machines free us from our false mottos.

================== KNOWLEDGE AND COMMON SENSE ==================

We've all enjoyed those jokes about the stupid and literal behavior of computers. They send us silly checks and bills for $0.00. They can't tell when we mean "hyphen" from when we mean "minus". They don't mind being caught in endless loops, doing the same thing over again a billion times. This total lack of common sense is one more reason people think that no machine could have a mind. It's not just that they do only what they're told; it's also that they're so dumb it's almost impossible to tell them how to do things right.

Isn't it odd, when you think about it, how even the earliest AI programs excelled at "advanced" subjects, yet had no common sense? A 1961 program written by James Slagle could solve calculus problems at the level of college students; it even got an A on an MIT exam.
But it wasn&#8217;t till<br \/>\naround 1970 that we managed to construct a robot programs that could see<br \/>\nand move well enough to handle ordinary things like children&#8217;s building<br \/>\nblocks and do things like stack them up, take them down, rearrange them,<br \/>\nand put them in boxes.<\/h4>\n<h4>Why could we make programs do those grown-up things before we could<br \/>\nmake them do those childish things? The answer is a somewhat<br \/>\nunexpected paradox: much &#8220;expert&#8221; adult thinking is basically much<br \/>\nsimpler than what happens in a child&#8217;s ordinary play! It can be harder to<br \/>\nbe a novice than to be an expert! This is because, sometimes, what an<br \/>\nexpert needs to know and do can be quite simple &#8212; only, it may be very<br \/>\nhard to discover, or learn, in the first place. Thus, Galileo had to be smart<br \/>\nindeed, to see the need for calculus. He didn&#8217;t manage to invent it. Yet any<br \/>\ngood student can learn it today.<\/h4>\n<h4>The surprising thing, thus, was that when it was finished, Slagle&#8217;s<br \/>\nprogram needed only about a hundred &#8220;facts&#8221; to solve its college-level<br \/>\ncalculus problems. Most of them were simple rules about algebra. But<br \/>\nothers were about how to guess which of two problems is likely to be<br \/>\neasier; that that kind of knowledge is especially important, because it<br \/>\nhelps the program make good judgments about what to do next. Without<br \/>\nthis such programs only thrash about; with it they seem much more<br \/>\npurposeful. Why do human students take so long to learn such rules? We<br \/>\ndo not know.<\/h4>\n<h4>Today we know much more about making such &#8220;expert&#8221; programs &#8212; but<br \/>\nwe still don&#8217;t know much more about making programs with more<br \/>\n&#8220;common sense&#8221;. Consider all the different things that children do, when<br \/>\nthey play with their blocks. 
To build a little house, one has to mix and<br \/>\nmatch many different kinds of knowledge: about shapes and colors, space<br \/>\nand time, support and balance, stress and strain, speed, cost, and keeping<br \/>\ntrack. An expert sometimes can get by with deep but narrow bodies of<br \/>\nknowledge &#8211; but common sense is, technically, a lot more complicated.<\/h4>\n<h4>Most ordinary computer programs do just the things they&#8217;re programmed<br \/>\nfor. Some AI programs are more flexible; when anything goes wrong,<br \/>\nthey can back up to some previous decision and try something else. But<br \/>\neven that is much too crude a base for much intelligence. To make them<br \/>\nreally smart, we&#8217;ll have to make them more reflective. A person tries,<br \/>\nwhen things go wrong, to understand what&#8217;s going wrong, instead of just<br \/>\nattempting something else. We look for causal explanations, or excuses,<br \/>\nand, when we find them, add them to our networks of belief and<br \/>\nunderstanding. We do intelligent learning. Someday programs, too, could<br \/>\ndo such things &#8212; but first we&#8217;d need a lot more research to find out how.<\/h4>\n<h4>================== UNCONSCIOUS FEARS AND PHOBIAS. ==================<\/h4>\n<h4>I&#8217;ll bet that when we try to make machines more sensible, we&#8217;ll find that<br \/>\nlearning what is wrong turns out to be as important as learning what&#8217;s<br \/>\ncorrect. In order to succeed, it helps to know the likely ways to fail. Freud<br \/>\ntalked about censors in our minds that keep us from forbidden acts or<br \/>\nthoughts. And, though those censors were proposed to regulate our social<br \/>\nactivity, I think we use such censors, too, for ordinary problem solving &#8212;<br \/>\nto know what not to do. 
Perhaps we learn a new one each time anything<br \/>\ngoes wrong, by constructing a process to recognize similar<br \/>\ncircumstances, in some &#8220;subconscious memory&#8221;.<\/h4>\n<h4>This idea is not popular in contemporary psychology, perhaps because<br \/>\ncensors only suppress behavior, so their activity is invisible on the<br \/>\nsurface. When a person makes a good decision, we tend to ask what &#8220;line<br \/>\nof thought&#8221; lies behind it. But we don&#8217;t so often ask what thousand<br \/>\nprohibitions might have warded off a thousand bad alternatives. If<br \/>\ncensors work inside our minds, to keep us from mistakes and absurdities,<br \/>\nwhy can&#8217;t we feel that happening? Because, I suppose, so many thousands<br \/>\nof them work at once that, if you had to think about them, you&#8217;d never get<br \/>\nmuch done. They have to ward off bad ideas before you &#8220;get&#8221; those bad<br \/>\nideas.<\/h4>\n<h4>Perhaps this is one reason why so much of human thought is<br \/>\n&#8220;unconscious&#8221;. Each idea that we have time to contemplate must be a<br \/>\nproduct of many events that happen deeper and earlier in the mind. Each<br \/>\nconscious thought must be the end of processes in which it must compete<br \/>\nwith other proto-thoughts, perhaps by pleading little briefs in little<br \/>\ncourts. But all we sense of that are just the final sentences.<\/h4>\n<h4>And how, indeed, could it be otherwise? There&#8217;s no way any part of the<br \/>\nmind could know everything that happens in the rest. Our conscious<br \/>\nminds must be like high executives, who can&#8217;t be burdened with the small<br \/>\ndetails. There&#8217;s only time for summaries from other, smaller parts of the<br \/>\nmind that know much more about much less; the ones that do the real<br \/>\nwork.<\/h4>\n<h4>================== SELF-CONSCIOUS COMPUTERS. 
==================<\/h4>\n<h4>Then, is it possible to program a computer to be self-conscious? People<br \/>\nusually expect the answer to be &#8220;no&#8221;. What if we answered that machines<br \/>\nare capable, in principle, of even more and better consciousness than<br \/>\npeople have?<\/h4>\n<h4>I think this could be done by providing machines with ways to examine<br \/>\ntheir own mechanisms while they are working. In principle, at least, this<br \/>\nseems possible; we already have some simple AI programs that can<br \/>\nunderstand a little about how some simpler programs work. (There is a<br \/>\ntechnical problem about the program being fast enough to keep up with<br \/>\nitself, but that can be solved by keeping records.) The trouble is, we still<br \/>\nknow far too little to make programs with enough common sense to<br \/>\nunderstand even how today&#8217;s simple AI problem-solving programs work.<br \/>\nBut once we learn to make machines that are smart enough to understand<br \/>\nsuch things, I see no special problem in giving them the &#8220;self-insight&#8221;<br \/>\nthey would need to understand, change, and improve themselves.<\/h4>\n<h4>This might not be so wise to do. But what if it turns out that the only way<br \/>\nto make computers much smarter is to make them more self-conscious?<br \/>\nFor example, it might turn out to be too risky to assign a robot to<br \/>\nundertake some important, long-range task, without some &#8220;insight&#8221; about<br \/>\nits own abilities. If we don&#8217;t want it to start projects it can&#8217;t finish, we&#8217;d<br \/>\nbetter have it know what it can do. If we want it versatile enough to solve<br \/>\nnew kinds of problems, it may need to be able to understand how it<br \/>\nalready solves easier problems. In other words, it may turn out that any<br \/>\nreally robust problem solver will need to understand itself enough to change<br \/>\nitself. 
Then, if that goes on long enough, why can&#8217;t those artificial<br \/>\ncreatures reach for richer mental lives than people have? Our own<br \/>\nevolution must have constrained the wiring of our brains in many ways.<br \/>\nBut we have more options now, since we can wire machines in any<br \/>\nway we wish.<\/h4>\n<h4>It will be a long time before we learn enough about common sense<br \/>\nreasoning to make machines as smart as people are. Today, we already<br \/>\nknow quite a lot about making useful, specialized, &#8220;expert&#8221; systems. We<br \/>\nstill don&#8217;t know how to make them able to improve themselves in<br \/>\ninteresting ways. But when we answer such questions, then we&#8217;ll have to<br \/>\nface an even stranger one. When we learn how, should we build<br \/>\nmachines that might be somehow &#8220;better&#8221; than ourselves? We&#8217;re lucky<br \/>\nthat we have to leave that choice to future generations. I&#8217;m sure they<br \/>\nwon&#8217;t want to build the things that well unless they find good reasons to.<\/h4>\n<h4>Just as Evolution changed man&#8217;s view of Life, AI will change mind&#8217;s view<br \/>\nof Mind. As we find more ways to make machines behave more sensibly,<br \/>\nwe&#8217;ll also learn more about our mental processes. In its course, we will<br \/>\nfind new ways to think about &#8220;thinking&#8221; and about &#8220;feeling&#8221;. Our view of<br \/>\nthem will change from opaque mysteries to complex yet still<br \/>\ncomprehensible webs of ways to represent and use ideas. Then those<br \/>\nideas, in turn, will lead to new machines, and those, in turn, will give us<br \/>\nnew ideas. 
No one can tell where that will lead and only one thing&#8217;s sure<br \/>\nright now: there&#8217;s something wrong with any claim to know, today, of<br \/>\nany basic differences between the minds of men and those of possible<br \/>\nmachines.<\/h4>\n","protected":false},"excerpt":{"rendered":"<p>WHY PEOPLE THINK COMPUTERS CAN&#8217;T Marvin Minsky, MIT First published in AI Magazine, vol. 3 no. 4, Fall 1982. Reprinted in Technology Review, Nov\/Dec 1983, and in The Computer Culture, (Donnelly, Ed.) Associated Univ. Presses, Cranbury NJ, 1985 Most people think computers will never be able to think. That is, really think. Not now or [&hellip;]<\/p>\n","protected":false},"author":274,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_mi_skip_tracking":false},"_links":{"self":[{"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/pages\/2747"}],"collection":[{"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/users\/274"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/comments?post=2747"}],"version-history":[{"count":0,"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/pages\/2747\/revisions"}],"wp:attachment":[{"href":"https:\/\/sites.evergreen.edu\/compcog17\/wp-json\/wp\/v2\/media?parent=2747"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}