In more religious times, people worried that art's imitation of nature carried a disruptive presumption of man playing at God. Computers playing at man is the modern dilemma and in a Godless age, where artists become the fount of spirituality, replicating the artist's actions is the new heresy. If computers can create beauty, perhaps we'd better find faith in God again. Paul Fisher, The Guardian
Even the casual observer of our post-religious times must be starting to cotton on by now that old distinctions between real/unreal, alive/not alive, natural/artificial and so on, right up to good/evil, cannot be applied in a world of cryogenics, biotechnology, nanotechnology, instant communication and quantum physics. Time to shake the bag and try a new combination.
We can all name names – Spinoza's posthumously published shock-horror claim three centuries ago that it was possible to devise a scientific psychology fully consistent with our knowledge of how the body works, an idea which started the man-as-machine bandwagon; Newton, for giving us the mathematical and conceptual basis for the Western science which made so much of this possible; Darwin, for kicking away the crutch of religion; or Freud, for forcing us to realise that maybe the rational mind was not all it was cracked up to be after all.
But pointing a finger at the past solves no present dilemmas. There's no going back to a pre-relative world, no option but to just keep peeling away the layers to see what's (in and out) there.
We have long become used to our machines outstripping us in many departments, including ones once important to the survival of the individual and held in great esteem by society, such as strength, speed, stamina and, more recently, calculating ability. But the idea that a computer might equal or even overtake us in the capacities of the mind, might become creative, is seen by many as a threat, an outrage, even a blasphemy. At the centre of these issues, acting like a magnifying glass that focusses and concentrates the technical and philosophical questions, lies artificial intelligence (ai), the most personal attack on traditional definitions of humanity.
Lay people – ai has always had the power to incite popular media interest with Electric Brain Will Rule World-type headlines – and academics alike have criticised the development of ai as fundamentally misguided, dehumanising and ideologically pernicious, undermining human agency and responsibility and presenting a travesty of human potential. Scepticism and mockery are now commonly accompanied by fears that if we allow humanity's image to be moulded in the likeness of a computer, human values must take second place or even be negated altogether. The deepest anxiety is that such theories and technologies will impoverish our image of ourselves and increase the individual's sense of helplessness in the face of life's challenges.
If we are nothing but machines, then the social practices and personal attitudes that value our specifically human qualities must be sentimental illusions. If our minds are nothing but computers, what then?
But paradoxically, ai is currently helping some thinkers investigate how such mental processes as purpose and subjectivity are possible. ai's main achievement is precisely that it forces us to appreciate the enormous subtlety of the human mind. Computer models have allowed simplistic theories of mind, language and perception to be trashed. In fact it wasn't until we tried to apply these theories to computers that we realised how oversimplified they were – and, at the same time, that it was not our ability to play chess, schedule industrial processes and calculate pi to a thousand places that was so remarkable, but the simple skills we all have: interpreting a wink across a table from a friend, walking across a crowded room without bumping into anyone, recognising 100 different designs of chair as chairs and learning to speak without being told the rules of grammar.
Antecedents of the computer date back to the 17th century, when Leibniz (the patron saint of cybernetics, Norbert Wiener called him) and Pascal designed arithmetic machines.
1 See Bert Mulder's review of J.P. Bischoff's Versuch einer Geschichte der Rechenmaschine.
ai's antecedents go back even further: the dream of sexless reproduction or artificial consciousness can be seen in the ancient Greek myth of Pygmalion and Galatea, in the alchemists' homunculus and in the Golem of Jewish Kabbala fame.
Leaving the fairy stories behind, from the 1820s Charles Babbage designed his Difference Engines
2 So called because their operation is based on the method of finite differences used by contemporary human 'computers' in the preparation of mathematical tables.
based on the more dependable (but equally obscure to the uninitiated) Kabbala of mathematical integration. Neither these nor his later Analytical Engine was built because of the limitations of Victorian mechanical
engineering,
3 The British Science Museum built Difference Engine No. 2 last year out of 4,000 iron, steel and gunmetal parts. It weighs 3 tonnes and calculates 7th-order polynomials to 30 decimal places, mechanically, by cranking a handle.
but the latter would have been the first programmable mechanical computer – and would have been programmed by Lady Ada Lovelace,
4 Whose father, Lord Byron, was coincidentally present the night Mary Shelley conceived her Frankenstein myth.
who predicted that the Analytical Engine would be able to act on other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations (...) such as those between pitched sounds in the science of harmony and of musical composition (...) the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent (B. Toole, Ada, the Enchantress of Numbers).
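What footnote 2's method of finite differences amounts to can be sketched in a few lines of modern code (an illustration, of course, not Babbage's design): once the first value and the successive differences of a polynomial have been set up, every further table entry needs nothing but repeated addition – exactly what a crank-driven stack of gear wheels can supply.

```python
# A minimal sketch (in code, not gear wheels) of the method of finite
# differences the Difference Engines mechanised: seed the machine with the
# first value and its successive differences, then tabulate by addition alone.

def difference_table(coeffs, start, count):
    """Tabulate the polynomial with coefficients [a0, a1, ...] at start, start+1, ..."""
    degree = len(coeffs) - 1

    def p(x):
        return sum(a * x**i for i, a in enumerate(coeffs))

    # Seed the 'machine': the first degree+1 values fix every difference column.
    seed = [p(start + i) for i in range(degree + 1)]
    columns = [seed]
    for _ in range(degree):
        prev = columns[-1]
        columns.append([b - a for a, b in zip(prev, prev[1:])])
    registers = [col[0] for col in columns]     # value, 1st difference, 2nd difference, ...

    table = []
    for _ in range(count):
        table.append(registers[0])
        # One turn of the handle: each register absorbs the one below it.
        for i in range(degree):
            registers[i] += registers[i + 1]
    return table

# x^2 + x + 41, a polynomial Babbage reportedly liked demonstrating with:
print(difference_table([41, 1, 1], 1, 6))   # [43, 47, 53, 61, 71, 83]
```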
The modern search for ai began in 1950, when Cambridge mathematician and wartime cryptographer Alan Turing published a popular reworking of the core
concepts he had first outlined in 1936, claiming that computers could – and by 2000, would – imitate human intelligence perfectly. He devised and gave his
name to a (purely behavioural) test for establishing whether this had been accomplished.
The initial reaction in England was to scoff, but the academics
5 Turing believed this was because he was gay, and wrote a syllogism expressing this belief shortly before killing himself: Turing thinks machines could think. Turing lies with men. Therefore machines cannot think.
soon got over this and the search for ai began at what seemed the most obvious starting point: design a computer modelled on the brain. Simple, transistor-based learning networks were built in the 50s, but neither the technology nor the theory was sufficiently developed to get anywhere. General-purpose (von Neumann) logic machines had arrived on the scene, and seemed to offer greater scope. By the end of the decade, the question Could a computer think? had been rephrased to Could a machine that manipulated physical symbols according to structure-sensitive rules think?
At the time there were good reasons for believing yes: Church's Thesis, that every effectively computable function is recursively computable, and Turing's demonstration that any recursively computable function can be computed in finite time by a maximally simple sort of symbol-manipulating device (known as a Universal Turing Machine). Together, these ideas mean that a digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input-output function. In other words, it can display any systematic pattern of responses to the environment whatsoever, and therefore a suitably programmed computer would be able to pass the (purely behavioural) Turing Test for conscious intelligence. The only problem left was to identify the complex function governing human responses to the environment and then write the program (the set of recursively applicable rules) by which a symbol-manipulating machine would compute it. These goals became the kernel of the classical or hard ai research program.
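What such a maximally simple symbol-manipulating device amounts to is easier to show than to describe. The sketch below is an illustration rather than Turing's own formulation: a tape of symbols, a read/write head and a finite table of structure-sensitive rules of the form (state, symbol) → (new symbol, move, new state). This particular rule table merely adds one to a binary number, but the same loop will run any table it is given – which is the whole point.

```python
# A minimal sketch of a Turing-style symbol-manipulating device: a sparse tape,
# a head position, a current state and a table of structure-sensitive rules.

def run_turing_machine(rules, tape, state='start', head=0, max_steps=10_000):
    tape = dict(enumerate(tape))               # sparse tape, blank cell = ' '
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = tape.get(head, ' ')
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {'L': -1, 'R': 1}[move]
    cells = [tape[i] for i in sorted(tape)]
    return ''.join(cells).strip()

# Rule table for binary increment: walk to the rightmost digit, then carry left.
increment = {
    ('start', '0'): ('0', 'R', 'start'),
    ('start', '1'): ('1', 'R', 'start'),
    ('start', ' '): (' ', 'L', 'carry'),
    ('carry', '1'): ('0', 'L', 'carry'),   # 1 plus carry -> 0, keep carrying
    ('carry', '0'): ('1', 'L', 'halt'),    # 0 plus carry -> 1, done
    ('carry', ' '): ('1', 'L', 'halt'),    # ran off the left edge: new leading digit
}

print(run_turing_machine(increment, '1011'))   # -> '1100'
```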
Protestations from psychology labs and philosophy seminar rooms that digital computers were not very 'brain-like' were brushed aside with the theoretically appealing notion that the physical make-up of the machine has nothing to do with what functions it computes; what you can compute doesn't depend on what you're made of,
meat or silicon.
6 The point had been made through the 60s that thinking was a non-material process in an immaterial soul – but it had little impact on ai research, having no evolutionary or explanatory mechanism behind it. It didn't fit in with the then-dominant logical positivist worldview that science was all.
Secondly, according to Turing's Principle of Equivalence, the details of any machine's functional architecture (the actual layout of the circuits) are also irrelevant. These points were the full rationale behind hard ai, and its proponents (still) believe that it's only a matter of time before computers can do everything a mind can do. Mental activity, they claim, is simply the carrying out of some well-defined sequence of operations (an algorithm). The difference between a brain, including all its higher activities, and a thermostat is simply a degree of algorithmic complexity. Careers have been built on this assumption and its corollary: that when that algorithm is found, it will be runnable on a computer.
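The thermostat end of that spectrum is at least easy to write down. A minimal sketch follows (an invented example – the target temperature and switching band are illustrative, not taken from anywhere in the literature), reducing the thermostat's entire 'mental life' to a well-defined sequence of operations that could run equally well in silicon, clockwork or meat.

```python
# A toy thermostat as an algorithm: the claim of hard ai is that a brain
# differs from this only in algorithmic complexity. Target and hysteresis
# values are illustrative assumptions.

def thermostat(temperature, heater_on, target=20.0, hysteresis=0.5):
    """Return the heater's next state given the current reading."""
    if temperature < target - hysteresis:
        return True        # too cold: switch the heater on
    if temperature > target + hysteresis:
        return False       # too warm: switch it off
    return heater_on       # within the comfort band: leave it as it is

# The same input-output function, whatever hardware evaluates it.
for reading in (18.0, 19.8, 20.7, 21.2):
    print(reading, thermostat(reading, heater_on=False))
```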
In its most extreme form, writers like Hans Moravec (Mind Children) have used Turing's Principle of Equivalence to claim that since the specific hardware is unimportant, software is all-important. What is our identity? they ask. It's not the particular constellation of atoms at time x: those atoms are replaced several times over in the course of a life. It's the pattern that's important. They claim that just as the words on a word processor can be saved to disk and reopened in the future exactly the same, so a person's individuality could be encoded in a similar form – indeed, the person's sense of awareness would travel with them into the disk.
These claims rely on the presumptions that the brain is a digital computer and that thinking calls on no specific physical phenomena that might demand the particular physical structure brains have – presumptions that in the last few years have been seriously challenged.
Now the Bad News
Back in the 60s, the initial results had looked good: computers were programmed to do all sorts of smart things like play chess, engage in simple dialogue, solve algebraic problems and so on. Performance improved continually as machines got bigger and faster and used longer, more complex programs. These rule-based systems consisted of a database of knowledge, its rules often extracted or engineered from a human expert, plus a management system to apply those rules.
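A minimal sketch of such a rule-based system, with a few toy rules invented for illustration rather than drawn from any real expert system: the 'database of knowledge' is a handful of if-then rules, and the 'management system' simply keeps firing whichever rules match until nothing new can be concluded.

```python
# A toy rule-based system: a database of if-then rules plus a management
# system that applies them (forward chaining) until no new facts appear.
# The rules themselves are invented for illustration.

rules = [
    ({'contains alcohol'},              'is intoxicating'),
    ({'is intoxicating', 'is a drink'}, 'needs age check'),
    ({'is served cold', 'is a drink'},  'goes in a chilled glass'),
]

def forward_chain(facts, rules):
    """Keep firing any rule whose conditions are all satisfied by the known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({'is a drink', 'contains alcohol', 'is served cold'}, rules))
# -> also derives 'is intoxicating', 'needs age check', 'goes in a chilled glass'
```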
But many of the things researchers most wanted to do with ai - artificial vision, speech synthesis, automatic machine translation - proved almost completely impossible. In 1972, philosopher Hubert Dreyfus argued that the pattern of failure suggested computers were missing the vast treasure store of experience or inarticulate background knowledge that all humans have. MIT's ai guru Marvin Minsky came to appreciate this when he tried to build a block-stacking robot: it
kept trying to stack the blocks from the top down, repeatedly releasing them in mid-air. No one had told it about gravity.
The experience, according to Minsky, changed his views of what 'intelligence' was. The secret of our success, he claimed, is not some spark of creativity but the simple common sense we pick up in our day-to-day existence. For computers to be intelligent, the argument now went, they would have to be educated from the ground up.
This is what Doug Lenat is trying to do in Austin, Texas. He's building a database of common-sense knowledge. The Cyc (encyclopedia) Project aims to write down as much as possible of what every child knows, taking newspaper clippings and encyclopedia entries and asking what a computer would need to know to understand the piece. For example, to understand The man drank the beer in the glass, it would have to know not only what beer and glass were (and distinguish this from the glass in a window), but also that, to be drunk out of, a glass must have its open end pointing up for the beer to stay in, and so on.
Lenat estimates he will need a few million entries to approximate everyday knowledge (compared to an estimated 20,000 pieces of knowledge needed for an expert system to encapsulate everything a law student learns in three years).
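What those entries might look like can be caricatured in a few lines (the representation below is invented for illustration – Cyc's real language is far richer): a handful of hand-written assertions about beer and glasses, and a query that has to chain them together before the beer sentence makes sense.

```python
# A made-up sketch of the kind of entry a common-sense database has to hold.
# Every assertion here is an illustration invented for this example.

common_sense = {
    'beer':           {'is a': 'liquid', 'typically held by': 'drinking glass'},
    'drinking glass': {'is a': 'container', 'open end must point': 'up',
                       'distinct from': 'window glass'},
    'window glass':   {'is a': 'material'},
    'liquid':         {'stays in open container only if': 'opening points up'},
}

def can_drink_from(vessel):
    """Chain two assertions: the vessel is a container and it is oriented correctly."""
    entry = common_sense.get(vessel, {})
    return entry.get('is a') == 'container' and entry.get('open end must point') == 'up'

print(can_drink_from('drinking glass'))   # True
print(can_drink_from('window glass'))     # False: the wrong sense of 'glass'
```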
But work on vision in the late 70s and early 80s showed the processing involved to be hugely intensive, taking far more time than any biological system needs. Despite a computer being a million times faster than a nerve, and clock frequencies being many times more rapid than any signal picked up by the brain, the tortoise still outran the hare. Constructing a relevant knowledge base is hard enough; accessing the contextually relevant bits gets harder as the database gets bigger.
The strong ai researchers admit that more than a database is needed to think like a human. What is needed is what Minsky calls a Society of Mind. He
illustrates what he means by looking at vision. What makes human vision so versatile is the many ways we have of interpreting a visual scene and the fact that we can use them all at once: to tell how far away an object is we may process its apparent size, its brightness, the shadows it casts, its parallax motion and a dozen other visual clues. Although no single method works all the time, at least one of them usually does. Programs already exist that allow computers to use one – but only one – of them at a time. He speculates that we may be able to make an expert system which uses them all, but this is impossible until we have a program that allows each expert to access the body of knowledge of the others – and we don't know how humans do this yet.
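The idea can be caricatured in code (an invented toy, not Minsky's architecture, with made-up cue functions and numbers): several independent depth 'experts', each of which fails on some scenes, are polled together, and the estimate comes from whichever of them managed to answer.

```python
# A toy 'society' of depth experts: each cue is unreliable on its own, but
# polling them all usually yields an answer. Cues and constants are invented.

def depth_from_size(apparent_size, known_size=1.75):
    # Bigger on the retina usually means closer (fails for unfamiliar objects).
    return known_size / apparent_size if apparent_size else None

def depth_from_parallax(shift, baseline=0.065):
    # More shift between two viewpoints means closer (fails for distant objects).
    return baseline / shift if shift else None

def depth_from_shadow(shadow_offset):
    # A crude cue, often unavailable (returns None when there is no shadow).
    return shadow_offset * 10 if shadow_offset else None

def estimate_distance(scene):
    """Ask every expert; average whichever ones managed to produce an answer."""
    opinions = [
        depth_from_size(scene.get('apparent_size')),
        depth_from_parallax(scene.get('parallax_shift')),
        depth_from_shadow(scene.get('shadow_offset')),
    ]
    usable = [d for d in opinions if d is not None]
    return sum(usable) / len(usable) if usable else None

# No single cue works every time, but at least one usually does.
print(estimate_distance({'apparent_size': 0.5, 'parallax_shift': 0.02}))
print(estimate_distance({'shadow_offset': 0.3}))   # only the shadow expert answers
```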