It is hard even to begin a debate about the possibility of artificial intelligence, because so much semantic rubble needs to be cleared away before we can agree on what we are talking about.
For a start, does artificial intelligence imply artificial self-awareness, artificial consciousness? In my view, it should, as otherwise we aren’t really talking about anything except an advanced machine.
But some would disagree and say that the issue of consciousness is unimportant; the point is the building of an expert system to simulate human intelligence for practical purposes.
This may well turn out to be a question worth deciding since quite a few pundits are predicting that artificial intelligence (AI) will be achieved in the present century, and will pose a huge threat to human supremacy on this planet.
Another point, less urgent perhaps but equally interesting philosophically, is: can any intelligence be artificial? If a machine becomes self-aware, should not that condition be viewed as having been triggered, rather than “created”, by the human constructors of the physical fabric of the machine? After all, parents when they beget children are regarded as transmitters rather than creators of life.
In my view, if a machine is constructed that possesses a level of recursive complexity which causes self-awareness, this will be thanks to some attraction which complexity exerts on whatever level of reality governs the arrival of consciousness. In Philip K Dick’s phrase, the machine has “caught” life.
Yet another point: it is possible, in the more distant future, that machine automation may advance to such an extent that automated self-adjustments and adaptations start to bear a close analogy to biological evolution. In this case, the idea of “artificiality” is shunted further into the background, for machines in effect become part of nature, responding to natural conditions just as other creatures do. This idea is brilliantly portrayed in the Poul Anderson story, “Epilogue” (1962). Electronic templates, containing full information on the machines’ design, play the part of DNA. Hard radiation affects these recordings as it would affect an organic gene, and consequent mutations play their part in natural selection. The higher machines have something analogous to sexual reproduction (“…his body pattern flowed in currents and magnetic fields through hers… the two patterns heterodyned and deep within her the first crystallization took place”).
In the Orange Project, a series of tales set on the giant planet Uranus – not the Uranus familiar to astronomers but its more real, archetypal self – the process of machine evolution has resulted in a category of beings, the Ghepions, which are part-organic components of cities, transportation devices or even of the landscape.
Having considered all this, what is left of the usefulness of the Turing Test?
This is the test suggested by Alan Turing (1912-54) in his 1950 paper “Computing Machinery and Intelligence”. To carry out the test, somebody questions both an unseen human and an unseen machine, and tries to distinguish between them by the quality of their answers. If the machine answers so well that it cannot be told apart from the human respondent, it has passed the test and it can be viewed as a successful imitator of the human mind.
Perhaps Turing himself was content to leave it there. If we are merely talking about assessing the degree of imitation, the test is a good one. But of course, it is impossible to leave it there, as wider philosophical issues cry out for attention. It is a pity that some writers such as Arthur C Clarke seem to think the Turing Test is something more profoundly useful than it is. It is as though they are saying that the question of self-awareness does not matter.
On the other hand, perhaps I am underestimating Clarke; perhaps when he says that we are all machines (thus making the point that it is the pattern that counts and not the material), he is making a case for transcendent consciousness possessed by both organic and inorganic organisms once they reach a certain level of complexity. In other words, he is saying that complexity is consciousness – which is either a wise or a stupid thing to say, depending on whether, at the back of his mind, he is allowing for a higher level of reality into which consciousness can fit.
If he is not allowing for that higher level of reality, then all he can allow is a lot of particles and force fields interacting on the same monistic level. In which case no matter what the complexity, there is no room for anything qualitative. Without transcendence, you can’t even have sentience, let alone intelligence.
I base this statement not on my religious nature but on the absolutely fundamental fact/value distinction in philosophy. This distinction has never been convincingly refuted and must surely count as one of the few solid conclusions which philosophers have achieved in their millennia of intellectual strivings and disputation. You can’t derive a value from a fact. That is, you can’t derive an ought from an is without already presupposing a “better” and a “worse”.
If you don’t believe me, try it. Is life better than death? Yes? Why? Because life adds complexity and variety to the universe? But who says complexity and variety are better than simplicity and monotony? No good arguing from the fact alone. Value comes from its own dimension. It has its own origin, its own aspect or level of reality. If you could derive a value from a fact it would immediately cease to be a value. (“This mother died to save her children!” “Ah, she was only obeying her evolutionary imperative.”)
Perhaps sometime in the next few decades, a machine will “wake up” with a personality. Some people may use this to argue against spiritual beliefs, as though it proved we had brought mind down to earth and shown it was nothing but a refined circuit board. On the contrary, I would say: the creation of an artificial intelligence will be the final nail in the coffin of materialism.