The Turing test is perhaps the best-known detail of Alan Turing's work, if only because it is easy to understand. But it also approaches one of the deepest questions by asking whether a digital machine can think like a human being without attempting to prescribe in what human thinking itself consists. It poses only the comparative question concerning whether a digital machine's responses to questions are comparable to, or indistinguishable from, the responses a presumably intelligent human being would give to the same questions.
Wikipedia summarizes as follows: "The test was introduced by Turing in his 1950 paper 'Computing Machinery and Intelligence' while working at the University of Manchester. It opens with the words: 'I propose to consider the question, "Can machines think?"' Because 'thinking' is difficult to define, Turing chooses to 'replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.' Turing describes the new form of the problem in terms of a three-person game called the 'imitation game', in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: 'Are there imaginable digital computers which would do well in the imitation game?' This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that 'machines can think'." [Note that Turing is concerned with what is "imaginable", i.e. conceivable.]
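The three-person protocol described above can be sketched schematically. Everything in the following sketch (the function names, the toy respondents, the labelling scheme) is a hypothetical illustration of the protocol's structure, not Turing's own formulation:

```python
import random

def imitation_game(respondent_a, respondent_b, interrogator, questions):
    """Run one round of the imitation game (illustrative sketch).

    The interrogator sees only transcripts from two respondents hidden
    behind arbitrary labels, and must judge which label hides which.
    """
    labels = {"X": respondent_a, "Y": respondent_b}
    transcript = {label: [] for label in labels}
    for q in questions:
        for label, respondent in labels.items():
            transcript[label].append((q, respondent(q)))
    return interrogator(transcript), labels

# Toy respondents: by design they answer identically, so the
# interrogator has nothing to go on and can only guess.
def human(question):
    return "I would have to think about that."

def machine(question):
    return "I would have to think about that."

def naive_interrogator(transcript):
    return random.choice(["X", "Y"])  # a blind guess

guess, labels = imitation_game(human, machine, naive_interrogator,
                               ["Can machines think?"])
```

The point of the sketch is only that the test is defined entirely over the transcripts: nothing 'inside' either respondent enters the judgement.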
Significantly, Turing's paper was published in the influential, establishment journal Mind, and the entire discussion in the Wikipedia entry is couched in terms of the human being conceived as a subject endowed with interior consciousness, as if this question were settled, cut and dried for all time; in fact, as if it were a non-question that thinking does not have to worry about.
One of the best-known objections to the Turing test was formulated by John Searle under the name of the Chinese room. Wikipedia summarizes this objection thus: "John Searle has argued that external behaviour cannot be used to determine if a machine is 'actually' thinking or merely 'simulating thinking'. His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality." [Intentionality is the directedness of the mind toward something.]
Note that Searle's objection rests on the distinction between internal consciousness and external behaviour, a more than obvious objection for any philosopher, like Searle, steeped in and captive to the ontology of subject-object. Without the supposedly self-evident distinction between inside and outside consciousness the objection makes no sense and has no force at all. Searle's Chinese room objection begs the question whether human being itself can be adequately conceived as subjectivity endowed with intelligent internal consciousness at all.
Let us ponder the presupposition that the human being is a subject a little further. Turing's test is set up to determine whether a human subject, in conversational interplay both with a digital computer operating according to the algorithmic steps of Universal Turing Machines and, alternatively, with a living human being, conceived as a subject, is able to distinguish reliably between his or her interlocutors. In his paper, Turing is confident that a computer will one day pass the Turing test, becoming indistinguishable from a human interlocutor, thus vindicating Turing's own conception that human thinking is 'nothing other than' the computation of computable numbers carried out somehow by neuronal brain activity.
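The 'algorithmic steps' in question are of an extremely simple kind: read a symbol, write a symbol, move the head, change state. As a purely illustrative sketch (the machine shown is a standard textbook binary-increment machine, not one from Turing's 1936 paper), a minimal Turing-machine interpreter might look like:

```python
def run_turing_machine(tape, rules, state="start", blank="_", steps=1000):
    """Minimal one-tape Turing machine interpreter (illustrative sketch).

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    with move in {-1, +1}; the machine stops in state 'halt'.
    """
    tape = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Rules for binary increment: scan right to the end of the number,
# then add one with carry while moving back left.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "add"),
    ("add", "0"): ("1", -1, "done"),
    ("add", "1"): ("0", -1, "add"),
    ("add", "_"): ("1", -1, "done"),
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "halt"),
}

print(run_turing_machine("1011", rules))  # prints 1100, i.e. 11 + 1 = 12
```

Turing's conviction amounts to the claim that human thinking is, at bottom, 'nothing other than' a vast composition of steps of this kind.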
It is a human subject that [not who] is required to make a judgement about the status of his or her interlocutors: real human being or artificial computer? As subject, the human underlies and is the source of the judgement made. Note that 'sub-ject' means literally 'that which is thrown under'; it is the Latin translation of the ancient Greek ὑποκείμενον (hypokeimenon) which, in turn, means literally 'that which underlies'. [For the Greeks the 'subjects' were what today are called 'objects'. We live in a topsy-turvy world in many respects, a fact that doesn't seem to faze anyone.] It is thus presupposed for the Turing test that the human being underlies the judgement, but is the human being really the underlying, judging, discerning subject in this test situation in which thinking itself is at stake?
The judging, discerning human being already conceives him- or herself reflectively in some way as a human being, and in our age this reflective self-conception will inevitably be that of a living being (i.e. a kind of animal) endowed with interior consciousness and a mind embedded in that consciousness vis-à-vis the external world of objects. This self-conception inevitably also includes the preconception that thinking consciousness is somehow located in the brain, perhaps also connected with the rest of the body via the central nervous system. This latter preconception is highly convenient and axiomatic for today's neuroscience, with all its ongoing and fast-progressing research into the brain aimed at 'solving' the problem of what constitutes thinking as such. Without the inside/outside distinction there could be no neuroscience. The resolution of this problem goes hand in hand with ceaseless efforts to make Artificial Intelligence. The very endeavour under the name of AI makes no sense at all if there is not already the preconceived conviction that human thinking is basically 'nothing other than' computation, of which Turing himself was convinced.
This leaves open the possibility that, with the advancement of the self-serving conviction that thinking is to be conceived as computation carried out somewhere inside, the behaviour as well as the self-conception of human beings themselves adapts to that of digital computers running on algorithms, with the consequence that it becomes all the more likely that a machine can pass the Turing test. This eventuality is not a consequence of ever more superb supercomputers with petaFLOPS of computational power being built, but of human beings themselves conceiving themselves more and more as computers. In this scenario, the human subject is thus not only adapting to, but being absorbed by the cyberworld, becoming indistinguishable from a cyborg by conceiving themselves as cyborgs. The underlying subject thus becomes in the human mind an algorithmically operated what. The cyberworld here is not only an artificially built electronic network run by algorithms, nor only an electronic medium in which we immerse ourselves, but also, and even prior to its being built as an electronic medium, a conception in the mind, i.e. a state of mind.
Those who promote, and are fired up and excited by, the approximation of human being to computational being composed of Universal Turing Machines will presumably be among the first to judge that a computer has passed the Turing test. In so doing, they will be unwittingly begging the question concerning human being itself without even noticing it. In any case, the hermeneutic-ontological conception of human being as animal endowed with intelligent consciousness is no ontological bulwark against this possibility lying on the horizon of our historical future today.
The question, Who is the human being? is not even on today's philosophical agenda. It is dismissed without a second thought if it obliquely crops up somewhere. The reason is that academic philosophy has today become the handmaiden and whore of effective modern science, either stridently defending the unquestioned ontological presuppositions of modern science or timidly and vainly seeking some kind of rapprochement with the more strident and aggressive analytic and post-analytic philosophy that so far maintains its hegemony in the academy.
Related: Interview with Katina Michael.