“The Theological Objection”

This month’s post considers a little-remembered part of Turing’s otherwise famous 1950 paper on AI.

Just for once, this month, let’s not skirt around the generally problematic issue of ‘real intelligence’ compared with ‘artificial intelligence’ and instead ask what it means for a machine (a robot, if you like, for simplicity) to have the whole package: not just some abstract ability to calculate, process, adapt, etc., but ‘human intelligence’, ‘self-awareness’, ‘sentience’; the ‘Full Monty’, as it were.  Star Trek’s ‘Data’, if you like, assuming we’ve correctly understood what the writers had in mind.

Of course, we’re not really going to build such a robot, nor even come anything close to designing one.  We’re just going to ask whether it’s possible to create a machine with ‘consciousness’.  Even that’s fraught with difficulty, however, because we may not be able to define ‘consciousness’ to everyone’s satisfaction, but let’s try the simple, optimistic version: ‘consciousness’ broadly meaning ‘a state of self-awareness like a human’s’.  Is that possible?

Well, there’s no place to start other than by considering what consciousness is.  Where does consciousness come from?  What are the rules?  What can have consciousness and what can’t?  What are the requirements?

OK, this is difficult.  We’re never going to all agree – that’s not the point.  But, just like the Fermi Paradox, the question must have an answer.  However unsatisfactory any given answer might be, one of them must be right.  So what are the possibilities?

Well, here are those generally regarded as the more credible contenders; not each and every one individually, but roughly combined into ‘types’ of explanation:

  1. Consciousness is just the result of neural complexity.  Build something with a big enough ‘brain’ and it will acquire consciousness.  There’s possibly some sort of critical neural mass and/or degree of connectivity for this to happen.
  2. Similar to 1 but the brain needs energy.  It needs power (food, fuel, electricity, etc.) to make it work.
  3. Similar to 2 but with some symbiosis.  A physical substrate is needed to carry signals of a particular type.  The relationship between the substrate and signals (hardware and software) takes a particular critical form (maybe the two are indistinguishable) and we don’t know what it is yet.
  4. Similar to 3 but there’s a biological requirement in there somewhere.  Consciousness is the preserve of carbon life forms, perhaps.  How and/or why we don’t understand yet.
  5. Similar to 4 but whatever it is that’s special about carbon-based life will remain one of the great unknowns of the universe.  There might not be anything particularly remarkable about it from a philosophical perspective: it’s just beyond us.
  6. Similar to 5 but there is actually something very ‘special’ about the whole thing.  Way beyond us.
  7. A particular form of 6.  Consciousness is somehow separate from the underlying hardware but still can’t exist without it.
  8. Extending 7.  Consciousness is completely separate from the body and could exist independently.  We might call it a ‘soul’.
  9. Taking 8 to the limit?  Consciousness, the soul, comes from God.

Now, there’s a sort of progression suggested here.  Of course, the human brain could satisfy any of these explanations, which is why they’re credible.  (‘Credible’ here means logically non-contradictory: it’s got bugger-all to do with any of our particular preconceptions.)  It’s generally argued, however, that a robot could only get so far down the list.  In other words, with technology improving all the time, we’ve already done 1 and 2 and, with further advances, 3, possibly even 4, might be manageable.  So, if the first four aren’t the right definitions, we’ll eventually prove it by building a robot that satisfies all the requirements but doesn’t achieve sentience.  (We’ll not consider here the somewhat thorny question of how we’d necessarily know!)

The other way this ordering is often informally portrayed is as some progression from the ‘ultra-scientific’ to the ‘ultra-spiritual’.  Many people’s version of ‘common sense’ comes somewhere in the middle, often without them being able to precisely articulate it.

However, this is where Alan Turing throws a massive spanner into the works!  But, to understand it, we need to recap a little …

In his seminal 1950 paper, “Computing Machinery and Intelligence”, published in the journal, “Mind”, Turing starts:

“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.  The new form of the problem can be described in terms of a game which we call the ‘imitation game’.”

That should be familiar enough by now.  What he had in mind was:

“It was suggested tentatively that the question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’ If we wish we can make this superficially more general and ask ‘Are there discrete-state machines which would do well?’ But in view of the universality property we see that either of these questions is equivalent to this, ‘Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’”

And then there’s a famous prediction, now almost universally misunderstood as ‘The Turing Test’.

“It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

That’s water under the bridge as far as this post is concerned.  Perhaps few people will ever read this part properly!
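
For anyone who does, it’s maybe worth pinning down exactly what’s being claimed.  Here’s a minimal sketch, in Python, of the set-up as described above (the names, the toy interrogator and the perfectly-imitating machine are illustrative assumptions of this post, not anything in the paper): hide a machine and a human behind two labels, question them, and count how often the machine gets spotted.

```python
import random

# A minimal, hypothetical sketch of the imitation-game set-up: a machine and a
# human are hidden behind the labels 'A' and 'B'; an interrogator questions
# them and must say which is the machine.  Turing's prediction sets the bar at
# no more than a 70 per cent chance of a correct identification after five
# minutes of questioning.

TURING_THRESHOLD = 0.70   # interrogator right at most 70% of the time


def play_one_game(interrogate):
    """One round: shuffle the hidden roles, let the interrogator guess,
    and report whether the machine was correctly identified."""
    roles = ['machine', 'human']
    random.shuffle(roles)
    players = dict(zip(('A', 'B'), roles))
    guess = interrogate(players)            # the interrogator returns 'A' or 'B'
    return players[guess] == 'machine'      # True = correct identification


def guessing_interrogator(players):
    """Stand-in for five minutes of real questioning: the machine is assumed
    to imitate perfectly, so the interrogator can do no better than guess."""
    return random.choice(('A', 'B'))


if __name__ == '__main__':
    results = [play_one_game(guessing_interrogator) for _ in range(10_000)]
    success_rate = sum(results) / len(results)
    print(f'Correct identifications: {success_rate:.1%}')
    print("Machine does well by Turing's criterion:", success_rate <= TURING_THRESHOLD)
```

A perfect imitator drags the interrogator down to roughly coin-flipping, comfortably inside Turing’s 70 per cent line.  (And the ‘storage capacity of about 10⁹’ is usually read as binary digits: somewhere in the region of 125 megabytes in modern terms.)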

However, the interesting section, as far as our earlier discussion of consciousness is concerned, comes later in the paper.  Turing was only too aware of the impact, probably negative, his predictions were going to have in the scientific and spiritual communities and on a wider plane, so he tried to head several of the objections off at the pass.  A large part of the paper is devoted to his consideration of several ‘objections’, each presented in the form of an anticipated criticism followed by a counter-argument.  Top of the list, as predicted by Turing, is “The Theological Objection”, which he phrases as:

“Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.”

He then systematically dismantles the objection as follows:

“I am unable to accept any part of this, but will attempt to reply in theological terms. I should find the argument more convincing if animals were classed with men, for there is a greater difference, to my mind, between the typical animate and the inanimate than there is between man and the other animals. The arbitrary character of the orthodox view becomes clearer if we consider how it might appear to a member of some other religious community. How do Christians regard the Moslem view that women have no souls? But let us leave this point aside and return to the main argument. It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul. An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to ‘swallow’. But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul. The circumstances in question are discussed in the rest of this paper. In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.  However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, ‘And the sun stood still . . . and hasted not to go down about a whole day’ (Joshua x. 13) and ‘He laid the foundations of the earth, that it should not move at any time’ (Psalm civ. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.”

Now, we need to be a little careful with this.  Firstly, Turing’s reading of some sacred texts may be questionable (possibly even offensive; although he would hardly be the first to have managed a dubious interpretation of scripture: we’re still doing that).  And we can see that he had no time for any God or religion whatsoever: that’s unambiguous.  However, the interesting bit is his ‘OK, just supposing …’ position.

What Turing points out in his defence to the ‘theological objection’ is that saying intelligence (consciousness, just for the purposes of this post) isn’t (say) just the result of neural complexity is effectively telling God what He can and can’t do.  We might prefer the later options in the list above but it’s not we that decide.  For all we know, God may be just waiting for us to build a robot with a big enough brain for Him to put sentience (even a soul, if you like) into!  It’s His universe, His rules, His science: He’ll decide surely, not us!

So the perceived ‘ultra-scientific’ to the ‘ultra-spiritual’ progression in our 1-9 list above may not make sense at all, viewed in those terms; not if God can decide any of them are right.

In fact, we can actually take God temporarily out of the discussion and muddy the waters ourselves now, by rewriting the list with a new ‘option zero’ added:

  0. Actually, everything has consciousness to some extent.  Even an apparently inanimate object like a rock.  We’re just not very good at seeing this.  (This is ‘panpsychism’.)
  1. Consciousness is just the result of neural complexity.  Build something with a big enough ‘brain’ and it will acquire consciousness.  0 is broadly right but there’s some sort of critical neural mass and/or degree of connectivity for consciousness to happen.
  2. Similar to 1 but the brain needs energy.  It needs power (food, fuel, electricity, etc.) to make it work.
  3. Similar to 2 but with some symbiosis.  A physical substrate is needed to carry signals of a particular type.  The relationship between the substrate and signals (hardware and software) takes a particular critical form (maybe the two are indistinguishable) and we don’t know what it is yet.
  4. Similar to 3 but there’s a biological requirement in there somewhere.  Consciousness is the preserve of carbon life forms, perhaps.  How and/or why we don’t understand yet.
  5. Similar to 4 but whatever it is that’s special about carbon-based life will remain one of the great unknowns of the universe.  There might not be anything particularly remarkable about it from a philosophical perspective: it’s just beyond us.
  6. Similar to 5 but there is actually something very ‘special’ about the whole thing.  Way beyond us.
  7. A particular form of 6.  Consciousness is somehow separate from the underlying hardware but still can’t exist without it.
  8. Extending 7.  Consciousness is completely separate from the body and could exist independently.  We might call it a ‘soul’.
  9. Taking 8 to the limit?  Consciousness, the soul, comes from God.

Now, 0 leads nicely into 1.  Panpsychism (0) suggests that everything has consciousness; the next option (1) just says that this has to reach some critical level of neural complexity.  But, to some philosophies, 0 is pretty close to 9: a universal consciousness, the ‘spirit of the universe’, is some people’s way of saying ‘God’, and our taxonomy has gone out of the window.  In fact, it suddenly looks cyclic.

So, where do you nail your colours now?
