This month’s post considers a little-remembered part of Turing’s otherwise famous 1950 paper, ‘Computing Machinery and Intelligence’.
Just for once, let’s not skirt around the generally problematic issue of ‘real intelligence’ versus ‘artificial intelligence’, and ask what it means for a machine (a robot, if you like, for simplicity) to have the whole package: not just some abstract ability to calculate, process, adapt and so on, but ‘human intelligence’, ‘self-awareness’, ‘sentience’; the ‘Full Monty’, as it were. Star Trek’s Data, if you like, assuming we’ve correctly understood what the writers had in mind.
Of course, we’re not really going to build such a robot, nor even come close to designing one. We’re simply going to ask whether it’s possible to create a machine with ‘consciousness’. Even that is fraught with difficulty, because we may not be able to define ‘consciousness’ to everyone’s satisfaction. Still, let’s try the simple, optimistic version: ‘consciousness’ broadly meaning ‘a state of self-awareness like a human’s’. Is that possible?