(The second of two posts distilled from a talk given at the 2011 Wrexham Science Festival. The first part, ‘The Singularity is Coming … Or Is It?’, appears separately. However, both have a common thread and share some material.)
It seems that the next few decades may give us something really remarkable: truly intelligent computers; that, before the 21st century is a third, or maybe half, old, we could be living with machines capable of genuine, independent thought. Apparently, this is not science fiction or the ‘artificial intelligence’ of the 20th century but real intelligence. So many questions … Can that really happen? Will it? How? What does it mean? What will that world be like? What do we have to look forward to? Or to fear? How do we get from here to there … and do we want to? How does today’s AI technology develop into tomorrow’s thinking machine? What will we do with it when we’ve got it? What happens if we get it wrong? Can we ultimately build something ‘better’ than us? Will we be served by teams of intelligent robots or is there a risk that we could end up serving them? Or, as we and the machines both evolve, will the ‘natural’ distinction between human and computer eventually become blurred and ultimately unimportant? Pause for breath …
Well, some of these ideas are discussed in the previous post. But where to start with the intelligence thing? We hit an obstacle right away really. If we’re talking about intelligence, we probably need to define intelligence but that’s actually really difficult. OK, there are plenty of definitions out there but they’re very different and we have to be careful because we’re going to be trying ultimately to apply this to a machine. Here are some possibilities to start with. “Intelligence is …
- … being quick/accurate”
- … being adaptable/convincing”
- … being conscious/alive”
We’ve probably all used each of these definitions of intelligence in different situations in the past so, to some extent, we’re fooling ourselves if we think we even have a definition that works for us. 1, 2 and 3 are really markers on a spectrum of definitions of intelligence from the simple to the …, well what? Profound? As far as machine intelligence is concerned, we achieved 1 almost the moment we built the first computer. 2 appears broadly to be the goal of most AI developers. Whether 3 is possible depends more on your beliefs than on any technical knowledge, which is a problem if we’re going to have a sensible discussion about it.
So that’s really not a great start. There’s no particularly good, all-embracing definition of intelligence and, even if there were, we’d all start from different axiomatic positions as to what we’d accept as possible. OK then, let’s try a different tack. What would we expect from an intelligent machine? With maybe a little bit of licence, we can rewrite these options. “An intelligent machine would …
- … be better than a human”
- … appear to be like a human”
- … be self-aware”
On the surface, this might look a little odd because 1 now looks harder than 2 and anyway it doesn’t seem like we’ve achieved much. In fact though, we’ve made some progress. In computational terms, 1 is still pretty basic, in the sense that, whatever our metric, we’ll probably get there eventually, but on a certain level, the distinction between 2 and 3 becomes less important. (In practical terms, that is; there’s a huge philosophical distinction, of course.) To all outward appearances, to our eyes, it won’t matter much if a machine actually is self-aware or just appears to be; it’s how we’ll interact with it that will be significant. (Yes, OK, this is a huge simplification – it’s a cheat – but, like a cheat in a computer game, it lets us skip a few levels and get on with the rest of the discussion, and it’s partly why it’s been split between this and the ‘The Singularity is Coming … Or Is It?’ post.)
It’s possible to take all kinds of positions on this consciousness thing; even trying to define the range is likely to upset many, but let’s try anyway. At one end we have the notion that consciousness is simply the result of neural complexity; make something big enough, with enough parts, connect it all together and give it some power and it will come alive. At the other end we have the point of view that says that consciousness is a human, or at least a biological, concept (maybe ‘God-given’, maybe not) that can’t be created artificially. There are problems with both arguments. If the human brain really is so special then we’ve not really isolated what it is that makes it so special and there’s no evidence that the ‘spirit’ lives anywhere other than in the brain. On the other hand, if consciousness is just the result of neural complexity then the Internet should probably have woken up by now. Somewhere in between, for example, we’ve questions like what would happen if we were able to make a viable biological computer artificially, which we may be able to eventually through DNA manipulation. Would that be alive? We know from our dealings with animals (and some people) that it appears to be possible to be conscious but still pretty thick; where does that leave our nice, simple definitions? Generally, when people think they know the answers to these questions, those answers are likely to be founded on beliefs (spiritual or scientific) rather than logic. We’re just not going to go there … at least in this post.
If we’re not going to worry about the difference between a machine that’s self-aware and one that, to all intents and purposes, appears to be (even though, yes, we probably should) then we can make some progress with this. Otherwise, we’ll always be running the two concepts in parallel. On the one hand, we’ll be thinking in terms of building bigger (or smaller) and better machines and artificially improving the illusion of intelligence; on the other, we’ll be looking to a point in the future where the machine effectively wakes up and then we’ll find out what happens. However, even this isn’t simple; we don’t actually have to build a conscious machine in order to kick off some automatic process of machine evolution; we could still make them do that ourselves. (Once again, see the ‘The Singularity is Coming … Or Is It?’ post.) OK then, let’s agree; we’re not going to make a distinction between real intelligence and artificial but entirely convincing intelligence.
Let’s press on with the simple questions of what we need and how we get there. This is pretty obvious really; we need some fairly serious advances in both hardware and software. We could argue that we’re well on the way but there are problems with both. Although we’ve been doing very nicely in terms of hardware power over the last few decades, we can’t carry on much longer on the current trajectory. We’re approaching the atomic level in hardware miniaturisation and these ever-smaller components are becoming unpredictable in their behaviour at this scale. (If you’re actually counting atoms in a material then a change in a single atom changes the behaviour of the material.) Heat is also a big problem; it can cost as much to cool a modern system as it can to run it. Multi-core processors help with this but there’s a trade-off in communication and synchronisation so performance isn’t all it should be. (Essentially, the whole is less than the sum of its parts.) They’re pretty big as well, of course, which won’t help with any illusion we’re trying to get across. It’s likely that the next generation of computers will use radically different technologies. We might feel like discussing optical, biological or quantum computing at this point; that’s way off on a tangent for this post but it may be where we’re going eventually.
Software problems are subtler by comparison. OK, we know there are some things that can’t be done – some problems that can’t be solved – but that’s a side-issue really. The difficulty is that no amount of number-crunching, on the surface, makes a computer seem very much like a human; it can almost be the opposite. While our conventional, deterministic programming has done well enough to date with speed and accuracy, it hasn’t got far with the human side of things. It’s possible that we may need new models for software as well as hardware and these may have to be more like the ad-hoc ‘programs’ that our brains appear to run. In fact, if we think in terms of fuzzy software development, programs that work statistically rather than exactly, then that might actually suggest an approach to the unreliable atomic-level hardware as well. (Whatever operating systems our brains are running, they have to cope with some very fallible hardware.) Unfortunately, that might also leave us in a situation where 2 + 2 = 4, not because it’s an exact, deterministic calculation but because 8 out of 10 circuits said so. As is often the case, it might be our thinking that has to adapt more than the machines.
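The ‘8 out of 10 circuits’ idea can be sketched in a few lines. This is a deliberately crude toy, not a real fault-tolerance scheme: the function names, the error rate and the one-bit slips are all invented for the illustration.

```python
import random

def noisy_add(a, b, error_rate=0.1):
    """One unreliable 'circuit': usually returns a + b, but
    occasionally slips by one (standing in for flaky,
    atomic-scale hardware)."""
    result = a + b
    if random.random() < error_rate:
        result += random.choice([-1, 1])  # a one-bit slip
    return result

def fuzzy_add(a, b, circuits=10):
    """Ask several unreliable circuits and take a majority vote."""
    answers = [noisy_add(a, b) for _ in range(circuits)]
    return max(set(answers), key=answers.count)

# With 10 circuits each wrong 10% of the time, the vote is almost
# always right: 2 + 2 = 4 because most of the circuits said so.
print(fuzzy_add(2, 2))
```

The point of the sketch is that correctness has become statistical: no single circuit is trusted, yet the ensemble is, for all practical purposes, reliable.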
Given past progress though, it’s probably not unrealistic to suppose that we’ll solve both the hardware and software issues over time. Sooner or later, we’ll make things bigger/smaller, powerful and integrated enough to make the machines (at least appear) as intelligent as we want them to be. What then? What would we expect an intelligent machine to do; and how would we recognise it? That may not be as daft a question as it sounds because we’re inclined to overlook the best candidate for an intelligent machine we have, or are likely to have for a fair while – the Internet.
It all depends on definition of course – we’ll come back to that – but the current capability of the Internet, its behaviour, if we’re going to try to treat it as an individual entity, is something like intelligent on some levels, particularly in terms of its semantic capabilities. (Obviously, if we’re using language like this, we’re not making any of the usual distinction between the Internet, the hardware, and the Web, the software.) There’s one of those dubious ‘equations’ around today that states ‘WI = IT + AI’, which is curious on first inspection because it doesn’t actually appear to factor into this definition of ‘Web Intelligence’ the most obvious of the Internet’s attributes, its size. But should it? In truth, the Internet’s advancing semantic ability is developing to take advantage of its hugeness; it doesn’t actually have to be as big as it is to run the software that does it. (A much smaller computer could run the same software; it just wouldn’t achieve as much because it wouldn’t have access to the same data.) Big or small, is Web Intelligence getting close to real intelligence?
Let’s allow a little deviation for a quote; one that’s fun and relevant, if a little long: “Sherlock Holmes … shook his head with a smile as he noticed my questioning glances. ‘Beyond the obvious facts that he has at some time done manual labour, that he takes snuff, that he is a Freemason, that he has been in China, and that he has done a considerable amount of writing lately, I can deduce nothing else.’ Mr. Jabez Wilson started up in his chair, with his forefinger upon the paper, but his eyes upon my companion. ‘How, in the name of good-fortune, did you know all that, Mr. Holmes?’ he asked. ‘How did you know, for example, that I did manual labour. It’s as true as gospel, for I began as a ship’s carpenter.’ ‘Your hands, my dear sir. Your right hand is quite a size larger than your left. You have worked with it, and the muscles are more developed.’ ‘Well, the snuff, then, and the Freemasonry?’ ‘I won’t insult your intelligence by telling you how I read that, especially as, rather against the strict rules of your order, you use an arc-and-compass breastpin.’ ‘Ah, of course, I forgot that. But the writing?’ ‘What else can be indicated by that right cuff so very shiny for five inches, and the left one with the smooth patch near the elbow where you rest it upon the desk?’ ‘Well, but China?’ ‘The fish that you have tattooed immediately above your right wrist could only have been done in China. I have made a small study of tattoo marks and have even contributed to the literature of the subject. That trick of staining the fishes’ scales of a delicate pink is quite peculiar to China. When, in addition, I see a Chinese coin hanging from your watch-chain, the matter becomes even more simple.’ Mr. Jabez Wilson laughed heavily. ‘Well, I never!’ said he. 
‘I thought at first that you had done something clever, but I see that there was nothing in it after all.’ ” (Arthur Conan Doyle, ‘The Red-Headed League’) This rather laboured point is that things don’t always seem so intelligent when they’re explained or, presumably in Dr. Watson’s case, when you’re used to them.
And there’s a real point here. Is the Internet intelligent? It’s the best candidate we’ve got so is it convincing? It has massive quantities of data at its disposal and is capable of making good use of it. It can use advanced, context-aware semantics to find the information you need or to gain your confidence to con you out of your bank details – and it’s getting better. ‘You can’t fool all of the people all of the time’; sometimes it works, sometimes it doesn’t – very much like a human. So is it intelligent? Does it seem intelligent to a computer scientist, a networker or programmer, an engineer? To the butcher, baker or candlestick-maker? To someone that uses it every day, or only occasionally, or hasn’t yet? Would it have seemed intelligent a few decades ago, or a few centuries; in a time when a mobile phone, or a radio, or even a box of matches would have seemed like magic? If it doesn’t convince a particular person today, might it in ten years’ time? Clearly the answer to at least some of these questions is ‘yes’.
If we’re still not convinced, what would it take to win us over? What does a machine – not necessarily the Internet – have to do to be intelligent? Or what would we want to let it do? Do sums? Beat us at chess? Diagnose our illnesses? Operate on us? Run the economy? Start a war? We’ve managed most of these already, the rest are on their way and none are inconceivable. What more do you want? The problem, of course, as we know from our human world, is that power isn’t intelligence.
In 1950, Alan Turing outlined a test for computer intelligence. Hide a human, B, and a computer, C, behind a screen and allow them to communicate in the same way (electronic messages, for example) with a human, A, in front of the screen. The argument goes that if A can’t tell which of B and C is which then C must be intelligent. This is regarded as one of the defining principles of AI but it’s not above criticism. On the one hand, it could be argued that the Turing Test is more a test of A than C; or at least that a cleverer A would make it harder for C to pass. On the other hand, just as C might be expected to gain sophistication over time, so will A; if A is in tune with technological advancement (just as a user – not necessarily a developer) then he/she will be increasingly hard to fool. To put it another way, if a given C is just on the verge of passing the Turing Test today with today’s A, it would almost certainly pass with an A from ten years ago – but would it pass with an A from ten years into the future? As elegant as it unquestionably is, the Turing Test may not be an absolute measure. Worse still, there might not actually be one. It’s likely that our expectations from technology will drive our implicit definitions of intelligence upwards.
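The moving-judge argument can be made concrete with a deliberately crude model. Nothing below comes from Turing; the formula and the numbers are invented purely to show how a fixed C fares against an A whose sophistication keeps pace with the technology.

```python
def pass_probability(machine, judge):
    """Toy model: the chance that machine C fools judge A, as a
    function of their relative 'sophistication' scores.
    (The formula is invented for illustration only.)"""
    return machine / (machine + judge)

# A machine on the verge of passing against today's judge ...
machine_today = 10.0

# ... tried against judges whose sophistication tracks the technology.
judges = {"ten years ago": 5.0, "today": 10.0, "ten years hence": 20.0}

for era, judge in judges.items():
    p = pass_probability(machine_today, judge)
    print(f"Against the judge from {era}: p = {p:.2f}")
```

The same C sails past yesterday’s A, is borderline against today’s, and fails against tomorrow’s: the test hasn’t changed, but the benchmark walked away.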
And there’s another factor that we mustn’t overlook. As the machines are becoming more like us, we’re likely to become more like them. This might sound rather sci-fi but in fact it’s pretty real. We’ve been using various forms of implants (mechanical and electrical/electronic) for decades now; over time they’ve become increasingly sophisticated. There’s clearly trouble brewing in the Paralympic arena, for example, as to when a device intended to compensate becomes one that enhances. It’s only a matter of time before human ‘add-ons’ are available in the marketplace that significantly improve aspects of day-to-day life and, as these become smaller and more integrated, it’s going to be harder to tell who’s using them. A further step would be brain implants to improve cognitive ability (processing) or the ability to store and retrieve knowledge to and from the brain (memory). OK, this is slightly speculative but it’s unlikely that, in a decade or so, we won’t have made some progress in this direction even if the jury’s still out on the basic principles. (There are some sensible objections at the start of this piece, for example, before it lurches into the common mistake of thinking that some form of common morality will protect us; compare with the so-called laws of robotics.) Although we might be a bit hazy on the details, it does seem as if the boundaries between human and machine are likely to become somewhat blurred. (This is certainly broadly in line with what Ray Kurzweil proposes.)
If you put all this together; that:
- our expectations of, and our familiarity with, technology increase along with the technology, and
- the definitions of, and the distinctions between, human and machine may need reconsideration,
then we seem to be looking very much at a process of adaptation on both counts. The metrics and benchmarks we use today may well be due a major overhaul and this could be an ongoing process. An apparently simple question such as ‘Will we be living with intelligent machines in the future?’ suddenly doesn’t seem quite so simple because how we interpret the question itself may not be absolute. In a very simple sense, the answer might be ‘yes’ today but ‘no’ tomorrow. In general, we might be able to answer ‘yes’ or ‘no’ with impunity. Remember Gödel, anyone?