(The second of two posts distilled from a talk given at the 2011 Wrexham Science Festival. The first part, ‘The Singularity is Coming … Or Is It?‘, appears separately. However, both have a common thread and share some material.)
It seems that the next few decades may give us something really remarkable: truly intelligent computers; that, before the 21st century is a half, maybe a third, old, we could be living with machines capable of genuine, independent thought. Apparently, this is not science fiction or the ‘artificial intelligence’ of the 20th century but real intelligence. So many questions … Can that really happen? Will it? How? What does it mean? What will that world be like? What do we have to look forward to? Or to fear? How do we get from here to there … and do we want to? How does today’s AI technology develop into tomorrow’s thinking machine? What will we do with it when we’ve got it? What happens if we get it wrong? Can we ultimately build something ‘better’ than us? Will we be served by teams of intelligent robots or is there a risk that we could end up serving them? Or, as we and the machines both evolve, will the ‘natural’ distinction between human and computer eventually become blurred and ultimately unimportant? Pause for breath …
Well, some of these ideas are discussed in the previous post. But where to start with the intelligence thing? We hit an obstacle right away really. If we’re talking about intelligence, we probably need to define intelligence but that’s actually really difficult. OK, there are plenty of definitions out there but they’re very different and we have to be careful because we’re going to be trying ultimately to apply this to a machine. Here are some possibilities to start with. “Intelligence is …
- … being quick/accurate”
- … being adaptable/convincing”
- … being conscious/alive”
We’ve probably all used each of these definitions of intelligence in different situations in the past so, to some extent, we’re fooling ourselves if we think we even have a definition that works for us. 1, 2 and 3 are really markers on a spectrum of definitions of intelligence from the simple to the …, well what? Profound? As far as machine intelligence is concerned, we achieved 1 almost the moment we built the first computer. 2 appears broadly to be the goal of most AI developers. Whether 3 is possible rather depends on your beliefs rather than any technical knowledge, which is a problem if we’re going to have a sensible discussion about it.
So that’s really not a great start. There’s no particularly good, all-embracing definition of intelligence and, even if there were, we’d all start from different axiomatic positions as to what we’d accept as possible. OK then, let’s try a different tack. What would we expect from an intelligent machine? With maybe a little bit of licence, we can rewrite these options. “An intelligent machine would …
- … be better than a human”
- … appear to be like a human”
- … be self-aware”
On the surface, this might look a little odd because 1 now looks harder than 2 and anyway it doesn’t seem like we’ve achieved much. In fact though, we’ve made some progress. In computational terms, 1 is still pretty basic, in the sense that, whatever our metric, we’ll probably get there eventually, but on a certain level, the distinction between 2 and 3 becomes less important. (In practical terms, that is; there’s a huge philosophical distinction, of course.) To all outward appearances, to our eyes, it won’t matter much if a machine actually is self-aware or just appears to be; it’s how we’ll interact with it that will be significant. (Yes, OK, this is a huge simplification – it’s a cheat – but, like a cheat in a computer game, it lets us skip a few levels and get on with the rest of the discussion, and it’s partly why the material has been split between this and the ‘The Singularity is Coming … Or Is It?’ post.)
It’s possible to take all kinds of positions on this consciousness thing; even trying to define the range is likely to upset many, but let’s try anyway. At one end we have the notion that consciousness is simply the result of neural complexity; make something big enough, with enough parts, connect it all together and give it some power and it will come alive. At the other end we have the point of view that says that consciousness is a human, or at least a biological, concept (maybe ‘God-given’, maybe not) that can’t be created artificially. There are problems with both arguments. If the human brain really is so special then we’ve not really isolated what it is that makes it so special and there’s no evidence that the ‘spirit’ lives anywhere other than in the brain. On the other hand, if consciousness is just the result of neural complexity then the Internet should probably have woken up by now. Somewhere in between, for example, we’ve questions like what would happen if we were able to make a viable biological computer artificially, which we may be able to eventually through DNA manipulation. Would that be alive? We know from our dealings with animals (and some people) that it appears to be possible to be conscious but still pretty thick; where does that leave our nice, simple definitions? Generally when people think they know the answers to these questions, they’re likely to be founded on beliefs (spiritual or scientific) rather than logic. We’re just not going to go there … at least in this post.
If we’re not going to worry about the difference between a machine that’s self-aware and one that, to all intents and purposes, appears to be (even though, yes, we probably should) then we can make some progress with this. Otherwise, we’ll always be running the two concepts in parallel. On the one hand, we’ll be thinking in terms of building bigger (or smaller) and better machines and artificially improving the illusion of intelligence; on the other, we’ll be looking to a point in the future where the machine effectively wakes up and then we’ll find out what happens. However, even this isn’t simple; we don’t actually have to build a conscious machine in order to kick off some automatic process of machine evolution; we could still make them do that ourselves. (Once again, see the ‘The Singularity is Coming … Or Is It?’ post.) OK then, let’s agree; we’re not going to make a distinction between real intelligence and artificial but entirely convincing intelligence.
Let’s press on with the simple questions of what we need and how we get there. This is pretty obvious really; we need some fairly serious advances in both hardware and software. We could argue that we’re well on the way but there are problems with both. Although we’ve been doing very nicely in terms of hardware power over the last few decades, we can’t carry on much longer on the current trajectory. We’re approaching the atomic level in hardware miniaturisation and these ever-smaller components are becoming unpredictable in their behaviour at this scale. (If you’re actually counting atoms in a material, then a change in a single atom changes the behaviour of the material.) Heat is also a big problem; it can cost as much to cool a modern system as it can to run it. Multi-core processors help with this but there’s a trade-off in communication and synchronisation so performance isn’t all it should be. (Essentially, the whole is less than the sum of its parts.) They’re pretty big as well, of course, which won’t help with any illusion we’re trying to get across. It’s likely that the next generation of computers will use radically different technologies. We might feel like discussing optical, biological or quantum computing at this point; that’s way off on a tangent for this post but it may be where we’re going eventually.
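That multi-core trade-off can be made concrete with Amdahl’s law: if only part of a program can run in parallel, extra cores give sharply diminishing returns. A minimal sketch (the 90% parallel fraction is an illustrative assumption, not a measurement of any real workload):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only a fraction of a task
    parallelises; the serial remainder caps the total gain."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Suppose 90% of a workload parallelises perfectly (an assumption).
for cores in (2, 4, 16, 1024):
    print(cores, "cores ->", round(amdahl_speedup(0.9, cores), 2), "x")
# Speedup approaches, but never exceeds, 1 / 0.1 = 10, no matter how
# many cores we add: the whole is less than the sum of its parts.
```

Doubling the core count never doubles the performance, which is exactly why raw hardware scaling alone won’t carry us to the finish line.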
Software problems are subtler by comparison. OK, we know there are some things that can’t be done – some problems that can’t be solved – but that’s a side-issue really. The difficulty is that no amount of number-crunching, on the surface, makes a computer seem very much like a human; it can almost be the opposite. While our conventional, deterministic programming has done well enough to date with speed and accuracy, it hasn’t got far with the human side of things. It’s possible that we may need new models for software as well as hardware and these may have to be more like the ad-hoc ‘programs’ that our brains appear to run. In fact, if we think in terms of fuzzy software development, programs that work statistically rather than exactly, then that might actually suggest an approach to the unreliable atomic-level hardware as well. (Whatever operating systems our brains are running, they have to cope with some very fallible hardware.) Unfortunately, that might also leave us in a situation where 2 + 2 = 4, not because it’s an exact, deterministic calculation but because 8 out of 10 circuits said so. As is often the case, it might be our thinking that has to adapt more than the machines.
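The ‘8 out of 10 circuits’ idea is essentially majority voting, a standard fault-tolerance technique. A toy sketch, assuming each redundant ‘circuit’ independently gets the right answer with some probability (the functions and error rate here are purely illustrative):

```python
import random
from collections import Counter

def unreliable_add(a, b, error_rate=0.1):
    """A flaky adder: occasionally returns a wrong answer,
    standing in for unpredictable atomic-scale hardware."""
    result = a + b
    if random.random() < error_rate:
        result += random.choice([-1, 1])  # bit-flip-style fault
    return result

def voted_add(a, b, replicas=9, error_rate=0.1):
    """Run several unreliable adders and return the most common answer."""
    answers = [unreliable_add(a, b, error_rate) for _ in range(replicas)]
    return Counter(answers).most_common(1)[0][0]

# 2 + 2 = 4, not because any one circuit is exact,
# but because most of the replicas said so.
```

The answer is statistically, not deterministically, correct – which is roughly the bargain our own fallible neural hardware seems to strike.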
Given past progress though, it’s probably not unrealistic to suppose that we’ll solve both the hardware and software issues over time. Sooner or later, we’ll make things bigger/smaller, powerful and integrated enough to make the machines (at least appear) as intelligent as we want them to be. What then? What would we expect an intelligent machine to do; and how would we recognise it? That may not be as daft a question as it sounds because we’re inclined to overlook the best candidate for an intelligent machine we have, or are likely to have for a fair while – the Internet.
It all depends on definition, of course – we’ll come back to that – but the current capability of the Internet, its behaviour, if we’re going to try to treat it as an individual entity, is something like intelligent on some levels, particularly in terms of its semantic capabilities. (Obviously, if we’re using language like this, we’re not making any of the usual distinction between the Internet, the hardware, and the Web, the software.) There’s one of those dubious ‘equations’ around today that states ‘WI = IT + AI’, which is curious on first inspection because it doesn’t actually appear to factor into this definition of ‘Web Intelligence’ the most obvious of the Internet’s attributes, its size. But should it? In truth, the Internet’s advancing semantic ability is developing to take advantage of its hugeness; it doesn’t actually have to be as big as it is to run the software that does it. (A much smaller computer could run the same software; it just wouldn’t achieve as much because it wouldn’t have access to the same data.) Big or small, is Web Intelligence getting close to real intelligence?
Let’s allow a little deviation for a quote; one that’s fun and relevant, if a little long: “Sherlock Holmes … shook his head with a smile as he noticed my questioning glances. ‘Beyond the obvious facts that he has at some time done manual labour, that he takes snuff, that he is a Freemason, that he has been in China, and that he has done a considerable amount of writing lately, I can deduce nothing else.’ Mr. Jabez Wilson started up in his chair, with his forefinger upon the paper, but his eyes upon my companion. ‘How, in the name of good-fortune, did you know all that, Mr. Holmes?’ he asked. ‘How did you know, for example, that I did manual labour. It’s as true as gospel, for I began as a ship’s carpenter.’ ‘Your hands, my dear sir. Your right hand is quite a size larger than your left. You have worked with it, and the muscles are more developed.’ ‘Well, the snuff, then, and the Freemasonry?’ ‘I won’t insult your intelligence by telling you how I read that, especially as, rather against the strict rules of your order, you use an arc-and-compass breastpin.’ ‘Ah, of course, I forgot that. But the writing?’ ‘What else can be indicated by that right cuff so very shiny for five inches, and the left one with the smooth patch near the elbow where you rest it upon the desk?’ ‘Well, but China?’ ‘The fish that you have tattooed immediately above your right wrist could only have been done in China. I have made a small study of tattoo marks and have even contributed to the literature of the subject. That trick of staining the fishes’ scales of a delicate pink is quite peculiar to China. When, in addition, I see a Chinese coin hanging from your watch-chain, the matter becomes even more simple.’ Mr. Jabez Wilson laughed heavily. ‘Well, I never!’ said he. 
‘I thought at first that you had done something clever, but I see that there was nothing in it after all.’ ” (Arthur Conan Doyle, ‘The Red-Headed League’) This rather laboured point is that things don’t always seem so intelligent when they’re explained or, presumably in Dr. Watson’s case, when you’re used to them.
And there’s a real point here. Is the Internet intelligent? It’s the best candidate we’ve got so is it convincing? It has massive quantities of data at its disposal and is capable of making good use of it. It can use advanced, context-aware semantics to find the information you need or to gain your confidence to con you out of your bank details – and it’s getting better. ‘You can’t fool all of the people all of the time’; sometimes it works, sometimes it doesn’t – very much like a human. So is it intelligent? Does it seem intelligent to a computer scientist, a networker or programmer, an engineer? To the butcher, baker or candlestick-maker? To someone that uses it every day, or only occasionally, or hasn’t yet? Would it have seemed intelligent a few decades ago, or a few centuries; in a time when a mobile phone, or a radio, or even a box of matches would have seemed like magic? If it doesn’t convince a particular person today, might it in ten years’ time? Clearly the answer to at least some of these questions is ‘yes’.
If we’re still not convinced, what would it take to win us over? What does a machine – not necessarily the Internet – have to do to be intelligent? Or what would we want to let it do? Do sums? Beat us at chess? Diagnose our illnesses? Operate on us? Run the economy? Start a war? We’ve managed most of these already, the rest are on their way and none are inconceivable. What more do you want? The problem, of course, as we know from our human world, is that power isn’t intelligence.
In 1950, Alan Turing outlined a test for computer intelligence. Hide a human, B, and a computer, C, behind a screen and allow them to communicate in the same way (electronic messages, for example) with a human, A, in front of the screen. The argument goes that if A can’t tell which of B and C is which then C must be intelligent. This is regarded as one of the defining principles of AI but it’s not above criticism. On the one hand, it could be argued that the Turing Test is more a test of A than C; or at least that a cleverer A would make it harder for C to pass. On the other hand, just as C might be expected to gain sophistication over time, so will A; if A is in tune with technological advancement (just as a user – not necessarily a developer) then he/she will be increasingly harder to fool. To put it another way, if a given C is just on the verge of passing the Turing Test today with today’s A, it would almost certainly pass with an A from ten years ago – but would it pass with an A from ten years into the future? As elegant as it unquestionably is, the Turing Test may not be an absolute measure. Worse still, there might not actually be one. It’s likely that our expectations from technology will drive our implicit definitions of intelligence upwards.
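Turing’s setup can be sketched as a simple protocol: the interrogator sees two anonymous transcripts and must say which came from the machine; C ‘passes’ when the interrogator can do no better than chance. A minimal, hypothetical sketch – the respondent functions below are deliberately trivial placeholders, not real chatbots:

```python
import random

def human_reply(question):
    """Placeholder for the hidden human, B (not a real person, obviously)."""
    return "Hard to say, really."

def machine_reply(question):
    """Placeholder for the hidden machine, C (not a real chatbot)."""
    return "Hard to say, really."

def imitation_game(questions, judge):
    """One round of Turing's game: the judge (A) sees two anonymous
    transcripts and guesses which channel is the machine."""
    channels = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(channels)  # hide which channel is which
    transcripts = [[reply(q) for q in questions] for _, reply in channels]
    guess = judge(transcripts)               # judge returns index 0 or 1
    return channels[guess][0] == "machine"   # True if A unmasked C

# With indistinguishable replies, any judge can only guess at chance --
# which is exactly the condition under which C is said to pass.
naive_judge = lambda transcripts: random.randrange(2)
```

The criticism in the paragraph above lives entirely in `judge`: swap in a sharper, more technologically literate interrogator and the same C that passed yesterday may fail tomorrow.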
And there’s another factor that we mustn’t overlook. As the machines are becoming more like us, we’re likely to become more like them. This might sound rather sci-fi but in fact it’s pretty real. We’ve been using various forms of implants (mechanical and electrical/electronic) for decades now; over time they’ve become increasingly sophisticated. There’s clearly trouble brewing in the paralympic arena, for example, as to when a device intended to compensate becomes one that enhances. It’s only a matter of time before human ‘add-ons’ are available in the marketplace that significantly improve aspects of day-to-day life and, as these become smaller and more integrated, it’s going to be harder to tell who’s using them. A further step would be brain implants to improve cognitive ability (processing) or the ability to store and retrieve knowledge to and from the brain (memory). OK, this is slightly speculative but it’s unlikely that, in a decade or so, we won’t have made some progress in this direction even if the jury’s still out on the basic principles. (There are some sensible objections at the start of this piece, for example, before it lurches into the common mistake of thinking that some form of common morality will protect us; compare with the so-called laws of robotics.) Although we might be a bit hazy on the details, it does seem as if the boundaries between human and machine are likely to become somewhat blurred. (This is certainly broadly in line with what Ray Kurzweil proposes.)
If you put all this together; that:
- our expectations of, and our familiarity with, technology increase along with the technology, and
- the definitions of, and the distinctions between, human and machine may need reconsideration,
then we seem to be looking very much at a process of adaptation on both counts. The metrics and benchmarks we use today may well be due a major overhaul and this could be an ongoing process. An apparently simple question such as ‘Will we be living with intelligent machines in the future?’ suddenly doesn’t seem quite so simple because how we interpret the question itself may not be absolute. In a very simple sense, the answer might be ‘yes’ today but ‘no’ tomorrow. In general, we might be able to answer ‘yes’ or ‘no’ with impunity. Remember Gödel, anyone?
April 20th, 2013 at 7:07 pm
I suppose the crucial point would be when computers can evolve for themselves somehow, without a need for human intervention. This is where they could turn their intelligence into power.
April 25th, 2013 at 8:14 pm
Hi Vic,
I agree with Gödel in the fact that he saw God in science! Isn’t science what makes up all these intelligent machines?
April 25th, 2013 at 8:52 pm
For me personally the key to all this is your observation that “As the machines are becoming more like us, we’re likely to become more like them”. Terrifying! Can’t help thinking that the more we converge with the cold hard machinery of the technological age the more of our humanity we ultimately relinquish…
And at the risk of sounding like a complete luddite… what’s the point? Has technology made our lives easier or just faster?
We could spend the gross national income of 10 impoverished countries creating a computer that can play the violin… or just give the violin to a human being!
Maybe we should be assessing our relationship with the computer age all together… and stepping back rather than wading in further?
That’s my tuppence worth anyway!
Thanks for posting mate!
Thought provoking stuff!
April 25th, 2013 at 10:45 pm
After playing Portal and Portal 2, the idea of an intelligent machine really appeals to me a lot less.
April 26th, 2013 at 1:44 am
I still like being a human! 🙂
April 26th, 2013 at 3:41 am
That we willingly turn our tasks and decisions over to machines bothers me.
People in my family that I know are conscious, conscientious, and intelligent will follow GPS directions before they trust their own eyes. They’ll accept all the suggested corrections when they type. They look at what’s “recommended” for them on Amazon, Netflix, etc. They’ll ask Siri the same damn question 20 different ways and when it still can’t answer it, rather than find the answer, they just let it go. That’s the kind of behavior and adapting that concerns me.
Minsky wrote a piece years ago titled Will Robots Inherit the Earth?
I wouldn’t have believed that we would turn over the kinds of decisions that Minsky hopes we will allow machine intelligence to make for humanity. But in the last 10 years, I’ve started to think that this is inevitable because it appears that if people believe a technology is intelligent or has their best interest in “mind,” regardless of how they decide to define intelligence, they trust it.
April 26th, 2013 at 4:49 am
When I read this, it makes me think about a short story I read about Multivac. If humans continue to create machines that are becoming more and more intelligent, won’t they finally invent humans from those machines? What I mean is we humans evolved into creatures with the most complex brains, so if we ourselves create machines that are very complex, doesn’t that mean we create humans?
April 26th, 2013 at 9:04 am
Hi …. I’m afraid I have not read your article except for the first para and the heading ….. so let me inform you of the nitty gritties of life …. there exists God the creator and we humans are his army … baby spirits .. created in his likeness and learning to understand who we are through this life. Read my article on the aura. Thinking is what we all are capable of … but thinking ourselves into oblivion is the job of idiots who imagine themselves to becoming machines. What an idiotic thing to do … to move back in time literally. We are at the cusp of a spiritual awakening where we may learn to fly and you idiots want to turn the clock back to becoming machines. How far can machines move on their own. Do you guys ever think .. or have you left thinking to birds and bees. Go ahead and think yourself into a machine. In your next life … find yourself parked on somebody’s office table for the duration of your life … let’s see how you enjoy being a thinking machine then. Good luck .. I hope you are the next thinking machine .. with your type of thinking .. i.e. no analysis where that may take you in the long run is a study in disaster. I don’t suffer fools gladly … life is tough enough as it is … limit it to writing stories.
April 26th, 2013 at 1:15 pm
“Somewhere in between, for example, we’ve questions like what would happen if we were able to make a viable biological computer artificially, which we may be able to eventually through DNA manipulation. Would that be alive?”
The fertilized egg with its DNA instructions for cell specialization and building of structures looks like and appears to be a biological building machine that follows certain rules of chemistry and physics. Nothing can be measured that would show it is not an organic machine. However, anyone who is aware of themselves as an entity with free will must postulate that a non-material entity that is not required to follow pre-determined rules or automatic reactions to stimuli has entered into a connection with the machine. In short, for free will, a soul must enter the bio-machine. And then:
…we’ll be looking to a point in the future where the machine effectively wakes up and then we’ll find out what happens…
Nothing will be resolved because a “soul” will need to enter or be connected to the future machine. And still there will probably be no “measurement” possible for the non-material entity interacting or controlling the future machine.
April 26th, 2013 at 9:42 pm
This is a fascinating post. I’ve been wondering if consciousness (your 3rd definition) is just a function of complexity. If the internet were conscious now, would it be obvious?
I’ve written a post on my blog about the achievements of science. It’s a bit tongue-in-cheek and some people have assumed I’m a moron after reading it! I’m not! http://www.acertaindoubt.wordpress.com
April 28th, 2013 at 12:30 am
Nice Work!
April 29th, 2013 at 9:25 am
How can a finite “machine” become more intelligent than its maker (humans)? The only advantage of machines is their ability to process faster and remember almost perfectly. Other than that, everything about them is designed by humans — even those which will have the ability to evolve themselves, or even construct other better (by their eventual reference) machines.
As to intelligence that is merely an illusion and its indifference to the true intelligence, I guess the true intelligence is still important. The ability to know what is Love, Justice, and Beauty for example will never be achieved even by the most evolved machines.
Thusly, you are correct when you wrote: “our expectations of technology increase along with the technology” because the drivers of technology are human minds. I beg to disagree with you when you wrote: “the definitions of, and the distinctions between, human and machine may need reconsideration” because what is lacking today is a truly deep thinking about human being, not of what machines are.
April 29th, 2013 at 9:32 am
[…] via Dawn of the Intelligent Machines? | Turing’s Radiator. […]
April 29th, 2013 at 7:07 pm
[…] will finally become like us: conscious and intelligent. As Vic Grout points out in his article: “Dawn of the Intelligent Machine,” that is quite the ambiguous goal, when we take into account all the aspects of personhood that […]
April 29th, 2013 at 11:53 pm
Maybe, just maybe, we have been through these events before; the world holds too many unexplained mysteries, like 1600-ton blocks of stone that modern technology cannot lift, but which were lifted and moved in the past.
April 30th, 2013 at 5:48 pm
It may be that computers can calculate math and science problems faster and with less error than humans, perhaps even multitasking in a way that would make one poor human brain fail miserably. However, can a machine paint like Van Gogh from the anguish of a soul in confusion? Can a machine hold a baby and provide comfort? Can a machine survive without a human? There is more to human intelligence than analysis and computation and search functions. I would argue that machines have no emotions, and thus are not truly intelligent. They do not have instinct. They do not know angst. But I like them intensely, and feel quite proud of their abilities. It’s almost magical the things you can do with a smart phone. 🙂 I don’t feel I’m becoming more machine-like in my life. I certainly spend way too much time at a screen, though. Do you?
May 9th, 2013 at 12:29 pm
Reblogged this on jbrittholbrook and commented:
Good read via Andy Miah. Singularly good … or is it?