(The first of two posts distilled from a talk given at the 2011 Wrexham Science Festival. The second part, ‘Dawn of the Intelligent Machines?’, appears separately. However, both have a common thread and share some material.)
It’s sometimes said in the media and entertainment world that you haven’t made it until you’ve been ridiculed on South Park. The technological equivalent is probably that a concept isn’t mainstream until it’s featured in Dilbert. If that’s the case, then ‘the singularity’ has passed a necessary (if not sufficient) condition. But does that make it any more real?
It’s hard to avoid the feeling that, in popular technical circles, a lot more people are convinced that ‘the singularity’ is coming than actually seem to know what it is. In fact, it’s quite easy to sympathise with that position; a vague notion that something a bit scary is on the horizon is a lot easier to cope with than trying to focus on it with any clarity. It also depends a lot on who you talk to or what you read; a passing survey of Internet ‘singularity’ terminology over the last few years shows a gradual shift from a quasi-numeric definition based on ‘brain size’ to a discussion of ‘what might happen’ when whatever it is happens (although the former hasn’t disappeared entirely).
What?
Essentially, ‘the singularity’, taken as some sort of average across definitions, is the predicted point in the future when machines become clever enough to ‘sort it out for themselves’. They’ll be smarter than us, faster than us and they’ll design and create the next generation of machines to be even smarter and faster. So before we know it they’ll be replicating smarter and faster still (i.e. ‘evolving’) and leaving us behind … to what? The concept’s not new really; it’s been thrown around by science fiction writers and ‘futurologists’ (and with a certain generosity of spirit, we’ll assume those are two different groups) for several decades now in one form or another. So what is it really?
Well, for starters, it isn’t a singularity. A singularity has a very precise meaning in mathematics and science: a catastrophic break in a process or system, such that there is no other side on the current trajectory. It isn’t merely a sudden change or something a bit dodgy. At the heart of a black hole is a singularity where the laws of physics break down; if you go in, you don’t come out. (We’re separating science from science fiction, remember?) In mathematics, the function y = 1/x has a singularity at x = 0; you cannot follow the curve smoothly from negative to positive. In contrast, you might be scared on the way to the highest point of a rollercoaster, and you won’t be able to see what’s coming, but the track carries on nonetheless. Rolling a die to decide which bus to get on may not be a good way to get home but it isn’t Armageddon. No amount of seriousness or fear makes an uncertainty into a discontinuity. If ‘the singularity’ happens one night, say, we’ll go to bed the evening before and wake up the morning after. The world might be a different place but it’ll go on … for some, at least. It isn’t a singularity.
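For anyone who wants the 1/x example stated precisely, the standard way is in terms of one-sided limits (a routine piece of calculus, included only to underline that ‘no other side’ is meant literally):

\[
\lim_{x \to 0^-} \frac{1}{x} = -\infty, \qquad \lim_{x \to 0^+} \frac{1}{x} = +\infty
\]

The function has no value at x = 0 and no finite jump that could be bridged; the curve genuinely has no ‘other side’ along its own path.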
When?
That piece of pedantry aside, what is likely to happen, when and how, and what will it mean? This is where it gets difficult to separate science fact from science fiction. Part of the problem is that there isn’t much science fact to go on, so the void has been largely filled with speculation. Estimates of ‘when’ vary between about 15 and 40 years, or ‘not at all’, and that’s the easy bit. What exactly is in store is an argument that has split, no, fragmented, technical commentators and made a good number of reputations and careers – the more apocalyptic the better. No-one commissions an article or pays to interview a futurologist whose basic message is that “it’ll kinda be OK”. But will it? (For a couple of fairly extreme views on a broad spectrum, take a look at the opposing opinions of Ray Kurzweil and Paul Allen.)
Part of the problem in getting to grips with all this, and another reason why ‘singularity’ is such a poor term, is that the doomsday scenarios actually require a number of things to happen – different, at least partially independent things that probably won’t occur at the same point and that are linked, if at all, by effect, not cause. That’s not to say that they won’t come to pass in their own good time but it militates against the catastrophe model and gives more time for adaptation, which may be the key. There’s a lot here that’s open to argument and interpretation but, at the very least, the ‘singularity’ needs three essential components.
How?
Firstly, and fairly obviously, the machines have to become better than us in a measurable, and meaningful, sense, which applies (crudely) to both hardware and software. There’s no absolute definition of this and there probably never will be. In some areas (moving quickly, performing calculations, playing chess, etc.), they’re already better, often way better; in others (interpreting meaning, spatial reasoning, etc.) they’re a long way off. We’ve made huge advances in some areas but very little in others. The bottom line is that we’ve not much more than the vaguest idea of the technical spec a machine would need to be able to say “I’ll take over from here”. If we did, we’d be much closer to producing it and, presumably, it will be at precisely that point, when we get there, that our own understanding becomes eclipsed. Almost by definition, that’s very hard to predict with much accuracy. It may well come, but ’10 years’, ‘100 years’; who can really say? Most current predictions seem to be wet-finger-in-the-wind science.
There’s also a related, but distracting, factor here. A lot, possibly most, of the familiar diagrams doing the rounds, purporting to show us approaching some point of no return, indicate increasing machine complexity over time. The implication is that neural complexity will somehow define the arrival of ‘the singularity’ in some sense or other. Whether there’s any correlation between the size of a computer’s ‘brain’ and its ‘intelligence’ is a separate argument entirely (and a separate post – see later) but, even without any such assumptions, there’s too much being taken for granted here. Although there are analogies between (say) the Internet and the human brain in terms of network structure (‘scale-free networks’), they’re far too different, in many ways (or at least they might be), to assume that what can be done by one can be done by the other if we simply match the number of neurons. If consciousness were merely the product of neural complexity, for example, the Internet should have woken up by now, but that really is a different debate (see the ‘Dawn of the Intelligent Machines?’ post – coming soon).
Secondly, we’re assuming that the attributes of one machine generation can be passed on to the next. That’s not a foregone conclusion either and what we’ve managed so far is some way off the mark. Designing a new chip layout is an entirely separate process from manufacturing it, for example, and neither is currently entirely automatic. Even if both processes can be combined in future and built into an automatic ‘evolutionary’ framework, there’s a world of difference between making components and producing complete systems. True, this piece of the jigsaw does seem a lot easier to find than some of the others, but it’s clearly independent of the other pieces and probably won’t turn up ‘in the box’ at the same time. There’s no implicit connection between the level of sophistication that would make a machine ‘better’ than a human and that needed to self-replicate. Evolving (self-replicating and improving) is yet another concept.
Finally (well, possibly not finally, but that’s probably enough to be getting on with), there’s a sense in which we (the human race) will have to lose control of this process. For as long as this technological advancement (self-replication > improvement > evolution) only happens when we press a button, we can effectively play God and press the ‘off’ button as simply as the ‘on’ one. Like real evolution, of course, there won’t be a single ‘species’ of machine developing in the future any more than there is now. There will be diverse branches of the machine ‘kingdom’ splitting off the main tree and trying their (or our) luck. Good ones will survive, bad ones won’t – initially by our own hand, with us deciding what’s ‘good’. So long as we retain control of the evolutionary process, we’ll be able to terminate developments that either don’t work or that we don’t like the look of. The day may come, of course, when we do lose control but that’s another separate event. In fact, almost by definition, it has to come some time after we reach the self-evolution stage in the previous paragraph.
If!
There are plenty of other obstacles on the road to the singularity. It’s impossible to list them all but Mark Anderson and Ramez Naam, for example, make some pretty reasonable objections. Rather than deal with each of these individually, we’re trying here to focus on the essential things that would need to happen for the singularity to occur. We’ve discussed three and there are almost certainly more. Admittedly, taken together, they’re not entirely independent in a statistical sense, but each has a measurable, if small, probability of not happening. (Yes, that really is the case; there’s so much we’re speculating about here.) If each necessary event has a certain probability of occurring, admittedly very difficult to quantify, and some of those probabilities are less than one, then the probability of them all happening, their product, may be considerably less than certain. Even if they do all come true, they’re likely to arrive at different points in the future, so the development of the singularity over time (see, it really is a truly awful term) will be gradual and may give (us) time to adapt.
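As a purely illustrative calculation (the numbers are invented and, as noted above, the events aren’t fully independent, so treat this as a sketch of the arithmetic rather than an estimate): if each of the three components had, say, a probability of 0.8 of coming to pass, the chance of all three together would be

\[
0.8 \times 0.8 \times 0.8 = 0.512,
\]

roughly an even bet, even though each component looked likely on its own.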
This time may be key. We have an impressive history of adaptation as a race, ranging from seeing off ice ages to coping with the unforeseen effects of the very technology we’ve inflicted upon ourselves. There’s no law that says this has to continue, of course, and it’s certainly never been plain sailing (every technological revolution throughout history has had social, political and environmental effects), but we could interpret the signs as good if we wanted to. The fact that the ‘singularity’ is really no such thing, either in definition or timescale, may well assist in the process of adaptation. Each of the necessary developments described above might well be manageable in isolation even if they’re too much to handle together. However, there may come a time when each of these individual contributory events has taken place and we find ourselves living in a world in which the ‘singularity’ has arrived or is in the past, even if technically it never happened.
So?
On the other hand, we could easily take the opposite view that we’re heading for disaster. True enough, if the machines are better than us in a meaningful sense, if they do find themselves on some accelerated evolutionary programme and if we have lost our control mechanisms, then it probably is reasonable to ask what’s going to happen to us. What will become of the (clearly) inferior race? Again, there’s a need to separate fact from fiction here; the so-called ‘laws of robotics’ don’t help us. Who’s going to enforce such laws? The ruling class? There’s no great history of that working, at least on this particular planet. While we’re thinking on that scale, what about the other planets? How are they getting on? One proposed answer to the Fermi Paradox (why we’ve never seen any sign of extra-terrestrial life) is that all civilisations naturally wipe themselves out before they become capable of long-distance space travel. It might not exactly be ‘the singularity’ to blame … but it might be implicated in the conspiracy. The counter-argument would then be, of course: why aren’t the machines talking to us? Maybe they’re waiting for us to sign off; maybe they only want to talk to other machines. Maybe they don’t survive the catastrophe either.
A final possibility worth considering is that the distinction between us and the machines disappears. In fact, this sub-theory forms a part of the Kurzweil view. If machines evolve to become more like us and we take advantage of their technology to become more like them, then, in a very real sense, there’s no-one to take over from anyone else. But that’s also taking us into the ‘Dawn of the Intelligent Machines’ discussion.
We’re not really talking about a singularity here, but a massive uncertainty. How can we possibly know? We really don’t have a good track record of predicting the technological future if you stop to think about it. (About the only times we’ve got it right over the last few centuries are when we’ve assumed that not much would happen for a few years and it so happened nothing did. We’re hopeless at predicting actual events.) The very fact that we’re talking about a process in which our race is potentially made redundant, eclipsed mentally, means we can’t really comprehend it. It’s speculation. There’s nothing necessarily wrong with speculation; it’s often fun and sometimes it helps us randomly get somewhere near the truth – not that we’d know. As is often the case with these discussions, we’re focusing on a few threads that we understand, which may or may not be significant in the long term. It’s a pretty reasonable bet that there’s something big out there that we’ve never even thought of. What might that be? Well, we wouldn’t know, would we?