How Singular is the Singularity?

If recent headlines are anything to go by, opinion on the likelihood – and impact – of the ‘Technological Singularity’ is diverging rapidly. Is this largely because we don’t even agree on what it is?

‘Artificial Intelligence’ (AI) is certainly in the news a lot at the moment.  But so are robots; and Kurzweil’s Singularity; and machine evolution; and transhumanism.  Are these the same thing?  Are they even related?  If so, how?  What exactly should we be arguing about?  Are we worried precisely because we don’t know this stuff?

Well, perhaps to make a start, we should point out that intelligence isn’t the same thing as evolution (in any sense).  That’s obvious and accepted for ‘conventional’ life-on-earth but we seem to be getting a bit confused between the two when it comes to machines.  Developments in both may proceed in parallel and one may eventually lead to the other (although which way round is debatable) but they’re not the same thing.

Biological evolution, as our natural example, works by species continuing to adapt to their environment.  If there’s any intelligence at all in that process, it’s in the ingenuity of how the algorithm itself solves the problem – not the species in question.  Depending on what we mean by intelligence (we’ll have a go at this further on), an individual within a species may or may not possess intelligence – if the individual doesn’t, then a group of them might – but either way, it’s not required.  Evolution works through random mutations producing better specimens; neither the species nor an individual can take credit for that – it’s all down to the algorithm fitting the problem space.  Many species are supremely adapted to their environment but their individuals would fail most common definitions of intelligence.
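
(Purely as an illustration of that last point – and not something the argument depends on – here’s a toy mutation-and-selection loop in Python.  The ‘fitness’ function, population size and mutation size are arbitrary assumptions; the point is simply that the population becomes better suited to its ‘environment’ without any individual doing anything that looks like thinking.)

```python
import random

def fitness(x):
    # Arbitrary stand-in for how well an individual suits its environment:
    # closer to some target value (unknown to the individuals) is better.
    return -abs(x - 42.0)

population = [random.uniform(0.0, 100.0) for _ in range(50)]

for generation in range(200):
    # Random, undirected mutation of every individual.
    mutants = [x + random.gauss(0.0, 1.0) for x in population]
    # 'The environment' keeps the better-suited half of parents plus mutants.
    population = sorted(population + mutants, key=fitness, reverse=True)[:50]

print(max(fitness(x) for x in population))  # creeps towards 0: well adapted, never 'clever'
```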

For a slightly tangential example, we might reasonably expect ongoing engineering advances to lead to continually improving travel (or communications or healthcare or safety or comfort or education or entertainment or … ) but these improvements might arise in other ways too.  Engineering isn’t the same thing as Travel; Intelligence isn’t the same thing as Evolution.  So which of these is involved in ‘The Singularity’?

Well, the clearest – but somewhat generic and by no means universally accepted – definition of the ‘Technological Singularity’ (TS) is a point in the future where machines are able to automatically build other machines with better features than themselves.  There’s then an assumption that this process would soon accelerate so that new generations of machines would appear increasingly quickly and with increasing sophistication.  If this improvement in performance becomes widespread and/or general – i.e. it goes beyond being simply better suited for a particular, narrow role – then it becomes a bit hard to see where it might all end.  It’s debatable, in a pure scientific sense, whether this makes for a genuine ‘singularity’ (compare with black holes and y = 1/x at x=0) but it would clearly be a period of considerable uncertainty.
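
(A quick back-of-the-envelope sketch of why this is usually pictured as accelerating.  If each generation’s improvement is fixed – set by external, human effort – capability grows steadily; if the rate of improvement itself grows with capability, the curve can run away in finite time, which is the loose analogy with y = 1/x at x = 0.  The rates and equations below are invented purely for illustration.)

```python
# Toy comparison: externally-driven improvement vs. self-amplifying improvement.
dt = 0.01
c_steady, c_feedback = 1.0, 1.0

for step in range(1, 151):
    c_steady += 0.5 * dt                     # fixed improvement rate (human-led design)
    c_feedback += (c_feedback ** 2) * dt     # dC/dt = C^2: improvement feeds on capability
    if step % 25 == 0 or c_feedback > 1e6:
        print(f"t={step * dt:.2f}  steady={c_steady:.2f}  feedback={c_feedback:.3g}")
    if c_feedback > 1e6:
        print("feedback model has effectively blown up (cf. y = 1/x near x = 0)")
        break
```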

And it’s not a particularly mad idea really.  We already use computers to help design the next generation of machines, including themselves; in fact, many complex optimisation problems in layout, circuitry, etc. are entirely beyond human solution today.  We also have machines producing machines – or components of machines, from simple 3D printers to complex production lines; and, once again, the efficiency and/or accuracy of the process is way beyond what a human could manage.  In principle, all we have to do is merge together automated design and automated production and we have replication.  Repeated replication with improvements from generation to generation is evolution.  No-one’s explicitly mentioned intelligence.

OK, there are a couple of reality checks needed here before we go much further.  Firstly, the technology still has a long way to go to get to this point.  The use of software and hardware in design and production is still pretty piecemeal compared to what would be necessary for automatic replication; there’s a lot of joining up to do yet.  Computers largely assist in the process today, rather than own it; something altogether more complete is needed for machines ‘giving birth’ to new ones.  On the other hand, common suggestions for the arrival of the TS (although almost entirely for the wrong reasons) centre around 2045.  This is quite conceivable: three decades is a huge time in technological advancement – almost anything’s possible.

Secondly, we may not have explicitly mentioned intelligence in the road to automatic replication but some of this adaptation might sound like it?  Autonomously extending optimisation algorithms to solve new problem classes, for example, certainly fits most concepts of ‘intelligent software’.  This is more difficult and it depends on definitions (still coming) but we come back once more to cause not being effect.  In a strict sense, replication (and therefore evolution) isn’t dependent on intelligence; after all, it isn’t with many conventional life forms.  It’s possible to imagine, say, an industrial manufacturing robot, which was simply programmed to produce a larger version of itself – mechanically difficult today, certainly, but not intelligent.  Anyway, the thing that might worry us most about a heavily-armed human or robot wouldn’t necessarily be its intelligence; in fact, it might be its lack of it.  (More on this later.)
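
(To make the ‘replication without intelligence’ point concrete, here’s a deliberately dumb sketch – the class, fields and scaling rule are all invented for illustration.  Each ‘generation’ is just a bigger copy of its parent’s specification, produced by a fixed rule, with no learning, no search and no awareness anywhere in sight.)

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RobotSpec:
    arm_length_m: float
    payload_kg: float
    firmware: str               # fixed program, copied verbatim - never rewritten

def build_successor(spec: RobotSpec, scale: float = 1.2) -> RobotSpec:
    # Blindly emit a scaled-up copy of the parent's own description.
    # No evaluation anywhere of whether 'bigger' is actually better.
    return replace(spec,
                   arm_length_m=spec.arm_length_m * scale,
                   payload_kg=spec.payload_kg * scale)

lineage = [RobotSpec(arm_length_m=1.0, payload_kg=50.0, firmware="v1 (unchanged)")]
for _ in range(5):
    lineage.append(build_successor(lineage[-1]))

for i, spec in enumerate(lineage):
    print(f"generation {i}: arm {spec.arm_length_m:.2f} m, payload {spec.payload_kg:.1f} kg")
```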

So intelligence isn’t directly required for the TS; what’s required is the establishment of an evolutionary process.  In particular, when people say things like “The TS will occur when we build machines with the neural complexity of the human brain”, they’ve missed the point spectacularly – both conceptually and, as it happens, even numerically (still to come).  However, it can’t be entirely denied that some form of machine ‘intelligence’ will probably have a hand in all this.  At the very least, developments in AI are likely to continue alongside the filling-in-the-gaps necessary for machine replication so we’re going to have to get to grips with what it means somehow …

And right here is where it gets very difficult. Because there is simply no standard, accepted, agreed definition of ‘intelligence’, not even for conventional life; in fact the word is clearly used to mean different things in different contexts …

We won’t even begin to attempt to describe all the different, and multi-dimensional, definitions of intelligence here.  Even on a single axis, they sit somewhere on a spectrum from the crude intelligent=clever extreme to the (in fact, equally crude but with a deceptive air of sophistication) intelligent=conscious.  It will even upset many to use ‘self-aware’ and ‘conscious’ as synonyms, but we will here for simplicity.  No single definition works.  By some, conscious life isn’t intelligent if it isn’t ‘clever enough’; by others, an automaton might be if it solves fixed problems ‘fast enough’.

And of course, it gets worse when we try to apply this to computers and machines.  By some definitions, a pocket calculator is intelligent because it processes data quickly; by others, a robot, which was superior to a human in every single mental and physical way, wouldn’t be if it was conventionally programmed and wasn’t aware of its own existence.  (Is an AI robot more or less intelligent than a dog, or a worm?)  We sometimes try to link AI to some level of adaptability – a machine extending its ability beyond its initial design or configuration to new areas – but this proves very difficult to tie down in practice.  (At what point is a computer really writing its own code, for example?)  Furthermore, there are two philosophically different types of machine intelligence to consider: that which is (as it is now) the result of good human design (artificial intelligence) and that which arises from the machine somehow ‘waking up’ and becoming self-aware (real intelligence) …

This fundamental difference in types (not definitions) of intelligence is possibly less problematic than interesting.  We won’t digress in this post to consider the implications of ‘real’ machine intelligence (such as the ethics of ‘robot rights’, for example) or the different models of intelligence that might allow ‘real’ intelligence to be created (neural complexity, panpsychism, the biological dimension, spirituality, etc.).  That’s an argument in itself.  But for now, anything that’s exciting or scary about the TS applies much the same whether we’re dealing with something that really is intelligent or something that just appears to be.  And remember, the TS is primarily a question of evolution: intelligence is a related but secondary issue.

In fact, this might be a good point to dispel another myth in relation to the TS.  It has nothing whatsoever to do with the circuit complexity of any given processor or any collection of them.  The point at which a computer’s neural mass (presumably measured in number of logic gates) reaches that of the human brain is often portrayed as some significant point in AI development – sometimes even as the TS itself – but this is nonsense.  The almost endless reasons why this doesn’t make sense include:

  • We’ve already reached this point.  If you multiply the neural complexity of the Internet itself by that of the machines that comprise and connect to it (i.e. the Internet’s graph structure and the devices’ own circuitry – most calculations overlook the latter) then we’ve been there for some time.  Yes, the Internet has become steadily more powerful but there’s been no catastrophe.  If consciousness is merely the product of neural complexity, for example, the Internet should have woken up long ago.
  • The structure and speed of a computer device (including any network of them) is utterly unlike the brain.  Each brain neuron is directly connected to many, many thousands of others.  Signals, however, move fairly slowly – just a few metres per second.  By comparison, computer/network nodes (gates, switches, etc.) generally connect to just a handful of neighbours but signals travel at an appreciable fraction of the speed of light.  In graph terms, the brain is dense but slow while a computer is sparse but fast.  (A rough back-of-the-envelope comparison follows this list.)
  • It’s often pointed out that we don’t know what algorithms the brain runs so it would be difficult to replicate them.  The truth, in fact, is that we don’t really know if it runs algorithms at all – in any sense that we would recognise.  The conventional notion of software running on hardware may have no equivalent in the brain.  Its structure and operation may be inextricably linked in a way that we can’t (yet) recreate in a machine.  (There may be a biological foundation, for example.)  Its hardware and software may be inseparable.  We may eventually understand how this works and seek to design machines on this basis but we’re not even close now.

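(Here’s that back-of-the-envelope comparison.  The figures are widely quoted orders of magnitude rather than measurements, and the chip numbers are merely representative of a large modern processor – the point is only the dense-but-slow versus sparse-but-fast contrast.)

```python
# Rough orders of magnitude only - illustrative, not measured.
brain_neurons      = 8.6e10   # ~86 billion neurons
brain_fanout       = 1e4      # ~10,000 synapses per neuron
brain_signal_speed = 10.0     # metres per second, give or take an order of magnitude

chip_transistors   = 1e10     # ~10 billion transistors on a large modern chip
chip_fanout        = 4        # a logic gate typically drives only a few others
chip_signal_speed  = 2e8      # metres per second - an appreciable fraction of light speed

print(f"brain connections : ~{brain_neurons * brain_fanout:.1e}  (dense)")
print(f"chip connections  : ~{chip_transistors * chip_fanout:.1e}  (sparse)")
print(f"signal speed ratio: ~{chip_signal_speed / brain_signal_speed:.0e}x faster in silicon")
```
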
So a computer is very unlike the brain, and the TS isn’t something that can be measured or counted in circuitry.  It’s what happens that’s important.

The real question we have to somehow get to grips with is how we might expect these highly-evolved machines to behave.  This seems to be the focus of most of the recent scare stories.  Again, intelligence may be something we have to consider here but it isn’t the driver.  A new race of machines (in fact, probably many different species of them), superior to humans in every physical and mental way, could clearly be considered a threat.  But it’s not obvious, in this respect, that an ‘intelligent’ machine would be any more worrying than one that wasn’t – that ‘strong, fast and clever’ is more dangerous than ‘strong, fast and thick’ – because, for example, we know the human (and animal) world often doesn’t work that way.  And all of this is made more difficult by having never really worked out what these terms mean in the first place.

An obvious key point here is whether we’re going to remain in control of what these machines do.  The implied concern behind a lot of the AI-related headlines is that we won’t.  If, over the long-term and beyond the TS (and the notion of ‘beyond’ may be why ‘singularity’ isn’t such a great term), machines only ever do what we tell them to, then humans remain responsible for whatever use and abuse may occur.  The machines are effectively extensions of ourselves (tools) so, even accepting that legislation often struggles to keep pace with developments in technology, we might hope that ‘conventional’ human moral, ethical and legal codes can be eventually applied (not to the actual machines, of course – that wouldn’t make sense – but to the way we use them).  Whether these human social codes, in themselves, are fit for purpose is way out of the scope of this post.

A much more serious situation arises if, as is generally expected or feared, machines evolve to the point of (at least appearing to be) thinking for themselves, either by the autonomous extension of ‘artificial’ intelligence to new domains or the acquisition of ‘real’ intelligence.  At this point, we have to genuinely consider the rules or framework by which such a machine might ‘think’ and therefore ‘behave’ and, if what’s gone before was difficult, this takes us into entirely new, deeper and murkier uncharted waters …

Frankly, what axioms do we have for dealing with this?  Why do we even think the way we do?  OK, we have many models, ranging from hard neuroscience, through different psychological theories, to concepts of the soul – and they’re intersected by various arguments for and against pre-determinism and free will.  C.S. Lewis, for example, describes the Moral Law binding humanity, and there are more scientific versions available for the spiritually faint-hearted, but can any of this be a foundation for predicting the way machines will think and behave?

On the whole, humans try their best to apply logic to a moral foundation, albeit one that’s difficult to define.  We’re not particularly good at this in practice.  First of all, few of us really know what this starting point is and we have even less idea where it comes from.  Second, we’re not expert logicians in following an optimal line.  Third, real life usually gets in the way of the logic and a form of ‘needs must’ thinking overrides clinical reasoning.  Fourth, we often knowingly deviate from what’s clearly the right course of action because we’re all – to a greater or lesser extent – flawed, which might for some even include not wanting to try in the first place.  (Obviously there’s a sense of fundamental human ‘goodness’ in this model, which isn’t universally accepted.)  However, in principle at least, we have a sense of direction through all of this.  We either make some attempt to stay on course or we don’t.

So the question is: can or will this sense of ‘moral direction’ be instilled in – and remain with – artificially-programmed intelligent machines, and/or will it be evident in machines achieving their own consciousness?  In particular, what would be their initial moral code?  This seems like a very important question because we might reasonably assume that the machines’ logic in putting the (moral) code into practice would be impeccable and not prone to diversion as it tends to be with us.  But does the question even make sense?  (Let’s be utterly clear about this – Asimov’s Laws of Robotics, in this context, are useless: simple fiction and already frequently violated in the real world.)  What might highly-evolved, super-powerful (possibly intelligent) machines regard as their ‘purpose’, their raison d’être?  Would they serve, tolerate, use or replace humanity?

And we just don’t know.  We can define the question in as many ways as we like and analyse it every which way we can.  But we just can’t say.  We can easily pluck unsubstantiated opinions out of the air and defend them with as much energy as we wish but there’s really nothing to go on.  Just as we can only speculate on what would drive an alien race from a distant planet, it’s anyone’s guess as to what might drive a new technological species that either we’ve created or has evolved by itself.  (This is all assuming we’ve surrendered control of the process by then.)  In this respect at least, some amount of concern in relation to the TS seems justified – even if only because we can’t be certain.  It’s taken us a long time to get to this position of uncertainty but concern relating to uncertainty isn’t irrational.

Looking to tie this up somehow, if it’s difficult to say whether we can ultimately coexist with intelligent robots then is transhumanism our insurance policy?  As we put more human features into machines, will we take on more of theirs?  Is the future not competition between ‘natural’ and ‘technological’ species but their merging?  Cyborgs?  Some futurologists see transhumanism as a fairly inevitable destiny but does it really help?

Well, maybe.  But it’s a maybe with the same problems as the uncertainties of the TS itself.  Because it still depends on how the ‘pure’ machines will see the world.  If ordinary humans are tolerated then ‘enhanced’ humans probably will be too.  If not, then this level of improvement still might not be enough if machine logic takes a ruthless line.  Again, the standard futurologist’s view of transhumanism implies we’ll still have some control but it remains to be seen if that’s the case.

And finally, possibly even optimistically, a word of caution … If this potential elimination of humanity by a robot master race (repeated across equivalent worlds) seems like an answer to the Fermi Paradox, we might have to think again.  (It would be another version of the ‘civilisations naturally create their own destruction before they can travel far enough’ theory.)  Even if the ‘developer race’ was lost in each and every case across the universe, why aren’t the machines talking to each other?  (Or are they?)

And there we are … Three thousand words and we don’t know.  Many people have written a lot more, and a lot less, and they claim to know but they don’t.  There are just too many unknowns and we’ll have to wait and see.  Should we be scared by the TS or not?  Well, in the sense that it’s uncertain and unpredictable, yes.  But lots of things are uncertain and unpredictable.

So, is the TS really a ‘singularity’?  In a strictly Gödelian sense, it might be.  Probably, we’ll know when we get there …
