Can we continue to make computing devices smaller and/or faster? Can we do this without limit? If so, how? What’s the next generation?
Microchip designers use a wonderful armoury of terminology, most of it (deliberately, one suspects) impenetrable to outsiders. One of the least alarming terms, however – on the surface at least – and certainly one of the most charming, is the phrase “To the finish”. It’s an intriguing term and behind it lies the spirit of an admirable intention. The only problem is that no one really seems to know exactly what it means.
“To the finish”, in its broadest sense, is some mythological-technological future in which logic circuits have shrunk to such an extent that individual components are measured on the atomic scale. On one level, although in nominally different research fields, this is comparable to the “intelligent dust” predictions of the most enthusiastic Internet of Things proponents.
One of the reasons, though hardly the only one, that this concept is so difficult to tie down is that things have got pretty murky recently at the forefront of chip design, and for the computer scientists, engineers and physicists working on the next generation of transistors – the essential building blocks of any chip. Although Moore’s Law seems to have kept going now for several decades, and may do for a while longer yet, no one is particularly sure exactly what’s being measured any more.
The problem is twofold. In the simplest sense, there’s been a breakdown recently in the relationship between component density on a chip and the performance of that chip. For various reasons, doubling the density of transistors on a regular basis, in accordance with Moore’s Law, hasn’t always yielded a direct return on performance. On the other hand, by tweaking other aspects of circuit design, chip designers have been able to provide improved performance in ways other than by simply increasing the number of logic gates in a unit area. Also, the move from conventional two-dimensional to three-dimensional chip design has made these traditional measurements somewhat meaningless.
When is a nanometre not a nanometre?
On a subtler level, the roles of engineering and marketing in labelling technological advance have also become blurred. The next generation of chips is likely to be described as “fourteen nanometre” or “sixteen nanometre” – “14 nm” or “16 nm” – simply because this continues the progression that the manufacturers, and their publicity teams, have used for several years now. In the good old “32 nm” (say) days, this meant something. Unfortunately, “14 nm” or “16 nm” doesn’t really relate to any particular measurement that it’s possible to find on a chip nowadays. It used to be roughly how closely connecting wires could be packed around components, but it simply isn’t that any more. Engineers’ component densities, connection grids and performance figures, and the sales departments that describe them, have got their wires crossed – certainly in a metaphorical sense; possibly a physical one too.
However, leaving the terminology aside, there are some probably more serious reasons why this rate of change, whether measured in density or performance, will be hard to maintain. Existing manufacturing methods are at about their limit with the current generation of chips and may well struggle to take things much further. Moreover, there are good – often simple – physical reasons why components and connections can’t get closer, smaller and thinner and still conduct, insulate, and so on. Also, most of the ‘one-off’ shortcuts (such as shaving off the ends of logic gates and shortening connections) have already been taken and will not extend easily to the next generation – certainly not “to the finish”.
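To get a feel for how little headroom is left, here is a deliberately rough back-of-the-envelope calculation (both figures below are ballpark values, not real process or lattice data): starting from a nominal 14 nm feature and halving repeatedly, the spacing between neighbouring silicon atoms – around a fifth of a nanometre – is reached after only six or seven halvings.

```python
# Rough, illustrative arithmetic only: how many halvings separate a ~14 nm
# feature from the ~0.2 nm spacing between neighbouring silicon atoms?
# Both figures are ballpark values, not precise process or lattice data.

feature_nm = 14.0        # nominal current feature size (approximate)
atomic_spacing_nm = 0.2  # rough silicon interatomic distance

halvings = 0
while feature_nm > atomic_spacing_nm:
    feature_nm /= 2
    halvings += 1

print(f"~{halvings} halvings before features reach atomic spacing")  # ~7
```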
Atomic Computing?
Quite possibly the deal-breaker, though, is simply what happens to the materials themselves when we approach the atomic level. Conventional chips, and the logic components that make them up, rely inescapably on the physical properties of their materials. Currents flow, or don’t flow; charges are held, or not held; all in well-understood, deterministic ways. Logic circuits always behave in the correct manner because that’s how they’ve been built and we have faith in their components to do this. This, in turn, is because we understand the behaviour of the materials of which they’re made. Everything is nicely predictable, as it has to be.
Unfortunately, this determinism falls apart when we get down to counting individual atoms. Materials ‘down there’ cannot be modelled in this nice, simple way. If components become so small that individual atoms can be counted, then a random change of a single atom, or a small number of atoms, means you’re working with a different material, one which probably won’t behave in the way that the ideal material would have. Logic circuits, in turn, become unpredictable – they essentially fail – and the worst thing is that they will do so randomly and inconsistently over time. How can we build reliable systems from unreliable components?
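One long-established engineering answer to that question – quite separate from the brain analogy that follows – is redundancy: run the same unreliable logic in several copies and take a majority vote. The sketch below, with an entirely made-up per-gate failure probability, shows how voting over three copies can make the overall answer far more dependable than any single copy.

```python
import random

# Minimal sketch of triple modular redundancy (TMR) with made-up numbers:
# each 'gate' gives the wrong answer with probability P_FAIL; three copies vote.

P_FAIL = 0.05   # hypothetical per-gate failure probability

def unreliable_gate(correct_value):
    """Return the correct bit most of the time, a flipped bit otherwise."""
    return correct_value if random.random() > P_FAIL else 1 - correct_value

def voted_gate(correct_value):
    """Majority vote over three independent unreliable copies."""
    votes = sum(unreliable_gate(correct_value) for _ in range(3))
    return 1 if votes >= 2 else 0

trials = 100_000
single_errors = sum(unreliable_gate(1) != 1 for _ in range(trials))
voted_errors = sum(voted_gate(1) != 1 for _ in range(trials))
print(f"single gate error rate: {single_errors / trials:.4f}")   # ~0.05
print(f"voted  gate error rate: {voted_errors / trials:.4f}")    # ~0.007
```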
Brain Computers?
Well, in fact, we do have some experience in that. At least, we’re very familiar with using a particular instance of such a system. Although rather different in a computational sense, our brains are excellent examples of complex systems built from unreliable components. The individual neurons and synapses, which make up our brain, are each subject to random failure and indeed, over a lifetime, a percentage of the brain fails permanently. However, the complete system carries on working without much in the way of noticeable degradation. We must be managing this chaotic process somehow – and pretty effectively. Is this the solution then? Should our brains be the blueprint for future generations of computer systems?
Unfortunately, it’s never going to be as simple as that. Firstly, we really don’t understand how the brain works on any meaningful level, so using it as a model for circuit design is ludicrously ambitious at present. (Is the brain running some collection of algorithms? If so, what?) Secondly, what we do know about the brain tells us that it’s a very different form of ‘computer’ entirely. Whereas the key components (logic gates) of a conventional digital computer are fairly sparsely, and locally, connected and operate at ultra-high speeds, neurons within the brain have a level of interconnectivity several orders of magnitude higher than that of any existing computer but, in turn, communicate at much lower speeds. They are very different things in a fundamental sense, and any future attempt to build computers on the brain model has some major obstacles to overcome. It’s being tried though …
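To put very rough numbers on that comparison (every figure below is an order-of-magnitude estimate only): a neuron is typically connected to thousands of others but fires at most a few hundred times per second, whereas a logic gate typically drives only a handful of other gates but switches billions of times per second.

```python
# Order-of-magnitude comparison only; every figure here is a rough estimate.

synapses_per_neuron = 1e4       # ~thousands of connections per neuron
neuron_rate_hz = 1e2            # firing rates of at most a few hundred Hz

gate_fanout = 5                 # a logic gate drives only a few others
gate_rate_hz = 3e9              # but switches billions of times per second

print(f"connectivity ratio (brain/gate): ~{synapses_per_neuron / gate_fanout:.0e}")
print(f"speed ratio (gate/neuron):       ~{gate_rate_hz / neuron_rate_hz:.0e}")
# Roughly: thousands of times more connected, tens of millions of times slower.
```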
If we’re not going to take it “to the finish” with an extension of existing chip design, or a complete remodelling based on brain structure, what are the alternatives? Are there other technologies that can step in to save the day … or tomorrow? What might take the place of conventional – to use a simple term, ‘electronic’ – computing? Three possibilities that could be worth discussing would be quantum computing, bio-computing and optical computing. However, each of these possible solutions comes with restrictions and limitations and, for the time being … problems.
Optical Computing?
Probably the easiest place to start is with optical computing. This is a relatively simple concept: replace electronic devices and signals with their optical counterparts. That sounds like a good start. Light is ‘free’ and considered ‘more efficient’ than electricity, so what’s the problem? Let’s ‘do’ computing with photons instead of electrons. Of course, it’s never as simple as that. Optical signals, particularly over short distances, actually take more power to transmit than electronic ones, and that’s just within a purely optical system; losses in converting between optical and electronic signals are also very high. It’s also a lot harder to store data optically, either temporarily or permanently. There is some dispute among active researchers as to whether optical computing has any viable future but it’s very clear that, if it has, it’s not in the short term.
Bio-Computing?
How about bio-computing then? We won’t attempt here to make any distinction between ‘biochemical’ computers, ‘biomechanical’ computers and ‘bioelectronic’ computers. The essential principle for all of them is a computational system based on biological molecules, typically DNA and/or protein. Again, it works well enough in principle; these systems do indeed store and process data in a very real sense within our own bodies (for example). We’d just need to harness it somehow. A huge advantage of this type of ‘natural technology’ would be the ability to ‘manufacture’ these systems automatically using the self-replicating features of biological systems. (Note: there’s no attempt here to discuss any ethical dimension to any of this – that’s going to have to be saved for elsewhere.) The downside is our current inability to manipulate the components and circuits, and consequently the data that they store, in any reliable and meaningful manner – or on any practical scale. Once again, the jury is still out but we clearly can’t expect short-term solutions.
Quantum Computing?
So that leaves quantum computing – probably the most hyped of them all. Is this the panacea that we’ve been looking for? Well, it might be, but certainly not for a while yet. The essential idea appears to be sound: because quantum bits (qubits) can exist in, and be processed across, many simultaneous states – a superposition – by appropriate quantum logic circuits, huge numbers of conventional calculations, or program steps, can be reduced to a much smaller number of ‘quantum steps’. This approach would not explicitly seek to reduce the size of computer circuitry; rather it would look to use this new circuitry in a dramatically more effective way. Small, but viable, quantum computers have now been built and, on the surface at least, the principle seems to be established, although – and there’s possibly an analogy with the brain model here – some quantum algorithms and processes are actually probabilistic and need to be repeated for their results to be reliable.
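For a feel of what ‘simultaneous states’ and ‘probabilistic results’ mean in practice, here is a minimal, self-contained sketch (not any real quantum hardware or library) of a single simulated qubit: a Hadamard gate puts it into an equal superposition, and repeated measurement then gives 0 or 1 at random – which is exactly why such computations often have to be run many times.

```python
import random

# Minimal state-vector sketch of a single qubit (no external libraries).
# A qubit's state is two amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# measurement gives 0 with probability |a|^2 and 1 with probability |b|^2.

def hadamard(state):
    """Apply the Hadamard gate, putting a basis state into equal superposition."""
    a, b = state
    s = 2 ** -0.5
    return (s * (a + b), s * (a - b))

def measure(state):
    """Probabilistic measurement: the superposition collapses to 0 or 1."""
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

# Start in |0>, apply Hadamard, measure many times: roughly half 0s, half 1s.
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(hadamard((1.0, 0.0)))] += 1
print(counts)   # e.g. {0: 5012, 1: 4988}
```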
However, there are both an obvious and a less obvious objection to quantum computing. Firstly, the technology is in its earliest infancy. Over the last few years, quantum computers have been built which have successfully factorised small numbers such as 15, 21 and, more recently, 143. True enough, they use very efficient algorithms to achieve this, and factorisation is thought to be a hard problem, but the numbers themselves are hardly breathtaking. Can quantum computers be built on any significant scale? Opinion is divided on this too. A less obvious problem may be the interface between the ‘conventional’ and the ‘quantum’ world. Do we really understand the complexity involved in initialising a quantum calculation – in setting up a real-world problem in the quantum world – and reading back the results? Problems of decoherence are well recognised and understood but relatively unsolved. There’s no point in running a linear or polynomial algorithm on a quantum computer if it takes an exponential amount of time to configure it in the first place. Experience tells us that complexity can often be disguised.
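That last point is easy to illustrate with some deliberately made-up cost functions: if a polynomial-time ‘quantum run’ has to be preceded by an exponential-time set-up and read-out stage, the exponential stage dominates and the apparent speedup evaporates.

```python
# Purely illustrative cost model; none of these functions describes a real
# machine or algorithm. The point is only how the terms grow with problem size.

def classical_cost(n):
    return 2 ** n              # a hypothetical exponential classical method

def quantum_run_cost(n):
    return n ** 3              # a hypothetical polynomial quantum algorithm

def setup_and_readout_cost(n):
    return 2 ** n              # a hypothetical exponential configuration step

for n in (10, 20, 30, 40):
    total = setup_and_readout_cost(n) + quantum_run_cost(n)
    print(f"n={n:2d}  classical: {classical_cost(n):,}  "
          f"quantum incl. set-up: {total:,}")
# The set-up term swamps the fast quantum run, so nothing is gained overall.
```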
Can we get there?
There are undeniable problems with all the proposed technologies. So the big question is: will it happen? OK, we know we’re not going to get any of this immediately but will it ever arrive? Can we take it “to the finish”? Can we build computers from atoms? Or sidestep the problem with lateral thinking? Can we deliver intelligent dust?
Well, it might happen; that’s probably all we can say. We’ve identified at least four possibilities for taking it “to the finish” (in the broadest of mythological senses and focusing on the ends rather than the means): optical computing, biological computing, quantum computing or even conventional electronic computing but with a human brain model (and there are others). Surely, if there is any reasonable probability – even a small one – of each of these technologies ‘coming off’ individually, then the probabilistic product suggests that at least one of them will?
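For what it’s worth, that ‘probabilistic product’ argument is easy to make concrete with some invented numbers: if four independent approaches each had, say, a one-in-five chance of eventually coming off, the chance that at least one does is one minus the product of the individual failure chances – nearly 60% – though of course the real probabilities are unknown and the approaches are hardly independent.

```python
# Invented figures only: the real probabilities are unknown and the
# approaches are not truly independent of one another.

p_success = 0.2                      # assumed chance each approach comes off
n_approaches = 4                     # optical, biological, quantum, brain-model

p_none = (1 - p_success) ** n_approaches
p_at_least_one = 1 - p_none
print(f"P(at least one succeeds) = {p_at_least_one:.2f}")   # ~0.59
```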
On the other hand, it might not. Common sense, of course, tells us that we can’t keep doing this forever. We can’t keep doubling density and halving size. Is there a limit? Is it atomic? Or is it larger or smaller? As Albert Allen Bartlett once noted in an entirely different context: “The greatest shortcoming of the human race is our inability to understand the exponential function.” “To the finish”? What finish?
So what do you think?