The Problem with ‘Futurology’

What’s your favourite terrible technological prediction?  There are plenty to choose from, that’s for sure.  The following is just a brief list of the most infamous computing-based futurology howlers (oldest to newest):

  1. “I think there is a world market for maybe five computers”, Thomas Watson: IBM chairman (1943) (* or was it someone else?)
  2. “Computers in the future may weigh no more than 1.5 tons”, Popular Mechanics (1949)
  3. “I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year”, Prentice Hall: Business Books Editor (1957)
  4. “There is no reason anyone would want a computer in their home”, Ken Olsen: DEC founder (1977)
  5. “640K ought to be enough for anybody”, Bill Gates (1981) (* or did he really say it?)
  6. “We will never make a 32-bit operating system”, Bill Gates (1989)
  7. “Spam will be a thing of the past in two years’ time”, Bill Gates (2004)
  8. “Next Christmas the iPod will be dead, finished, gone, kaput”, Alan Sugar (2005)

(* In truth, quotes 1 and 5 are apocryphal; there’s no real evidence that they were ever said, at least by the people they’re attributed to.  But, as Mark Twain is supposed to have said, “Never let the facts get in the way of a good story.”  Also, these are, of course, IT/Computing/Computer Science specific gaffes.  If you’re looking for poor crystal ball gazing across the wider technological sphere, try here and here.  For an interesting retrospective, see this.)


So the big question is what comes next?  What are we ready to get wrong now?  In fact, the chances are it’s already been said, so what’s going to amuse us in 2033 about what we think in 2013?

Actually, of course, this is a little unfair.  Along with the hopelessly inaccurate predictions over the past few decades, there have been many that have proved justified to some extent, on one level or another.  Rather than ridicule the well-intentioned folk in the list above, let’s play a slightly different game.

Track down any sci-fi book or TV programme from the 60s, 70s or 80s.  (Star Trek is probably an obvious one.)  OK, on the surface, there’s a fundamental difference in purpose between sci-fi, whose intention is to entertain, and ‘futurology’, which sees itself as a semi-formal academic process.  However, any sci-fi story set in the future must be making some attempt at predicting it, so how did they do?

Well, results vary considerably, of course, but in each case the content can be broadly categorised into three classes:

  1. Concepts and technologies that the book or programme predicted that broadly came to pass
  2. Concepts and technologies that the book or programme predicted that on the whole didn’t
  3. Concepts and technologies that the book or programme didn’t see coming at all

So for category 1, we might offer up mobile communications (up to a point), sophisticated weaponry (sadly), improved medicine and healthcare (nicer) and ubiquitous computing.  (Although if you want to see the last one given the full sci-fi and social treatment, read Michael Moorcock’s Dancers at the End of Time.)  On the other hand, category 2 might include jet packs, (Star Trek) matter transporters and flying cars.  (Is it always going to be the really fun stuff that we don’t get?)

Category 3 is possibly the most interesting and easy to overlook.  The Internet of Things (IoT), for example, social and technological integration and just the whole concept of ‘big connectivity’ and ‘big data’ have rarely been given a decent treatment in sci-fi; although they have, of course, in more forward-looking social commentaries such as (rather obviously) George Orwell’s 1984.  (There’s actually a fair bit that Star Trek didn’t see: “I can’t reach Spock on the communicator, Bones.” “Don’t worry, Jim; just leave a message on his Facebook page!”)

Of course (and before anyone argues too much), these collections don’t have tight boundaries and what we’ve seen in most areas is partial fulfilment and improvement rather than complete achievement.  The HCI of the Star Trek computer, for example, hasn’t quite arrived but we’ve made progress.  The Holodeck seems a fair way off but inroads have been made in some areas.  But the notion of the three categories is still a useful one because, when you consider all three together, it highlights just how bad we’ve tended to be at this futurology malarkey in the past.  Why might this be?

Once again, it’s easy to ridicule past attempts from a position of knowledge and we have to remember that the main intention of sci-fi was to give us a good time, and it certainly managed that.  A much more interesting question is … Are the ‘real’ futurologists doing any better?  Will they?  Can they?  This isn’t so straightforward.

In a simple sense, futurology is a branch of philosophy, and philosophers, and therefore futurologists, whatever high opinions they may have of themselves, are products of their age (A History of Western Philosophy: Bertrand Russell).  It’s very difficult to think freely about the future without breaking free of the conventions of the present, and this applies here on two levels: the technological and the social.  It’s relatively easy to discuss, say, The Singularity in technical, maybe even environmental, terms but the social fallout is frankly anyone’s guess.

But there’s a deeper problem.  Social, philosophical, even technological, thinking progresses in cycles, or at least in complex patterns. The notion that we, at this point in time, have an unassailably clear view of where we are and where we’re going is rubbished by the simple realisation that every leading civilisation has believed exactly the same for millennia.  In a few short years’ time, people will talk about our misguided thinking with the same condescending pity with which we look back now.  It’s simply arrogant to think otherwise, to over-inflate our own significance.  It’s maybe natural to think of ourselves as being near the end of the scientific journey of discovery but many have thought that before and it’s simply wrong.

Possibly the best way to describe the philosophical phase we’re currently in is to consider it a time of scientific and technological confidence.  That might seem like an odd claim considering the immense uncertainty we have about the future right now but, in fact, the faith we currently have in science and technology – or at least our understanding of them – parallels the Victorian view of the world; that there were almost no more problems left to solve.  Victorian technologists believed that a bit more engineering – a few more nuts and bolts – would deliver a technological utopia; the prevailing wisdom now is that a bit more science – a few more equations – will unlock the mysteries of the universe.  These are just different manifestations of scientific and technological fundamentalism.  And with fundamentalism comes intolerance.  There’s a lot that we do understand about the universe and a lot that we don’t.  There could be anything behind the bits we don’t understand and it doesn’t have to be the same as for the bits we do.  OK, let’s relate this back to computing technology and consider a couple of examples …

If The Singularity happens at all, and that itself is up for debate, the effects may be anything on a scale of extremes from:

  1. Elimination of the human race and/or destruction of the planet … to …
  2. Nothing,

with various ‘in between’ options such as the blurring, or possibly the loss, of the boundary between machine and human.  Getting to grips with these concepts requires something better than a bar-room awareness of:

  • the defining features of ‘humanness’
  • what intelligence is
  • what consciousness is
  • social and ethical issues in technology
  • whether there’s an essential difference in any of these between humans and robots

to name but some.  Few people possess the knowledge necessary in all these areas to offer reliable analysis – we still don’t really understand how our own brains work, for example – but that won’t stop anyone who’s ever heard of Kurzweil’s theories having an opinion.  And, of course, possibly because it has access to the highest platform, it’s the predominantly technological view that prevails.  In no logical sense does that make it necessarily right.

Similarly, there’s clearly some technical fun ahead with the Internet of Things (IoT) and the wider concepts of ‘big data’ and ‘big connectivity’ but socially, the ground is shifting under our feet.  Already, the younger generation don’t find 1984 and Big Brother as scary as previous generations because they were born into a world where that had already started to happen.

(“Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”  Douglas Adams)

With the IoT, etc. this trend is set to continue.  But the IoT itself is simple: tags, sensors and networks, hardware and software.  What’s complicated are the human, social effects of a new generation forced to be part of a global database, with nowhere to hide from each other.  What will happen?  How will we react and change, individually and collectively?  Will society become more open and honest or more ruthlessly protective and secretive?  Will we unite or fragment?  Why are we letting technologists – people with a proven disposition towards social inadequacy – answer those questions for us?

Exciting times lie ahead without doubt.  (Excitement can be a good or bad thing, of course.)  Whatever form The Singularity takes, the impact will be immense.  The Internet of Things will make the world a different place.  Both may change society beyond recognition.  Along with the opportunities come huge challenges and credible solutions to these are unlikely to be solely technological.  Nor will narrow-minded fundamentalism be of much use, whether it be religious, political, moral, social, philosophical or scientific.  When has that ever worked in the past?  (In fact, the cracks are already beginning to appear among the intolerant – they always will – and the infighting has already started – it always does.)  Again, history tells us that there’s not much correlation between those with the loudest voices and those with the clearest heads.  It’s always a time to worry when views straying from the complacent majority are shouted down, but sadly we usually worry too late.

So what does the future hold?  Clearly, it’s not easy to say with much certainty – and that was never really the point of this piece.  Actually, that’s not just because we’re scientifically blinkered but because our perceptions and starting points change too and that can be really hard to keep pace with.  Frankly, Star Trek looks a little boring now, doesn’t it?  With some of the cool stuff we have already, surely we can do better than that?  What, no always-on big social connectivity?  What sort of a world is that?  And those clothes!

About Vic Grout

Futurist/Futurologist. Socialist. Vegan. Doomsayer. Professor of Computing Futures. Author of 'CONSCIOUS'
