Passions run high among sci-fi fans: never in this blog’s history has it been so necessary to note ‘this is only a bit of fun’ in advance of a post!
How ‘good’ are we at predicting the future? More precisely, how could we measure how good anyone is at it? Well, we’re going to irritate some readers straight away here by treating ‘serious’ academic researchers, professional ‘futurists’ and science fiction writers as doing essentially the same thing. Perhaps, on that basis, we should narrow our notion of science fiction to ‘hard’ sci-fi loosely set somewhere in humanity’s future; but, that aside, that’s exactly what we’re going to do.
But we’ll start with a different topic: one that isn’t as controversial as it used to be …
There was a time when smoking wasn’t just seen as socially acceptable but positively beneficial …
All that healthy smoke filling your lungs, seeping its goodness into your bloodstream, was just the boost your body needed. You’d live years longer if you smoked … and it was SO COOL.
The concept of a ‘reasonable amount of time’ figures a fair bit in abstract computational complexity theory; but what is a ‘reasonable amount of time’ in practice? This post outlines the problem of balancing the two competing ideals of determinism and adaptability, and offers a flexible working definition. (Not to be taken too seriously: it’s summer vacation time.)
A standard text on combinatorial problems and optimisation algorithms – perhaps discussing the TSP, for example – might read something like:
“… so we tend not to be as interested in particular complexity values for individual problem instances as how these complexities change as the input problem size (n) increases. Suppose then, that for a given problem, we can solve a problem instance of size n = L in a reasonable amount of time. What then happens if we increase the size of the problem from n = L to n = L+1? How much harder does the …?”
or, filling in a few gaps:
“… so we tend not to be as interested in particular complexity values for individual problem instances as how these complexities change as the input problem size (n) increases. Suppose then that we can solve a TSP of 20 cities on a standard desktop PC in a reasonable amount of time. What then happens if we increase the number of cities from 20 to 21? How much longer does …?”
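To make the growth in that example concrete, here’s a quick back-of-the-envelope count (a sketch assuming naive brute-force enumeration of tours, which the textbook excerpt doesn’t actually specify):

```python
from math import factorial

def tour_count(n: int) -> int:
    """Distinct tours a brute-force TSP search must examine:
    fix one start city, giving (n-1)! orderings, halved because
    each tour can be traversed in two directions."""
    return factorial(n - 1) // 2

for n in (20, 21):
    print(f"{n} cities: {tour_count(n):,} tours")

# Adding the 21st city multiplies the search space by a factor of 20:
print(tour_count(21) // tour_count(20))  # → 20
```

So whatever ‘reasonable amount of time’ the 20-city instance took, the 21-city instance takes roughly twenty times as long under exhaustive search — which is why the growth rate, not any single timing, is what matters.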
All good stuff, and sensible enough, but what’s this ‘reasonable amount of time’?