OK, this blog has made some pretty wild predictions over the years: from loss of privacy & security, through societal decay from social media & 24/7 connectivity, and mass unemployment by AI & automation, to industrial environmental catastrophe and a technocapitalist Armageddon. Now that there's clear evidence of the first of these forecasts coming true, is there any chance of taking the others seriously? Preferably before it's too late.
We've argued many points over the years on this blog, mostly variants of the same principle: future technology can't be entrusted to a system whose driving motive is profit. To paraphrase: Technocapitalism is going to kill us all! (Probably quite soon.)
Broken down, some of these have been/are:
- We’re constantly misled as to what emerging technology actually is and what it might do. This ranges from simple, often unintentional, trivialisation in the mainstream media to deliberate, aggressive misinformation from governments and corporations.
- The AI/big-data driven 24/7 ubiquitous connectivity of the future will make personal privacy and any level of (personal or organisational) security practically impossible.
- Social media and associated apps will extend and increase social division along tribal (national, political, religious, etc.) lines, ultimately destabilising society to crisis point.
- AI/automation/robotics will cause massive global unemployment, leading to increased inequality and worldwide poverty on a hitherto unimagined scale. An effectively 'surplus' workforce will form the vast majority of the global population, and 'someone' will have to decide 'what to do with' it.
- Fed by all of these in turn, the increase in destructive technology (weaponry, yes, but also irresponsible use of natural resources) combined with the decay of the current democratic veneer into full-blown fascism and dictatorship will inevitably lead to a terminal global war.
[None of this has to be like this, of course. The technology itself is 'neutral'. We've constantly made the point here that technology in the hands of a fair political/economic system would be a good thing. A robot workforce, for example, doing everything for us could lead to lives of luxury, with the human race finally freed to contemplate higher things; but it won't, because all that technology will be owned by an elite, looking – as always – to benefit themselves at the expense of the many. Technology won't change (say) what it means to be unemployed: it'll be horrible – just as it is now; only changing the framework we put the technology into can do that. Every one of the points above could turn out the opposite way, but they won't, of course, because the underlying political/economic framework won't change. But we've made this point repeatedly here so we'll not hammer it again this time.]
Anyway, here’s the thing …
Over four years ago, this blog published Shazam for People, noting that, with the increasing accuracy of face-recognition, combined with a growing array of other methods of identification, it would soon be possible to identify strangers in the street. From there, searching the Internet and reporting back information about them in real time would be trivial.
Then that prediction was extended, through other articles, The 'Prof on a Train' Game and A Real Marauder's Map, to describe a future in which none of us would have any privacy at all. Maybe it seemed a bit far-fetched at the time: surely technical constraints, legislation or moral outrage would prevent it happening? Certainly there was little evidence of anyone taking the threat seriously.
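It's worth noting just how little 'glue' the scenario in those articles actually needs. Here's a minimal sketch of that identify-a-stranger pipeline: every service name and function below is hypothetical (stubbed with placeholder data), because the point is the shape of the flow, not any real product.

```python
# Hypothetical sketch of the 'identify a stranger in the street' pipeline.
# recognise_face() and search_web() are stand-ins for real services; the
# returned data is invented purely for illustration.

from dataclasses import dataclass


@dataclass
class Profile:
    name: str
    facts: list[str]


def recognise_face(image: bytes) -> str:
    """Stand-in for a face-recognition API: camera frame -> identity."""
    return "Jane Doe"  # placeholder identity


def search_web(name: str) -> list[str]:
    """Stand-in for an aggregated web / social-media search."""
    return [
        f"{name} works at Example Corp",
        f"{name} lives in Anytown",
    ]


def identify_stranger(image: bytes) -> Profile:
    """The whole 'Shazam for People' flow: recognise, then look up."""
    name = recognise_face(image)
    return Profile(name=name, facts=search_web(name))


profile = identify_stranger(b"camera frame")
print(profile.name, profile.facts)
```

Once accurate recognition exists as a callable API, everything else is a handful of lines; that's why 'surely it can't happen' was always a weak defence.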
But, guess what? Yes, it’s happened. Shazam for Faces, Blippar‘s Face Recognition API, is now over 99.5% accurate and ‘can be integrated into any app or website’ to ‘Give viewers or readers the power to scan a famous face on TV, in a magazine, or even in person (should they be so lucky to spot one) to instantly discover who it is and what they do. Or any other relevant content you would like to share.’
OK, it currently only works for a few thousand 'celebrities' and, so far, some data has managed to stay private. But that's frankly unlikely to last long. Who defines a 'celebrity' anyway? The distinction is meaningless: if someone can extract a profit from you somehow, trust me, you'll become a celebrity quickly enough! And GDPR and suchlike won't protect us in the long run.
[This is something we just haven’t yet come to terms with about data privacy: sometimes there’s no data! Examples: 1. There might be distributed information about an individual all around the Internet: none of it’s illegal and nobody ‘owns’ it. But putting it all together produces a ‘conclusion’ that might be both detrimental and profitable (for different people). However, any material that might violate data protection directives effectively never existed. 2. If AI can (say) scan a woman’s diet to determine she’s pregnant before she knows (it can) then that information can be exploited, and the original material deleted before GDPR gets anywhere near it.]
The prediction was right – or at least, it’s quickly becoming so.
So what’s next? Here’s just some of the rest …
- ‘Will the Robots Take Our Jobs?’ Isn’t Really the Important Question
- Is it Time for the ‘SuperApp’?
- Good Robot? Bad Robot?
- Fake News Had to Happen; But Why?
- Dude, you broke the Future!
- The Singularity (Still) Isn’t Simple!
- Well, Don’t Say You Weren’t Warned!
- What Will it Take for Humanity to Survive? (And Why is Trump Such a Complete Bellend?)
Seriously, we’ve had the gentle warning. Now we should be talking about the rest of this stuff before it’s too bloody late!