No, we’re not talking about the technological singularity; something a bit more down to earth: just good old-fashioned fake news, really, but with a new twist. A fairly short, simple, not terribly deep piece this month, but one that combines with what’s gone before to lead to next month’s proposition, broadly along the lines of “Is it possible for a race to ‘stupid’ itself into extinction?”
In an early episode of the original Star Trek series, Jim Kirk gets into trouble when some recorded video evidence is falsified, appearing to show negligence. As the storyline unfolds, it’s generally accepted that few people would have had the necessary expertise to do this, which eventually points the way to the falsifier. In fact, this concept continued to turn up in many Star Trek series and films as the years passed.
At the time (of the initial episode), in the real world, of course, such an idea would have been almost unimaginable. Back then, it was hard to credibly manipulate still photographs, let alone moving pictures. And it’s hard to say whether many people were even speculating along the lines of, “I wonder how long it will be before we can do that?” Really, it was just bonkers.
But we’ve come a long way.
Photoshop is old hat now, and even video can be artificially generated if you have enough data to work with. In particular, you can use known voice fragments or patterns to make famous people say things they didn’t say. Or, in Nancy Pelosi’s case, make them say it a different way. Or, in the case of ‘deepfakes’, well …
But recently, Samsung have taken it that bit further with software that can take a single image and use AI techniques to generate video of the subject saying anything at all.
Exciting, yes, from a research perspective, but scary too?
Because it’s getting harder and harder to tell real news from fake.
OK, a quick reality check here. We’re not at the stage yet where this sort of falsification is going to pass forensic investigation, so legal integrity might be good for a while longer, but two obvious points:
- That day might not be that far away, and
- For some mischief, it doesn’t matter.
We’ll focus on the second point here although the first is at least as important so we’ll come back to it.
We’ve all seen enough evidence now to know that people – particularly online – and particularly particularly on social media – largely believe what they want to believe, and there’s no shortage of other people who will exploit this for their own financial or political ends.
This is a huge addition to their arsenal. A big, entirely unsubtle bomb that will nevertheless take out as many gullible people as are willing to believe whatever tosh they’re fed.
Take (say) a politician you don’t like, construct a video of them saying something inflammatory or incriminating, and get it out there. Yes, many will spot the fake right away but, political divisions being what they are these days, probably as many will swallow it whole. And follow-up evidence is never as powerful as that first impression.
Add this to the idea that the quality of the falsification will improve over time, whether or not it gets to the level of surviving forensic analysis, and, for many people, the evidence of their own eyes won’t be enough any more.
“The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command. His heart sank as he thought of the enormous power arrayed against him, the ease with which any Party intellectual would overthrow him in debate, the subtle arguments which he would not be able to understand, much less answer. And yet he was in the right! They were wrong and he was right.” George Orwell, ‘1984’.
Yet another example of technology-driven social upheaval that it’s hard to see a way through?
Next month, we’ll add this to our previous concerns regarding security, safety, privacy, etc. and ask … well, we’ll see, shall we?