Sorry folks, I haven’t had time to write a CS-based Christmas story this year so here’s one from a few years ago!

*Merry Christmas!*

*OK, some of this material isn’t new but I’ve been asked to edit a special (Information) journal edition on (something like) ‘Will AI, Big Data and the IoT Mean the End of Privacy?’ The plan is to circulate a ‘discussion paper’ to encourage submissions. What follows is an early draft of that (extended from The Prof on a Train Game) so it won’t hurt to get it ‘out there’ as soon as possible. Comments welcome below, by email, message, whatever …*

**Abstract**

*The embodiment of the potential loss of privacy through a combination of AI, big data and IoT technology might be something like an integrated app capable of recognising *anyone, anytime, anywhere*: a sort of ‘Shazam for People’, but one capable of returning seriously personal material about the individual. How credible is such a system? And what might stop it?*

**Introduction: A Future Scenario?**

It’s 2025 or thereabouts. You meet someone at an international conference. Even before they’ve started to introduce themselves, your IoT augmented reality glasses have told you everything you needed to know … and a lot more you didn’t.

*Jerry Gonzales. Born (02/11/1970): Glasgow, UK, dual (plus USA) citizenship; 49 years old. Married 12/12/1994 (Ellen Gonzales, nee Schwartz), divorced 08/06/2003; two daughters (Kate: 23, Sarah: 17); one son (David: 20). Previous employment: Microsoft, IBM, University of Pwllheli; current: unemployed. Health: smoker, heavy drinker, recurrent lung problems, diabetic, depression. Homeowner (previous); now public housing. Credit rating: poor (bankruptcy 10/10/2007); Insurance risk: high. Politics: Republican. etc., …, Sport: supports Boston Red Sox and Manchester United FC. …, Pornography: prefers straight but with mild abuse …, etc., etc.*

*The concept of a ‘reasonable amount of time’ figures a fair bit in abstract computational complexity theory; but what is a ‘reasonable amount of time’ in practice? This post outlines the problem of balancing between the two competing ideals of determinism and adaptability and offers a flexible working definition. (Not to be taken too seriously: it’s summer vacation time.)*

A standard text on combinatorial problems and optimisation algorithms – perhaps discussing the TSP, for example – might read something like:

“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities *change* as the input problem size (*n*) *increases*. Suppose then, that for a given problem, we can solve a problem instance of size *n = L* in a reasonable amount of time. What then happens if we increase the size of the problem from *n = L* to *n = L+1*? How much harder does the …?”

or, filling in a few gaps:

“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities *change* as the input problem size (*n*) *increases*. Suppose then that we can solve a TSP of *20* cities on a standard desktop PC in a reasonable amount of time. What then happens if we increase the number of cities from *20* to *21*? How much longer does …?”
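To put a number on that jump from 20 to 21 cities, here’s a minimal Python sketch (the function name `tsp_tours` is just an illustrative label) counting the distinct tours a naive brute-force solver would have to examine for a symmetric TSP:

```python
from math import factorial

def tsp_tours(n: int) -> int:
    """Distinct tours in a symmetric TSP on n cities:
    fix the starting city, then halve because each tour
    can be traversed in two directions."""
    return factorial(n - 1) // 2

t20 = tsp_tours(20)
t21 = tsp_tours(21)

print(t20)         # about 6 x 10^16 tours for 20 cities
print(t21 // t20)  # adding the 21st city multiplies the count by 20
```

So whatever ‘reasonable amount of time’ the 20-city instance took, the 21-city one needs roughly twenty times as much of it by brute force.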

All good stuff, and sensible enough, but what’s this ‘reasonable amount of time’?