*The concept of a ‘reasonable amount of time’ figures a fair bit in abstract computational complexity theory; but what is a ‘reasonable amount of time’ in practice? This post outlines the problem of balancing the two competing ideals of determinism and adaptability, and offers a flexible working definition. (Not to be taken too seriously: it’s summer vacation time.)*

A standard text on combinatorial problems and optimisation algorithms – discussing the TSP, for example – might read something like:

“… so we tend not to be as interested in particular complexity values for individual problem instances as how these complexities *change* as the input problem size (*n*) *increases*. Suppose then, that for a given problem, we can solve a problem instance of size *n = L* in a reasonable amount of time. What then happens if we increase the size of the problem from *n = L* to *n = L+1*? How much harder does the …?”

or, filling in a few gaps:

“… so we tend not to be as interested in particular complexity values for individual problem instances as how these complexities *change* as the input problem size (*n*) *increases*. Suppose then that we can solve a TSP of *20* cities on a standard desktop PC in a reasonable amount of time. What then happens if we increase the number of cities from *20* to *21*? How much longer does …?”
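To put a rough number on that “how much longer”, here is a back-of-the-envelope sketch. It assumes brute-force enumeration of tours (the quoted passage names no algorithm, so this is purely illustrative): a symmetric TSP on *n* cities has (*n* − 1)!/2 distinct tours, so going from 20 to 21 cities multiplies the count by exactly 20.

```python
from math import factorial

def tour_count(n):
    """Number of distinct tours in a symmetric TSP on n cities: (n-1)!/2."""
    return factorial(n - 1) // 2

# Tours to check for 20 cities (~6.1e16):
print(tour_count(20))

# Growth factor when moving from 20 to 21 cities:
print(tour_count(21) // tour_count(20))  # 20
```

So whatever a ‘reasonable amount of time’ was for 20 cities, brute force needs twenty times that for 21 – which is precisely why the question of what counts as reasonable matters.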

All good stuff, and sensible enough, but what’s this ‘reasonable amount of time’?
