*The concept of a ‘reasonable amount of time’ figures a fair bit in abstract computational complexity theory; but what is a ‘reasonable amount of time’ in practice? This post outlines the problem of balancing between the two competing ideals of determinism and adaptability and offers a flexible working definition. (Not to be taken too seriously: it’s summer vacation time.)*

A standard text on combinatorial problems and optimisation algorithms – perhaps discussing the TSP, for example – might read something like:

“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities *change* as the input problem size (*n*) *increases*. Suppose, then, that for a given problem we can solve a problem instance of size *n = L* in a reasonable amount of time. What then happens if we increase the size of the problem from *n = L* to *n = L+1*? How much harder does the …?”

or, filling in a few gaps:

“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities *change* as the input problem size (*n*) *increases*. Suppose, then, that we can solve a TSP of *20* cities on a standard desktop PC in a reasonable amount of time. What then happens if we increase the number of cities from *20* to *21*? How much longer does …?”
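To make the growth concrete: a naive exhaustive TSP search examines every distinct tour, of which there are (*n* − 1)!/2 for *n* cities (fix the starting city and count each direction once). The following is a minimal illustrative sketch (not from the original text) of what that step from 20 to 21 cities costs:

```python
import math

def brute_force_tours(n: int) -> int:
    """Number of distinct tours a naive exhaustive TSP search must
    examine for n cities: (n - 1)! / 2, fixing the start city and
    counting each direction of travel only once."""
    return math.factorial(n - 1) // 2

for n in (20, 21):
    print(n, brute_force_tours(n))

# Going from 20 to 21 cities multiplies the search space by a factor of 20:
print(brute_force_tours(21) // brute_force_tours(20))
```

So whatever a ‘reasonable amount of time’ was for 20 cities, the naive approach needs twenty times as much of it for 21.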

All good stuff, and sensible enough, but what’s this ‘reasonable amount of time’?

Well, this is a problem because it depends entirely on the circumstances. Extreme examples are:

- We’re planning a space launch for twelve years’ time, and we need to have the trajectories planned seven years before launch. A ‘reasonable amount of time’ is *five years*. (Possibly on some thumping HPC system.)
- Packets are being switched, via routing tables/algorithms, across the Internet. A ‘reasonable amount of time’ is *a fraction of a second*. (On a rusty old router in a tunnel somewhere.)

And we’re going to hit the same issue if we try to *fix* a definition: it won’t work for everyone. Not only are situations not the same; nor is the available equipment; nor are *people*. Even in the research lab, some things (and some people) can have patience; others can’t (or won’t). We need something *defined* … *but not rigidly defined*.

So here’s the solution – tailor-made for academics and researchers …

Define a *‘Grout-reasonable’* amount of time to be the time it takes for any given academic/researcher to:

- Press [ENTER] (or similar) to start the algorithm;
- Walk to their preferred refreshment outlet;
- Order a coffee/tea/other drink;
- Drink it (possibly in conversation with others);
- Return to their desk.

The algorithm has finished in Grout-reasonable time if it’s finished by the time the academic returns. If it finishes exactly *as* the researcher approaches their desk, the process is *Grout-optimal* since it maximises the problem size solvable in Grout-reasonable time.

This definition has the advantage of being flexible enough to fit the requirements of any individual academic and their habits, whilst providing a base unit for effective comparison if needed. It also implicitly defines the computational power available as being that on the desk in any given case. Any necessary adaptation/translation/conversion can then proceed as follows:

*“The algorithm isn’t Grout-reasonable for n = 15 for me: more like 4 Grout Units (GUs), I think.”*

*“Ah, but my GUs are worth five of yours (my computer is faster but our coffee shop is further away and I walk and drink slower), so it should be Grout-reasonable for me.”*

*etc.*
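In the same playful spirit, the rule can be written down directly. This is a tongue-in-cheek sketch only; the function name and the minute values are made up for illustration:

```python
def is_grout_reasonable(runtime_minutes: float, personal_gu_minutes: float) -> bool:
    """A run is Grout-reasonable if the algorithm finishes within one
    personal Grout Unit: the time to press [ENTER], fetch a drink,
    consume it, and return to the desk."""
    return runtime_minutes <= personal_gu_minutes

# A 30-minute run, judged against two (hypothetical) personal GUs:
print(is_grout_reasonable(30, 9))    # a brisk nine-minute coffee drinker
print(is_grout_reasonable(30, 45))   # a more leisurely forty-five-minute one
```

The same runtime is Grout-reasonable for one researcher and not for the other, which is exactly the flexibility the definition is after.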

You get the idea! Extensive research at various UK universities suggests an individual’s personal GU to be anything between nine minutes and three and a half hours! (No names – no pack drill!)

*Happy holidays!*
