*It’s the holiday time of year: so a slightly lazy post for August. Adapted from a letter published in this month’s edition of Mathematics Today …*

For anyone who’s worked in the ‘Twilight Zone’ between mathematics and computer science for any time, June’s Mathematics Today article, *Urban Maths: A Roundabout Journey* [on rounding issues in computer calculations], will have struck a distinct chord. Such people will often have come across situations in which mathematicians and computer scientists don’t quite see eye to eye. The following, fairly well-known, combinatorial exercise is another good example.

How many *ordered* ways are there of summing contributions of *1* and *2* to a given integer, *n*? So, for example, *1+1+2+1+1* and *2+2+2* are two of the *13* different ways of making *6*. Call this number *f(n)* so that, in this example, *f(6) = 13*.
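The count can be checked by brute force. Here’s a minimal Python sketch (the language and the function name are my choice, not from the article) that enumerates every ordered sum of 1s and 2s:

```python
def compositions(n):
    """All ordered ways of writing n as a sum of 1s and 2s."""
    if n == 0:
        return [[]]  # one way to make 0: the empty sum
    ways = []
    if n >= 1:
        ways += [[1] + rest for rest in compositions(n - 1)]
    if n >= 2:
        ways += [[2] + rest for rest in compositions(n - 2)]
    return ways

print(len(compositions(6)))  # 13, matching f(6) = 13
```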

The standard combinatorial approach is to consider the first term. The only two options, *1* (leaving *n-1* to make) and *2* (leaving *n-2*) lead readily to the recurrence relation *f(n) = f(n-1) + f(n-2)*. Easy enough, yes, but the interesting question now is *what to do with it?*

Computer scientists like these recursive definitions a lot and, without appropriate restraint, will quickly code a function that calls itself; most programming languages allow this freely. So, when run, *f(n)* is broken down into *f(n-1)* and *f(n-2)*, which in turn are broken down into *f(n-2)* & *f(n-3)* and *f(n-3)* & *f(n-4)*, and so on, all the way down to some known values, *f(1) = 1* & *f(2) = 2*, say. These results are then passed back up the function stack until we get the answer for *f(n)*.
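In Python, the unrestrained version might look like this (a sketch of the approach described above, not code from the article):

```python
def f(n):
    # Base cases: f(1) = 1, f(2) = 2
    if n == 1:
        return 1
    if n == 2:
        return 2
    # Each call spawns two more calls: the call tree
    # doubles at every level, hence exponential time.
    return f(n - 1) + f(n - 2)

print(f(6))  # 13
```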

This all looks pretty elegant but turns out to be a nightmare from an efficiency point of view. The solution’s complexity is exponential in *n* and, depending on the processor, anything beyond *n = 50* becomes difficult. *n = 100* is effectively impossible.

The trick, of course, is to apply the recurrence relation *the other way around*. Starting with values that are easy to work out, *f(1) = 1* & *f(2) = 2* again, we can calculate *f(3) = f(2) + f(1) = 2 + 1 = 3*, then *f(4) = f(3) + f(2) = 5*, *f(5) = 5 + 3 = 8*, *f(6) = 13*, and so on until we get to the *n* we want. It’s *Fibonacci* and we know how easy that is: linear complexity. One to the mathematicians.
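The bottom-up version keeps only the last two values as it climbs towards *n*. A minimal Python sketch (again, names are my choice):

```python
def f_iter(n):
    """Bottom-up calculation of f(n): one pass, linear time."""
    if n == 1:
        return 1
    a, b = 1, 2  # f(1), f(2)
    for _ in range(n - 2):
        a, b = b, a + b  # slide the window up one step
    return b

print(f_iter(6))    # 13
print(f_iter(100))  # instant, unlike the naive recursion
```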

But mathematicians can overstretch themselves too! If we continue the standard combinatorial exercise, then *f(n) = f(n-1) + f(n-2)* can be solved, via the characteristic equation, to give the explicit expression *f(n) = [((1+√5)/2)^{n+1} − ((1−√5)/2)^{n+1}]/√5*.
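Coded directly, the explicit expression is a one-liner, but note that the powers and square roots hide iterative work of their own (a hypothetical sketch; rounding to the nearest integer mops up floating-point error):

```python
from math import sqrt

def f_closed(n):
    """f(n) from the explicit formula rather than the recurrence."""
    phi = (1 + sqrt(5)) / 2  # dominant root of x^2 = x + 1
    psi = (1 - sqrt(5)) / 2  # the other root, |psi| < 1
    return round((phi ** (n + 1) - psi ** (n + 1)) / sqrt(5))

print(f_closed(6))  # 13
```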

Now, this is all well and good but becomes a problem if it’s sold to the programmer as a *better* way of calculating *f(n)*. On the surface, it may look like it (a single calculation instead of the Fibonacci iterations) but the computer scientist will object to the powers and roots. Neither of these is trivial within the computer’s depths; each will require either simple iteration of its own or successive (iterative) approximation. Although some pre-processing can help (√*5* calculated once then stored, for example), in most environments the formula (with its implicit iteration) proves to be *less* efficient than the direct calculation (explicit iteration) below a certain value of *n*.

*An honourable draw?*

(Anyone interested in the actual code for these solutions can read the full discussion at *How to Write a really Bad Program*.)

*Vic Grout, Wrexham Glyndŵr University*
