*(Does nature ‘run programs’? And, if it does, are the ‘algorithms’ any good?)*

So, what is an *algorithm*?

No, not that; this is a far more fundamental question. What *IS* an algorithm? Animal? Vegetable? Mineral? Ah …

OK, start with something simpler; often the first class discussion on a pure maths degree … What *IS* a *number*? How would you answer that? Now, if you’ve pointed at something, perhaps even *pictured* something, you’re wrong.

A number is an entirely *abstract* concept, which doesn’t really exist in our world. You can find numbers *at work* in nature but you don’t *see* them. You can see two people, two cars, two sheep, two of anything at all but you’re *not seeing the number two*. You can take a pencil and paper and draw a *representation* of the number two in as many ways as you like (the figure ‘2’, two dots, or anything else) but that *isn’t the number two*. There might be two sides to an argument or two ways to the post office but you still don’t see the number two and you never will – at least, not in this life.

And much the same is true of an algorithm. We can write algorithms on paper and load them in digital form to a computer. We can discuss them, test them, compare and refine them, invent new ones, etc. but an algorithm itself remains abstract. And, as with numbers, algorithms aren’t just concepts we’ve dreamed up; they’re at work in nature too, possibly the most dramatic of all being the *Algorithm of Evolution* (*AoE*) – the algorithm that got us to where we are today. We can’t *see* the algorithm itself but we can describe it and we can hardly miss the result.

The AoE can be crudely described like this …

- Start with an *initial population* of *individuals*, each with certain ‘characteristics’, and some ‘living conditions’
- Evaluate the ‘fitness’ of each individual’s characteristics for these living conditions
- Repeat
  - Allow the fittest individuals to reproduce, giving offspring with characteristics combining those of their parents but also with small variations (mutations)
  - Allow living conditions to change (they don’t *have* to on each loop but they might)
  - Re-evaluate the fitness of each individual
  - Remove some of the least fit individuals

- Until: End of World
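As a concrete (if crude) sketch, the loop above can be put into Python. Everything specific here – the function names, the ‘fittest half reproduce’ rule, replacing one individual per generation, the toy bit-counting fitness – is an illustrative assumption of mine, not something the description prescribes:

```python
import random

def evolve(population, fitness, mutate, crossover, generations=200):
    """A crude sketch of the AoE loop: rank by fitness, let the fitter
    half reproduce with variation, remove the least fit individual."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]  # the fittest individuals
        # Offspring combine the characteristics of two parents (crossover)
        # plus a small variation (mutation)
        child = mutate(crossover(random.choice(parents), random.choice(parents)))
        population = ranked[:-1] + [child]  # remove the least fit individual
    return max(population, key=fitness)

# Toy 'living conditions': fitness of an 8-bit individual is its count of 1s
def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]  # flip one random bit

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]  # splice the two parents' characteristics

random.seed(0)  # reproducible demo
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best = evolve(population, sum, mutate, crossover)
```

Note that the living conditions here are fixed; the real AoE would also let the fitness function itself drift between iterations.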

It might not be entirely clear where we get the ‘initial population’ from. (The ‘Primordial Soup’? Outer Space? Genesis?) But, once we’ve got something, we’re off … Thus the strongest survive and the weakest die in a stable *or* changing environment.

Now, whether or not this algorithm actually *works* is rather difficult to answer. It certainly does *something*, but *what*? In simple terms, it’s produced *us* (together with a whole lot of other stuff) but is that … um, the ‘right answer’? In algorithmic terms, is it (are we) *optimal*?

If we look at the basic structure of the AoE, it’s this …

- Start with an initial solution
- Repeat
  - Evaluate the quality of this solution
  - Experiment with small variations on this solution
  - Adopt the best variations into the current solution
- Until: Something tells us to stop
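That skeleton translates almost line-for-line into code. This is a minimal sketch, assuming a `cost` function to minimise and a `neighbours` function that generates the small variations (both names are my own, not anything standard):

```python
def local_search(initial, cost, neighbours, max_steps=1000):
    """Keep a current solution, try small variations on it, adopt the
    best improvement, stop when the neighbourhood holds nothing better."""
    current = initial
    for _ in range(max_steps):
        better = [n for n in neighbours(current) if cost(n) < cost(current)]
        if not better:
            break  # nothing better nearby: a local optimum
        current = min(better, key=cost)  # adopt the best variation
    return current

# Minimise (x - 3)^2 over the integers, varying x by one either way;
# from 10 the search walks steadily down and stops at x = 3
best = local_search(10, cost=lambda x: (x - 3) ** 2,
                    neighbours=lambda x: [x - 1, x + 1])
```

Here ‘something tells us to stop’ is simply running out of improvements (or out of steps).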

This is an example of a *local search* algorithm. In conventional terms, the initial solution is just some starting point – possibly a best guess – and we’d stop when we’d found the best answer *or* we couldn’t be bothered to look any more *or* (somewhere between the two) when, although we may not have found the *best* answer, it looks as if we’re not going to find anything *better*. That just leaves the definition of ‘best’ and ‘better’ in terms of solution ‘quality’.

*Optimisation algorithms* generally work by trying to *minimise* the *cost* of *solutions* to a given *problem*. Cost – defined mathematically as a *function* of the parameters describing solutions – is usually a measure of how *bad* a solution is so the *lowest cost* solution will be the *best*. Examples are the Minimum Spanning Tree (MST) problem and the Travelling Salesman Problem (TSP). But there’s a crucial difference between these two: the MST has a simple algorithm that produces a perfect (optimal) solution so we don’t need local search algorithms like the one above; the TSP, on the other hand, probably doesn’t – so we might.
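To make ‘cost as a function of the parameters describing solutions’ concrete, here is roughly what the TSP’s cost function looks like: a solution is an ordering of cities and its cost is the length of the round trip. The three city coordinates are invented purely for the demo:

```python
import math

# Invented city coordinates for illustration
cities = {"A": (0, 0), "B": (0, 3), "C": (4, 0)}

def tour_cost(tour):
    """Cost of a TSP solution: total length of the round trip."""
    total = 0.0
    for here, there in zip(tour, tour[1:] + tour[:1]):  # wrap back to start
        (x1, y1), (x2, y2) = cities[here], cities[there]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

tour_cost(["A", "B", "C"])  # 3 + 5 + 4 = 12
```

The *lowest-cost* ordering is the *best* solution; with only three cities every tour covers the same edges, but the number of distinct tours explodes as cities are added, which is exactly why the TSP probably has no simple optimal algorithm.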

Local search algorithms look in a ‘neighbourhood’ of the current solution for something better (i.e. lower cost). This generally means varying some parameters of the current solution but probably not too many at once and/or not too much (otherwise it gets too complex). If anything ‘good’ is found then that becomes the new current solution and the process repeats – usually until there aren’t any more improvements. Unfortunately, that doesn’t guarantee optimality (the best possible solution). There are various ways to visualise this – this being the simplest ‘one-dimensional’ view …
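The trap is easy to reproduce in code. In this sketch the one-dimensional cost landscape (entirely made up) has a shallow valley near x = 2 and the true minimum at x = 9; a search that only ever looks one step either way settles in the wrong valley:

```python
def descend(x, cost, step=1):
    """Move to the cheaper neighbouring point until neither neighbour
    is an improvement - i.e. until we sit in a local minimum."""
    while True:
        best = min([x - step, x + step], key=cost)
        if cost(best) >= cost(x):
            return x  # stuck: no improvement within reach
        x = best

# Two valleys: a shallow one at x = 2 (cost 5), the global one at x = 9 (cost 0)
def cost(x):
    return min((x - 2) ** 2 + 5, (x - 9) ** 2)

found = descend(0, cost)  # settles at x = 2, never reaching x = 9
```

Widening `step` would help on this toy landscape, but in general widening the neighbourhood just makes each iteration more expensive without any guarantee of reaching the global minimum.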

Here, different solutions have different costs and we’re looking for the lowest. Starting from the initial solution, *A*, the local search neighbourhood includes the better solution, *B*, which becomes the new solution. Starting again from *B*, the search reaches *C*, which is better still, and then finally *D*. Sadly, the scope of the search is never wide enough to reach *E*, the true optimal solution. In two dimensions, it might look like this …

This suggests the algorithm identifying solutions *A*, *B*, *C* and *D* in sequence but heading off ‘in the wrong direction’ to find the perfect *E*. You can imagine the hills and valleys of the 1-D model in this 2-D version too. Now imagine looking for the lowest point in the dark, with a lantern that could only light up so much area around you. You’d end up in some valley or other (a *local minimum*) but you might not find the lowest of all (the *global minimum*). Although optimisation algorithms generally work in much higher dimensions (because they have many input variables), both these simple views illustrate their basic operation and show why they’re flawed.

There are various flavours of local search algorithms, essentially with different ways of scanning the local neighbourhood (varying the solution parameters) and identifying improved solutions. These are just some:

- The simplest approach is to look for *any* local improvement and immediately adopt this as the new solution
- *Greedy Algorithms* look for the *best* improvement in varying one – or a fixed number of parameters – in the local neighbourhood
- *Tabu Search* (or *Taboo Search* in English) algorithms mark failed areas of the solution space as not to be revisited – saving processing resources for worthwhile searches
- *Simulated Annealing* algorithms (inspired by the odd way metal cools) allow occasionally worse solutions to be considered on the way to better ones
- *Genetic Algorithms* ‘code’ each solution, as if representing it by its DNA, then apply mutations to these codes in order to search for improvements
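Of these, simulated annealing is perhaps the easiest to sketch. The version below is a minimal illustration, not a production implementation: the geometric cooling schedule and every parameter value are arbitrary demo choices of mine.

```python
import math
import random

def simulated_annealing(x, cost, neighbour, temp=10.0, cooling=0.95, steps=500):
    """Accept any improvement; accept a *worse* neighbour with probability
    exp(-delta / temperature), so early on the search can climb out of
    shallow valleys. The temperature falls each step, so it ends greedy."""
    best = x
    for _ in range(steps):
        candidate = neighbour(x)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate  # move, sometimes uphill
        if cost(x) < cost(best):
            best = x  # remember the cheapest solution seen
        temp *= cooling  # cool down: become steadily less adventurous
    return best

# A made-up two-valley landscape: a shallow minimum at x = 2 (cost 5)
# and the global minimum at x = 9 (cost 0)
cost = lambda x: min((x - 2) ** 2 + 5, (x - 9) ** 2)
random.seed(1)  # reproducible demo
result = simulated_annealing(0, cost,
                             neighbour=lambda x: x + random.choice([-1, 1]))
```

With a high enough starting temperature the search will usually hop the ridge between the valleys and find the deeper one, though – being randomised – it isn’t guaranteed to.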

With this last one, of course, we’ve come full circle. It’s no surprise that genetic algorithms – now used to solve a variety of optimisation problems – were first inspired by the AoE itself. (Some algorithmists like to reserve the term ‘local search’ for a *particular type* of algorithm but really these are all local search processes in the sense that they look for improvements in a solution within a neighbourhood of limited scope.)

So, in turn, it’s pretty obvious that *the AoE is a local search algorithm*. But what about its *cost* measure? If *Homo sapiens* won out over the Neanderthals because they have ‘lower cost’, *what* cost? This is where things get a bit messy.

There *is* no *explicit* cost function in the AoE. Instead, cost is *implicitly* built into the algorithm itself, in the way individuals interact with their environment; more precisely, in the way that individuals’ characteristics are evaluated by their living conditions, with those living conditions then shaping the next generation of characteristics. In a sense, the algorithm is *distributed*; the individuals produce their mutations but the environment does the evaluation (selection). Also, of course, these living conditions change over time in the real AoE and can be different in different parts of the world. A theoretical genetic algorithm (or any other local search) would produce a single solution but the AoE has produced a huge range of species – often co-existing but many found only in certain environments.

Nevertheless, the AoE *is* a local search algorithm, which means it *may or may not be optimal*. (Some simple local searches acting on certain problems *will* produce the global optimum but many *won’t*.) So, how has it done?

Well, it’s produced *us*. And how good is that then? We like to think that we’re the most evolved species on the planet so are we as good as it gets or could a more sophisticated algorithm have done better? We certainly have to accept that we’re the result of a local search process so *we might be only locally optimal*. We can’t be sure without analysing the underlying problem a bit more (and we don’t really know how to do that) but it’s at least possible that there might be a better solution further away in the solution space. (What *is* known about certain types of local search is that they can ‘go wrong’ very early in the process; there are certain types of greedy algorithm, for example, that can be proven to produce the worst possible solution. Food for thought?)

There’s a reasonable chance that we’re about the right height, weight, proportion, that our skin is the right thickness and our sight and hearing as good as they need to be because any subtly different, better design would probably be found by the AoE’s iterative local search process acting on the current solution and finding improvements, if there are any, within its range. (Although it’s possible it may not have quite got there yet: we could still be following our mysterious cost function downhill.) But how about more drastic changes; solutions much further away in the solution space?

Let’s take a specific example … Why don’t we have *wheels*? Surely, they’d be better than legs. After all, we don’t put legs on cars and bikes. Why haven’t we evolved wheels? Even if it didn’t work out, why hasn’t the AoE given it a go?

Richard Dawkins tries to explain this with the ‘*Selfish Gene*’ model … If you have wheels, you need roads; and roads suggest cooperation; and life doesn’t do cooperation (apparently). But the real explanation is far simpler … A wheel rotates on an axle, the wheel and the axle being separate pieces. For a biological system, this either means an incredibly complex system for transmitting blood, nerves, etc. to the wheels – one that doesn’t get all twisted up as the wheel turns – or the wheel has to be a separate, self-sufficient biological unit. Either way, that’s a massive evolutionary step – *with no obvious local minima along the way* – and the local search simply won’t *find* such a distant solution. Dawkins himself describes this as *“The wheel may be one of those cases where the engineering solution can be seen in plain view, yet be unattainable in evolution because it lies the other side of a deep valley”*. In our terms, it’s the other way around: it’s the deepest valley we want … but it’s too far away and hidden between the hills.

Is evolution an astonishing, wonderful process that’s produced the diverse beauty that surrounds us? Yes. Is it/are we perfect? No.
