The ‘Technological Singularity’ debate rolls on with the publication of a special issue of MDPI’s journal Information: “AI and the Singularity: A Fallacy or a Great Opportunity?”
Papers published in the special issue, to date, include:
- Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
- Technological Singularity: What Do We Really Know?
- Conceptions of Artificial Intelligence and Singularity
- Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
- Can Computers Become Conscious, an Essential Condition for the Singularity?
- The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
One of the papers, with an outlook (entirely unsurprisingly) in line with this blog, is Vic Grout’s “The Singularity Isn’t Simple! (However We Look at It): A Random Walk between Science Fiction and Science Fact”.
The concept of a ‘reasonable amount of time’ figures a fair bit in abstract computational complexity theory; but what is a ‘reasonable amount of time’ in practice? This post outlines the problem of balancing between the two competing ideals of determinism and adaptability and offers a flexible working definition. (Not to be taken too seriously: it’s summer vacation time.)
A standard text on combinatorial problems and optimisation algorithms – perhaps discussing the Travelling Salesman Problem (TSP), for example – might read something like:
“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities change as the input problem size (n) increases. Suppose then, that for a given problem, we can solve a problem instance of size n = L in a reasonable amount of time. What then happens if we increase the size of the problem from n = L to n = L+1? How much harder does the …?”
or, filling in a few gaps:
“… so we tend not to be as interested in particular complexity values for individual problem instances as in how these complexities change as the input problem size (n) increases. Suppose then that we can solve a TSP of 20 cities on a standard desktop PC in a reasonable amount of time. What then happens if we increase the number of cities from 20 to 21? How much longer does …?”
All good stuff, and sensible enough, but what’s this ‘reasonable amount of time’?
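To make the textbook question concrete, here is a minimal Python sketch (an illustration of the idea, not code from the post): the number of distinct TSP tours on n cities is (n−1)!/2, so moving from 20 to 21 cities multiplies the search space for an exhaustive solver by a factor of 20. The `brute_force_tsp` helper is a hypothetical toy solver, only feasible for very small n.

```python
from itertools import permutations
from math import factorial, dist

def tour_count(n):
    """Distinct tours on n cities: fix the starting city, halve for direction."""
    return factorial(n - 1) // 2

# Growing the problem from 20 to 21 cities multiplies the search space by 20:
print(tour_count(21) // tour_count(20))  # -> 20

def brute_force_tsp(cities):
    """Exhaustively check every tour; hopeless beyond a handful of cities."""
    start, *rest = range(len(cities))
    best_len, best_tour = float("inf"), None
    for perm in permutations(rest):
        tour = (start, *perm, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

# Toy instance: the four corners of a unit square.
tour, length = brute_force_tsp([(0, 0), (0, 1), (1, 1), (1, 0)])
print(length)  # -> 4.0 (the square's perimeter is the shortest tour)
```

Whether the brute-force run above takes a ‘reasonable amount of time’ at n = 4 is obvious; at n = 21 it plainly does not – which is exactly the gap the question probes.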