The ‘Technological Singularity’ debate rolls on with the publication of a special issue of MDPI’s journal Information: “AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?”
Papers published in the special issue, to date, include:
- Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
- Technological Singularity: What Do We Really Know?
- Conceptions of Artificial Intelligence and Singularity
- Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
- Can Computers Become Conscious, an Essential Condition for the Singularity?
- The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
One of the papers, with an outlook (entirely unsurprisingly) in line with this blog, is Vic Grout’s “The Singularity isn’t Simple! (However we look at it) A random walk between science fiction and science fact”. The abstract reads as follows:
“It seems to be accepted that intelligence – artificial or otherwise – and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular – but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). But such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent, and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.”
The full paper can be read here. It’s long and winding and covers a lot of ground. But the essential thrust, as always, is that this stuff is more complicated than we think. Oh, and we’re all going to die before we find out anyway! The final section concludes:
“But, in many ways, the real take home message is that many of us are not on the same page here. We use terms and discuss concepts freely with no standardization as to what they mean; we make assumptions in our self-contained logic based on axioms we do not share. Whether across different academic disciplines, wider fields of interest – or simply as individuals, we have to face up to some uncomfortable facts in a very immediate sense: that many of us are not discussing the same questions. On that basis it is hardly surprising that we seem to be coming to different conclusions. If, as appears to be the case, a majority of (say) neuroscientists think the TS cannot happen and a majority of computer scientists think it can then, assuming an equivalent distribution of intelligence and abilities over those disciplines, clearly they are not visioning the same event. We need to talk.”
Might be good if we started doing that?
April 20th, 2018 at 7:33 pm
I took a brief look at the paper and will read it in more detail later, but I like what I read. I touched on some of this in one of my posts:
https://broadspeculations.com/2015/01/19/thinking-about-thinking/
Thinking, consciousness, and intelligence are used so much in discussion but are seldom well-defined.
My view of intelligence is somewhat unique. I think it is a physical process involved in maximizing the diversity and/or utility of future outcomes to achieve optimal solutions. It doesn’t require consciousness and emerges in networked and integrated systems. This could include the intelligent behavior of slime molds or machines. Consciousness I reserve for biological entities with nervous systems.