Category Archives: Computer Science

Why has the Shadbolt Review been Delayed?

Publication of Professor Sir Nigel Shadbolt’s much-anticipated review of computer science degree accreditation and graduate employability has been delayed.  Why?  Is this political?  Does it not say what it should?

Terms of reference for the Shadbolt Review of Computer Science Degree Accreditation and Graduate Employability were published in February 2015.  The background to this was some contested data showing that Computer Science graduates had the highest level of unemployment across all academic subjects in Higher Education.  Since then, CS departments in UK universities have awaited the outcome with some trepidation, possibly expecting something of a mauling.

Despite a fair amount of work pointing out that the figures might not mean what they appeared to (there are many biasing influences, for example), this concern was hardly helped by the Chair of the Government’s Science and Technology Select Committee, Nicola Blackwood, who, speaking at a PICTFOR (Parliamentary Internet, Communications and Technology Forum) evening reception at the House of Lords in December, said, to all intents and purposes, that CS graduate unemployment was high because CS lecturers in UK universities didn’t know how to teach CS.  Concern in the HE CS community quickly evolved into outright fear.  Rumours about the possible content of the Shadbolt review were rife.

However, there’s now a growing suspicion among CS academics that both Blackwood’s comments and the rumours were uninformed, and that the review, originally expected in April this year, doesn’t actually say this: that it might not give the universities the kicking the government would like to see them get.  The question has to be asked: is this why there’s been a reluctance to publish?

Continue reading


“The Theological Objection”

This month’s post considers a little-remembered part of Turing’s otherwise famous 1950 paper on AI.

Just for once, this month, let’s not skirt around the generally problematic issue of ‘real intelligence’ versus ‘artificial intelligence’, and ask what it means for a machine (a robot, say, for simplicity) to have the whole package: not just some abstract ability to calculate, process, adapt, etc., but ‘human intelligence’, ‘self-awareness’, ‘sentience’; the ‘Full Monty’, as it were.  Star Trek’s ‘Data’, if you like, assuming we’ve correctly understood what the writers had in mind.

Of course, we’re not really going to build such a robot, nor even come anywhere close to designing one.  We’re just going to ask whether it’s possible to create a machine with ‘consciousness’.  Even that’s fraught with difficulty, however, because we may not be able to define ‘consciousness’ to everyone’s satisfaction; but let’s try the simple, optimistic version of ‘consciousness’ broadly meaning ‘a state of self-awareness like a human’s’.  Is that possible?

Continue reading


Known Unknowns

This month’s post may make a valid point.  Or it may not.  Or it may be impossible to tell, the concept of which itself may or may not make sense by the end of the piece!

How do we handle things we don’t know?  More precisely, how do we cope with things we know we don’t know?  All right then: how do we handle things we know we can’t know?

As is the nature of this blog, the examples we’re going to discuss are (at first, at least) taken from the fields of computer science and mathematics; but there are plenty of analogies in the other sciences.  This certainly isn’t a purely theoretical discussion.

On the whole, we like things (statements or propositions) in mathematics (say) to be right or wrong: true or false.  Some simple examples are:

  • The statement “2 > 3” is false
  • The statement “There is a value of x such that x < 4” is true
  • The proposition “There are positive integer values of x, y and z satisfying the equation x³ + y³ = z³” is false

OK, that’s pretty straightforward but how about this one?

  • “Every even number (greater than 2) is the sum of two prime numbers”
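That statement is Goldbach’s conjecture, and nobody knows whether it’s true or false.  What we *can* do is test it mechanically for as many cases as we have patience for.  Here’s a minimal sketch in Python (the function names are ours, purely illustrative); note that no finite check like this can settle the conjecture either way:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division; perfectly adequate for small n."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# A finite check: every even number from 4 to 10,000 has at least one
# decomposition.  This is evidence, not proof; the conjecture remains open.
assert all(goldbach_pair(n) is not None for n in range(4, 10001, 2))
```

Checks like this have been pushed far beyond 10,000 by others, and no counterexample has ever been found; yet that tells us nothing about whether a proof (or a counterexample) exists at all, which is exactly the territory this post is heading into.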

Continue reading


Seeing the Bigger Picture: ‘STEEPLED’ and ‘The Great Curtain’

Futurology is a difficult and inexact science, with a poor history of getting it right.  However, there are ways of giving yourself a chance or, at least, avoiding some of the more obvious mistakes and oversights.  This post looks at a tool for considering the bigger picture in futurology and reflects on the results of using it with various user groups.

We’ve made the point before that technologists aren’t necessarily (or solely) the best people to ask what the future may hold because:

  1. they only tend to think about technology, or
  2. when they think about things other than technology, they’re not very good at it.

Of course, there’s probably a parallel observation to be made about any focused specialist in a particular field (economists, lawyers, politicians, etc.), but that doesn’t invalidate points 1 and 2: it just spreads the blame around a bit.  So, what can be done to help, and where does it take us?

Continue reading