Epicenter, Mäster Samuelsgatan 36, Stockholm
Will the ongoing acceleration of technology result in a "technological singularity"?
Eminent thinkers such as John von Neumann, Alan Turing, Vernor Vinge, and Ray Kurzweil have, in various ways, foreseen a forthcoming "intelligence explosion" in which self-improving artificial intelligence will rocket past the unaided capabilities of human intelligence. Trying to predict what would happen next is hard, since the resulting ultraintelligence may become motivated by considerations that we cannot presently imagine. Hence the term "singularity" - a point beyond which all predictions break down.
Here's how former Bletchley Park code-breaker I.J. Good described the concept in 1964:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make..."
Some critics, however, have described the concept of the technological singularity as confused, misguided, and wishful thinking. Forecasts of the rise of superintelligence are foolish, we hear - they're a distraction from more pressing problems.
In this talk, the Chair of London Futurists, David Wood, argues that the concept of the technological singularity deserves very serious attention. He will outline future scenarios in which superintelligence might emerge in as little as ten years. Topics he will cover include:
• Factors governing the speed of improvement of artificial intelligence
• Alternative scenarios for the future of artificial intelligence (including rough probability estimates)
• The possibilities for humans to become radically "augmented" (or to merge with superintelligence) so as not to be surpassed by artificial intelligence
• Key unsolved problems in the evolution of superintelligence, including the control problem and the value-alignment problem
• The significance of superintelligence in the overall landscape of risks to human civilisation
• How society can take measures to increase the likelihood that the singularity turns out profoundly positive rather than profoundly negative.