The checks and balances we have applied to governments and corporations must be made relevant to artificial intelligence

Back in 2016, a month before the EU referendum, I went along to the Future of Humanity Institute in Oxford to interview its director, the Swedish-born philosopher Nick Bostrom, who had just written a book called Superintelligence. The book outlined the existential risk to democracy and humanity implied by advances in machine learning. Bostrom’s institute, which sought to weigh the apocalyptic potential of various humanity-threatening forces, had just been given a £1m grant by Elon Musk.

If I think about that encounter, I remember three things. The first was that when I arrived a bed was being delivered to the institute, cementing the belief that anxiety about impending catastrophe was, these days, a 24/7 kind of occupation. The second was that the germophobic Bostrom was the first interviewee I’d met who insisted on fist bumps rather than handshakes (the shape of things to come). And the third was his insistence that in important ways, artificial intelligence posed a more imminent threat to the survival of our species than, say, the climate crisis or pandemics or nuclear war.

At the time, that idea seemed to me inflected with too many outlandish tropes from science fiction. In the seven febrile years since, less so. Bostrom’s book presented several scenarios in which our fate may be sealed by the machines we create. One projection involved an AI system building covert “nanofactories producing target-seeking mosquito-like robots [that] might then burgeon forth simultaneously from every square metre of the globe” in order to destroy us. Another, somewhat more credible vision saw a superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows” to bring about first our superfluity, then our extinction.

David Runciman’s far more sober book is in part an analysis of the first phase of that latter proposition. The Cambridge politics professor and, until it ended last year, the ever-erudite host of the terrific Talking Politics podcast is not given to apocalyptic prophecy. The closest he came to it was a series of ironic exclamation marks that revved up the chapter headings in his 2018 book How Democracy Ends. One of those was devoted to “Technological Takeover!”, which, with parliament then in Brexit meltdown and Trump in full spate, examined the ways in which we were outsourcing our politics to digital media, and the implications of that for our shared future.

This book expands on the arguments of that chapter, principally Runciman’s contention that the challenges “we the people” face from AI – threats to our individuality and agency as citizens – are, while urgent, not as revolutionary as we may think. The blueprint for negotiating these challenges, Runciman believes, has been established over several centuries by the related threats from state and corporate power. The “singularity” that tech evangelists talk about – the eventual symbiosis of man and machine – would really be the “second singularity”, Runciman argues, persuasively. The first came with the age of Enlightenment, with our ability to “imagine what it would be like to organise collective enterprises as though they had the durability of machines”.

[Photograph caption: David Runciman “likens the idea of government to an algorithm”.]