As Martin Ford documents in "Rise of the Robots," the job-eating maw of technology now threatens even the nimblest and most expensively educated. Lawyers, radiologists and software designers, among others, have seen their work evaporate to India or China. Tasks that would seem to require a distinctively human capacity for nuance are increasingly assigned to algorithms, like the ones currently being introduced to grade essays on college exams. Particularly terrifying to me, computer programs can now write clear, publishable articles, and, as Ford reports, Wired magazine quotes an expert's prediction that within about a decade 90 percent of news articles will be computer-generated. In the book, Ford considers the social and economic disruption that is likely to result when educated workers can no longer find employment.
Sir Martin Rees on the rise of AI:
What about other future technologies — computers and robotics, for instance? There is nothing new about machines that can surpass our mental abilities in special areas. Even the pocket calculators of the Seventies could do arithmetic better than us. In the Nineties, IBM’s “Deep Blue” chess-playing computer beat Garry Kasparov, then the world champion. More recently, another IBM computer won a television game show that required wide general knowledge and the ability to respond to questions in the style of crossword clues.
We’re witnessing a momentous speed-up in artificial intelligence (AI) – in the power of machines to learn, communicate and interact with us. Computers don’t learn like we do: they use “brute force” methods. They learn to translate between languages by reading, for example, millions of pages of multilingual EU documents (they never get bored). They learn to recognise dogs, cats and human faces by crunching through millions of images — not the way a baby learns.
DeepMind, a London company that Google recently bought for £400 million, created a machine that can figure out the rules of old Atari games without being told, and then play them better than humans.
It’s still hard for AI to interact with the everyday world. Robots remain clumsy – they can’t tie your shoelaces or cut your toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace.
Google’s driverless car has already covered hundreds of thousands of miles. But can it cope with emergencies? For instance, if an obstruction suddenly appears on a busy road, can the robotic “driver” discriminate whether it’s a paper bag, a dog or a child? The likely answer is that it won’t cope as well as a really good driver, but it will be better than the average driver — machine errors may occur, but not as often as human errors. The roads will be safer. But when accidents do occur, they will create a legal minefield. Who should be held responsible — the “driver”, the owner, or the designer?
And what about the military use of autonomous drones? Can they be trusted to seek out a targeted individual and decide whether to deploy their weapon? Who has the moral responsibility then?