Trust Google to spoil a perfectly inoffensive demonstration of the latest in artificial intelligence by accidentally offering a glimpse of a dystopian robot future. It happened this week at the search company’s annual developer conference, an outdoor gathering of 7,000 people that felt like a cross between a religious revival meeting and a Californian summer festival.

By stringing together its latest AI chips into “pods”, Google claims to be able to bring to bear almost as much power as the world’s fastest supercomputer, all for the purpose of training its machine-learning models. The company’s demonstrations showed this tremendous resource being put at the service of stunningly innocuous tasks, such as enhancing the brightness of underexposed photos or finding matches online for fashionable clothes that catch your eye.

Then came the show-stopper. A human-sounding robot phoned to book a table at a restaurant, carrying on an extended conversation without the person on the other end realising they were talking to a machine. It was as though your diary had developed a personality of its own and gone out into the world to organise your life. This prompted just the kind of uncomfortable thoughts about AI that Google’s other demonstrations had managed to avoid. Machines pretending to be human have always elicited a “creepiness” factor, known in robot circles as the uncanny valley. But what happens when the robot leaps over the valley and manages to interact with humans on their own terms?
Machines that fool humans into thinking they are dealing with another person raise obvious ethical concerns. AI that weakens the sense of what is real promises to accelerate the arrival of a post-truth world. And isn’t a computer voice that can trick you into engaging in conversation destined to become the political robocall from hell?

The demonstration spoke volumes, not just about the predicament Google faces in how far to push its new AI powers into everyday situations but also about the challenges for all companies looking to engage their customers with technologies such as these. That is particularly the case as voice-enabled services become a more common feature of corporate software.

The Google demonstration — of a technology it calls Duplex — underlined two points about the design of these more “naturalistic” human-robot interactions. One is that it is important for people to know when they are dealing with a machine. Besides the obvious ethical worries, there are very practical reasons for this. Today’s machine-learning systems often seem magical but they are also brittle: they can fail suddenly and unexpectedly, and language is notoriously hard for a computer to master. So it helps for a person interacting with a robot to be prepared.

This does not just apply to language systems. Drive.ai, a driverless car start-up that began a trial passenger service in Texas this week, has gone out of its way to distinguish its cars from other cars on the road. They are painted bright orange, and a screen on the front is used to explain the car’s intentions to pedestrians, with messages such as: “Waiting for you to cross”. The aim is to destroy the illusion that such intelligent systems will behave just like human drivers, says Andrew Ng, an AI pioneer and Drive.ai board member. People need to know they are dealing with a robot.
A second message is that it is sometimes better to bury the intelligence in the background and use it to make people smarter, rather than allowing it to intrude directly into interactions best left to humans. Some of the same technologies behind Google’s robot caller — such as speech recognition, which identifies the words being spoken, and natural language understanding, which tries to derive the speaker’s meaning — are being put to use in voice AI systems embedded in corporate software. TalkIQ, for instance, uses these technologies to listen in on phone calls made by salespeople and customer service representatives. The software makes suggestions for how the humans can improve their performance, or offers information that might be useful during a call with a customer.

The prospect of people being guided to act by such systems raises other worries. Imagine two people holding a conversation in which each is being prompted how to respond by AI operating in the background. At what point does human will recede and machine will take over? And when that happens, why not move as quickly as possible to a full computer-to-computer negotiation?

This shows how hard it will be to avoid uncomfortable situations as human and machine intelligences start to overlap. As Google Duplex shows, the technologies are already here and companies that stand to benefit cannot avoid putting them to use. But they will have to stay on the right side of the creepiness line.
Copyright The Financial Times Limited 2018. All rights reserved.