After Google demonstrated its fascinating Duplex technology this week at the company's I/O conference, the awe quickly gave way to worry, and then ire. How could Google attempt to fool people on the phone into believing that they were talking to fellow humans, when it was really just Google Assistant with a natural-sounding voice? Where is the transparency? Is this even ethical?
Responding to the controversy, Google said that it would indeed let call recipients know that a robot was on the other end of the line, a detail it had initially left out of its presentations. It's still not clear how the company will disclose that you're talking to a bot. In a statement, Google said:
We understand and value the discussion around Google Duplex -- as we've said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.
Case closed, right? Not according to David Ryan Polgar, a tech ethicist who writes, speaks and researches the impact of social media and tech.
He, too, was impressed by Google's demo, but he immediately saw the possible dangers and pitfalls.
"Watching that display of Google Duplex was certainly exciting and impressive, especially the second one, where they're calling the restaurant to make reservations," Polgar said. "That had natural language, and Duplex could understand even a heavily accented human. I think the other part of it, though, is it's pretty easy to imagine how this could potentially be misused or abused or kind of perverted."
In the wrong hands, this technology could certainly be used to scam people, Polgar said.
"Telemarketers must be having a frenzy about this," he said. "They must be sitting there saying, 'Wow, what can we do with this?'"
But the bigger hidden danger with Google Duplex is what it could do to the everyday people who decide to use it. By outsourcing to bots the conversations we'd rather not have, all in the name of getting more done, we may chip away at our own humanity.
"What I worry about is that, here we are, so worried about humanizing our bots, but are we botifying as humans?" Polgar warned. "We're basically automating our intimacy."
During one of the Google Duplex calls, the bot was smart enough not only to respond to questions from a hair salon receptionist but also to insert filler sounds, like "Mm-hmm," when the receptionist went to check the schedule. It even said, "Give me one second."
The bot sounded natural overall, but at one point in the conversation, when it said "That's fine," the reply could have been mistaken for curtness or even sarcasm. Google says a human could always take over, but what if the human is busy doing other things?
At first, Google says, Duplex will be used only for tasks like making appointments, booking restaurant reservations and inquiring about business hours. And there are some benefits to the overall concept.
"It seems kind of outdated when we have all these apps like Uber that are drastically impacting the efficiency of our life. And then we're waiting on the phone for 20 minutes to talk to an insurance company," Polgar said. "So I do think there is a need and a value in simplifying that and saying, this is transactional behavior that could be simplified and could be automated. And that's perfectly fine, as long as both sides are knowledgeable about it."
To Polgar, these kinds of "transactional conversations" have practical value and can save us time, so at least in terms of the scenarios it has chosen, Google Duplex is focusing on the right things. But will we somehow view service workers in these fields as less human as a result? And will we be less equipped to communicate in person overall the more we decide to automate?
"Even in this conversation, I'm imagining you, like maybe you're sitting at a desk. Maybe you're in an office. Now, you're in New York," Polgar said. "There's things that I'm imagining, but if I have no idea if I'm talking to a real person or not, that completely, I think, messes with our ability to form bonds."
Ultimately, Polgar envisions a scenario where bots will just be talking to bots for various tasks we want performed. Humans will just be one step removed on both sides of the equation. But even that has potentially dangerous ramifications.
"In our pursuit to communicate with more people than is humanly possible, we're potentially deleting the very value of our communication," Polgar said. "And that's what I worry about."