Domino’s Robot Reality Check

If there’s one thing that sci-fi movies almost always skip over, it’s that awkward period when humanity is struggling with new technology it doesn’t quite grasp yet. In movies, teleportation doesn’t interrupt social lives — it accelerates them; flying cars don’t mean every teenager must apply for a pilot’s license — they just already know how to do it.

Witness, then, the gap between what well-meaning companies like Domino’s want to do with self-driving delivery machines and consumers’ brazen confidence that those same machines will be beaten, battered and abused.

The story starts March 17, when Domino’s Australia took to Facebook to announce the Domino’s Robotic Unit, or DRU, “the world’s first autonomous pizza delivery vehicle” — essentially a four-wheeled, self-navigating pizza oven meant to traverse the streets of Australia and New Zealand with piping-hot cargoes inside. The announcement came replete with an overwrought video reveal, which may explain why hundreds upon hundreds of Facebook commenters were driven to point out an obvious fact: If people already readily mess with human pizza delivery workers, what’s to stop them from incapacitating Domino’s DRU and taking the pizza within for themselves?

DRU ostensibly hides its pizzas from public consumption behind a private code meant only for the purchaser, but Domino’s — or the social media intern with the keys to the Facebook account — felt compelled to explain that they “take pizza protection extremely seriously at Domino’s, and DRU takes it even more seriously. That’s why he takes every precaution necessary to ensure the pizza is safe, including surveillance and security, etc.”

For as much as everyone loves pizza and to-the-door delivery, Domino’s isn’t quite running the high-margin business that could sustain the kind of “surveillance and security” an autonomous robot would require — let alone a fleet of them trundling across the Outback. DRU is more likely a publicity stunt than the next generation of pizza delivery technology, but even so, the quick and universal response from Domino’s Australian consumers — it’ll get messed with — shows that brands and the buying public still have a lot of growing up to do when it comes to interactions with robots.

Look no further than hitchBOT, the barebones “robot” designed to do nothing but get picked up and dropped off like a normal hitchhiker by human drivers kind enough to do so. In the summer of 2015, hitchBOT started its maiden voyage from Boston looking to make it all the way to San Francisco, but just two weeks and several states down on its transcontinental sojourn, hitchBOT was savagely dismembered and violently decommissioned by an unknown party while in Philadelphia.

“Usually, we are concerned whether we can trust robots, e.g., as helpers in our homes,” Frauke Zeller and David Harris Smith, the Canadian research team behind the experiment, told The Atlantic. “But this project takes it the other way around and asks: Can robots trust human beings?”

The fate of hitchBOT, as well as the initial public response to DRU, suggests the answer is clearly no — or, at least, not yet. Whether it’s malice or mere curiosity that draws humans to interfere with autonomous machines that would rather be left to their tasks, it’s obvious that humans just aren’t familiar enough with self-sufficient technology to leave it alone without closer, and occasionally disruptive, examination. Rather than an outright incompatibility between the two, it may be better to see current human-robot relations as a sort of warming-up period; until human actors understand how to operate naturally in a world that also contains autonomous robotics, a few speed bumps along the way are to be expected.

And while the stakes for successful human-automaton interaction in these two cases were low, there will very soon be more and more situations where the stakes are indeed as high as can be. Self-driving cars, for instance, are no longer the pie-in-the-sky concept they were a decade or even five years ago, and as Fortune noted, Google has been very upfront and even defensive in some cases about the fact that the overwhelming majority of incidents involving its self-driving vehicle fleet buzzing around Palo Alto were actually caused by reckless or careless human drivers in other cars.

Does that mean humans are simply worse drivers than computers? Or does it point to the fact that we’re now seeing two different ways of climbing behind the wheel — one data-driven, expecting everyone to follow the rules of the road; the other individual, assuming the worst of its fellow drivers — and the growing pains that occur when these two paradigms meet at the crossroads?