On the simulation of amae

Robot falls

Kacie Kinzer’s Tweenbots made the internet rounds earlier this month on sheer charm. Unlike the awkward double-jointed humanoids in mainstream robotics labs learning to gingerly pour cups of tea for the elderly, Kinzer’s Tweenbots are just boxes on wheels with stylized smiles and taped-on notes indicating a destination and asking for help in their quest to arrive there.

And most of the time, they get it. Kinzer reports that “not one” Tweenbot was lost or damaged as it made its way through the city. As she puts it:

[T]his ad-hoc crowdsourcing was driven primarily by human empathy for an anthropomorphized object. The journey the Tweenbots take each time they are released in the city becomes a story of people’s willingness to engage with a creature that mirrors human characteristics of vulnerability, of being lost, and of having intention without the means of achieving its goal alone.

In other words, Tweenbots are masters of amae, the art of childish, irresistible dependency. This has intriguing implications for human-robot interaction even if the Tweenbots themselves are more art project than anything else.

It might seem surprising that this idea should come out of the US. Wasn’t Japan the country gradually filling up with roly-poly companions along the lines of those in Tezuka Osamu’s Astroboy and other manga, while the American market favors no-nonsense machines that are faceless, creepy, or both? (Relatively thoughtful example of this narrative: “Why Should We Be Friends?” in Newsweek last year.)

When you think about it, though, Japan’s superstar robots already rely on amae in a very deep and existential way. An AIBO isn’t quite as helpless as a Tweenbot, but it was very carefully designed to appear to be. (Even the sound design was contracted out to Takemura “Child & Magic” Nobukazu.) ASIMO speaks like a child, looks like a whimsical space elf, and acts like a clumsy servant. You could hire a human to do what the ASIMO does much more smoothly, and it would probably even be cheaper — but people forgive the ASIMO its failings because it seems to be trying its hardest to please.

A Roomba, on the other hand, relates to two things: furniture and dirt. It isn’t even designed to simulate awareness of humans, let alone deference. (You can imagine a Roomba doggedly cleaning a post-apocalyptic wasteland, Wall-E-style, but can you imagine an AIBO frolicking there?)

Tweenbots are closer to the Roomba pole than the ASIMO one. They, too, are insects blindly following one very simple algorithm: “move forward while looking cute”. They don’t interact with humans; humans act on them. That humans interpret this as an interaction is an artifact of our own programming: we are suckers for cute, helpless creatures. People get attached to their Roombas, too, naming them and treating them like pets, despite the fact that a Roomba literally cannot distinguish a human from an end table.
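The division of labor here can be made concrete. A minimal sketch, in Python — purely illustrative, not Kinzer’s actual design — of what the Tweenbot “algorithm” amounts to: the bot can only roll forward, and every bit of course correction comes from outside, modeled here as passersby who occasionally re-aim it toward its destination.

```python
def tweenbot_journey(start, destination, passerby_every=3, max_steps=50):
    """Simulate a Tweenbot on a one-dimensional street.

    The bot itself never steers: each step it just moves forward along
    its current heading (+1 or -1). A helpful passerby intervenes every
    `passerby_every` steps and points it toward the destination.
    Returns the number of steps taken, or None if it never arrives.
    """
    position, heading = start, 1
    for step in range(max_steps):
        if position == destination:
            return step  # arrived, thanks entirely to strangers
        if step % passerby_every == 0:
            # The only "intelligence" in the system lives out here.
            heading = 1 if destination > position else -1
        position += heading  # the sum total of the bot's own ability
    return None  # lost -- which, per Kinzer, never actually happened
```

Remove the passerby line and the bot wanders off forever; all the competence is in the crowd, which is the point.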

An AIBO or an ASIMO performs amae, while a Tweenbot embodies it. In robotics, specialization usually increases efficiency, which means that a Tweenbot’s amae comes cheaper and easier. But it also makes them a developmental cul-de-sac: they are designed to need us more than we need them.

This is not the case for the ASIMOs and AIBOs of the world. The hope for them is that they will one day mature into new models dextrous and capable enough to allow us to indulge in amae. Let’s be brutally honest: the manga model for robot-assisted aged care is less Astroboy than Doraemon, the robo-amae fantasy par excellence. If you want a picture of the future, imagine an old man pleading for a takecopter — forever.

Until then, though, we treat them like the children they are: indulging their weaknesses, applauding proudly when they manage to stand upright unsupported, and waiting patiently for them to grow up — something which Tweenbots, like Peter Pan, will never do.

April 30, 2009

Matt Treyvaud is a writer and translator living near Kamakura. He is Néojaponisme's Literature/Language editor and the proprietor of No-sword.

4 Responses

  1. Ratiocinational Says:

    The Tweenbots concept is amazing. So much so that I now want to try it myself. I’m curious as to how people will react differently if the bots have emotions other than happiness portrayed on them.

    I’m going to argue with you about Roombas too, although it’s a little nitpicky. If you get an iRobot Roomba with some sensors, with sufficient programming it can differentiate between objects! A friend of mine in college programmed one to recognize loud sounds. It was rudimentary, but it worked. It was pretty cool. It makes me wish I had done something with robotics for the course instead of what I did. Simulating human opinion flow in social networks via a simple AI is almost as cool as robots, I suppose.

  2. Leonardo Boiko Says:

    That brings to mind a recent experiment I can’t seem to google now, where they compared toddlers’ reactions to a moving robot and to a stationary one. The conclusion was that the babies assumed the inanimate robot was a toy, but treated the animated one as one of them — to the point where they covered it with blankets when it went to a power supply to recharge.

  3. Matt TREYVAUD Says:

    If you get an iRobot Roomba with some sensors, with sufficient programming it can differentiate between objects!

    Huh, really. Is this like a post-purchase hack, or something the company offers?

    Eventually I would expect some future iteration of the Roomba to learn that foot-shaped objects that appear and disappear unpredictably are the bottom edges of people, and everything else is furniture… but I don’t ever expect them to care.

  4. Ratiocinational Says:

    Originally you had to hack them to program them, but since then they’ve started releasing vacuum-free versions as programmable robots. It’s pretty neat.