I recently wrote about some of the difficulties beyond realistic rendering that developers face when trying to make us emotionally attached to a character. Human behaviors and emotions are so much more difficult to mimic than those of animals, no matter how abstract. You'd find me silently weeping for the destruction of little Metal Gear Mk. II long before I'd be shedding tears for Solid Snake. Why is it easier to evoke a nurturing and protective instinct in a virtual pet than in a virtual human?
For starters, we might ask whether the uncanny valley applies as strongly to animal simulations as it does to human ones. The quick and dirty answer is an obvious no. There's a reason the computer-animated films put out by Pixar and DreamWorks almost always feature non-humans: insects, marine life, monsters (The Incredibles being a notable exception). Computer-generated animals don't creep us (or our kids) out as easily. As a rule, though, this doesn't apply so much to physical pet simulations. The first time I saw a friend's AIBO I nearly leapt out of a window, and there are so many evil Furby stories floating around on the internet that I wouldn't be surprised if the DSM-IV had an entry on Furbaphobia.
What really sets the EyePet apart is something we haven't seen before in a dedicated pet simulator - us! We will be able to take pictures with our pet (I can't wait to see the next-generation family portrait: 2.4 kids, cat, dog, and Simian Sammy). Our home movies can include our new pet crawling up our arm and resting peacefully on our shoulder (no doubt wearing a limited-edition, pre-order-only Sackboy outfit). We can physically tickle it, pet it, and hopefully slap it around when it's bad (Black & White, anyone?). But its most important quality is that we are supposed to view this pet as if it were really interacting with the very world we inhabit every day.
This is an example of augmented reality (AR): overlaying digital information on our view of the physical world. It has garnered a good deal of press among psychologists for its ability to affect its users emotionally, in some cases helping them overcome extreme phobias. More practical examples of AR include heads-up displays that attach the information they provide "physically" to objects in the world. Imagine looking through transparent AR glasses (or contacts) at the food in the grocery store and having them display price and nutritional information. If AR has the ability to affect us to the degree that psychologists think, then the relevant question may not be whether our EyePet can be made real enough not to evoke revulsion. The question may be whether it can become too real, creating attachments that mimic or even exceed the way we bond with real pets.
Think of it. This animal has a fluidity of muscle movement that is all but impossible with consumer-marketed robots. Yet it interacts with our world, running up and down our arm and jumping into the pile of dirty laundry we left sitting in the corner. Combine this with the ability to learn from us, to recognize speech patterns or even learn, Furby-like, to speak back to us, and a small child probably won't be able to tell the difference between their EyePet and their cat. Sure, it's not human, but is this bringing us dangerously close to a point at which our emotional attachments no longer differentiate between real and virtual, between entities with a recognized consciousness and those without? I don't know about you, but if this is a hint of what lies on the other side of the uncanny valley, I'm wondering if we will like whatever view there is from the top.