I can imagine a day when I will walk into my house after a rainy walk home and step into my built-for-purpose VR/AR room. Once I change out of my street clothes and into my haptic suit and goggles, I’ll step onto my omnidirectional treadmill. With a voice command, I’ll log into a virtual world. There I’ll meet my friends, and we’ll walk along the beach. As we walk, we’ll tell stories, smile and laugh our evening away. We’ll feel the refreshing wind on our warm sunlit faces, smell the salty air as it wafts up our noses, hear the song of seagulls in our ears, and feel our toes tickled by the warm surf combing against our sandy wet feet.
Over the years, we’ve seen many new virtual world features aimed at enhancing our experiences and making our avatars better simulate real life, including windlight, avatar physics, mesh, lighting and shadows, more realistic skins, the trend toward realistic body proportions, and more immersive camera positioning.
A significant and early attempt to bridge the gap was Linden Lab’s introduction of voice chat. Vivox, the provider of voice chat services for Second Life, claims that “over 35% of Residents (are) talking in channel at any given time… In Second Life today, the Vivox voice service supports… over 1 billion minutes of voice communications per month”. Considering that there are only about 40,000-50,000 concurrent users of Second Life at any given moment, I’m impressed by the amount of voice being used.
Whilst some residents embrace voice today, many reacted negatively to the news at the time, suggesting that voice would destroy or radically reduce residents’ enjoyment and use of Second Life. Clearly, it did not. For this reason, the introduction of voice serves as a good example of how alternative communication options are not to be feared, as long as they stay optional.
Thus, I have been surprised to hear similar objections to the potential introduction of dynamic facial expressions (e.g. some people won’t use it, it’s intrusive, rendered facial expressions are still too low quality). I’m not talking about the static type we can turn on and off with a tool as we pose for photos; rather, I mean dynamically rendered facial expressions that synchronously mirror our physical facial expressions as we talk into 3D cameras, as this example illustrates.
Personally, I would love to have this option in Second Life. Just last night I was in a group discussion and caught myself staring straight ahead as I addressed a friend who sat behind me. This is different from what I would naturally do in the real world, which is turn my head to face her. Despite being very accustomed to this lack of simple body language, I still feel ‘rude’ for not turning my head.
Wouldn’t it be sweet to smile inworld when we smile in real life, without resorting to emotes or emoticons? Wouldn’t it feel good to watch someone laugh with us when we share a funny story? How much richer would our experiences feel if we could see and share the raising of a piqued eyebrow, a knowing curl of a lip, or the furrowing of a worried brow?
Facial expression is touted as an important aspect of avatar communication in Philip Rosedale’s High Fidelity:
What I find very exciting is that Linden Lab CEO Ebbe Altberg has publicly stated in interviews that “ultimately over time, as [real world] cameras improve, if you’re willing to be in front of a camera, there are things you can obviously do to really transmit your real-world facial expressions onto your avatar, and we’re going to look at that further out.” With that said, I’m very much hoping that this is in the works for Linden Lab’s next generation virtual world.
Beyond facial expressions, I’m looking forward to the prospect of experiencing body language (enabled by technologies like Leap Motion and the Oculus Rift), and how that might enrich our communications inworld, as is rudimentarily foreshadowed in this video from High Fidelity tests (ignore the test avatars; instead, look at what they are doing with their arms, hands and fingers).
There is no doubt that body language will impact our day-to-day communication in virtual worlds. As a director of the performing arts, I’m very excited by what I see in this next video from Esimple, in which they use Unity3D and Microsoft Kinect to capture human movement and mirror it in real time onto an avatar.
Bleak real-world challenges aside, with these new technologies we are getting closer and closer to Star Trek’s holodeck. Soon, I hope, haptic suits will arrive: wearable force-feedback and electrical muscle stimulation devices that we’ll wear as we navigate the virtual world. Beyond body language, these devices will enable us to feel edges, curvature, texture, vibrations, and temperature – all the electrical impulses of touch.
The Financial Post reported in 2013 that “IFTech Inc., a startup based in Port Hope, Ont., launched a Kickstarter campaign hoping to finance the “As Real As It Gets” (ARAIG), a multi-sensory wearable suit that provides gamers with localized sound, as well as haptic feedback on the torso and upper arms.” Perhaps before its time, the Kickstarter campaign failed, raising only $126,625 of its $900,000 goal.
Technology of this kind has been in development since the mid-1990s, and will most likely be commercially available within our lifetimes. Much of it already is. Coupled with the Oculus Rift, Leap Motion, and the Virtuix omnidirectional treadmill, it will make us feel like we are literally inside the virtual world.
To borrow the slogan behind our would-be haptic suit – it really is going to be “as real as it gets”.