I attended the Friday session of Metaverse University 2009 at Stanford last week. Here are some of my observations:
Themes: interoperability, open source, simulations, visualizations, breaking down the walls, and being stuck with Second Life. Little emphasis on chat and social networking, per se. Much more emphasis on architectures & component solutions.
Trends from the Virtual Worlds Roadmap: simulation & training, health care, augmented tourism, mixed-reality museums, live sporting events in VWs, and virtual meetings.
While the hype over virtual worlds has faded, many serious researchers continue to do fascinating work in the territory. A monetizable VW strategy is still needed, but that goal seems to have receded into the future for many of the speakers. Likewise, many of the enterprise-scale companies investigating these spaces seem less interested in making money (whether through direct development and monetization or by riding the public hype train) and more interested in gaining efficiencies and trimming overhead (teleconferencing, remote collaboration).
Google (O3D), Intel (Cable Beach), Sun Microsystems (Project Wonderland), Samsung (Virtual Worlds Roadmap), and Nokia (supporting realXtend) were all present, as were many Stanford researchers, including the folks building Sirikata. Many are working to extend the OpenSim fork of the Second Life platform. None of them seems to be working towards direct productization (though Google wants O3D to be the in-browser standard for 3D content), but each was working to advance the platform and explore future possibilities.
With monetization off the table and money drying up, researchers are embracing open source solutions (OpenSim, ScienceSim, Ogre3D) and pushing for open standards (OpenID, OAuth, XMPP) and flexible APIs. Almost everyone mentioned a desire to move away from the proprietary walled-garden approach towards an integrative one that looks to the success of social network strategies. While celebrating open source development of Second Life forks, almost everyone bemoaned being stuck on the platform, often underscoring the feeling with a groan that “there’s nothing else”.
Authoring was rarely addressed; content is instead re-purposed from upstream tools, e.g. using 3ds Max & Maya assets to build world content. Collada was uniformly mentioned as the exchange format. Most developers still want to shoehorn other modalities (e.g. PowerPoint, web browsing, document collaboration) into the VW space. Some examples inadvertently showed the clunkiness of current solutions. I asked why a technology like PowerPoint is any better in 3D than in 2D, eliciting a long pause from the presenter. There’s still a lot of ambition on the part of developers but not always a ton of common sense.
However, IBM’s manager of service design and service systems research, Susan Stucky, gave me the most reasonable answer I’ve heard yet about why it’s important to move 2D modalities into 3D. She said that for collaborative telepresence it was very helpful to have access to everything you would normally have access to in a meeting. Speaking with her at the break, she told me how IBM has found that the greatest use of their Second Life investment has come from the ability to bring employees and clients from around the world together into a collaborative space. They’ve held conferences, run meetings, and explored simulations of project management strategies. For her, the ROI was gained by telepresence & simulations.
As for me, I had a breakthrough speaking with Susan. One of the most compelling yet least obvious values of collaboration in virtual worlds is the sense of embodiment conveyed by the presence of the avatar. Identity, social cohesion, team building, and friendship arise more naturally when those engaged are perceived as physically present. Self-awareness and the projection of self onto others are still quite bound to our physical bodies. Perhaps combining the embodiment of avatars with in-world access to knowledge & productivity tools represents a more effective modality for non-local collaboration. I’m not sure how this compares to video teleconferencing, but I feel there’s a lot of depth to be explored in how virtual embodiment reinforces social cohesion & collaboration (attn: PhD candidates).
Other notables: Henry Lowood (Stanford Curator of History of Science, Media, & Genetics) speaking on The Ultimate Archive: building virtual museums of virtual world platforms inside virtual worlds (e.g. a virtual museum with a room that lets you play the first Doom level as it was originally). He noted both “perfect capture” (all the data can be archived) and “perfect loss” (experiences, emotions, and deleted content cannot be captured) in VW archiving. Sheldon Brown (Center for Research in Computing in the Arts, UCSD) showed his mind-bending work Scalable City and called for procedurally deriving world assets and behaviorally deriving world experiences.
Virtual Worlds have lost funding and are presently in the Valley of Hype. Effective monetization strategies have yet to reveal themselves. However, there is value to the enterprise in leveraging virtual worlds for telepresence, collaboration, and simulation & training. The VW community is moving its R&D towards openness: open source components, open standards, interoperability, and engaging with the platforms and principles of social networks to enhance connectivity and move away from the Walled Garden. The most interesting work with virtual worlds continues to be in the deeper realms of behavior, psychology, telepresence, and simulations. Graphically, everyone is apparently stuck in Second Life. A smart, well-funded private investor would build a platform with competitive graphics capabilities (surface mesh, brep, kinematics, HLSL, etc); a powerful and scalable object model that can push to XML/RDF/RSS; a powerful simulation engine with an expressive visualization/analytics front-end; a REST/JSON API capable of talking to agents, tools, and other VWs (as well as Twitter, Facebook, LinkedIn, SMS, PlayStation Network, Xbox Network, etc); ActiveX embedding of 2D tools (Office apps, browsers, etc); a content marketplace built around highly expressive and personalizable avatars and fetish objects; and a cultivated 3rd-party service ecosystem supporting all of the above.
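To make the object-model idea concrete, here's a minimal sketch of what such an interoperable representation might look like: a world object serialized to the kind of JSON payload a cross-world REST API could exchange. Everything here is hypothetical, my own illustration, not any existing VW platform's schema; the field names, the function, and the `"collada"` asset-format tag are assumptions layered on the ideas above.

```python
import json

def export_vw_object(obj_id, name, position, tags):
    """Serialize a hypothetical virtual-world object to a JSON payload.

    A sketch only: the schema is invented for illustration, not drawn
    from OpenSim, Sirikata, or any real interop spec.
    """
    payload = {
        "id": obj_id,
        "name": name,
        # Position as explicit x/y/z so any consumer can interpret it
        "position": {"x": position[0], "y": position[1], "z": position[2]},
        "tags": tags,
        # Collada was the exchange format everyone at the session mentioned
        "asset_format": "collada",
    }
    # sort_keys gives a stable wire representation for diffing/caching
    return json.dumps(payload, sort_keys=True)

doc = export_vw_object("obj-42", "meeting kiosk", (1.0, 2.0, 0.5), ["demo"])
print(doc)
```

The point of pushing to a plain-text format like this is that the same object could flow to RSS feeds, social-network APIs, or agent tools without each consumer needing the full 3D client.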
Is this so hard? ;)