We’re getting all our ducks in a row for our AR Dev Camp this Saturday, December 5th at the Hacker Dojo. I’ve been amazed at the number and caliber of folks signed up to attend & contribute to both the Mountain View event and the simultaneous New York City AR Dev Camp. I think we all understand the scale of opportunities and challenges in forging this new domain. This will be an opportunity to come together and flesh out the many considerations needed to build a broad, robust, and open architecture for augmented reality. We have the hindsight of the internet revolution to offer examples of pitfalls and best practices alike. Indeed, we’re not building a new internet nor terraforming new worlds. Augmented reality is simply the next logical interaction layer atop the increasingly ubiquitous cloud of data & relationships permeating our lives, so it’s critical that we architect services & experiences that smoothly integrate across existing protocols.
Open interoperability across platforms, universal standards for markup & messaging, geospatial data representation, 2D & 3D rendering, identity & transaction management, strong security & encryption, structured data and portability, content & markup ownership, and solutions driven by design & user experience: all these considerations & more require tremendous coordination to converge on a set of platform specifications that enable a strong and extensible ecology of developers, users, and content creators. In the rush to plant flags and colonize the new AR domain, it’s critical that we balance competition and collaboration to avoid the walled-garden balkanization and impossible hype-machine expectations that sent virtual reality to an early grave.
So go to the signup page, add a topic on the Session Topics page, and come join us this weekend for heady, juicy, AR goodness! If you’re not in the SF Bay Area or NYC, check out the other AR Dev Camps listed or get some co-conspirators and plan your own.
I’m working with Mike Liebhold, Gene Becker, Anselm Hook, and Damon Hernandez to produce an AR Dev Camp in Mountain View, CA, on December 5th, 2009. It’s intended to be a technical unconference considering the elements necessary to create a robust and open augmented reality platform. Here are the details:
AR DevCamp 2009
The first Augmented Reality Development Camp (AR DevCamp) will be held in the SF Bay Area December 5, 2009. After nearly 20 years in the research labs, Augmented Reality is taking shape as one of the next major waves of Internet innovation, overlaying and infusing the physical world with digital media, information and experiences. We believe AR must be fundamentally open, interoperable, extensible, and accessible to all, so that it can create the kinds of opportunities for expressiveness, communication, business and social good that we enjoy on the web and Internet today. As one step toward this goal of Open AR, we are organizing AR DevCamp 1.0, a full day of technical sessions and hacking opportunities in an open format, unconference style.
AR DevCamp: a gathering of the mobile AR, 3D graphics and geospatial web tribes; an unconference
* Timing: December 5th, 2009
* Location: Hacker Dojo in Mountain View, CA
* Sponsorship: welcome, to cover basic costs of food, drink, etc.
* Attendance: AR DevCamp interest list
Among other topics, we’ll discuss how the various layers of an open augmented reality stack will fit together to support the following straw-man requirements:
* support for the two fundamental kinds of AR, whose semantic frameworks must be harmonized: 1. image-triggered and 2. location-based
* support for many image trigger types and many coordinate systems
* a description of what happens on the focal plane of the view, including user interface conventions and rendering rules
* a description of the properties of a specific object or place, including data type, decoding and rendering requirements and resources
* support for local media types produced by many application domains, including the 2D web, 3D web, web maps, GIS, CAD, BIM, 3D games, and virtual worlds
* support for local rendering rules and coordinate systems for specific places and objects, e.g., HTML, CAD objects and spaces, video, rendered game graphics, etc.
* an interoperable semantic framework harmonized with adjacent computing and media domains, e.g., the web, CAD, mapping, games, virtual worlds, etc.
* support for secure transactions and data exchange
* support for sensors and sensor networks
* social network interoperability, managing groups, permissions, and privacy
* messaging, communication, and collaboration
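As a thought experiment, the two trigger types in the straw-man list above could share a single portable annotation format. The sketch below is purely illustrative: the field names, schema, and URIs are my own assumptions, not any published AR standard.

```python
import json

def make_annotation(trigger, content):
    """Bundle a trigger (image- or location-based) with renderable content.

    This schema is a hypothetical sketch, not an existing specification.
    """
    return {"version": "0.1", "trigger": trigger, "content": content}

# 1. Location-based: anchored to WGS84 coordinates with a visibility radius.
geo_note = make_annotation(
    trigger={"type": "location", "lat": 37.3894, "lon": -122.0819,
             "radius_m": 50},
    content={"media": "text/html", "uri": "http://example.com/note.html"},
)

# 2. Image-triggered: anchored to a recognized marker or image fingerprint.
marker_note = make_annotation(
    trigger={"type": "image", "fingerprint": "sha256:<digest>",
             "system": "marker"},
    content={"media": "model/x3d+xml", "uri": "http://example.com/model.x3d"},
)

# Both kinds serialize to the same wire format, so a browser or client
# could dispatch on trigger type while rendering by declared media type.
payload = json.dumps([geo_note, marker_note])
decoded = json.loads(payload)
print(decoded[0]["trigger"]["type"])  # location
print(decoded[1]["content"]["media"])  # model/x3d+xml
```

The point is not the particular fields but the shape of the problem: harmonizing trigger semantics and media types behind one open envelope is what would let different AR browsers interoperate.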
Via Tish Shute at UgoTrade:
“Imagine an environment where most physical objects know where they are, what they are, and can, (in principle) network with any other object. With this infrastructure, reality becomes its own database. Multiple consensual virtual environments are possible, each oriented to the needs of its constituency. If we also have open standards, then bottom-up social networks and even bottom up advertising become possible. Now imagine that in addition to sensors, many of these itsy-bitsy processors are equipped with effectors. Then the physical world becomes much more like a software construct. The possibilities are both scary and wondrous.” (Vernor Vinge – intro to ISMAR 2009)
BCI technology and the convergence of mind & machine are on the rise. Wired Magazine just published an article by Michael Chorost discussing advances in optogenetic neuromodulation. Of special interest, he notes the ability of optogenetics to both read & write information across neurons.
In theory, two-way optogenetic traffic could lead to human-machine fusions in which the brain truly interacts with the machine, rather than only giving or only accepting orders. It could be used, for instance, to let the brain send movement commands to a prosthetic arm; in return, the arm’s sensors would gather information and send it back.
In another article featured at IEEE Spectrum, researchers at Brown University have developed a working microchip implant that can wirelessly transmit neural signals to a remote sensor. This advance suggests that brain-computer interface technologies will evolve past the need for wired connections.
Wireless neural implants open up the possibility of embedding multiple chips in the brain, enabling them to read more and different types of neurons and allowing more complicated thoughts to be converted into action. Thus, for example, a person with a paralyzed arm might be able to play sports.
MindHacks discusses the recent video of a touch-sensitive prosthetic hand. This is a Holy Grail of sorts for brain-machine interface: the hope that an amputee could regain functionality through a fully articulable, touch-sensitive, neurally integrated robotic hand. Such an accomplishment would indeed be a huge milestone. Of note, the MindHacks appraisal focuses on the brain’s ability to re-image body maps (perhaps due to its plasticity).
There’s an interesting part of the video where the patient says “When I grab something tightly I can feel it in the finger tips, which is strange because I don’t have them anymore”.
Finally, ScienceDaily notes that researchers have demonstrated rudimentary brain-to-brain communication mediated by non-invasive EEG.
[The] experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person whose computer receives the digits and transmits them to the second user’s brain through flashing an LED lamp… You can watch Dr James’ BCI experiment at YouTube.
One can imagine a not too distant future where the brain is directly transacting across wireless networks with machines, sensor arrays, and other humans.
From Tish Shute’s excellent UgoTrade article on Total Immersion and the “Transfigured City”: Shared Augmented Realities, the “Web Squared Era,” and Google Wave:
The recent emergence of “magic lens” augmented reality apps for our smart phones – Wikitude, Layar, Acrossair, Sekai Camera, and many others now – have given us a new window into our cities. But we are yet to realize the full potential of the AR/ubicomp base pair that can “make visible the invisible” and give us new opportunities to relate to the invisible data ecosystems of our cities, not merely as a spectator experience, but as an interactive, in context, real time opportunity to reimagine social relations.
In parallel, I had a brief Twitter exchange with Ian Hughes about the inevitable merging of virtual worlds and augmented reality layers. While in-box, narrative-driven worlds like WoW will continue to gain more traction, I expect to see a new form of VW that draws itself over the Real World. While it’s still strange to see augmented cyborgs walking around talking to themselves (i.e., people with Bluetooth headsets), imagine the silliness that will unfold when legions of teens are running about the streets interacting with some invisible layer of reality unseen by the uninitiated…