Category: augmented

Inviting Machines Into Our Bodies – Big Think

I have a new article over at Big Think looking at trends in wireless implant technology and the vulnerability profile presented by our emerging integration with connected biodevices. This article builds on my previous post here, Ubicomp Getting Under Your Skin? So Are Hackers.

From the intro:

In what amounts to a fairly shocking reminder of how quickly our technologies are advancing and how deeply our lives are being woven with networked computation, security researchers have recently reported successes in remotely compromising and controlling two different medical implant devices. Such implanted devices are becoming more and more common, implemented with wireless communications both across components and outward to monitors that allow doctors to non-invasively make changes to their settings. Until only recently, this technology was mostly confined to advanced labs but it is now moving steadily into our bodies. As these procedures become more common, researchers are now considering the security implications of wiring human anatomy directly into the web of ubiquitous computation and networked communications.

More…

New Aesthetic Filmmaking – Clouds: DSLR + Kinect + RGBD Toolkit

The work of James George caught my attention when he began publishing still images generated by mixing inputs from a DSLR camera paired with a Kinect scanner. He & his partner, Jonathan Minard, recently produced this thoroughly compelling future-now video from the same process, using their open framework software, RGBD Toolkit, to manage the mapping and in-scene navigation. The camera is fixed, but since the Kinect produces a 3D scene you can navigate around the captured image. Where forms in the camera field cast shadows (i.e., where the Kinect cannot scan past an occluding arm or hand), you see stretching and warping of the 3D mesh and image map. The effect is uncannily similar to the scenes in the film version of Minority Report when Tom Cruise’s character watches holovids of his son & wife, their forms trailing along the light path of the holoprojector. George & Minard frame this video as an exploration of emerging techniques and technologies in filmmaking. Also, they talk about coding and geekery and other cool stuff.

Clouds: beta from DEEPSPEED media on Vimeo.

Clouds is a computational documentary featuring hackers and media artists in dialogue about code, culture and the future of visualization.

This is a preview of a feature length production to be released later this year.

By Jonathan Minard (http://www.deepspeedmedia.com/) and James George (http://www.jamesgeorge.org/)
Made with RGBDToolkit.com
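
For the technically curious, the core move behind this kind of RGB-D capture is easy to sketch: back-project each pixel of a Kinect-style depth map into 3D using the camera intrinsics, then attach color sampled from the registered photograph. Below is a minimal illustration in Python under assumed intrinsics and array shapes; it is not code from RGBD Toolkit itself.

```python
import numpy as np

# Assumed pinhole intrinsics for a Kinect-class depth camera (illustrative values).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_colored_points(depth_m, color_rgb):
    """Back-project a depth image (in meters) into camera-space 3D points with colors.

    depth_m:   (H, W) float array, 0 where the sensor saw nothing (occlusion shadows)
    color_rgb: (H, W, 3) image already registered/aligned to the depth frame
    """
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    valid = z > 0                               # drop the holes the Kinect could not see into
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = color_rgb[valid]
    return points, colors                       # render as a navigable colored point cloud
```

The holes masked out by the valid test are the occlusion shadows mentioned above; stretch a mesh across those gaps and you get exactly the smearing and trailing seen in the video.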

The State of Augmented Reality – ARE2012

Last week I attended and spoke at the Wednesday session of ARE2012, the SF Bay Area’s largest conference on augmented reality. This is the 3rd year of the conference and both the maturity of the industry and the cooling of the hype were evident. Attendance was lower than previous years, content was more focused on advertising & marketing examples, and there was a notable absence of platinum sponsors and top-tier enterprise attendees. On the surface this could be read as a general decline of the field but this is not the case.

A few things are happening to ferry augmented reality across the Trough of Disillusionment. This year there were more headset manufacturers than ever before. The need for AR to go hands-free is becoming more & more evident [my biases]. I saw a handful of new manufacturers I’d never even heard of before. And there they were with fully-functional hardware rendering annotations on transparent surfaces. In order for AR to move from content to utility it has to drive hardware development into HUDs. Google sees this, as does any other enterprise player in the mobile game. Many of the forward-looking discussions effectively assume a heads-up experience.

At the algorithmic level, things are moving quickly, especially in the domains of edge detection, face tracking, and registration. I saw some really exceptional mapping that overlaid masks on people’s faces in realtime, responding to movement & expressions without flickers or registration errors (except for the occasional super-cool New Aesthetic glitch when the map blurred off the user’s face if they moved too quickly). Machine vision is advancing at a strong pace, and there was an ongoing thread throughout the conference about the challenges the broader industry faces in moving facial recognition technology into the mobile stack. It’s already there and it works, but the ethical and civil liberty issues are forcing a welcome pause for consideration.
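
To give a feel for what even the simplest version of that pipeline looks like in code, here is a minimal OpenCV sketch in Python (emphatically not the systems demoed at ARE2012) that detects a face in a webcam stream and blends a mask image over the detection box. The mask file is a hypothetical asset, and production AR masking would use landmark tracking and pose estimation rather than a plain bounding box.

```python
import cv2

# Haar cascade that ships with OpenCV; crude compared to modern landmark trackers,
# but enough to show the detect-then-overlay loop.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask = cv2.imread("mask.png")        # hypothetical overlay image, same channels as the frame

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
        overlay = cv2.resize(mask, (w, h))
        # Blend the mask over the detected face region.
        frame[y:y + h, x:x + w] = cv2.addWeighted(
            frame[y:y + h, x:x + w], 0.4, overlay, 0.6, 0)
    cv2.imshow("ar-mask", frame)
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The flicker and blur-off-the-face glitches described above happen when detection fails for a frame or lags a fast-moving head, which is exactly where the better registration algorithms earn their keep.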

Qualcomm was the sole platinum sponsor, promoting its Vuforia AR platform. Sony had a booth showing some AR games (Pong!?) on their Playstation Vita device. But pretty much everyone in the enterprise tier stayed home, back in the labs and product meetings and design reviews, slowly & steadily moving AR into their respective feature stacks. Nokia is doing this, Google of course, Apple has been opening up the camera stream and patenting eyewear, HP is looking at using AR with Autonomy, even Pioneer has a Cyber Navi AR GPS solution. The same players that were underwriting AR conferences in exchange for marketing opportunities and the chance to poach young developers are now integrating the core AR stack into their platforms. This is both good & bad for the industry: good because it will drive standardization and put a lot of money behind innovation; bad because it will rock the world of companies like Metaio & Layar who have been tilling this field for years. Typically, as a young technology starts to gain traction and establish value, there follows a great period of consolidation as the big fish eat the little ones. Some succeed, many fall, and a new culture of creators emerges to develop for the winners.

So here we are. Augmented reality is flowing in three streams: content and marketing grab eyeballs and easy money while conditioning the market to expect these experiences; developers extend the software stack towards real-time, pixel-perfect recognition & mapping, enabling the solutions to actually, um, solve problems; and hardware manufacturers labor to bring AR into the many transparent surfaces through which we interact with the world, freeing our hands and augmenting our views with ubiquitous networked data. Across these domains sit content shops, emerging start-ups, the leading innovators a la Layar & Metaio, and the big fish enterprise companies that have had a piece of the tech game for years & years and aren’t going to miss out if AR goes supernova. The market is a bit shaky and very much uncertain for the SMBs but it’s certainly maturing with the technology.

My sense is that everybody gets that AR is here to stay and has deep intrinsic value to the future of mobility and interface. How this will impact the many passionate folks curating & cultivating the field from the bottom-up remains to be seen.

Extended Senses & Invisible Fences – ARE2012

I recently gave a talk at ARE2012 about emerging interactions in the networked city. It’s a broad overview of ubicomp and how it is modulating our experience of ourselves, each other, and our environment. I’ll be writing a follow-up article with more info.

Top Post Round-Up: OWS, Ubicomp, Hyperconnectivity, & Transhumanity

I’ve just returned from a very interesting workshop in Washington, D.C. about fast-moving change, asymmetric threats to security, and finding signals within the wall of noise thrown up by big data. These are tremendous challenges to governance, policy makers, and the intelligence community. I’ll have more to say on these topics in later posts but for now, here’s a round-up of the most popular posts on URBEINGRECORDED in order of popularity:

Occupy Wall Street – New Maps for Shifting Terrain – On OWS, gaps in governance, empowered actors, and opportunities in the shifting sands…

Getting to Know Your Ghost in the Machine – On the convergence of ubiquitous computation (ubicomp), augmented reality, and network identity…

The Transhuman Gap – On the challenges facing the transhuman movement…

The Realities of Coal in the Second Industrial Revolution – On the energy demand and resource availability for the developing world…

Meshnets, Freedom Phones, and the People’s Revolution – On the Arab Spring, hyperconnectivity, and ad hoc wireless networks…

And a few that I really like:

Back-casting from 2043 – On possible futures, design fictions, and discontinuity…

On Human Networks & Living Biosystems – On the natural patterns driving technology & human systems…

Outliers & Complexity – On non-linearity, outliers, and the challenges of using the past to anticipate the future…

Thanks to all my readers for taking the time to think about my various rantings & pre-occupations. As always, your time, your participation, and your sharing is greatly appreciated!

Short Guides for Augmented Reality

A couple of breakouts from my Signals, Challenges, & Horizons for Hands-Free AR slidedeck…

Challenges
Aesthetics
Technical – power, weight, capture, integration
Interface – eye-tracking, gestural, selection, execution, filtering input & display
Interaction – context awareness, algorithms, provisioning
Perception – occlusion, distraction, depth cues, eyestrain
Legal & Ethical
Privacy
Identity
Surveillance
Security
Safety
Fragmented Realities

Future Horizons
Personal algorithms
Dynamic user interface
Provisioned experience
Cloud agents
Brain-computer interface
Bio-nanotechnology
Fully native augmented reality?


Signals, Challenges, & Horizons for Hands-Free Augmented Reality – ARE2011

Here’s the slidedeck from my recent talk at Augmented Reality Event 2011. I hope to post a general overview of the event soon, including some of the key trends that stood out for me in the space.

My IFTF Tech Horizons Perspective on Neuroprogramming

IFTF has published the 2010 research for their Technology Horizons program – When Everything is Programmable: Life in a Computational Age. This arc explored how the computational metaphor is permeating almost every aspect of our lives. I contributed the perspective on Neuroprogramming [PDF], looking at the ways technology & computation are directly interfacing with our brains & minds.

From the overview for the Neuroprogramming perspective:

Advances in neuroscience, genetic engineering, imaging, and nanotechnology are converging with ubiquitous computing to give us the ability to exert greater and greater control over the functioning of our brain, leading us toward a future in which we can program our minds. These technologies are increasing our ability to modify behavior, treat disorders, interface with machines, integrate intelligent neuroprosthetics, design more capable artificial intelligence, and illuminate the mysteries of consciousness. With new technologies for modulating and controlling the mind, this feedback loop in our co-evolution with technology is getting tighter and faster, rapidly changing who and what we are.

I also contributed to the Combinatorial Manufacturing perspective with Jake Dunagan. This perspective explores advances in nano-assembly & programmable matter. From the overview:

Humans have always been makers, but the way humans manufacture is undergoing a radical transformation. Tools for computational programming are converging with material science and synthetic biology to give us the ability to actually program matter—that is, to design matter that can change its physical properties based on user input or autonomous sensing. Nanotechnology is allowing us to manipulate the atomic world with greater precision toward the construction of molecular assemblers. Researchers are designing “claytronics”: intelligent robots that will self-assemble, reconfigure, and respond to programmatic commands. And synthetic biologists are creating artificial organic machines to perform functions not seen in nature.

Augmented Reality Development Camp 2010 – Dec. 4th GAFFTA

ardc2010

I’m excited to announce that the second annual Bay Area Augmented Reality Developers Camp will be held on Saturday, December 4th, 2010, at the Gray Area Foundation For The Arts in San Francisco! This will be a free, open-format, all-day unconference looking at the many aspects of Augmented Reality. We welcome hackers, developers, designers, product folks, biz devs, intellectuals, philosophers, tinkerers & futurists – anyone interested in this fascinating and revolutionary new technology.

If you’re interested please take a moment to sign up at the AR Dev Camp wiki.

Platforms for Growth and Points of Control for Augmented Reality

Tish Shute over at UgoTrade was kind enough to post a conversation we recently had about augmented reality, the Gartner Hype Cycle, and the O’Reilly Web 2.0 Points of Control map.

Chris Arkenberg: There will be much more of a blended reality experience in the living room for sure, and with interactive billboards. Digital mirrors are another area. So I mean if we kind of extend AR to include just blended reality in general, you know, this is moving into our culture through a number of different points. As you mentioned, it will be in the living room, it will be in our department stores where you can preview different outfits in their mirror. We’re already seeing these giant interactive digital billboards in Times Square and other areas.

It’s funny. I mean for me, the sort of blended reality aside, the augmented reality, to me, is actually a very simple proposition in some respects. When I look at this map, augmented reality is just an interface layer to this map in my mind, just as it’s an interface layer to the cloud and it’s an interface layer to the instrumented world. It’s a way to get information out of our devices and onto the world.

Is AR Ready for the Trough of Disillusionment?

hype cycle 2010

Gartner, one of the most trusted market analysis firms in the technology industry, just released its 2010 Hype Cycle Special Report. According to their research, augmented reality has entered the Peak of Inflated Expectations and will begin its slide down into the Trough of Disillusionment within the next year. To get a more visceral sense of what this means we can see that, again according to Gartner, public virtual worlds are just now at the bottom of the Trough looking for innovations and revenues to claw their way up onto the Slope of Enlightenment, having plummeted with the meteoric rise-and-fall of Second Life. The correlations between the VR curve & the AR curve are not lost on those of us who’ve been tracking both.

Looking at augmented reality, it’s clear that much of the hype, especially here in the US, has been driven by the relentless need of marketers to grab eyes in a world of on-rushing novelty, coupled to the embryonic rush of a young developer community trying to prove it can be done. And to the credit of the developers, they’ve indeed demonstrated the basic concept and shown that AR works and has a future, but many implementations have entered the public marketplace as advertising gimmicks & hokey markups constrained by the limits of this nascent technology. While truly valuable & interesting work is happening in AR, particularly among university researchers, European factories, and the Dutch, the public mind only sees the gimmicks and the hype. As with virtual reality (now subtly re-branded as “virtual worlds” as if they’re embarrassed by those heady days of hope & hype), augmented reality cannot possibly live up to all the expectations set for it in time to meet the immediate gratification needs of the marketplace. Evangelists, pundits, marketers, and advertisers all feed the hype cycle while the developers & strategists keep their heads down, toiling to plumb the foundations.

So if we accept that AR will necessarily pass through a PR & financial “Dark Night of the Soul” before reaching enlightenment, what then are the present challenges to the technology? Perhaps the largest barrier is the hardware. Using a cell phone to interrogate your surroundings is clearly of great value but it remains unclear which use cases benefit from rendering the results on the camera stream. Efforts like the Yelp Monocle are fun at first but the novelty quickly becomes overwhelmed by the challenges of the heavy-handed & occluding UI, the human interface (e.g. having to hold up your phone and “look” through it), and the need to have refined search, sort, and waypoint capabilities. Let’s not forget that the defining mythologies of AR – the sci-fi & cyberpunk visions of our expected futures – show augmented markups drawn heads-up on cool eyewear in the near-term, and dancing off of nanobots directly bonded to the optic stream in the far-term. Hyperbolic perhaps, but a fully-realized augmented reality must be a seamless, minimally intrusive, and personally informative overlay on the world. AR will climb the Slope of Enlightenment with the help of a successful heads-up eyewear device capable of attracting significant market adoption. This is a challenge that cannot be met by the AR industry alone but depends on a Great White Hope like iShades or whatever offering the Jobs juggernaut may extrude in the next 3-5 years.

Hardware aside (there are challenges with cloud latency, GPS accuracy, and battery life, among others), the augmented reality stack has a ways to go before we can get to the type of standardization necessary to draw serious development. The current environment is as expected for such a young domain: balkanized platforms vying to become the first standard and fragmented developers playing with young SDKs or just building their own kits. There’s a lot of sandboxing and minimal coordination & collaboration. This is one of the reasons public virtual worlds went into decline, in my opinion. When you’re dealing with reality – or its virtual approximation – walls tend to present a lot of general problems while offering only a few very select solutions. When architecting augmented reality platforms it should be paramount that the open internet is the core model. AR is simply a way to draw the net out onto the phenomenal world. As such it needs a common set of standards. For example, the AR markup object is a fundamental component that will be used by all AR applications. How do you make it searchable, location-aware, and federated, and how does it share structured metadata? AR must work to enumerate the taxonomy of its architecture & experience models in order to begin working towards best practices & standards (some are already doing this, such as the ARML standard proposed by Mobilizy). This is the only way that experience & interface will be broadly adopted, and it’s the only way that a large enough development community will emerge to support the industry.
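
To make the markup-object question concrete, here is one hypothetical shape such an object could take: a location-anchored record with structured metadata and a coarse geospatial key so it can be indexed, searched, and federated. This is only a Python illustration of the requirements above, not the ARML format or any shipping browser’s schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ARAnnotation:
    """A hypothetical, browser-agnostic AR markup object."""
    id: str
    lat: float
    lon: float
    altitude_m: float = 0.0
    title: str = ""
    uri: str = ""                                  # link out to the open web
    metadata: dict = field(default_factory=dict)   # structured, shareable metadata

    def geo_key(self, precision: int = 3) -> str:
        # Crude grid key (~100 m cells at precision 3) so annotations can be
        # indexed and discovered by location; a real system might use geohashes.
        return f"{round(self.lat, precision)}:{round(self.lon, precision)}"

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: a coffee-shop note that any compliant browser could render.
note = ARAnnotation("cafe-42", 37.7793, -122.4193,
                    title="Good espresso here",
                    uri="http://example.com/cafe-42",
                    metadata={"category": "cafe", "author": "public"})
print(note.geo_key(), note.to_json())
```

The particular fields matter less than the principle: the object carries its own anchor, its metadata, and its link to the open web in a form any browser could consume.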

For augmented reality to make it through the Trough of Disillusionment it must formalize & standardize the core components of the visual, blended web. To this end, companies like Layar and Metaio would be well-served by establishing strong partnerships and by continuing to work with industry and civic bodies to understand exactly how AR can meet their needs. Likewise, they could work with the likes of IBM to build a visualization layer for the Smarter Planet. The marketing money will dry up, so it’s imperative that the young platform companies collaborate to coordinate the standards under the hood, freeing them up to differentiate by the unique experiences & services they build on top. This may seem inevitable (or impossible, depending on your half-cup disposition) but look at virtual worlds – another technology that might be stronger if there were common standards & open movement across experiences. Second Life, for example, has survived largely through the soft & hard science work of unaffiliated university & corporate researchers trying to push the platform to be more than just a fancy chat experience. (Notably, the present heartbeat of Second Life does not appear to be the result of its management efforts.) AR would benefit by seeding this type of humanities and scientific work as much as possible, anchoring the technology in the very real needs of our world. To work on stuff that matters, to crib from Tim O’Reilly.

Gartner has generally been correct in its Hype Cycle prognostications. The timeframe is debatable, of course, but the report is instructional, provocative, and often impacts the degree of funding that moves into tech. Virtual worlds are a valuable model for augmented reality. The emergent AR players would do well to study both their decline into the Trough and their eventual, hopeful rise to enlightenment. The good news (and the freaky news) is that Gartner’s 2010 Hype Cycle indicates that human augmentation & brain-computer interfaces are making headway up the Technology Trigger curve, suggesting that both will show significant market presence within 5 years. So it’s likely that the dream of augmented reality will come to be, perhaps carried on the back of these even younger and more ambitious technologies.

For whatever failings or false starts the pundits may heap on augmented reality, it’s just too useful to be left behind. We want to see the world for what it is, rich with data & paths & affinities & memory. Those of us invested in its success would do well to work together to curate its passage through the Dark Night of the Hype Cycle.

[UPDATE: Marc Slocum over at O'Reilly RADAR (greatest horizon-scanning name evar!) elicited a very interesting comment from Layar CEO, Raimo van der Klein: "So we don't see AR as virtual Points of Interests around you. We see it as the most impactful mobile content out there." In some ways this challenges my assumption that AR is about visualizing the net & blending it with the hard world.]

The Cybernetic Self

This is one of 50 posts about cyborgs – a project to commemorate the 50th anniversary of the coining of the term. Thanks to Tim Maly of Quiet Babylon for running such a great project!

she
CC image from mondi.

“He would see faces in movies, on T.V., in magazines, and in books. He thought that some of these faces might be right for him…”

The word “cybernetic” derives from the Greek kybernetes, meaning “steersman” or “governor”. A cybernetic process is a control system that uses feedback about its actions in an environment to better adapt its behavior. The cybernetic organism, or “cyborg”, is a class of cybernetic systems that have converged with biological organisms. In this increasingly mythologized form, the cyborg embodies the ongoing dialectic between humanity & technology, and is an aspirational figure onto which we project our superhuman fantasies. While it offers security, enhancement, and corporeal salvation, the cyborg also presents an existential threat to the self and to the cherished notions of being uniquely human.

It’s a gamble but we don’t seem able to leave the table. As we offload more of our tasks into technology we enhance our adaptability while undermining our own innate resilience as animals. We wrap ourselves in extended suits of shelter, mobility, health, and communications. We distribute our senses through a global network of hypermedia, augmenting our brains with satellites & server farms & smart phones. Increasingly, our minds & bodies are becoming the convergence point for both the real & the virtual, mediated through miniaturization, dematerialization, and nano-scale hybridization. Our ability to craft the world around us is quickly advancing to give us the ability to craft our bodies & our selves.

“And through the years, by keeping an ideal facial structure fixed in his mind… Or somewhere in the back of his mind… That he might, by force of will, cause his face to approach those of his ideals…”

Computation is miniaturizing, distributing, and becoming more powerful & efficient. It’s moving closer & closer to our bodies while ubiquitizing & dematerializing all around us. The cybernetic process has refined this most adaptive capacity in little more than 50 years to be right at hand, with us constantly, connected to a global web of people, places, things, information, and knowledge. We are co-evolving with our tools, or what Kevin Kelly refers to as the Technium – the seemingly-intentional kingdom of technology. As Terence McKenna suggested, we are like coral animals embedded in a technological reef of extruded psychic objects. By directly illustrating how our own fitness & bio-survival becomes bound to the survival of our technology, the cyborg is a fitting icon for this relationship.

mirror
CC image from PhotoDu.de.

Technology has historically been regarded as something we cast into the world separate from ourselves, but it’s worth considering the symbiosis at play and how this relationship is changing the very nature of humanity. As we venture deeper & deeper into the Technium, we lend ourselves to its design. By embracing technology as part of our lives, as something we rely upon and depend on, we humanize it and wrap it in affection. We routinely fetishize & sexualize cool, flashy tech. In doing so we impart emotional value to the soulless tools of our construction. We give them both life & meaning. By tying our lives to theirs, we agree to guarantee their survival. This arrangement is a sort of alchemical wedding between human & machine, seeking to yield gold from this mixture of blood & metal, uncertain of the outcome but almost religiously compelled to consummate.

“The change would be very subtle. It might take ten years or so. Gradually his face would change its shape. A more hooked nose. Wider, thinner lips. Beady eyes. A larger forehead…”

In the modern world, our identities include the social networks & affinity groups in which we participate, the digital media we capture & create & upload, the avatars we wear, and the myriad other fragments of ourselves we leave around the web. Who we are as individuals reflects the unique array of technologies through which we engage the world, at times instantiated through multiple masks of diverse utility, at other times fractured & dis-integrated – too many selves with too many virtual fingers picking at them. Our experience of life is increasingly composed of data & virtual events, cloudy & intangible yet remote-wired into our brains through re-targeted reward systems. A Twitter re-tweet makes us happy, a hostile blog comment makes us angry, the real-time web feeds our addiction to novelty. Memories are offloaded to digital storage mediums. Pictures, travel videos, art, calendars, phone numbers, thoughts & treatises… So much of who we are and who we have been is already virtualized & invested in cybernetic systems. All those tweets & blog posts cast into the cloud as digital moments captured & recorded. Every time I share a part of me with the digital world I become copied, distributed, more than myself yet… in pieces.

broken
CC image from Alejandro Hernandez.

It can be said that while we augment & extend our abilities through machines, machines learn more about the world through us. The web 2.0 social media revolution and the semantic web of structured data that is presently intercalating into it has brought machine algorithms into direct relationship with human behavior, watching our habits and tracking our paths through the digital landscape. These sophisticated marketing and research tools are learning more and more about what it means to be human, and the extended sensorium of the instrumented world is giving them deep insight into the run-time processes of civilization & nature. The spark of self-awareness has not yet animated these systems but there is an uneasy agreement that we will continue to assist in their cybernetic development, modifying their instructions to become more and more capable & efficient, perhaps to the point of being indistinguishable from, or surpassing, their human creators.

“He imagined that this was an ability he shared with most other people. They had also molded their faces according to some ideal. Maybe they imagined that their new face would better suit their personality. Or maybe they imagined that their personality would be forced to change to fit the new appearance…”

In Ridley Scott’s Blade Runner, the young Tyrell Corporation assistant, Rachel, reflects on her childhood memories while leafing through photographs of her youth. These images are evidence of the past she uses to construct her sense of self. Memories provide us with continuity and frame the present & future by reminding us of our history – critical for a species so capable of stepping out of time. Rachel’s realization that she is a replicant, that her memories are false implants deliberately created to make her believe she’s human, precipitates an existential crisis that even threatens Harrison Ford’s character, Rick Deckard, surrounded as he is by photos of his own supposed past. This subtle narrative trick suggests that replicants will be more human-like if they don’t know they’re replicants. But it also invokes another query: if memories are (re-)writable, can we still trust our own past?

Yet both characters do appear quite human. They laugh and cry and love and seem driven by the same hopes and fears we all have. Ridley Scott’s brilliance – and by extension, Philip K. Dick’s – is to obscure the nature of the self and of humanity by challenging our notions of both. Is Rachel simply another mannequin animated by advanced cybernetics or is she more than that? Is she human enough? When the Tyrell bio-engineer J.F. Sebastian sees the Nexus 6 replicants, Pris and Roy Batty, he observes “you’re perfect”, underlining again the aspirational notion that through technology we can be made even better, becoming perhaps “more human than human”. This notion of intelligent artificial beings raises deep challenges to our cherished notions of humanity, as many have noted. But the casual fetishization of technology, as it gets nearer & friendlier & more magical, is perhaps just as threatening to our deified specialness in its subtle insinuation into our hands & hearts & minds.

mannequin
CC image from Photo Monkey.

In Mamoru Oshii’s anime classic, Ghost in the Shell, the female protagonist – a fully-engineered and functional robotic human named Kusanagi – at once decries those who resist augmentation, suggesting that “your effort to remain as you are is what limits you”, while simultaneously becoming engaged in a quest to determine if there might be more to her than just what has been programmed. She celebrates her artifice as a supreme achievement in overcoming the constraints of biological evolution while also seeking to find evidence that she is possessed of that most mysterious spark: the god-like ingression of being that enters and animates the human shell. Oshii’s narrative suggests that robots that achieve a sufficient level of complexity and self-awareness will, just like their human creators, seek to see themselves as somehow divinely animated. Perhaps it’s a method to defend the belief in human uniqueness but those writing the modern myths of cybernetics seem to imply that while humans aspire to the abilities of machines, machines aspire to the soulfulness of humans.

harlequin
CC image from Alaskan Dude.

“This is why first impressions are often correct…”

Chalk it up to curiosity, the power of design fictions, and an innate need to realize our visions, but if we can see it with enough resolution in our mind’s eye, we’ll try to bring it to life. The Ghost in the Shell & the Ghost in the Machine both intuit the ongoing merger between humanity & technology, and the hopes & fears that attend this arranged and seemingly-unavoidable alchemical wedding. As animals we are driven to adapt. As humans, we are compelled to create.

“Although some people might have made mistakes. They may have arrived at an appearance that bears no relationship to them. They may have picked an ideal appearance based on some childish whim or momentary impulse. Some may have gotten half-way there, and then changed their minds…”

Humans are brilliant & visionary but also impetuous, easily distracted, fascinated by shiny things, and typically ill-equipped to divine the downstream consequences of our actions. We extrude technologies at a pace that far outruns our ability to understand their impacts on the world, much less how they change who we are. As we reach towards AI, the cyborg, the singularity, and beyond, our cybernetic fantasies may necessarily pass through the dark night of the soul on the way to denouement. What is birthed from the alchemical marriage often necessitates the destruction of the wedding party.

cyborg
CC image from WebWizzard.

“He wonders if he too might have made a similar mistake.” – David Byrne, Seen & Not Seen

Are we working up some Faustian bargain promising the heights of technological superiority only for the meager sacrifice of our Souls? Or is this fear a reflection of our Cartesian inability to see ourselves as an evolving process, holding onto whatever continuity we can but always inevitably changing with the world in which we are embedded? As we offload more and more of our selves to our digital tools, we change what it means to be human. As we evolve & integrate more machine functionality we modify our relationship to the cybernetic process and re-frame our self-identity to accommodate our new capacities.

Like the replicants in Blade Runner and the animated cyborgs of Ghost in the Shell we will very likely continue to aspire to be more human than human, no matter how hard it may be to defend our ideals of what this may mean to the very spark of humanity. What form of cyborg we shall become, what degree of humanity we retain in the transaction, what unforeseen repercussions may be set in motion… The answers are as slippery as the continuum of the self and the ever-changing world in which we live. Confrontation with the existential Other – the global mind mediated through ubiquitous bio-machinery – and the resulting annihilation of the Self that will necessarily attend such knowledge, may very well yield a vastly different type of humanity than what we expect.

Augmented Reality: Federation & Fragmentation

Abstract for a forthcoming article:

At its core, the drive towards a fully-realized augmented reality is about the accessibility of information. The ability to quickly interrogate our environment in ways that reveal valuable data is a powerful adaptive capability conferred by the intersection of ubiquitous computing & augmented reality. The knowledge contained in the cloud becomes immediately visible, context-aware, and anchored to the solid world in which we move, both revealing formerly-hidden data and inviting participation & collaboration in new social annotations. The opportunity for AR to facilitate discoverability & social connections, to help reveal and share ourselves, and to reinforce social ties through visual signifiers is indeed quite promising. Yet as the hardware & software of augmented reality matures (particularly with respect to head-mounted visual overlays) it becomes possible that this technology will reinforce social elitism & designer realities, neo-tribalism, territorial conflicts, and socio-economic disparity by simultaneously inviting the wholesale tagging of our world while undermining the shared reference of objective reality we have relied upon as a fundamental socializing force since the dawn of humanity.

Back-Casting From 2043

When it’s busy like this the viz sometimes shifts like the color bleed you used to see on those old Sunday comics, way back in the day. Ubiquitous fiber pipes & wide-band wireless still can’t give enough bandwidth to the teeming multitudes downtown. The viz starts to lag, gets offset and even orphaned from the hard world it’s trying to be a part of. Hyperclear Ray Ban augments, lenses ground down by hand-sequenced rock algaes to such an impossibly smooth uniformity, run through with transparent circuitry & bloodied rare-earth elements, scanning the world in multiple dimensions, pinging the cloud at 10GHz and pushing articulated data forms through massive OLED clusters just to show me where I can find an open null shield and the best possible cup of coffee this side of Ethiopia. Then the pipes clog and those ridiculously expensive glasses turn into cheap 3D specs from 2010 pretending to make 2D look like real life, but instead here they’re doing the print offset thing, flattening my world into color shifts and mismatched registers.

Marks are flickering in & out, overlapping & losing their z-order. A public note on a park bench glows green – something about the local chemwash schedule – then loses integrity to one of my own annotations left there, like, a year ago. A poem I cranked out on a late night bender but it’s unreadable with all the other layers clashing. Even the filters get confused when the pipes clog. If you look around fast enough, marks start to trail & stutter in a wash of data echoes like when screens used to have refresh errors. Only now our eyes are the screens and the whole world gets caught in recursive copy loops.

The Ray Bans correct it pretty quickly, attenuating the rendered view and pushing up the hard view as the dominant layer. But for a moment it feels like you’re tripping. It used to be physically nauseating, a sudden vertigo brought on by that weird disconnect of self & place. Like so much of life these days, you spend a lot of time adapting to disconnects between layers. Between real and rendered. Between self & other, human & machine. Between expectations & outcomes.

The arc of glorious progress that opened the 21st century seemed to have found its apogee around 2006 or so and then came hurtling back towards Earth. And it wasn’t like earlier “corrections”. This one was big. It was a fundamental stock-taking of the entirety of the industrial age to date, and things were suddenly, shockingly, terribly mismatched from the realities of the world. Planetary-scale disconnects. The carrying capacity of economies, nations, ecosystems, and humanity itself came into clear & violent resolution by the 2020’s when everything started to radically shift under the twin engines of hyper-connectivity and ecological chaos. These two previously unexpected titans directly challenged and usurped the entire paradigm of the developed and developing worlds, setting us all into choppy and uncertain seas.

Sure, we still get to play with the crazy cool tech. Or at least some of us do. What the early cyberpunks showed us, and what the real systems geeks always knew, is that the world is not uniform or binary. It’s not utopia vs. dystopia, win vs. lose, us vs. them, iGlasses or collapse. It’s a complex, dynamic blend of an unfathomable number of inputs, governors, and feedback loops constantly, endlessly iterating across inconceivable scales to weave this crazy web of life. So we have climate refugees from Kansas getting tips from re-settled Ukrainians about resilience farming. We have insurgencies in North America and social collectives across South America. The biggest brands in the world are coming out of Seoul & Johannesburg while virtually-anonymous distributed collaboratives provide skills & services across the globe. And we have Macroviz design teams from Jakarta & Kerala directing fab teams in Bangkok to make Ray Bans to sell to anybody with enough will & credit to purchase. Globalization & its discontents have proven to offer a surprising amount of resilience. Heading into the Great Shift it looked like the developed world was headed for 3rd world-style poverty & collapse. But it hasn’t been quite that bad. More of a radical leveling of the entire global macro-economic playing field with the majority settling somewhere on the upper end of lower class. Some rose, many fell. It was… disturbing, to say the least. It simply didn’t fit the models. Everyone expected collapse or transcendence.

We humans want things to be as simple as possible. It’s just natural. Makes it easier to service the needs of biosurvival. But we’ve not created a simple world. Indeed, the world of our making looks about as orderly as the mess of 100 billion brain cells knotted up in our heads or the fragmented holographic complexes of memories & emotions, aspiration & fears, that clog it all up. We built living systems as complex as anything the planet could dish out. Not in the billions of years nature uses to refine and optimize but in a matter of a few millennia. We raced out of the gate, got on top of the resource game, took a look around, and realized the whole thing needed to be torn down and completely redesigned for the realities of the world. The outcomes no longer fit the expectations. In some strange fractal paradox, the maps got so accurate that the territory suddenly looked very different from what we thought.

The null shield was created as a black spot. A cone of silence for the information age. They’re like little international zones offering e-sylum in select coffee shops, parlors, dining establishments, and the finer brick-and-mortar lifestyle shops. And in conflict zones, narco-corridors, favelas, gang tenements, and the many other long-tail alleyways of the ad hoc shadow state. The null shield is a fully encrypted, anonymized, opt-in hotspot that deflects everything and anything the global service/intel/pr industry tries to throw at you or copy from you. What’s better is you don’t even show up as a black spot like the early implementations that would hide you but basically tell the world where you were hidden. You’re invisible and only connected to the exact channels you want.

These were originally created for civ lib types and the militarized criminal underclass as a counter-measure to the encroaching security state. But as traditional states universally weakened under the weight of bureaucracies and insurmountable budgets (and the growing power of cities and their Corp/NGO alignments), the state’s ability to surveil the citizenry declined. All the money they needed to keep paying IT staff, policy researchers, infrastructure operators, emergency responders, and the security apparatus – all that money was siphoned up by the cunning multinationals who used their financial wit & weight to undermine the state’s ability to regulate them. Now states – even relatively large ones like the U.S. government – are borrowing money from the multinationals just to stay afloat. The iron fist of surveillance & security has been mostly replaced by the annoying finger of marketing & advertising, always poking you in the eye wherever you go.

Keeping on top of the viz means keeping your filters up to date and fully functional. Bugs & viruses are still a problem, sure, but we’ve had near-50 years to develop a healthy immunity to most data infections. We still get the occasional viz jammer swapping all your english mark txt with kanji, and riders that sit in your stream just grabbing it all and bussing it to some server in Bucharest. But it’s the marketing vads and shell scanners that drive the new arms race of personal security. Used to be the FBI were the ones who would scan your browsing history to figure out if you’re an Islamic terrorist or right wing nut, then black-out the Burger Trough and grab you with a shock team right in the middle of your Friendly Meal. Even if they had the money to do it now, the Feds understand that the real threats are in the dark nets not the shopping malls. So the marketers have stepped in. They want your reading list so they can scan-and-spam you wherever you go, whenever, then sell the data to an ad agency. They want access to your viz to track your attention in real-time. They want to fold your every move into a demographic profile to help them pin-point their markets, anticipate trends, and catch you around every corner with ads for the Next Little Thing. And they use their access to rent cog cycles for whatever mechanical turk market research projects they have running in the background.

Google gave us the most complete map of the world. They gave us a repository of the greatest written works of our species. And a legacy of ubiquitous smart advertising that now approaches near-sentience in its human-like capacity to find you and push your buttons. In some ways the viz is just a cheap universal billboard. Who knew that all those billions of embedded chips covering the planet would be running subroutines pushing advertising and special interest blurbs to every corner of the globe? There are tales of foot travelers ranging deep into the ancient back-country forests of New Guinea, off-grid and viz-free, only to be confronted by flocks of parrots squawking out the latest tagline from some Bangalore soap opera. Seems the trees were instrumented with Google smart motes a few decades ago for a study in heavy metal bio-accumulation. Something about impedance shielding and sub-frequency fields affecting the parrots…

So while the people colonized the cloud so they could share themselves and embrace the world, the spammers, advert jocks, and marketing hacks pushed in just as quickly because wherever people are, wherever they gather and talk and measure themselves against each other & the world… in those places they can be watched and studied and readily persuaded to part with their hard-earned currency.

Or credits or karma points or whatever. Just like the rest of the big paradigms, value has shifted beyond anybody’s understanding. Gold and currency at least attempted to normalize value into some tangible form. But the markets got too big & complex and too deeply connected to the subtleties of human behavior and the cunning of human predators. While money, the thing, was a tangible piece of value, the marketplace of credit & derivatives undermined its solidity and abstracted value out into the cold frontiers of economics philosophers and automated high-frequency trading bots. So much of the money got sucked up into so few hands that the world was left to figure out just how the hell all those unemployed people were going to work again. Instead of signing up for indentured servitude on the big banking farms, folks got all DIY while value fled the cash & credit markets and transfigured into service exchanges, reputation currencies, local scrip, barter markets, shadow economies, and a seemingly endless cornucopia of adaptive strategies for trading your work & talent for goods & services.

Sure, there’s still stock markets, central banks, and big box corps but they operate in a world kind of like celebrities did in the 20th century, though more infamous than famous. They exist as the loa in a web of voodoo economics: you petition them for the trickle-down. Or just ignore them. They’re a special class that mostly sticks among their kind, sustaining a B2B layer that drives the e-teams & design shops, fab plants & supply chains to keep churning out those Ray Ban iGlasses. Lucky for them, materials science has seen a big acceleration since the 2010’s with considerable gains in miniaturization and efficiency so it’s a lot easier to be a multinational when much of your work is dematerialized and the stuff that is hard goods is mostly vat-grown or micro-assembled by bacterial hybrids. Once the massive inflationary spike of the Big Correction passed, it actually got a lot cheaper to do business.

Good news for the rest of us, too, as we were all very sorely in need of a serious local manufacturing capacity with a sustainable footprint and DIY extensibility. Really, this was the thing that moved so many people off the legacy economy. Powerful desktop CAD coupled to lo-intensity, high-fidelity 3d printers opened up hard goods innovation to millions. The mad rush of inventors and their collaborations brought solar conversion efficiency up to 85% within 3 years, allowing the majority of the world to secure their energy needs with minimal overhead. Even now, garage biotech shops in Sao Paulo are developing hybrid chloroplasts that can be vat-grown and painted on just about anything. This will pretty much eliminate the materials costs of hard solar and make just about anything into a photosynthetic energy generator, slurping up atmospheric carbon and exhaling oxygen in the process. Sometimes things align and register just right…

So here we are in 2043 and, like all of our history, so many things have changed and so many things have stayed the same. But this time it’s the really big things that have changed, and while all change is difficult we’re arguably much stronger and much more independent for it all. Sure, not everybody can afford these sweet Ray Bans. And the federated state bodies that kept us mostly safe and mostly employed are no longer the reliable parents they once were. We live in a complex world of great wealth and great disparity, as always, but security & social welfare is slowly rising with the tide of human technological adaptation. Things are generally much cheaper, lighter, and designed to reside & decay within ecosystems. Product becomes waste becomes food becomes new life. Our machines are more like natural creatures, seeking equilibrium and optimization, hybridized by the ceaseless blurring of organic & inorganic, by the innate animal disposition towards biomimicry, and by the insistence of the natural world to dictate the rules of human evolution, as always. After all, we are animals, deep down inside, compelled to work it out and adapt.

Time’s up on the null shield. Coffee is down. And the viz is doing its thing now that the evening rush has thinned. Out into the moody streets of the city core, the same streets trod for a thousand years here, viz or no. The same motivations, the same dreams. It always comes back to how our feet fall on the ground, how the food reaches our mouth, and how we share our lives with those we care for.

On Augmented Realities


Image from robinmochi.

[Cross-posted from my post at Boing Boing.]

Augmented Reality is definitely trending up the Hype Cycle in a big way. The past year has seen explosive growth in this nascent field, buoyed by the rise of GPS-enabled, cloud-aware smart phones. The marketing hype has, of course, been even more resounding, like a wailing chorus of virtual vuvuzelas trumpeting the next great wave of advertising (I couldn’t resist). But beneath the hype and the fluff is a thriving community of innovators & designers working to weave this technology into the very fabric of our lives.

As a quick review, augmented reality is a context-aware UI layer rendered over a camera stream or other transparent interface. This is typically mediated by geo-location, orientation, physical markers (those funky UPC-like symbols), and visual recognition. In this manner AR is able to reveal visually the hidden data shadow of our world, like showing you the nearest coffee shops or details about the air quality in your city. The mobile device gets info about where you are and what direction you’re facing, goes to the cloud to look up data appropriate for the vicinity, then renders it over the camera stream in a way that updates as you move.
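
A rough sketch of that loop: given the device’s latitude, longitude, and compass heading, plus a list of nearby points of interest already fetched from the cloud, the app only needs a bearing calculation and a field-of-view test to decide what to draw and where. The POI list, field of view, and frame width below are invented values for illustration.

```python
import math

FOV_DEG = 60.0   # assumed horizontal field of view of the phone camera

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the device to a point of interest."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def visible_pois(device_lat, device_lon, heading_deg, pois, frame_width_px=1280):
    """Return (name, x_pixel) for POIs that fall inside the camera's field of view."""
    hits = []
    for name, lat, lon in pois:
        rel = (bearing_deg(device_lat, device_lon, lat, lon) - heading_deg + 540) % 360 - 180
        if abs(rel) <= FOV_DEG / 2:
            x = int((rel / FOV_DEG + 0.5) * frame_width_px)   # where to draw the label
            hits.append((name, x))
    return hits

# Hypothetical POIs near a device in San Francisco facing roughly north-east.
pois = [("Coffee", 37.7800, -122.4180), ("Museum", 37.7855, -122.4010)]
print(visible_pois(37.7793, -122.4193, 45.0, pois))
```

Everything else a real AR browser does (the data fetch, the vertical placement, the compositing over the camera stream) layers on top of this basic geometry.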

A whole industry has been born around this premise, dragging in images, annotations, and data to overlay on the camera stream of our mobiles. But the really interesting stuff is yet to come. As standardization issues, hardware issues, and numerous UI design challenges sort out in the next couple of years, concurrent with the development of AR-specific devices, our interaction with visualized data will become more and more specialized and appropriate to our individual needs. The clutter of markups that currently plagues many AR apps will be attenuated by algorithms that know our interests and affinities and block out the elements we wish to avoid. Just like Amazon makes recommendations based on your click & purchase history, AR apps will screen out the noise and provide us only with the data we need.
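
A toy sketch of how that attenuation might work: score each annotation’s tags against a weighted interest profile and drop anything below a threshold. The profile, tags, and threshold are all made up for illustration; a real system would presumably learn the weights from click and purchase history, much as the recommendation engines mentioned above do.

```python
def filter_annotations(annotations, interests, threshold=0.5):
    """Keep annotations whose tags overlap the user's weighted interests.

    annotations: list of dicts like {"title": ..., "tags": [...]}
    interests:   dict mapping tag -> weight in [0, 1]
    """
    kept = []
    for a in annotations:
        score = sum(interests.get(tag, 0.0) for tag in a["tags"])
        if score >= threshold:
            kept.append((score, a["title"]))
    return [title for _, title in sorted(kept, reverse=True)]

# Hypothetical user who cares about coffee and art but not real-estate ads.
interests = {"coffee": 0.9, "art": 0.7, "realestate": 0.0}
annotations = [
    {"title": "Pop-up gallery", "tags": ["art"]},
    {"title": "Condo open house", "tags": ["realestate", "ad"]},
    {"title": "Espresso bar", "tags": ["coffee", "food"]},
]
print(filter_annotations(annotations, interests))   # ['Espresso bar', 'Pop-up gallery']
```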

When paired with the massive deployment of embedded sensors, AR becomes a lightweight visualization layer for interfacing with the instrumented world. Civic workers could see underground cables and pipelines. Homeowners could see real-time energy & network use. Police and early responders could post visual warnings cordoning streets and alerting to hazards. Ecologists could determine water & air quality at a glance. Ecosystems begin to have a voice, communicating soil contamination to observers. Public facilities like park benches, utility poles, and street signs could hold annotations & links created by community members, made public or gated by in-group permissions. Geographic social annotations could mark up our cities with tags and content. Virtual worlds might break out of the box and overlay on the physical plane. The environment suddenly becomes much richer – and potentially much noisier – with a flood of information. Augmented reality promises to exteriorize the cloud, drawing it out across the world canvas and making visible our social fabric. But it doesn’t promise to mediate or regulate that content.

We risk myopia, disconnection, visual occlusion, fragmented realities, reinforced tribalism. Consider the seemingly-inevitable future where eyewear mediates a cloud-aware augmented interface with the world. Perhaps you opt to obscure ethnicities or anyone not connected to the net. Ghettos look much nicer when painted over with high-res colors and dancing sprites. The world you experience is really only shared by the other people running your default layer set. Maybe you see paycheck information or health records or political affinities of those you pass, measuring up the once-private lives of your community. Perhaps the most popular layers are hacked to display swastikas or porn or spam swarms or simply to black out your view in the middle of the morning commute. How does the layered world enable crime, gang affinities, and political or religious extremism? What inevitable inequities might arise between those able to purchase such access and those condemned to the dark poverty of quiet disconnection? Do the wealthy become even more enhanced & capable compared to the underclass? And what are the risks of getting lost in the virtual glitz? Are there considerations for how these augmented realities will bring us closer to the natural world in which we’re embedded? And just what is “real” or “natural” anymore?

As connected social computing devices get smaller & smaller and nearer & nearer to us, the weight of the cloud gets lighter. We carry around immense computational power and almost immediate access to the global repository of information. The mobile phone will eventually pair with heads-up eyewear displays just as more and more people avoid catastrophic disease & injury through the aid of embedded brain-computer interfaces. As computation moves next to and into our bodies, the cloud is breaking out of the screen and washing onto our world. We grow more augmented with computation while our environment is getting smarter and more aware and increasingly able to communicate with us. It may very well be that in 5, 10, 20 years the world is a much more visual, dynamic, and communicative place than we can even imagine.

[For more of my explorations of this subject check out my articles
Breaking Open the Cloud: Heads in an Augmented World and Cognition & Computation: Augmented Reality Meets Brain-Computer Interface.]

KedgeForward: KTLS: The Future of Transhumanism

kedgeforward

The foresight & strategy group, KedgeForward, has featured me in their first KTLS: KedgeForward Thought Leader Series. They’re doing great work – check out their Holoptic Foresight Dynamics series, as well as their excellent presentation on Food Systems in 9 Minutes. I’m honored to be included in their list of Thought Leaders.

The question they pose:

“Do you see a transhuman species emerging? If yes, what present drivers are catalyzing this meme and evolutionary movement? If no, what ideas or emerging trends are discouraging or disrupting such a movement?”

And my answer:

“In strict terms, a species must be capable of passing on its adaptations to offspring through sexual transmission. In as much as transhumanism is proceeding through genetic engineering, it may be possible that enhancements to longevity, health, and physical & perceptual structures could be transmitted along the germ line, though there remain significant challenges to such deep modification, not least of which are the attendant moral & ethical questions…” (continued at KedgeForward)

Breaking Open the Cloud: Heads in an Augmented World

This past Saturday I worked with Mike Liebhold, Gene Becker, Anselm Hook, and Damon Hernandez to present the West Coast Augmented Reality Development Camp at the Hacker Dojo in Mountain View, CA. By all accounts it was a stunning success with a huge turn-out of companies, engineers, designers, makers, artists, geo-hackers, scientists, techies and thinkers. The planning was mostly done virtually via email and phone meetings with only a couple of visits to the venue. On Saturday, the virtual planning phase collapsed into reality and bloomed on site into AR Dev Camp.

As an un-conference, the event itself was a study in grassroots, crowd-sourced, participatory organization with everyone proposing sessions which were then voted on and placed into the schedule. To me, it was a wonderfully organic and emergent process that almost magically gave life and spirit to the skeleton we had constructed. So before I launch into my thoughts I want to give a hearty “Thank You!” to everyone that joined us and helped make AR DevCamp such a great experience. I also want to give a big shout-out to Tish Shute, Ori Inbar, and Sophia for coordinating the AR DevCamp in New York City, as well as Dave Mee & Julian Tate who ran the Manchester, UK event. And, of course, we couldn’t have done it without the help of our sponsors, Layar, Metaio, Qualcomm, Google, IFTF, Lightning Laboratories, Web3D Consortium, IDEAbuilder, MakerLab, and Waze (and URBEINGRECORDED with Cage Free Consulting contributed the flood of afternoon cookies).

So first, just what is Augmented Reality? There's a tremendous amount of buzz around the term, weighing it down with connotations and expectations. Often, those investing in its future invoke the haunting specter of Virtual Reality, doomed by its inability to live up to the hype: ahead of its time, lost mostly to the realm of military budgets and skunkworks. Yet, the AR buzz has driven a marketing rush throwing gobs of money at haphazard and questionable advertising implementations that quickly reach millions and cement in their minds a narrow association with flashy magazine covers and car ads. Not to diminish these efforts, but there's a lot more – and a lot less – going on here.

In it’s most distilled form, augmented reality is an interface layer between the cloud and the material world. The term describes a set of methods to superimpose and blend rendered digital interface elements with a camera stream, most commonly in the form of annotations such as text, links, and other 2 & 3-dimensional objects that appear to float over the camera view of the live world. Very importantly, AR includes at it’s core the concept of location mediated through GPS coordinates, orientation, physical markers, point-clouds, and, increasingly, image recognition. This combination of location and superimposition of annotations over a live camera feed is the foundation of AR. As we’re seeing with smart phones, the device knows where you are, what direction you’re facing, what your looking at, who & what is near you, and what data annotations & links are available in the view. In this definition, the cloud is the platform, the AR browser is the interface, and annotation layers are content that blend with the world.

So the augmented reality experience is mediated through a camera view that identifies a location-based anchor or marker and reveals any annotations present in the annotation layer (think of a layer as a channel). Currently, each of these components is uniquely bound to the AR browser in which they were authored so you must use, for example, the Layar browser to experience Layar-authored annotation layers. While many AR browsers are grabbing common public data streams from sources like Flickr & Wikipedia, their display and function will vary from browser to browser as each renders this data uniquely. And just because you can see a Flickr annotation in one browser doesn't mean you will see it in another. For now, content is mostly bound to the browser and authoring is mostly done by third-parties building canned info layers. There doesn't seem to be much consideration for the durability and longevity of these core components, and there is a real risk that content experiences may become fractured and ephemeral.
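As a rough illustration of the layer-as-channel idea, here is a hedged sketch of the per-frame lookup an AR browser performs: take the anchors it currently recognizes in view and the layers the user has subscribed to, and collect the annotations to draw. The types are my own invention, not any vendor's actual API; the lock-in problem described above exists precisely because each browser defines these objects differently.

```typescript
// Hypothetical sketch of the anchor -> layer -> annotation lookup an AR
// browser performs for each frame. Not any vendor's actual API.

interface Annotation {
  anchorId: string; // which real-world anchor this attaches to
  content: string;  // text, link, or a reference to a 2D/3D asset
}

interface Layer {
  id: string;                // think of a layer as a channel
  annotations: Annotation[];
}

// Given the anchors recognized in the current camera view and the layers the
// user has subscribed to, return everything that should be drawn this frame.
function annotationsInView(visibleAnchorIds: Set<string>, subscribedLayers: Layer[]): Annotation[] {
  return subscribedLayers.flatMap(layer =>
    layer.annotations.filter(a => visibleAnchorIds.has(a.anchorId))
  );
}
```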

Indeed, content wants to be an inclusive, social experience. One of the core propositions underlying our motivation for AR DevCamp is the idea that the platforms being built around augmented reality should be architected as openly as possible to encourage the greatest degree of interoperability and extensibility. In the nascent but massively-hyped AR domain, there's a growing rush to plant flags and grab territory, as happens in all emergent opportunity spaces. The concern is that we might recapitulate the Browser Wars – not intentionally but by lack of concerted efforts to coordinate implementations. While I maintain that coordination & open standardization is a necessity, I question my own assumption that without it we'll end up with a bunch of walled gardens. This may be underestimating the impact of the web.

Through the lessons and resultant standardization of the Browser Wars, it's become a best practice (and indeed, a necessity) to design specifically to the most common standards. Arguably, the move from Web 1.0 (essentially a collection of static billboards) to the social interactions that characterize Web 2.0 established and deeply reinforced the fundamental requirement that we're all able to share information & experiences in the cloud. This social commons necessarily requires an architectural commonality. Thus, we all agree that HTML, JavaScript, PHP, JSON, MySQL, and now RDF, OWL, and SPARQL are the core components of our data service models. Since we understand that AR is primarily a location-aware interface layer for the cloud, it's very likely that independent implementations will all speak the same language. However, the point of AR DevCamp and similar gatherings is to challenge this assumption and to reinforce commonality by bringing everyone together to press flesh & exchange notes. The social dynamic in the natural world will determine the level of cooperation in the virtual.

Yet, this cooperation and normalization is by no means a given. Just about every chunk of legacy code that the Information Age is built upon retains vestiges of the git-er-done, rush-to-market start-up mindset. Short-sighted but well-meaning implementations based upon limited resources, embryonic design, and first-pass architectures bog down the most advanced and expensive software suites. As these code bases swell to address the needs of a growing user base, the gap between core architecture and usability widens. Experience designers struggle against architectures that never anticipated such design considerations. Historically, code architecture has proceeded ahead of user experience design, though this is shifting to some degree in the era of Agile and hosted services. Nevertheless, the emerging platforms of AR have the opportunity – and, I'd argue, the requirement – to include user research, design, & usability as core components of implementation. The open, standardized web has fostered a continuous and known experience across its vast reaches. Artsy Flash sites aside, you always know how to navigate and interact with the content. The fundamentals of AR need to be identified and agreed upon before the mosaic of emerging code bases becomes too mature to adjust to the needs of a growing user base.

Given the highly social aspect of the web, place-based annotations and objects will suffer greatly if there's not early coordination around a shared standard for anchors. This is where the Browser Wars may inadvertently re-emerge. The anchor is basically the address/location of an annotation layer. When you look through an augmented view, it's the bit of data that says "I'm here, check out my annotations". Currently there is no shared standard for this object, nor for annotations & layers. You need the Layar browser in order to see annotation layers made in its platform. If you only have a Junaio browser, you won't see it. If you annotate a forest, tagging each tree with a marker linked to its own data registry, and then the browser app you used to author goes out of business, all those pointers are gone. The historical analog would be coding your website for IE so that anyone with Mosaic can't see it. This is where early design and usability considerations are critical to ensure a reasonable commonality and longevity of content. Anchors, annotations, & layers are new territory that ought to be regarded as strongly as URLs and markup. Continuing to regard these as independent platform IP will balkanize the user experience of continuity across content layers. There must be standards in authoring and viewing. Content and services are where the business models should innovate.

So if we’re moving towards an augmented world of anchors and annotations and layers, what considerations should be given to the data structure underlying these objects? An anchor will have an addressable location but should it contain information about who authored it and when? Should an annotation contain similar data, time-stamped and signed with an RDF structure underlying the annotation content? How will layers describe their contents, set permissions, and ensure security? And what of the physical location of the data? An anchor should be a distributed and redundant object, not bound to the durability and security of any single server. A secure and resilient backbone of real-world anchor points is critical as the scaffolding of this new domain.
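The questions above imply at least a minimum of provenance, permission, and redundancy metadata on each object. A hedged sketch of one possible shape follows; the field names are guesses of mine, not a proposed standard.

```typescript
// One possible (hypothetical) shape for the three core objects, carrying the
// provenance, permission, and redundancy hints discussed above.

interface OpenAnchor {
  id: string;        // globally addressable identifier
  lat: number;
  lon: number;
  alt?: number;      // meters, when known
  author: string;    // who placed it
  createdAt: string; // ISO 8601 timestamp
  replicas: string[]; // hosts mirroring this anchor, so it outlives any one server
}

interface OpenAnnotation {
  anchorId: string;
  author: string;
  createdAt: string;
  content: string;             // text, link, or asset reference
  signature?: string;          // detached signature over the content, for authenticity
  metadata?: Record<string, string>; // RDF-style predicate/value pairs
}

interface OpenLayer {
  id: string;
  description: string;
  readPermission: "public" | "subscribers" | "private";
  writePermission: "public" | "subscribers" | "owner";
  annotations: OpenAnnotation[];
}

// Illustrative only: a hypothetical anchor mirrored across two hosts.
const exampleAnchor: OpenAnchor = {
  id: "anchor:example-stop-sign",
  lat: 37.7599, lon: -122.4148,
  author: "some-author-id",
  createdAt: "2009-12-05T18:00:00Z",
  replicas: ["https://anchors.example.org", "https://mirror.example.net"],
};
```

The replicas field gestures at the distribution point: an anchor's survival shouldn't depend on the durability of any single server.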

Earthmine is a company I’ve been watching for a number of months since they presented at the IFTF. They joined us at AR DevCamp to present their platform. While many AR developers are using GPS & compass or markers to draw annotations over the real world, Earthmine is busy building a massive dataset that maps Lat/Long/Alt coordinates to hi-rez images of cities. They have a small fleet of vehicles equipped with stereoscopic camera arrays that drive around cities, capturing images of every inch they see. But they’re also grabbing precise geolocation coordinates that, when combined with the image sets, yields a dense point cloud of addressable pixels. When you look at one of these point clouds on a screen it looks like a finely-rendered pointillistic painting of a downtown. They massage this data set, mash the images and location, and stream it through their API as a navigable street view. You can then place objects in the view with very high accuracy – like a proposed bus stop you’d like to prototype, or a virtual billboard. Earthmine even indicated that making annotations in their 2d map layer could add a link to the augmented real-world view. So you can see a convergence and emerging correlation between location & annotation in the real world, in an augmented overlay, on a flat digital map, and on a Google Earth or Virtual World interface. This is an unprecedented coherency of virtual and real space.
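Setting aside Earthmine's actual implementation, which I can only describe from their demo, the general idea of "addressable pixels" can be sketched as a nearest-neighbor lookup: anchoring an annotation means snapping the user's selection to the closest point in a dense, geolocated point cloud. A naive brute-force version, with hypothetical names, might look like this.

```typescript
// Generic sketch (not Earthmine's actual API) of anchoring against a dense,
// geolocated point cloud: snap a selection to its nearest point.

interface CloudPoint { x: number; y: number; z: number } // meters in a local frame

function nearestPoint(cloud: CloudPoint[], target: CloudPoint): CloudPoint | null {
  let best: CloudPoint | null = null;
  let bestDist = Infinity;
  for (const p of cloud) {
    const d = Math.hypot(p.x - target.x, p.y - target.y, p.z - target.z);
    if (d < bestDist) {
      bestDist = d;
      best = p;
    }
  }
  return best; // a real system would use a spatial index (k-d tree, octree) instead
}
```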

The Earthmine demo is cool and the Flash API offers interesting ways to customize the street view with 2d & 3d annotations but the really killer thing is their dataset. As alluded to, they're building an address space for the real world. So if you're in San Francisco and you have an AR browser that uses the Earthmine API (rumors that Metaio are working on something here…) you can add an annotation to every STOP sign in The Mission so that a flashing text of "WAR" appears underneath. With the current GPS location strategy this would be impossible due to its relatively poor resolution (~3-5 meters at best). You could use markers but you'd need to stick one on every STOP sign. With Earthmine you can know almost exactly where in the real world you're anchoring the annotation… and they can know whenever you click on one. Sound familiar?
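A quick back-of-the-envelope sketch shows why 3-5 meter GPS error can't do per-sign anchoring: an annotation only resolves unambiguously when exactly one candidate anchor falls inside the error radius of the position fix. The names and numbers below are hypothetical.

```typescript
// Why coarse GPS can't anchor to a single STOP sign: an anchor is only
// resolved unambiguously when exactly one candidate falls inside the fix's
// error radius. Hypothetical names and numbers.

interface Candidate { id: string; x: number; y: number } // planar meters in a local frame

function resolveAnchor(
  fixX: number, fixY: number, errorRadiusMeters: number, candidates: Candidate[]
): Candidate | null {
  const inRadius = candidates.filter(
    c => Math.hypot(c.x - fixX, c.y - fixY) <= errorRadiusMeters
  );
  return inRadius.length === 1 ? inRadius[0] : null; // ambiguous or empty -> give up
}

// Two STOP signs on opposite corners of a narrow intersection, ~8 m apart:
const signs = [{ id: "sign-A", x: 0, y: 0 }, { id: "sign-B", x: 8, y: 0 }];
console.log(resolveAnchor(3, 0, 5, signs));     // null: both signs fall within a 5 m error radius
console.log(resolveAnchor(0.1, 0, 0.2, signs)); // with cm-level positioning, "sign-A" resolves
```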

Augmented reality suggests the most significant shift in computation since the internet. As we craft our computers into smaller and smaller mobile devices, exponentially more powerful and connected, we’re now on the verge of beginning the visual and locational integration of the digital world with the analog world. We’ve digitized much of human culture, pasted it onto screens and given ourselves mirror identities to navigate, communicate, and share in this virtual space. Now we’re breaking open the box and drawing the cloud across the phenomenal world, teaching our machines to see what we see and inviting the world to be listed in the digital Yellow Pages.

So, yeah, now your AR experience of the world is covered in billboards, sloganeering, propaganda, and dancing dinosaurs all competing for your click-through AdSense rating. A big consideration, and a topic that came up again & again at AR DevCamp, is the overwhelming amount of data and the need to filter it to some meaningful subset, particularly with respect to spam and advertising. A glance across the current crop of iPhone AR apps reveals many interface design challenges, with piles of annotations all occluding one another and your view of the world. Now imagine a world covered in layers, each with any number of annotations. UI becomes very important. Andrea Mangini & Julie Meridian led a session on design & usability considerations in AR that could easily be a conference of its own. How do you manage occlusion & sorting? Level of detail? What does simple & effective authoring of annotations on a mobile device look like? How do you design a small but visible environmental cue that an annotation exists? If the URL convention is underlined text, what is the AR convention for gently indicating that the fire hydrant you're looking at has available layers & annotations? Discoverability of the digital links within the augmented world will be in tension with overwhelming the view of the world itself.
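One deliberately naive approach to the occlusion and filtering questions raised here is to rank annotations by distance and cap how many are drawn in each region of the screen. The sketch below is a toy under those assumptions, not a real UI strategy; names are hypothetical.

```typescript
// Naive sketch of decluttering an augmented view: sort annotations by distance,
// then keep at most a few per screen column so nearby points of interest don't
// pile on top of each other. Real UIs need far more nuance.

interface PlacedAnnotation {
  label: string;
  distanceMeters: number; // from the viewer
  screenX: number;        // pixels, already projected into the camera view
}

function declutter(
  annotations: PlacedAnnotation[],
  screenWidthPx: number,
  columns = 6,       // coarse horizontal bins
  maxPerColumn = 2   // show only the nearest few in each bin
): PlacedAnnotation[] {
  const byDistance = [...annotations].sort((a, b) => a.distanceMeters - b.distanceMeters);
  const counts = new Array(columns).fill(0);
  const kept: PlacedAnnotation[] = [];
  for (const a of byDistance) {
    const col = Math.min(columns - 1, Math.floor((a.screenX / screenWidthPx) * columns));
    if (counts[col] < maxPerColumn) {
      counts[col]++;
      kept.push(a);
    }
  }
  return kept;
}
```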

When we consider the seemingly-inevitable development of eyewear with digital heads-up display, occlusion can quickly move from helpful to annoying to dangerous. No matter how compelling the augmented world is you still need to see when that truck is coming down the street. Again, proper design for human usability is perhaps even more critical in the augmented interface than in a typical screen interface. Marketing and business plans aside, we have to assume that the emergence of truly compelling and valuable technologies is ultimately in line with the deep evolutionary needs of the human animal. We're certainly augmenting for fun and art and engagement and communication but my sense is that, underneath all these, we're building this new augmented reality because the power & adaptive advantage mediated through the digital domain is so great that we need it to integrate seamlessly with our mobile, multi-tasking lives. It's been noted by others – Kevin Kelly comes to mind – that we're teaching machines to do many of the things we do, but better. And in the process we're making them smaller and more natural and bringing them closer and closer to our bodies. Ponderings of transhumanity and cyborgian futures aside, our lives are being increasingly augmented and mediated by many such smart machines.

DARPA wasn't at AR Dev Camp. Or at least if they were, they didn't say so. There was a guy from NASA showing a really cool air traffic control system that watched aircraft in the sky, tagged them with data annotations, and tracked their movements. We were shown the challenges of effectively registering the virtual layer – the annotation – with the real object – a helicopter – when it's moving rapidly. In other words, the virtual layer, mediated through a camera & a software layer, tended to lag behind the 80+ mph heli. But in lieu of DARPA's actual attendance, it's worth considering their Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis) program to develop a multimodal mobile computational system for coordinating tactical movements of patrol units. This program sees the near-future soldier as outfitted with a specialized AR comm system with a CPU worn on a belt, a HUD lens over one eye, a voice recognition mic, and a system to capture gestures. Military patrols rely heavily on intel coming from command and on coordinating movements through back-channel talk and line-of-sight gestures. AR HUDs offer simple wayfinding and identification of teammates. Voice commands can execute distributed programs and open or close comm channels. Gestures will be captured to communicate to units both in and out of line-of-sight and to initiate or capture datastreams. Cameras and GPS will track patrol movements and offer remote viewing through other soldiers' cameras. But most importantly, this degree of interface will be simple, fluid, and effortless. It won't get in your way. For better or for worse, maximizing pack hunting behaviors with technology will set the stage for the future of human-computer interaction.
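The registration lag in the NASA demo, annotations trailing an 80+ mph helicopter, is commonly attacked by predicting the tracked object forward over the sensing-to-display latency. A crude dead-reckoning sketch follows; this is a generic technique with hypothetical names, not what NASA actually showed.

```typescript
// Crude dead-reckoning sketch for registration lag: extrapolate the tracked
// object's last known position forward by its velocity over the pipeline
// latency, and draw the annotation at the predicted spot.

interface TrackedObject {
  x: number; y: number;   // meters in some world frame
  vx: number; vy: number; // meters/second, estimated from recent frames
  measuredAt: number;     // ms timestamp of the last measurement
}

function predictedPosition(obj: TrackedObject, nowMs: number, displayLatencyMs: number) {
  // Total time between the measurement and the moment the pixels hit the display.
  const dtSeconds = (nowMs - obj.measuredAt + displayLatencyMs) / 1000;
  return { x: obj.x + obj.vx * dtSeconds, y: obj.y + obj.vy * dtSeconds };
}

// An 80 mph (~36 m/s) helicopter moves ~3.6 m during a 100 ms camera+render
// pipeline, which is why an un-predicted annotation visibly trails it.
const heli = { x: 0, y: 0, vx: 36, vy: 0, measuredAt: 0 };
console.log(predictedPosition(heli, 0, 100)); // { x: 3.6, y: 0 }
```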

After lunch provided by Qualcomm, Anselm Hook led an afternoon session at AR DevCamp titled simply "Hiking". We convened in a dark and hot room, somewhat ironically called the "Sun Room" for its eastern exposure, to discuss nature and what, if any, role AR should play in our interface with the Great Outdoors. We quickly decided to move the meeting out into the parking lot where we shared our interests in both built and natural outdoor environments. A common theme that emerged in words and sentiment was the tension between experience & distraction. We all felt that the natural world is so rich and special in large part due to its increasing contrast to an urbanized and mechanized life. It's remote and wild and utterly disconnected, inherently at peace in its unscripted and chaotic way. How is this value and uniqueness challenged by ubicomp and GPS and cellular networks? GPS & cellphone coverage can save lives but do we really need to Twitter from a mountain top? I make no judgement calls here and am plenty guilty myself but it's worth acknowledging that augmented reality may challenge the direct experience of nature in unexpected ways and bring the capacity to overwrite even the remote corners of the world with human digital graffiti.

But remember that grove of trees I mentioned before, tagged with data annotations? Imagine the researchers viewing those trees through AR lenses, able to see a glanceable color index for each one showing CO2, O2, heavy metals, turgidity, growth, and age. Sensors, mesh nets, and AR can give voice to ecosystems, cities, communities, vehicles, and objects. Imagine that grove is one of thousands in the Brazilian rainforest reporting on its status regularly, contributing data to policy debates and regulatory bodies. What types of augmented experiences can reinforce our connection to nature and our role as caretakers?

On the other hand, what happens when you and the people around you are each having very different experiences of "reality"? What happens to the commons when there are 500 different augmented versions? What happens to community and society when the common reference point for everything – the very environment in which we exist – is malleable and fluid and gated by permissions and access layers, or overrun with annotations competing for our attention? What social gaps could arise? What psychological ailments? Or perhaps more realistically, what happens when a small class of wealthy westerners begin to redraw the world around them? Don't want to see other people? No problem! Just turn on the obfuscation layer. Ugly tenements ruining your morning commute? Turn on some happy music and set your iGlasses to the favela paintshop filter! Augmentation and enhancement with technology will inevitably proceed along economic lines. What is the proper balance between enjoying our technological luxuries and responsibly curating the world for those less fortunate? Technology often makes the symptoms look different but doesn't usually eradicate the cause. In the rush to colonize augmented reality, in the shadow of a wavering global economic system and deep revision of value and product, now is the best time and the most important time to put solutions ahead of products; to collaborate and cooperate on designing open, robust, and extensible systems; and, in the words of Tim O'Reilly, to "work on stuff that matters".

At the end of the day, pizzas arrived (Thanks MakerLab!), beers were opened (Thanks Layar & Lightning Labs), and the buzzing brains of AR DevCamp mingled and shared their thoughts. Hearts alit, I'll be forgiven some sentimentality to suggest that the Hacker Dojo had a soft, warm glow emanating from all the fine folks in attendance. Maybe it was like this around the Acid Tests in the '60s (with more paisley). Or the heady days of Xerox PARC in the '80s (with more ties). That growing inertia and sense of destiny at being at the right place at the right time just at the start of something exceptional…

Special thanks to Andrea Mangini for deep and ranging discussions about all this stuff, among many other things.