Cognition & Computation: Augmented Reality Meets Brain-Computer Interface

With all the hype flying around augmented reality lately, it’s easy to assume the nascent tech is just another flash in the pan, destined to burn out in a fury of marketing gimmickry & sensational posturing. Yet it’s informative to consider the drivers pushing this trend and to tease out the truly adaptive value percolating beneath the hype. Surveying the last 40 years of computation, we see vast rooms of tube & tape mainframes consolidating into single stacks & dense supercomputers. These, in turn, rode manufacturing advances into smaller components and faster processors, bringing computing to the desktop. In the last 10 years we’ve seen computation unencumber itself from the location-bound desktop and move to powerful, free-roaming mobile platforms. These devices allow us to carry the advantages of instant communication, collaboration, and computation with us wherever we go. The trends in computation continue toward power, portability, and access.

Specific implementations aside, augmented reality in its purest, most distilled form is about drawing the experience of computation across the real world. It’s about point-and-click access to the data shadows of everything in our environment. It’s about realizing social networks, content markups, and digital remix culture as truly tangible layers of human behavior. Augmented reality represents another fundamentally adaptive technology, empowering individuals & collectives with instant access to knowledge about the world in which we’re embedded. It breaks open both the digital & mental box and dumps the contents out on the floor.

There is a fascinating convergence at play here that, at a glance, seems almost paradoxical. While the contents of our minds are moving beyond the digital containers we’ve used to such creative & collaborative advantage, out into the phenomenal world of things & critters, the physical hardware through which this expression is constructed & mediated is miniaturizing and moving closer & closer to our physical bodies. DARPA is funding research to push AR beyond current device limitations, envisioning transparent HUDs, eye-trackers, speech recognition, and gestural interfaces that release soldiers from the physical dependencies of today’s devices. Today’s mobiles (and the limited AR tech built on them) compete directly with the other most adaptive human feature: our hands. Truly functional mobile comm/collab/comp must be hands-free… and this is the promise taking form in the emerging field of neurotechnology.

Nanomaterials, optogenetics, SPASERs, advanced robotics, neurocomputation, and artificial intelligence are merely a handful of the modalities shaping up to enable tighter integration between humans, machines, and the digital sphere. Advances in understanding the communication protocols and deep brain structures that mediate the human interface between our sensorium and the perceived world are presenting opportunities to capture & program our minds, to more accurately modulate the complexities of human emotion, creativity, trust, & cognition, and to build more expressive interfaces between mind and machine. Augmented reality is co-evolving with augmented physiology.

In its current, most-visible form, augmented reality is clunky and awkward, merely hinting at a future of seamless integration between computation & cognition. Yet the visions being painted by the pioneers are deeply compelling, illustrating a near future of a more malleable world richly overlaid with information & interface. As AR begins to render more ubiquitously across the landscape, and as more & more phones & objects become smart and connected, the requirements for advancing the human-computer interface will create exceptional challenges & astonishing results. Indeed, imagine the interface elements of a fully-augmented and interactive merging of analog & digital, of mind & machine… How do you use your mind to “click” on an object? How will the object communicate & interact with you? How do you filter data & interactions out from simple social transactions? How do you obfuscate the layers of data rising off your activities & thoughts? And what are the challenges of having many different opt-in or opt-out realities running in parallel?
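To make the mind-“click” question concrete: one common BCI approach detects the drop in mu-band (8–12 Hz) power over the motor cortex that accompanies imagined movement, and treats that drop as a selection event. The sketch below is a hypothetical, simplified illustration of that idea — the sample rate, band edges, threshold, and all function names are illustrative assumptions, not any real headset’s API.

```python
# Hypothetical sketch: detecting a mind-"click" from a motor-imagery EEG
# signal. All names and thresholds here are illustrative assumptions.
import numpy as np

FS = 250           # sample rate in Hz, typical of consumer EEG hardware
MU_BAND = (8, 12)  # mu rhythm over motor cortex; it desynchronizes
                   # (loses power) during imagined movement

def band_power(window: np.ndarray, band: tuple, fs: int = FS) -> float:
    """Mean spectral power of `window` inside `band`, via a plain FFT."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def is_click(window: np.ndarray, baseline: float, drop: float = 0.5) -> bool:
    """Register a 'click' when mu-band power falls below drop * baseline
    (event-related desynchronization)."""
    return band_power(window, MU_BAND) < drop * baseline

# Toy demo: a resting window dominated by a 10 Hz mu oscillation,
# versus an "imagined movement" window where that rhythm collapses.
t = np.arange(FS) / FS
rest = np.sin(2 * np.pi * 10 * t)
imagery = 0.1 * np.sin(2 * np.pi * 10 * t)
baseline = band_power(rest, MU_BAND)
print(is_click(rest, baseline))     # mu rhythm intact: no click
print(is_click(imagery, baseline))  # mu power collapsed: click
```

Real systems add spatial filtering, artifact rejection, and per-user calibration, but the core loop — band power against a calibrated baseline — is the same shape.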

Humans have just crossed the threshold into the Information Age. The sheer speed of the uptake is mind-bending as our world morphs every day into the science-fictional future we spent the last century dreaming of. We may not really need the latest advances in creative advertising (similarly driven to get closer and closer to us), but it’s inarguable that both humans & the planetary ecology would benefit from a glance at a stream that instantly reveals a profile of the pollutants contained within, tagged by call-outs showing the top ten contributing upstream sources and the profiles of their CEOs – with email, Facebook, Twitter, and newsburst links at the ready. Examples and opportunities abound, perhaps best left to the authors and innovators of the future to sort out in a flurry of sensemods, augs, and biosims.

There are, of course, many challenges and unforeseen contingencies. The rapid re-wiring of the fundamental interface that such “capably murderous” creatures as us have with the natural world, and the attendant blurring of the lines between real & fabricated, should give pause to the most fevered anticipatory optimists. In a very near future, perhaps 10 or 15 years ahead, amidst an age of inconceivable change, we’ll have broken open the box, painted the walls with our minds, and wired the species and the planet to instantaneous collaboration and expression, with massively constructive and destructive tools at our fingertips. What dreams and nightmares may be realized when the apes attain such godhood? When technology evolves at a lightning pace, yet the human psyche remains at best adolescent, will we pull it off without going nuclear? Will the adaptive expressions of our age save us in time? I think they will, if we design them right and fairly acknowledge the deeply biological drivers working through the technologies we extrude.

[Acknowledgements: Tish Shute & Ugo Trade; Zack Lynch and his book The Neuro Revolution; conversations with fellow researchers at IFTF; and many others listed in the Signtific Lab tag for ProgrammableEverything.]

6 comments

  1. Thomas K Carpenter

    Another question to ask as we lean over the edge of a technological updraft: will we all adapt to a merging of all these technologies, centered around humankind? Even now, with “simple” computers, we’ve weeded out those not capable of engulfing the ever-flowing intake of information. What will happen when the motherlode is a swirling mass of information around us? I think some will opt out of that existence altogether, while others will dive in and immerse themselves in the digital sea completely.

    (either way, I’m all ready to jack in)

  2. Pingback: links for 2009-08-25 « Blarney Fellow
  3. Pingback: Weekly Linkfest « Games Alfresco
  4. Pingback: Yann Le Guennec (yannleguennec) 's status on Wednesday, 23-Sep-09 00:25:56 UTC - Identi.ca
  5. gary maloney

    There is a natural connection between the signals coming from the motor cortex and any attempt to move things with the mind. Granted that any signal provided by the human body can be translated into a command, there is a particular thing we know only from our dreams that can be brought out into waking light. It is a phenomenon waiting to happen.

  6. garymaloney

    I think we have got to distinguish a certain kind of phenomenon from all the other things going on in our minds. There are all kinds of ways to push a cursor around with our minds by thinking at it, and one of those ways is to actually push it. Hands made free, made of pure motion? How about a tail, or an anime tentacle! Things to do with the physical senses may be analog no matter how you slice them up and remix them, and no matter how astounding the cyber phenomena.
