Category: interface

That Echo you hear is the sound of Amazon’s servers calling your name

Amazon’s Echo device is the future. When computation and connectivity are so cheap and small and abundant they can be poured into anything. When convenience is aggressively enabled by networks and algorithms. When we routinely socialize with machines and artificial intelligences… It’s an odd menagerie to let into the human party: an always-on omnipresence whose frames of privacy and intimacy are violated by commercial surveillance. Or worse: by state agents peering through hidden backdoors.

Is this a generational thing? Are digital natives and millennials more comfortable allowing machine intelligence into their private lives? Does a person who grew up with a distributed virtuality of friends and followers naturally expect those networks to be present throughout their lives? Are kids these days more comfortable with the bargain that trades privacy for convenience, data for insight?

When the typical signifiers of digital interaction – software, opt-in interfaces, the need to turn the thing “on” – recede and become invisible, the relationship with computation and connectivity becomes part of the fabric of reality – which is to say, it becomes forgotten.

Are we who cry foul at the intrusion of such tools destined to be outmoded by those more at ease with a distributed sense of selfhood, a comfort and reliance upon an outboarded cognitive toolset, an acceptance of transparency and visibility?

If there is to be a global mind, it’s inherently at odds with individuality, inasmuch as it anchors us more directly into the collective by unbundling our functions as network operations. How much of “you” do you owe to Google? How much of your experience is validated by your Facebook Friends or Instagram Likes? This isn’t entirely new – we’ve been calculating with external tools and sharing stories for ages. But there’s a creeping sense that comes with scale when our behaviors start to mimic the algorithms rather than the algorithms serving us.

We train Google’s AI with our searching. We shorten communications for Twitter. We capture the world and feed it to Facebook’s analytics. We yield to the algorithmic narrowing of recommended ideology because if you like X, you will also probably like X+1. We can also help you avoid Y.

Amazon’s Echo takes a place in our home, like the bands on our wrists, a Trojan horse for servers to learn more, for our convenience and amazement and a desire to curate the future by inviting it in, good hosts to the unfolding calculus of computation.

The Echo you hear is the reflection of our lives in data, across networks, with Friends and Followers, into the hands of marketers and product managers, the chartings of researchers and anthropologists, the power struggles of states and militias. But maybe it’s just the growing din of the future collapsing into the present, the world we inhabit and the challenges to some of us of becoming more connected than we perhaps want, more cognitively unbundled and dependent, somehow more empowered and less individual, tuned and entrained by things less human and more capable as the very concrete of our lives wakes and begins to gaze at us from across the wires.

Me, in the Kinect point cloud (video still).

my home project: kinect hacking

Over the weekend I bought a Kinect and wired it to my Mac. I’m following the O’Reilly/Make book, Making Things See, by Greg Borenstein (who kindly tells me I’m an amateur machine vision researcher). With the book I’ve set up Processing, a Java-lite coding environment, and the open source Kinect libraries, SimpleOpenNI & NITE. I’ve spent a good chunk of the weekend reading a hundred pages into the book and working through the project tutorials, and I’ve generated some interesting interactions and imagery. There’s also a ton of tutorial vids on YouTube, natch, to help cut through the weeds and whatnot.
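For a taste, a bare-bones depth-camera sketch in Processing looks roughly like this (a minimal sketch assuming the SimpleOpenNI library is installed, not a verbatim example from the book):

```java
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);                  // matches the Kinect depth image resolution
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();            // turn on the depth camera stream
}

void draw() {
  kinect.update();                    // pull the latest frame from the device
  image(kinect.depthImage(), 0, 0);   // draw the grayscale depth map
}
```

From there the projects layer on point clouds, skeleton tracking, and gesture detection.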


inviting machines into our bodies – big think

I have a new article over at Big Think looking at trends in wireless implant technology and the vulnerability profile presented by our emerging integration with connected biodevices. This article builds on my previous post here, Ubicomp Getting Under Your Skin? So Are Hackers.

From the intro:

In what amounts to a fairly shocking reminder of how quickly our technologies are advancing and how deeply our lives are being woven with networked computation, security researchers have recently reported successes in remotely compromising and controlling two different medical implant devices. Such implanted devices are becoming more and more common, implemented with wireless communications both across components and outward to monitors that allow doctors to non-invasively make changes to their settings. Until only recently, this technology was mostly confined to advanced labs but it is now moving steadily into our bodies. As these procedures become more common, researchers are now considering the security implications of wiring human anatomy directly into the web of ubiquitous computation and networked communications.


Extended Senses & Invisible Fences – ARE2012

I recently gave a talk at ARE2012 about emerging interactions in the networked city. It’s a broad overview of ubicomp and how it is modulating our experience of ourselves, each other, and our environment. I’ll be writing a follow-up article with more info.

Amon Tobin ISAM – Mixed-Media Sound & Projection Mapping

I saw Amon Tobin’s ISAM project a week ago at The Warfield theater in San Francisco. Literally jaw-dropping.

Visualizing ISAM from Leviathan on Vimeo.

Leviathan worked with frequent collaborator and renowned VJ Vello Virkhaus on groundbreaking performance visuals for electronic musician Amon Tobin, creating ethereal CG narratives and engineering the geometry maps for an entire stage of stacked cube-like structures. Taking the performance further, the Leviathan team also developed a proprietary projection alignment tool to ensure quick and accurate setup for the show, along with custom Kinect control & visualization utilities for Amon to command.

Signals, Challenges, & Horizons for Hands-Free Augmented Reality – ARE2011

Here’s the slidedeck from my recent talk at Augmented Reality Event 2011. I hope to post a general overview of the event soon, including some of the key trends that stood out for me in the space.

Back-Casting From 2043

When it’s busy like this the viz sometimes shifts like the color bleed you used to see on those old Sunday comics, way back in the day. Ubiquitous fiber pipes & wide-band wireless still can’t give enough bandwidth to the teeming multitudes downtown. The viz starts to lag, gets offset and even orphaned from the hard world it’s trying to be a part of. Hyperclear Ray Ban augments, lenses ground down by hand-sequenced rock algae to such an impossibly smooth uniformity, run through with transparent circuitry & bloodied rare-earth elements, scanning the world in multiple dimensions, pinging the cloud at 10GHz and pushing articulated data forms through massive OLED clusters just to show me where I can find an open null shield and the best possible cup of coffee this side of Ethiopia. Then the pipes clog and those ridiculously expensive glasses turn into cheap 3D specs from 2010 pretending to make 2D look like real life but instead here they’re doing the print offset thing, flattening my world into color shifts and mismatched registers.

Marks are flickering in & out, overlapping & losing their z-order. A public note on a park bench glows green – something about the local chemwash schedule – then loses integrity to one of my own annotations left there, like, a year ago. A poem I cranked out on a late night bender but it’s unreadable with all the other layers clashing. Even the filters get confused when the pipes clog. If you look around fast enough, marks start to trail & stutter in a wash of data echoes like when screens used to have refresh errors. Only now our eyes are the screens and the whole world gets caught in recursive copy loops.

The Ray Bans correct it pretty quickly, attenuating the rendered view and pushing up the hard view as the dominant layer. But for a moment it feels like you’re tripping. It used to be physically nauseating, a sudden vertigo brought on by that weird disconnect of self & place. Like so much of life these days, you spend a lot of time adapting to disconnects between layers. Between real and rendered. Between self & other, human & machine. Between expectations & outcomes.

The arc of glorious progress that opened the 21st century seemed to have found its apogee around 2006 or so and then came hurtling back towards Earth. And it wasn’t like earlier “corrections”. This one was big. It was a fundamental stock-taking of the entirety of the industrial age to date and things were suddenly, shockingly, terribly mismatched with the realities of the world. Planetary-scale disconnects. The carrying capacity of economies, nations, ecosystems, and humanity itself came into clear & violent resolution by the 2020s when everything started to radically shift under the twin engines of hyper-connectivity and ecological chaos. These two previously unexpected titans directly challenged and usurped the entire paradigm of the developed and developing worlds, setting us all into choppy and uncertain seas.

Sure, we still get to play with the crazy cool tech. Or at least some of us do. What the early cyberpunks showed us, and what the real systems geeks always knew, is that the world is not uniform or binary. It’s not utopia vs. dystopia, win vs. lose, us vs. them, iGlasses or collapse. It’s a complex, dynamic blend of an unfathomable number of inputs, governors, and feedback loops constantly, endlessly iterating across inconceivable scales to weave this crazy web of life. So we have climate refugees from Kansas getting tips from re-settled Ukrainians about resilience farming. We have insurgencies in North America and social collectives across South America. The biggest brands in the world are coming out of Seoul & Johannesburg while virtually-anonymous distributed collaboratives provide skills & services across the globe. And we have Macroviz design teams from Jakarta & Kerala directing fab teams in Bangkok to make Ray Bans to sell to anybody with enough will & credit to purchase. Globalization & its discontents have proven to offer a surprising amount of resilience. Heading into the Great Shift it looked like the developed world was headed for 3rd-world-style poverty & collapse. But it hasn’t been quite that bad. More of a radical leveling of the entire global macro-economic playing field with the majority settling somewhere on the upper end of lower class. Some rose, many fell. It was… disturbing, to say the least. It simply didn’t fit the models. Everyone expected collapse or transcendence.

We humans want things to be as simple as possible. It’s just natural. Makes it easier to service the needs of biosurvival. But we’ve not created a simple world. Indeed, the world of our making looks about as orderly as the mess of 100 billion brain cells knotted up in our heads or the fragmented holographic complexes of memories & emotions, aspiration & fears, that clog it all up. We built living systems as complex as anything the planet could dish out. Not in the billions of years nature uses to refine and optimize but in a matter of a few millennia. We raced out of the gate, got on top of the resource game, took a look around, and realized the whole thing needed to be torn down and completely redesigned for the realities of the world. The outcomes no longer fit the expectations. In some strange fractal paradox, the maps got so accurate that the territory suddenly looked very different from what we thought.

The null shield was created as a black spot. A cone of silence for the information age. They’re like little international zones offering e-sylum in select coffee shops, parlors, dining establishments, and the finer brick-and-mortar lifestyle shops. And in conflict zones, narco-corridors, favelas, gang tenements, and the many other long-tail alleyways of the ad hoc shadow state. The null shield is a fully encrypted, anonymized, opt-in hotspot that deflects everything and anything the global service/intel/pr industry tries to throw at you or copy from you. What’s better is you don’t even show up as a black spot like the early implementations that would hide you but basically tell the world where you were hidden. You’re invisible and only connected to the exact channels you want.

These were originally created for civ lib types and the militarized criminal underclass as a counter-measure to the encroaching security state. But as traditional states universally weakened under the weight of bureaucracies and insurmountable budgets (and the growing power of cities and their Corp/NGO alignments), the state’s ability to surveil the citizenry declined. All the money they needed to keep paying IT staff, policy researchers, infrastructure operators, emergency responders, and the security apparatus – all that money was siphoned up by the cunning multinationals who used their financial wit & weight to undermine the state’s ability to regulate them. Now states – even relatively large ones like the U.S. government – are borrowing money from the multinationals just to stay afloat. The iron fist of surveillance & security has been mostly replaced by the annoying finger of marketing & advertising, always poking you in the eye wherever you go.

Keeping on top of the viz means keeping your filters up to date and fully functional. Bugs & viruses are still a problem, sure, but we’ve had near-50 years to develop a healthy immunity to most data infections. We still get the occasional viz jammer swapping all your english mark txt with kanji, and riders that sit in your stream just grabbing it all and bussing it to some server in Bucharest. But it’s the marketing vads and shell scanners that drive the new arms race of personal security. Used to be the FBI were the ones who would scan your browsing history to figure out if you’re an Islamic terrorist or right wing nut, then black-out the Burger Trough and grab you with a shock team right in the middle of your Friendly Meal. Even if they had the money to do it now, the Feds understand that the real threats are in the dark nets not the shopping malls. So the marketers have stepped in. They want your reading list so they can scan-and-spam you wherever you go, whenever, then sell the data to an ad agency. They want access to your viz to track your attention in real-time. They want to fold your every move into a demographic profile to help them pin-point their markets, anticipate trends, and catch you around every corner with ads for the Next Little Thing. And they use their access to rent cog cycles for whatever mechanical turk market research projects they have running in the background.

Google gave us the most complete map of the world. They gave us a repository of the greatest written works of our species. And a legacy of ubiquitous smart advertising that now approaches near-sentience in its human-like capacity to find you and push your buttons. In some ways the viz is just a cheap universal billboard. Who knew that all those billions of embedded chips covering the planet would be running subroutines pushing advertising and special interest blurbs to every corner of the globe? There are tales of foot travelers ranging deep into the ancient back-country forests of New Guinea, off-grid and viz-free, only to be confronted by flocks of parrots squawking out the latest tagline from some Bangalore soap opera. Seems the trees were instrumented with Google smart motes a few decades ago for a study in heavy metal bio-accumulation. Something about impedance shielding and sub-frequency fields affecting the parrots…

So while the people colonized the cloud so they could share themselves and embrace the world, the spammers, advert jocks, and marketing hacks pushed in just as quickly because wherever people are, wherever they gather and talk and measure themselves against each other & the world… in those places they can be watched and studied and readily persuaded to part with their hard-earned currency.

Or credits or karma points or whatever. Just like the rest of the big paradigms, value has shifted beyond anybody’s understanding. Gold and currency at least attempted to normalize value into some tangible form. But the markets got too big & complex and too deeply connected to the subtleties of human behavior and the cunning of human predators. While money, the thing, was a tangible piece of value, the marketplace of credit & derivatives undermined its solidity and abstracted value out into the cold frontiers of economics philosophers and automated high-frequency trading bots. So much of the money got sucked up into so few hands that the world was left to figure out just how the hell all those unemployed people were going to work again. Instead of signing up for indentured servitude on the big banking farms, folks got all DIY while value fled the cash & credit markets and transfigured into service exchanges, reputation currencies, local scrip, barter markets, shadow economies, and a seemingly endless cornucopia of adaptive strategies for trading your work & talent for goods & services.

Sure, there’s still stock markets, central banks, and big box corps but they operate in a world kind of like celebrities did in the 20th century, though more infamous than famous. They exist as the loa in a web of voodoo economics: you petition them for the trickle-down. Or just ignore them. They’re a special class that mostly sticks among their kind, sustaining a B2B layer that drives the e-teams & design shops, fab plants & supply chains to keep churning out those Ray Ban iGlasses. Lucky for them, materials science has seen a big acceleration since the 2010’s with considerable gains in miniaturization and efficiency so it’s a lot easier to be a multinational when much of your work is dematerialized and the stuff that is hard goods is mostly vat-grown or micro-assembled by bacterial hybrids. Once the massive inflationary spike of the Big Correction passed, it actually got a lot cheaper to do business.

Good news for the rest of us, too, as we were all very sorely in need of a serious local manufacturing capacity with a sustainable footprint and DIY extensibility. Really, this was the thing that moved so many people off the legacy economy. Powerful desktop CAD coupled to lo-intensity, high-fidelity 3d printers opened up hard goods innovation to millions. The mad rush of inventors and their collaborations brought solar conversion efficiency up to 85% within 3 years, allowing the majority of the world to secure their energy needs with minimal overhead. Even now, garage biotech shops in Sao Paulo are developing hybrid chloroplasts that can be vat-grown and painted on just about anything. This will pretty much eliminate the materials costs of hard solar and make just about anything into a photosynthetic energy generator, slurping up atmospheric carbon and exhaling oxygen in the process. Sometimes things align and register just right…

So here we are in 2043 and, like all of our history, so many things have changed and so many things have stayed the same. But this time it’s the really big things that have changed, and while all change is difficult we’re arguably much stronger and much more independent for it all. Sure, not everybody can afford these sweet Ray Bans. And the federated state bodies that kept us mostly safe and mostly employed are no longer the reliable parents they once were. We live in a complex world of great wealth and great disparity, as always, but security & social welfare is slowly rising with the tide of human technological adaptation. Things are generally much cheaper, lighter, and designed to reside & decay within ecosystems. Product becomes waste becomes food becomes new life. Our machines are more like natural creatures, seeking equilibrium and optimization, hybridized by the ceaseless blurring of organic & inorganic, by the innate animal disposition towards biomimicry, and by the insistence of the natural world to dictate the rules of human evolution, as always. After all, we are animals, deep down inside, compelled to work it out and adapt.

Time’s up on the null shield. Coffee is down. And the viz is doing its thing now that the evening rush has thinned. Out into the moody streets of the city core, the same streets trod for a thousand years here, viz or no. The same motivations, the same dreams. It always comes back to how our feet fall on the ground, how the food reaches our mouth, and how we share our lives with those we care for.

Breaking Open the Cloud: Heads in an Augmented World

This past Saturday I worked with Mike Liebhold, Gene Becker, Anselm Hook, and Damon Hernandez to present the West Coast Augmented Reality Development Camp at the Hacker Dojo in Mountain View, CA. By all accounts it was a stunning success with a huge turn-out of companies, engineers, designers, makers, artists, geo-hackers, scientists, techies and thinkers. The planning was mostly done virtually via email and phone meetings with only a couple of visits to the venue. On Saturday, the virtual planning phase collapsed into reality and bloomed on site into AR DevCamp.

As an un-conference, the event itself was a study in grassroots, crowd-sourced, participatory organization with everyone proposing sessions which were then voted on and placed into the schedule. To me, it was a wonderfully organic and emergent process that almost magically gave life and spirit to the skeleton we had constructed. So before I launch into my thoughts I want to give a hearty “Thank You!” to everyone that joined us and helped make AR DevCamp such a great experience. I also want to give a big shout-out to Tish Shute, Ori Inbar, and Sophia for coordinating the AR DevCamp in New York City, as well as Dave Mee & Julian Tate who ran the Manchester, UK event. And, of course, we couldn’t have done it without the help of our sponsors, Layar, Metaio, Qualcomm, Google, IFTF, Lightning Laboratories, Web3D Consortium, IDEAbuilder, MakerLab, and Waze (and URBEINGRECORDED with Cage Free Consulting contributed the flood of afternoon cookies).

So first, just what is Augmented Reality? There’s a tremendous amount of buzz around the term, weighing it down with connotations and expectations. Often, those investing in its future invoke the haunting specter of Virtual Reality, doomed by its inability to live up to the hype: ahead of its time, lost mostly to the realm of military budgets and skunkworks. Yet, the AR buzz has driven a marketing rush throwing gobs of money at haphazard and questionable advertising implementations that quickly reach millions and cement in their minds a narrow association with flashy magazine covers and car ads. Not to diminish these efforts, but there’s a lot more – and a lot less – going on here.

In its most distilled form, augmented reality is an interface layer between the cloud and the material world. The term describes a set of methods to superimpose and blend rendered digital interface elements with a camera stream, most commonly in the form of annotations such as text, links, and other 2- & 3-dimensional objects that appear to float over the camera view of the live world. Very importantly, AR includes at its core the concept of location mediated through GPS coordinates, orientation, physical markers, point-clouds, and, increasingly, image recognition. This combination of location and superimposition of annotations over a live camera feed is the foundation of AR. As we’re seeing with smart phones, the device knows where you are, what direction you’re facing, what you’re looking at, who & what is near you, and what data annotations & links are available in the view. In this definition, the cloud is the platform, the AR browser is the interface, and annotation layers are content that blend with the world.

So the augmented reality experience is mediated through a camera view that identifies a location-based anchor or marker and reveals any annotations present in the annotation layer (think of a layer as a channel). Currently, each of these components is uniquely bound to the AR browser in which it was authored, so you must use, for example, the Layar browser to experience Layar-authored annotation layers. While many AR browsers are grabbing common public data streams from sources like Flickr & Wikipedia, their display and function will vary from browser to browser as each renders this data uniquely. And just because you can see a Flickr annotation in one browser doesn’t mean you will see it in another. For now, content is mostly bound to the browser and authoring is mostly done by third-parties building canned info layers. There doesn’t seem to be much consideration for the durability and longevity of these core components, and there is a real risk that content experiences may become fractured and ephemeral.

Indeed, content wants to be an inclusive, social experience. One of the core propositions underlying our motivation for AR DevCamp is the idea that the platforms being built around augmented reality should be architected as openly as possible to encourage the greatest degree of interoperability and extensibility. In the nascent but massively-hyped AR domain, there’s a growing rush to plant flags and grab territory, as happens in all emergent opportunity spaces. The concern is that we might recapitulate the Browser Wars – not intentionally but by lack of concerted efforts to coordinate implementations. While I maintain that coordination & open standardization is of necessity, I question my own assumption that without it we’ll end up with a bunch of walled gardens. This may be under-estimating the impact of the web.

Through the lessons and resultant standardization of the Browser Wars, it’s become a best practice (and indeed, a necessity) to design specifically to the most common standards. Arguably, the move from Web 1.0 (essentially a collection of static billboards) to the social interactions that characterize Web 2.0 established and deeply reinforced the fundamental requirement that we’re all able to share information & experiences in the cloud. This social commons necessarily requires an architectural commonality. Thus, we all agree that HTML, JavaScript, PHP, JSON, MySQL, and now RDF, OWL, and SPARQL are the core components of our data service models. Since we understand that AR is primarily a location-aware interface layer for the cloud, it’s very likely that independent implementations will all speak the same language. However, the point of AR DevCamp and similar gatherings is to challenge this assumption and to reinforce commonality by bringing everyone together to press flesh & exchange notes. The social dynamic in the natural world will determine the level of cooperation in the virtual.

Yet, this cooperation and normalization is by no means a given. Just about every chunk of legacy code that the Information Age is built upon retains vestiges of the git-er-done, rush-to-market start-up mindset. Short-sighted but well-meaning implementations based upon limited resources, embryonic design, and first-pass architectures bog down the most advanced and expensive software suites. As these code bases swell to address the needs of a growing user base, the gap between core architecture and usability widens. Experience designers struggle against architectures that were not built with such design considerations in mind. Historically, code architecture has proceeded ahead of user experience design, though this is shifting to some degree in the era of Agile and hosted services. Nevertheless, the emerging platforms of AR have the opportunity – and, I’d argue, the requirement – to include user research, design, & usability as core components of implementation. The open, standardized web has fostered a continuous and known experience across its vast reaches. Artsy Flash sites aside, you always know how to navigate and interact with the content. The fundamentals of AR need to be identified and agreed upon before the mosaic of emerging code bases becomes too mature to adjust to the needs of a growing user base.

Given the highly social aspect of the web, place-based annotations and objects will suffer greatly if there’s not early coordination around a shared standard for anchors. This is where the Browser Wars may inadvertently re-emerge. The anchor is basically the address/location of an annotation layer. When you look through an augmented view, it’s the bit of data that says “I’m here, check out my annotations”. Currently there is no shared standard for this object, nor for annotations & layers. You need the Layar browser in order to see annotation layers made in its platform. If you only have a Junaio browser, you won’t see it. If you annotate a forest, tagging each tree with a marker linked to its own data registry, and then the browser app you used to author goes out of business, all those pointers are gone. The historical analog would be coding your website for IE so that anyone on Mosaic can’t see it. This is where early design and usability considerations are critical to ensure a reasonable commonality and longevity of content. Anchors, annotations, & layers are new territory that ought to be regarded as strongly as URLs and markup. Continuing to regard these as independent platform IP will balkanize the user experience of continuity across content layers. There must be standards in authoring and viewing. Content and services are where the business models should innovate.

So if we’re moving towards an augmented world of anchors and annotations and layers, what considerations should be given to the data structure underlying these objects? An anchor will have an addressable location but should it contain information about who authored it and when? Should an annotation contain similar data, time-stamped and signed with an RDF structure underlying the annotation content? How will layers describe their contents, set permissions, and ensure security? And what of the physical location of the data? An anchor should be a distributed and redundant object, not bound to the durability and security of any single server. A secure and resilient backbone of real-world anchor points is critical as the scaffolding of this new domain.
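To make those questions concrete, here is one hypothetical shape such records could take. This is a plain-Java sketch only; every field is an assumption for illustration, not drawn from any existing AR platform:

```java
import java.util.Date;
import java.util.List;

// Hypothetical records for an open AR data model: an anchor fixes a point in the
// world, annotations attach content to an anchor, and a layer groups annotations.
class Anchor {
  double latitude, longitude, altitude;   // where the anchor lives in the world
  String authorId;                        // who placed it
  Date created;                           // when it was placed
  String signature;                       // signed so clients can verify provenance
}

class Annotation {
  String anchorId;     // the anchor this content hangs on
  String layerId;      // the layer (channel) it belongs to
  String contentUrl;   // text, 3D model, media – resolved like any other web resource
  String authorId;
  Date created;
  String signature;
}

class Layer {
  String name;
  String description;
  List<String> readPermissions;    // who may view the layer
  List<String> writePermissions;   // who may publish annotations into it
}
```

However the fields shake out, the point stands: these records should be replicated across servers rather than living and dying with any single browser vendor.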

Earthmine is a company I’ve been watching for a number of months since they presented at the IFTF. They joined us at AR DevCamp to present their platform. While many AR developers are using GPS & compass or markers to draw annotations over the real world, Earthmine is busy building a massive dataset that maps Lat/Long/Alt coordinates to hi-rez images of cities. They have a small fleet of vehicles equipped with stereoscopic camera arrays that drive around cities, capturing images of every inch they see. But they’re also grabbing precise geolocation coordinates that, when combined with the image sets, yields a dense point cloud of addressable pixels. When you look at one of these point clouds on a screen it looks like a finely-rendered pointillistic painting of a downtown. They massage this data set, mash the images and location, and stream it through their API as a navigable street view. You can then place objects in the view with very high accuracy – like a proposed bus stop you’d like to prototype, or a virtual billboard. Earthmine even indicated that making annotations in their 2d map layer could add a link to the augmented real-world view. So you can see a convergence and emerging correlation between location & annotation in the real world, in an augmented overlay, on a flat digital map, and on a Google Earth or Virtual World interface. This is an unprecedented coherency of virtual and real space.

The Earthmine demo is cool and the Flash API offers interesting ways to customize the street view with 2D & 3D annotations but the really killer thing is their dataset. As alluded to, they’re building an address space for the real world. So if you’re in San Francisco and you have an AR browser that uses the Earthmine API (rumors that Metaio are working on something here…) you can add an annotation to every STOP sign in The Mission so that a flashing text of “WAR” appears underneath. With the current GPS location strategy this would be impossible due to its relatively poor resolution (~3-5 meters at best). You could use markers but you’d need to stick one on every STOP sign. With Earthmine you can know almost exactly where in the real world you’re anchoring the annotation… and they can know whenever you click on one. Sound familiar?
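As a rough sketch of why raw GPS falls short for sign-level anchoring, consider the arithmetic below. The numbers are illustrative assumptions and the math is a simple flat-earth approximation, nothing from Earthmine’s actual API:

```java
// Approximate ground distance between a true position and a GPS fix,
// using a flat-earth (equirectangular) approximation – fine at city scale.
public class GpsError {
  static final double METERS_PER_DEG_LAT = 111_320.0; // roughly constant

  static double groundErrorMeters(double lat, double dLatDeg, double dLonDeg) {
    double metersPerDegLon = METERS_PER_DEG_LAT * Math.cos(Math.toRadians(lat));
    double dy = dLatDeg * METERS_PER_DEG_LAT;
    double dx = dLonDeg * metersPerDegLon;
    return Math.sqrt(dx * dx + dy * dy);
  }

  public static void main(String[] args) {
    // A consumer-GPS wobble of ~0.00004 degrees on each axis at San Francisco's
    // latitude (~37.76 N) puts you roughly 5-6 meters off – more than enough to
    // hang your "WAR" tag on the wrong street furniture entirely.
    System.out.println(groundErrorMeters(37.76, 0.00004, 0.00004) + " meters");
  }
}
```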

Augmented reality suggests the most significant shift in computation since the internet. As we craft our computers into smaller and smaller mobile devices, exponentially more powerful and connected, we’re now on the verge of beginning the visual and locational integration of the digital world with the analog world. We’ve digitized much of human culture, pasted it onto screens and given ourselves mirror identities to navigate, communicate, and share in this virtual space. Now we’re breaking open the box and drawing the cloud across the phenomenal world, teaching our machines to see what we see and inviting the world to be listed in the digital Yellow Pages.

So, yeah, now your AR experience of the world is covered in billboards, sloganeering, propaganda, and dancing dinosaurs all competing for your click-through AdSense rating. A big consideration, and a topic that came up again & again at AR DevCamp, is the overwhelming amount of data and the need to filter it to some meaningful subset, particularly with respect to spam and advertising. A glance across the current crop of iPhone AR apps reveals many interface design challenges, with piles of annotations occluding one another and your view of the world. Now imagine a world covered in layers, each with any number of annotations. UI becomes very important. Andrea Mangini & Julie Meridian led a session on design & usability considerations in AR that could easily be a conference of its own. How do you manage occlusion & sorting? Level of detail? What does simple & effective authoring of annotations on a mobile device look like? How do you design a small but visible environmental cue that an annotation exists? If the URL convention is underlined text, what is the AR convention for gently indicating that the fire hydrant you’re looking at has available layers & annotations? Discoverability of the digital links within the augmented world will be at a tension with overwhelming the view of the world itself.
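As a toy illustration of the filtering question, here is a sketch only, assuming a stripped-down annotation type that knows nothing but a label and its distance from the viewer:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// One naive answer to clutter: cull by range, sort by distance,
// and cap how many annotations get drawn in the view at once.
public class ViewFilter {
  static class Annotation {
    String label;
    double distanceMeters;   // distance from the viewer
    Annotation(String label, double d) { this.label = label; this.distanceMeters = d; }
  }

  static List<Annotation> filterForView(List<Annotation> all, double maxRange, int maxVisible) {
    List<Annotation> inRange = new ArrayList<>();
    for (Annotation a : all) {
      if (a.distanceMeters <= maxRange) inRange.add(a);   // drop far-away markers
    }
    Collections.sort(inRange, new Comparator<Annotation>() {
      public int compare(Annotation a, Annotation b) {
        return Double.compare(a.distanceMeters, b.distanceMeters);   // nearest first
      }
    });
    return inRange.subList(0, Math.min(maxVisible, inRange.size()));
  }
}
```

A real filter would weigh relevance, subscriptions, and spam scores rather than distance alone, but the clutter problem starts right about here.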

When we consider the seemingly-inevitable development of eyewear with digital heads-up display, occlusion can quickly move from helpful to annoying to dangerous. No matter how compelling the augmented world is, you still need to see when that truck is coming down the street. Again, proper design for human usability is perhaps even more critical in the augmented interface than in a typical screen interface. Marketing and business plans aside, we have to assume that the emergence of truly compelling and valuable technologies is ultimately in line with the deep evolutionary needs of the human animal. We’re certainly augmenting for fun and art and engagement and communication but my sense is that, underneath all these, we’re building this new augmented reality because the power & adaptive advantage mediated through the digital domain is so great that we need it to integrate seamlessly with our mobile, multi-tasking lives. It’s been noted by others – Kevin Kelly comes to mind – that we’re teaching machines to do many of the things we do, but better. And in the process we’re making them smaller and more natural and bringing them closer and closer to our bodies. Ponderings of transhumanity and cyborgian futures aside, our lives are being increasingly augmented and mediated by many such smart machines.

DARPA wasn’t at AR DevCamp. Or at least if they were, they didn’t say so. There was a guy from NASA showing a really cool air traffic control system that watched aircraft in the sky, tagged them with data annotations, and tracked their movements. We were shown the challenges of effectively registering the virtual layer – the annotation – with the real object – a helicopter – when it’s moving rapidly. In other words, the virtual layer, mediated through a camera & a software layer, tended to lag behind the 80+ mph heli. But in lieu of DARPA’s actual attendance, it’s worth considering their Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis) program to develop a multimodal mobile computational system for coordinating tactical movements of patrol units. This program sees the near-future soldier as outfitted with a specialized AR comm system with a CPU worn on a belt, a HUD lens over one eye, a voice recognition mic, and a system to capture gestures. Military patrols rely heavily on intel coming from command and on coordinating movements through back-channel talk and line-of-sight gestures. AR HUDs offer simple wayfinding and identification of teammates. Voice commands can execute distributed programs and open or close comm channels. Gestures will be captured to communicate to units both in and out of line-of-sight and to initiate or capture datastreams. Cameras and GPS will track patrol movements and offer remote viewing through other soldiers’ cameras. But most importantly, this degree of interface will be simple, fluid, and effortless. It won’t get in your way. For better or for worse, maximizing pack hunting behaviors with technology will set the stage for the future of human-computer interaction.

After lunch provided by Qualcomm, Anselm Hook led an afternoon session at AR DevCamp titled simply “Hiking”. We convened in a dark and hot room, somewhat ironically called the “Sun Room” for its eastern exposure, to discuss nature and what, if any, role AR should play in our interface with the Great Outdoors. We quickly decided to move the meeting out into the parking lot where we shared our interests in both built and natural outdoor environments. A common theme that emerged in words and sentiment was the tension between experience & distraction. We all felt that the natural world is so rich and special in large part due to its increasing contrast to an urbanized and mechanized life. It’s remote and wild and utterly disconnected, inherently at peace in its unscripted and chaotic way. How is this value and uniqueness challenged by ubicomp and GPS and cellular networks? GPS & cellphone coverage can save lives but do we really need to Twitter from a mountain top? I make no judgement calls here and am plenty guilty myself but it’s worth acknowledging that augmented reality may challenge the direct experience of nature in unexpected ways and bring the capacity to overwrite even the remote corners of the world with human digital graffiti.

But remember that grove of trees I mentioned before, tagged with data annotations? Imagine the researchers viewing those trees through AR lenses, able to see a glance-able color index for each one showing CO2, O2, heavy metals, turgidity, growth, and age. Sensors, mesh nets, and AR can give voice to ecosystems, cities, communities, vehicles, and objects. Imagine that grove is one of thousands in the Brazilian rainforest reporting on its status regularly, contributing data to policy debates and regulatory bodies. What types of augmented experiences can reinforce our connection to nature and our role as caretakers?

On the other hand, what happens when you and the people around you are each having very different experiences of “reality”? What happens to the commons when there are 500 different augmented versions? What happens to community and society when the common reference point for everything – the very environment in which we exist – is malleable and fluid and gated by permissions and access layers or overwrought with annotations competing for our attention? What social gaps could arise? What psychological ailments? Or perhaps more realistically, what happens when a small class of wealthy westerners begin to redraw the world around them? Don’t want to see other people? No problem! Just turn on the obfuscation layer. Ugly tenements ruining your morning commute? Turn on some happy music and set your iGlasses to the favela paintshop filter! Augmentation and enhancement with technology will inevitably proceed along economic lines. What is the proper balance between enjoying our technological luxuries and responsibly curating the world for those less fortunate? Technology often makes the symptoms look different but doesn’t usually eradicate the cause. In the rush to colonize the augmented reality, in the shadow of a wavering global economic system and deep revision of value and product, now is the best time and the most important time to put solutions ahead of products; to collaborate and cooperate on designing open, robust, and extensible systems; and, in the words of Tim O’Reilly, to “work on stuff that matters”.

At the end of the day, pizzas arrived (Thanks MakerLab!), beers were opened (Thanks Layar & Lightning Labs), and the buzzing brains of AR DevCamp mingled and shared their thoughts. Hearts alit, I’ll be forgiven some sentimentality to suggest that the Hacker Dojo had a soft, warm glow emanating from all the fine folks in attendance. Maybe it was like this around the Acid Tests in the ’60s (with more paisley). Or the heady days of Xerox PARC in the ’80s (with more ties). That growing inertia and sense of destiny at being at the right place at the right time just at the start of something exceptional…

Special thanks to Andrea Mangini for deep and ranging discussions about all this stuff, among many other things.

Subcycle Multi-Touch Instrument Experiment

multi-touch the storm – interactive sound visuals – subcycle labs from christian bannister on Vimeo.

Christian Bannister, Subcycle Labs: “Things are starting to sound more song-like and I can really appreciate that. In previous builds everything sounded more like an experiment or a demo. Now I have something more akin to an experimental song. “

A Few Recent Developments in Brain-Computer Interface

BCI technology and the convergence of mind & machine are on the rise. Wired Magazine just published an article by Michael Chorost discussing advances in optogenetic neuromodulation. Of special interest, he notes the ability of optogenetics to both read & write information across neurons.

In theory, two-way optogenetic traffic could lead to human-machine fusions in which the brain truly interacts with the machine, rather than only giving or only accepting orders. It could be used, for instance, to let the brain send movement commands to a prosthetic arm; in return, the arm’s sensors would gather information and send it back.

In another article featured at IEEE Spectrum, researchers at Brown University have developed a working microchip implant that can wirelessly transmit neural signals to a remote sensor. This advance suggests that brain-computer interface technologies will evolve past the need for wired connections.

Wireless neural implants open up the possibility of embedding multiple chips in the brain, enabling them to read more and different types of neurons and allowing more complicated thoughts to be converted into action. Thus, for example, a person with a paralyzed arm might be able to play sports.

MindHacks discusses the recent video of a touch-sensitive prosthetic hand. This is a Holy Grail of sorts for brain-machine interface: the hope that an amputee could regain functionality through a fully-articulatable, touch-sensitive, neural-integrated robotic hand. Such an accomplishment would indeed be a huge milestone. Of note, the MindHacks appraisal focuses on the brain’s ability to re-image body maps (perhaps due to its plasticity).

There’s an interesting part of the video where the patient says “When I grab something tightly I can feel it in the finger tips, which is strange because I don’t have them anymore”.

Finally, ScienceDaily notes that researchers have demonstrated rudimentary brain-to-brain communication mediated by non-invasive EEG.

[The] experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person whose computer receives the digits and transmits them to the second user’s brain through flashing an LED lamp… You can watch Dr James’ BCI experiment at YouTube.

One can imagine a not too distant future where the brain is directly transacting across wireless networks with machines, sensor arrays, and other humans.

ADHD and the Emerging Transhumanity

Today’s children are being prescribed pharmaceutical treatments unlike any previous generation. The BBC reports that child obesity drug use is soaring. The American Academy of Pediatrics notes that abuse of ADHD drugs is growing among teens after an unprecedented rise in medicating kids for behavioral conditions after the turn of the millennium. KeepKidsHealthy.com, inadvertently or not, offers an extensive shopping list of currently available ADHD drugs for the concerned parent. In 2007, the Cincinnati Children’s Hospital Medical Center published a report stating that 9% of US children aged 8 to 15 meet criteria for having ADHD, in spite of other sources suggesting that hyperactivity may just be normal behavior.

All editorials aside, raising children with prescription drugs may be creating a generation of people that accept pharmaceutical behavioral modification as the norm. The adults of tomorrow may be much more inclined to embrace the utility of chemical treatment, modulation, and enhancement. This is a departure from older generations that have typically endured social stigma around reliance on drugs for treatment of behavioral conditions. Now, the first world is on the precipice of confronting unprecedented abilities to alter our brains & behaviors, and the rising tide of morality will likely move against tradition to support such degrees of self-agency.

There are many other signals suggesting that we’re heading towards a global debate about human enhancement and the rise of the transhuman. While the initial congressional & media volleys will likely be saturated with sensational scenes of moral outrage and the attendant outlying catastrophes, there is nevertheless an emerging trend towards accepting human modifications and enhancements as interventional medical procedures, behavioral treatments, and cognitive enhancements. For example, Psychology Today just published an article looking at how smart drugs enhance us in response to the rise in off-label use of ADHD drugs like Modafinil to enhance concentration and memory. People like Michael Chorost are sharing the experience of living with cochlear implants. Soon, we’ll be welcoming home soldiers patched up with neuroprosthetics capable of enhanced ranges of perception. And if the annual Woodstock Film Festival is any indicator (it is), then it might be worth checking out this year’s coming offerings addressing Transhumanism and a panel on Redesigning Humanity — The New Frontier.

The growing acceptance of the role of pharmacology in behavioral and cognitive treatments is just one vector in a large phase space witnessing the growing integration of non-biological and hybridized technologies into the human organism. We’re moving into an age of neural implants, mechanical prosthetics, and cloud-aware augmentations that are only now just hinting at what may likely become an extraordinary new era of human evolution. Or, as Zack Lynch suggests, a Neuro Revolution.

RSS Augmented Reality Blog Feeds

[This is a narrative exploration of an idea @jingleyfish & I had walking around the Westside of Santa Cruz late at night…]

Imagine walking around a town wearing your stylish Ray Ban augmented reality glasses (because hand-held mobile devices will become a significant limiting factor to experiencing the annotated world). You see small transparent white dots glowing on people and objects indicating that they contain accessible cloud content. Maybe you “select” (by whatever mechanism constitutes selection through a pair of eyeglasses) a bench on the sidewalk, then view a flyout markup indicating that the bench was commissioned by the Bruce family in memoriam of Aldis Bruce, manufactured by the Taiwanese Seating Concern. You click through the family link to see a brief bio of Aldis with a set of links to his life story, works, etc…

In the upper corner of your view a light begins to blink indicating a new feed is available in your subscription list. You select and expand, showing a menu item for Bob’s Neighborhood Chat. Initializing this feed draws a new green dot over the bench, indicating that Bob has published information tagged to it. You click and Bob’s markup flies out with text stating, “Tuesday, March 11, 2010: Don & Charise Ludemeyer celebrated their 25th wedding anniversary by returning to the place where they first kissed in 1985.” A link below this offers the couple’s personal website, a photo gallery & playlist of their wedding, and then a link to more public markups about the bench.

Clicking through the “more” link offers a list of other public comments. You choose the Sur 13 layer just to see what the local hoods are up to. Flyout: “Hernandez Bros. shot down by Westside Brownshirts, Sept. 23, 2009. RIP, locos.” Then, drawn over, a bit-crushed graffiti logo “WSB” animates across the view, hacked into the Sur 13 layer by Brownshirts. A click through would open the full Brownshirt regional layer but you already feel like a trespasser on suddenly dangerous turf.

Unsettled, you call up the local Police layer. A trailing list of crimes in a 5mi radius begins scrolling. You narrow the search to your current location with a 2 week time horizon. 3 yellow car break-in indicators glow along the road, followed by a red assault marker 10 feet down the walk, and then 2 more blinking red markers at the bench. You hover over the bench markers and learn of two shootings here within the last 4 days.

You open up the iCabNow utility, send up your beacon, and wait nervously for Yellow Cab to find you. You thumb back to the Ludemeyer markup and click through to find the playlist from their wedding. As you hop into the cab a few moments later, the theme from Miami Vice swells up in your earbuds, sending you off from this time-twisted place. You call up the WordTweet micromarker app and make a traveler’s note: “This is a dangerous bench with an old heart.” Click “Publish” and a new feed indicator appears, offering your own layer update to subscribers.

“Digital Wallpaper”

The post title is in quotes because, although the effect is quite clean & nice, the tech is projection and not really digital wallpaper. What the demo suggests is a programmable surface material, perhaps made of some sort of liquid OLED or biolumin. Wallpaper or paint that could cover a room and represent the full RGB spectrum, dynamically. Nevertheless, the video below suggests the end result of such a future tech, much like the 555Kubik video facade projection.

Hirzberger Events – Digital Wallpaper from Gregor Hofbauer on Vimeo.

As an aside, I’ll be posting more videos given the highly visual nature of a lot of the cool emerging tech these days.

The Co-Evolution of Neuroscience & Computation


Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.
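For a sense of the kind of measure involved, here is a small illustrative sketch (not the study’s actual method) that computes the average clustering coefficient of a binary connectivity network – one of the standard graph-theory metrics, and one that tends to drop as a network becomes more random:

```java
// Average clustering coefficient of an undirected graph given as an adjacency
// matrix – a common graph-theory measure of how "small-world" versus random
// a functional network looks.
public class Clustering {
  static double averageClustering(boolean[][] adj) {
    int n = adj.length;
    double sum = 0;
    int counted = 0;
    for (int i = 0; i < n; i++) {
      // collect the neighbors of node i
      java.util.List<Integer> nbrs = new java.util.ArrayList<Integer>();
      for (int j = 0; j < n; j++) {
        if (i != j && adj[i][j]) nbrs.add(j);
      }
      int k = nbrs.size();
      if (k < 2) continue;   // clustering is undefined for degree < 2
      int links = 0;
      for (int a = 0; a < k; a++) {
        for (int b = a + 1; b < k; b++) {
          if (adj[nbrs.get(a)][nbrs.get(b)]) links++;   // edges among the neighbors
        }
      }
      sum += (2.0 * links) / (k * (k - 1));   // fraction of possible neighbor pairs connected
      counted++;
    }
    return counted == 0 ? 0 : sum / counted;
  }
}
```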

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.

The interdependence of the two fields has been most recently illustrated with the announcement of the Blue Brain Project which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain which will, in turn, yield roadmaps towards designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain computer interface. As the two sciences get closer and closer to each other, they will inevitably interact more directly and powerfully, as each domain adds value to the other and the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality by dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative towards programming our bodies, enabling us to exercise unprecedented levels of control and augmentation over our physical form, and to further reveal the fabric of mind.


Cognition & Computation: Augmented Reality Meets Brain-Computer Interface

With all the hype flying around Augmented Reality lately, it’s easy to assume the nascent tech is just another flash-in-the-pan destined to burn out in a fury of marketing gimmickry & sensational posturing. Yet, it’s informative to consider the drivers pushing this trend and to tease out the truly adaptive value percolating beneath the hype. As we survey the last 40 years of computation we see vast rooms of tube & tape mainframes consolidating into single stacks & dense supercomputers. These, in turn, rode manufacturing advances into smaller components and faster processors bringing computing to the desktop. In the last 10 years we’ve seen computation un-encumber from the location-bound desktop to powerful, free-roaming mobile platforms. These devices have allowed us to carry the advantages of instant communication, collaboration, and computation with us wherever we go. The trends in computation continue towards power, portability, and access.

Specific implementations aside, augmented reality in its purest, most distilled form is about drawing the experience of computation across the real world. It’s about point-and-click access to the data shadows of everything in our environment. It’s about realizing social networks, content markups, and digital remix culture as truly tangible layers of human behavior. Augmented reality represents another fundamentally adaptive technology to empower individuals & collectives with instant access to knowledge about the world in which we’re embedded. It breaks open both the digital & mental box and dumps the contents out on the floor.

There is a fascinating convergence at play here that, at a glance, seems almost paradoxical. While the contents of our minds are moving beyond the digital containers we’ve used to such creative & collaborative advantage, out into the phenomenal world of things & critters, the physical hardware through which this expression is constructed & mediated is miniaturizing and moving closer & closer towards our physical bodies. DARPA is funding research to push AR beyond current device limitations, envisioning transparent HUDs, eye-trackers, speech recognition, and gestural interfaces that release soldiers from the physical dependencies of our current devices. Today’s mobiles (and the limited AR tech built on them) compete directly with the other most adaptive human feature: our hands. Truly functional mobile comm/collab/comp must be hands-free… and this is the promise taking form in the emerging field of neurotechnology.

Nanomaterials, optogenetics, SPASERs, advanced robotics, neurocomputation, and artificial intelligence are merely a handful of the modalities shaping up to enable tighter integration between humans, machines, and the digital sphere. Advances in understanding the communication protocols and deep brain structures that mediate the human interface between our sensorium and the perceived world are presenting opportunities to capture & program our minds, to more accurately modulate the complexities of human emotion, creativity, trust, & cognition, and to build more expressive interfaces between mind and machine. Augmented reality is co-evolving with augmented physiology.

In its current and most-visualized form, augmented reality is clunky and awkward, merely suggesting a future of seamless integration between computation & cognition. Yet the visions being painted by the pioneers are deeply compelling and illustrate a near-future of a more malleable world richly overlaid with information & interface. As AR begins to render more ubiquitously across the landscape, as more & more phones & objects become smart and connected, the requirements for advancing human-computer interface will create exceptional challenges & astonishing results. Indeed, imagine the interface elements of a fully-augmented and interactive merging between analog & digital, between mind & machine… How do you use your mind to “click” on an object? How will the object communicate & interact with you? How do you filter data & interactions out from simple social transactions? How do you obfuscate the layers of data rising off your activities & thoughts? And what are the challenges of having many different opt-in or opt-out realities running in parallel?

Humans have just crossed the threshold into the Information Age. The sheer speed of the uptake is mind-bending as our world is morphing every day into the science fictional future we spent the last century dreaming of. We may not really need the latest advances in creative advertising (similarly driven to get closer and closer to us) but it’s inarguable that both humans & the planetary ecology would benefit from a glance at a stream that instantly reveals a profile of the pollutants contained within, tagged by call-outs showing the top ten contributing upstream sources and the profiles of their CEOs – with email, Facebook, Twitter, and newsburst links at the ready. Examples and opportunities abound, perhaps best left to the authors and innovators of the future to sort out in a flurry of sensemods, augs, and biosims.

There are, of course, many challenges and unforeseen contingencies. The rapid re-wiring of the fundamental interface that such “capably murderous” creatures as us have with the natural world, and the attendant blurring of the lines between real & fabricated, should give pause to the most fevered anticipatory optimists. In a very near future, perhaps 10 or 15 years ahead, amidst an age of inconceivable change, we’ll have broken open the box, painted the walls with our minds, and wired the species and the planet to instantaneous collaboration and expression, with massively constructive and destructive tools at our fingertips. What dreams and nightmares may be realized when the apes attain such godhood? When technology evolves at a lightning pace, yet the human psyche remains at best adolescent, will we pull it off without going nuclear? Will the adaptive expressions of our age save us in time? I think they will, if we design them right and fairly acknowledge the deeply biological drivers working through the technologies we extrude.

[Acknowledgements: Tish Shute & Ugo Trade; Zack Lynch and his book The Neuro Revolution; conversations with fellow researchers at IFTF; and many others listed in the Signtific Lab tag for ProgrammableEverything.]