The work of James George caught my attention when he began publishing still images generated by mixing inputs from a DSLR camera paired with a Kinect scanner. He and his partner, Jonathan Minard, recently produced this thoroughly compelling future-now video from the same process, using their openFrameworks-based software, RGBD Toolkit, to manage the mapping and in-scene navigation. The camera is fixed, but since the Kinect produces a 3D scene you can navigate around the captured image. Where forms in the camera field cast shadows – i.e. where the Kinect cannot scan past an occluding arm or hand – you see stretching and warping of the 3D mesh and image map. The effect is uncannily similar to the scenes in the film version of Minority Report when Tom Cruise’s character watches holovids of his son & wife, their forms trailing along the light path of the holoprojector. George & Minard frame this video as an exploration of emerging techniques and technologies in filmmaking. Also, they talk about coding and geekery and other cool stuff.
Clouds is a computational documentary featuring hackers and media artists in dialogue about code, culture and the future of visualization.
This is a preview of a feature length production to be released later this year.
By Jonathan Minard (http://www.deepspeedmedia.com/) and James George (http://www.jamesgeorge.org/)
Made with RGBDToolkit.com
I’ve just returned from a very interesting workshop in Washington, D.C. about fast-moving change, asymmetric threats to security, and finding signals within the wall of noise thrown up by big data. These are tremendous challenges to governance, policy makers, and the intelligence community. I’ll have more to say on these topics in later posts but for now, here’s a round-up of the most-read posts on URBEINGRECORDED, in order of popularity:
Occupy Wall Street – New Maps for Shifting Terrain – On OWS, gaps in governance, empowered actors, and opportunities in the shifting sands…
Getting to Know Your Ghost in the Machine – On the convergence of ubiquitous computation (ubicomp), augmented reality, and network identity…
The Transhuman Gap – On the challenges facing the transhuman movement…
The Realities of Coal in the Second Industrial Revolution – On the energy demand and resource availability for the developing world…
Meshnets, Freedom Phones, and the People’s Revolution – On the Arab Spring, hyperconnectivity, and ad hoc wireless networks…
And a few that I really like:
Back-casting from 2043 – On possible futures, design fictions, and discontinuity…
On Human Networks & Living Biosystems – On the natural patterns driving technology & human systems…
Outliers & Complexity – On non-linearity, outliers, and the challenges of using the past to anticipate the future…
Thanks to all my readers for taking the time to think about my various rantings & preoccupations. As always, your time, your participation, and your sharing is greatly appreciated!
I saw Amon Tobin’s ISAM project a week ago at The Warfield theater in San Francisco. Literally jaw-dropping.
Leviathan worked with frequent collaborator and renowned VJ Vello Virkhaus on groundbreaking performance visuals for electronic musician Amon Tobin, creating ethereal CG narratives and engineering the geometry maps for an entire stage of stacked cube-like structures. Taking the performance further, the Leviathan team also developed a proprietary projection alignment tool to ensure quick and accurate setup for the show, along with custom Kinect control & visualization utilities for Amon to command.
Here’s the slidedeck from my recent talk at Augmented Reality Event 2011. I hope to post a general overview of the event soon, including some of the key trends that stood out for me in the space.
[Cross-posted from Humanity + Magazine.]
Emergent technologies often inspire great excitement attended by utopian visions of how they will transform our lives for the better. Yet all innovations introduce risk and the likelihood of unforeseen consequences. The transhumanity stack of technologies – life extension, medical & genetic modification, brain-computer & brain-machine interface, and virtual & augmented realities – offers great opportunities for human enhancement but poses profound risks for all aspects of humanity & civilization. It is critical to confront these dangers and temper the enthusiasm of transhumanism with diligent risk assessment and thorough scenario modeling for possible outcomes.
To wit, here are 5 scenarios that explore the possible dangers embedded within transhumanism. This is, of course, by no means an exhaustive list but is simply intended to encourage further risk analysis. Most or all have probably been addressed by others elsewhere, and this list is not intended as a criticism of those presently active in the transhumanity community.
1. Population growth from longevity & senescence studies
Life extension looks great from an individual or group perspective but it’s a resource nightmare from a national and global angle. Current human population is about 6.8 billion with most linear estimates projecting somewhere around 9 billion by 2050. If life extension is designed to be readily available to anyone & everyone, we can expect two outcomes: considerable population growth as longevity outpaces mortality, and a rise in global GDP and its commensurate resource consumption as working age extends towards the centenarian. People living longer means people will consume more in the course of their lifetimes. Consider the competition for resources & ecological carrying capacity we currently face in 2010 and roll that forward 40 years with a massive global population and members of the workforce that can potentially stay employed for 70 years….
2. Inequity of technology distribution — the Transhuman Gap
The flip-side of the resource consumption issue arises if we admit that transhuman technologies will not be evenly available to all; that socio-economic factors will gate who has access to technologies that extend human capabilities. In this context, population dynamics will not be appreciably influenced by human life extension, as only a small subset of the populace will have access to such enhancement. Indeed, genetic modification, brain-computer interfaces, advanced prosthetics, and access to virtual & augmented realities are all presently gated by economic barriers to entry that are not likely to diminish any time soon. AR & VWs may become ubiquitous & cheap, but real human enhancement through interventionary technologies will mostly fall along class lines, giving rise to a wealthy tier of augmented & enhanced individuals. If only the wealthy can afford enhancement, the socioeconomic divide will be reinforced by the Transhuman Gap, further disenfranchising those already at a competitive disadvantage by their class circumstances. From such economic disparity, reinforced by the inevitable moralizing and judgments from both sides of the gap, social cohesion will be further challenged and class distinctions will begin to take on a biomechanical & genetic aspect with the threat of technology-enabled superiority.
3. Techno-elitism, civil discord, and eugenics
Throughout history elite classes have used their status & abilities to influence the control systems that govern those beneath them. Likewise, the underclass has looked at elites with both admiration & disdain, occasionally rising to join their ranks but, more often, rising up to knock them down. Civil strife is a common outcome of disparity, driven by inequities in access to resources, opportunities, and power. A class of techno-elite transhumans would pose a profound existential threat to the underclass, who might very well perceive themselves as forever cut out from the democratic ideal that “all humans are created equal,” no longer able to compete in any capacity without transhuman enhancements. The anger and victimization from such an outlook would very quickly translate into moralizing against the crimes of human augmentation and stigmatizing those who pursue such “un-natural” and “un-holy” enhancement. In turn, the techno-elite may feel inclined to judge the underclass as “unfit” or “un-evolved” – two distinctions that have historically led to great atrocities.
4. Co-option of transhumanity by fascists, oligarchs, and super-empowered individuals
The slippery slope of this scenario posits the rise of a transhuman ruling class who, when challenged by the underclass, recede into their own sense of authority & enhanced intelligence to determine that the only appropriate course of action is to subjugate the masses and shepherd the rise of transhuman governance. If transhuman enhancement is truly advantageous, yet remains available only to an elite class, then in all likelihood those elites will embrace the technology to their competitive advantage. Since it would be folly to assume that human technological enhancement will remediate our basest evolutionary program of survival of the fittest, the likelihood of enhanced predatory elites seizing global power is not so small. The darkest scenario might see transhuman governance requiring control & tracking implants in all newborns – perhaps a bit hyperbolic but not inconceivable if the type of global predators that currently traverse societies gained access to advanced transhuman technologies.
5. Fractured reality
Virtual worlds and augmented reality offer many compelling experiences across the spectrum of entertainment, socialization, marketing & advertising, collaboration, and modern knowledge work. At their core, these technologies intermediate our experience of the world, giving third parties access to program our sensorium. Brain-computer interface technologies are working to extend this access to the core structures of our brain, kicking off a wave of neurotechnologies able to more specifically & accurately influence the mind-brain interface. The opt-in path through designer reality gives us the ability to modify the way we interface with the phenomenal world, electing to commit more of our selves to virtual experiences & relationships, or to overlay our environments with the images of our choosing rather than confront the physical world solely on its terms. While affinity groups will accrete around specific worlds & layers, the barriers between differing experiences of objective reality will multiply when the world I experience is markedly different from yours. As the Transhuman Gap threatens social cohesion through class, reality design threatens cohesion across all classes by erecting virtual constructions between adjacent-but-unrelated digital worlds. While we may feel a sense of agency in creating such personalized experiences, we do so in digital layers most likely owned by third parties or accessible through public APIs. We may inadvertently wall ourselves off from each other but we’ll become even richer targets for profilers, influencers, and governors. The slippery slope in this scenario suggests that governance might enforce realities onto subjects or that dangerous identity groups might create monstrous, all-encompassing layers as indoctrination tools & neuro-propaganda towards the engineering of social movements.
Considering how supremely the television has been used to influence the masses with only basic access to eyes and ears, it’s not unlikely that greater access into the transhuman will yield a greater ability to influence and manipulate.
Again, these scenarios are not meant as accusations or designed to arouse a fear of transhumanism but, rather, to encourage critical thinking along the dystopic possibilities of the future transhuman phase space, as it were, in order to better control for such outcomes. As the saying goes, all technology is inherently neutral. But this glib statement does not acknowledge that all technology is born of humanity and wielded by our hands alone. To paraphrase a great modern philosopher, all of the animals are capably murderous.
This past Saturday I worked with Mike Liebhold, Gene Becker, Anselm Hook, and Damon Hernandez to present the West Coast Augmented Reality Development Camp at the Hacker Dojo in Mountain View, CA. By all accounts it was a stunning success with a huge turn-out of companies, engineers, designers, makers, artists, geo-hackers, scientists, techies and thinkers. The planning was mostly done virtually via email and phone meetings with only a couple visits to the venue. On Saturday, the virtual planning phase collapsed into reality and bloomed on site into AR Dev Camp.
As an un-conference, the event itself was a study in grassroots, crowd-sourced, participatory organization with everyone proposing sessions which were then voted on and placed into the schedule. To me, it was a wonderfully organic and emergent process that almost magically gave life and spirit to the skeleton we had constructed. So before I launch into my thoughts I want to give a hearty “Thank You!” to everyone that joined us and helped make AR DevCamp such a great experience. I also want to give a big shout-out to Tish Shute, Ori Inbar, and Sophia for coordinating the AR DevCamp in New York City, as well as Dave Mee & Julian Tate who ran the Manchester, UK event. And, of course, we couldn’t have done it without the help of our sponsors, Layar, Metaio, Qualcomm, Google, IFTF, Lightning Laboratories, Web3D Consortium, IDEAbuilder, MakerLab, and Waze (and URBEINGRECORDED with Cage Free Consulting contributed the flood of afternoon cookies).
So first, just what is Augmented Reality? There’s a tremendous amount of buzz around the term, weighing it down with connotations and expectations. Often, those investing in its future invoke the haunting specter of Virtual Reality, doomed by its inability to live up to the hype: ahead of its time, lost mostly to the realm of military budgets and skunkworks. Yet the AR buzz has driven a marketing rush throwing gobs of money at haphazard and questionable advertising implementations that quickly reach millions and cement in their minds a narrow association with flashy magazine covers and car ads. Not to diminish these efforts, but there’s a lot more – and a lot less – going on here.
In its most distilled form, augmented reality is an interface layer between the cloud and the material world. The term describes a set of methods to superimpose and blend rendered digital interface elements with a camera stream, most commonly in the form of annotations such as text, links, and other 2- & 3-dimensional objects that appear to float over the camera view of the live world. Very importantly, AR includes at its core the concept of location, mediated through GPS coordinates, orientation, physical markers, point clouds, and, increasingly, image recognition. This combination of location and superimposition of annotations over a live camera feed is the foundation of AR. As we’re seeing with smart phones, the device knows where you are, what direction you’re facing, what you’re looking at, who & what is near you, and what data annotations & links are available in the view. In this definition, the cloud is the platform, the AR browser is the interface, and annotation layers are content that blend with the world.
So the augmented reality experience is mediated through a camera view that identifies a location-based anchor or marker and reveals any annotations present in the annotation layer (think of a layer as a channel). Currently, each of these components is uniquely bound to the AR browser in which they were authored, so you must use, for example, the Layar browser to experience Layar-authored annotation layers. While many AR browsers are grabbing common public data streams from sources like Flickr & Wikipedia, their display and function will vary from browser to browser as each renders this data uniquely. And just because you can see a Flickr annotation in one browser doesn’t mean you will see it in another. For now, content is mostly bound to the browser and authoring is mostly done by third-parties building canned info layers. There doesn’t seem to be much consideration for the durability and longevity of these core components, and there is a real risk that content experiences may become fractured and ephemeral.
Indeed, content wants to be an inclusive, social experience. One of the core propositions underlying our motivation for AR DevCamp is the idea that the platforms being built around augmented reality should be architected as openly as possible to encourage the greatest degree of interoperability and extensibility. In the nascent but massively-hyped AR domain, there’s a growing rush to plant flags and grab territory, as happens in all emergent opportunity spaces. The concern is that we might recapitulate the Browser Wars – not intentionally but by lack of concerted efforts to coordinate implementations. While I maintain that coordination & open standardization is a necessity, I question my own assumption that without it we’ll end up with a bunch of walled gardens. This may be underestimating the impact of the web.
Yet this cooperation and normalization is by no means a given. Just about every chunk of legacy code that the Information Age is built upon retains vestiges of the git-er-done, rush-to-market start-up mindset. Short-sighted but well-meaning implementations based upon limited resources, embryonic design, and first-pass architectures bog down the most advanced and expensive software suites. As these code bases swell to address the needs of a growing user base, the gap between core architecture and usability widens. Experience designers struggle against architectures that were not built with such design considerations in mind. Historically, code architecture has proceeded ahead of user experience design, though this is shifting to some degree in the era of Agile and hosted services. Nevertheless, the emerging platforms of AR have the opportunity – and, I’d argue, the requirement – to include user research, design, & usability as core components of implementation. The open, standardized web has fostered a continuous and known experience across its vast reaches. Artsy Flash sites aside, you always know how to navigate and interact with the content. The fundamentals of AR need to be identified and agreed upon before the mosaic of emerging code bases becomes too mature to adjust to the needs of a growing user base.
Given the highly social aspect of the web, place-based annotations and objects will suffer greatly if there’s not early coordination around a shared standard for anchors. This is where the Browser Wars may inadvertently re-emerge. The anchor is basically the address/location of an annotation layer. When you look through an augmented view, it’s the bit of data that says “I’m here, check out my annotations.” Currently there is no shared standard for this object, nor for annotations & layers. You need the Layar browser in order to see annotation layers made in its platform. If you only have a Junaio browser, you won’t see them. If you annotate a forest, tagging each tree with a marker linked to its own data registry, and then the browser app you used to author goes out of business, all those pointers are gone. The historical analog would be coding your website for IE so that anyone with Mosaic can’t see it. This is where early design and usability considerations are critical to ensure a reasonable commonality and longevity of content. Anchors, annotations, & layers are new territory that ought to be regarded as strongly as URLs and markup. Continuing to regard these as independent platform IP will balkanize the user experience of continuity across content layers. There must be standards in authoring and viewing. Content and services are where the business models should innovate.
So if we’re moving towards an augmented world of anchors and annotations and layers, what considerations should be given to the data structure underlying these objects? An anchor will have an addressable location but should it contain information about who authored it and when? Should an annotation contain similar data, time-stamped and signed with an RDF structure underlying the annotation content? How will layers describe their contents, set permissions, and ensure security? And what of the physical location of the data? An anchor should be a distributed and redundant object, not bound to the durability and security of any single server. A secure and resilient backbone of real-world anchor points is critical as the scaffolding of this new domain.
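To make these questions concrete, here is a minimal sketch of what signed, timestamped anchor and annotation records might look like. All names, fields, and the content-derived ID scheme are my own illustrative assumptions, not any existing AR platform's schema; the point is that an anchor's identity can be derived from its content so that redundant copies on independent servers agree.

```python
from dataclasses import dataclass, field
import hashlib
import json
import time

@dataclass
class Anchor:
    """Hypothetical anchor record: a real-world address for annotations."""
    lat: float
    lon: float
    alt: float
    author: str
    created: float = field(default_factory=time.time)

    @property
    def anchor_id(self) -> str:
        # Derive the ID from location + author, not from a server-issued
        # serial number, so replicas on different hosts compute the same ID.
        payload = json.dumps(
            {"lat": round(self.lat, 7), "lon": round(self.lon, 7),
             "alt": round(self.alt, 2), "author": self.author},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

@dataclass
class Annotation:
    """Hypothetical annotation: content bound to an anchor, in a layer."""
    anchor_id: str
    layer: str    # the "channel" this annotation belongs to
    content: str  # text, a link, or a reference to a 2D/3D asset
    author: str
    created: float = field(default_factory=time.time)
```

A real standard would add a cryptographic signature over each record and permission bits on the layer, but even this toy schema decouples the anchor from any single browser vendor's database.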
Earthmine is a company I’ve been watching for a number of months since they presented at the IFTF. They joined us at AR DevCamp to present their platform. While many AR developers are using GPS & compass or markers to draw annotations over the real world, Earthmine is busy building a massive dataset that maps Lat/Long/Alt coordinates to hi-rez images of cities. They have a small fleet of vehicles equipped with stereoscopic camera arrays that drive around cities, capturing images of every inch they see. But they’re also grabbing precise geolocation coordinates that, when combined with the image sets, yields a dense point cloud of addressable pixels. When you look at one of these point clouds on a screen it looks like a finely-rendered pointillistic painting of a downtown. They massage this data set, mash the images and location, and stream it through their API as a navigable street view. You can then place objects in the view with very high accuracy – like a proposed bus stop you’d like to prototype, or a virtual billboard. Earthmine even indicated that making annotations in their 2d map layer could add a link to the augmented real-world view. So you can see a convergence and emerging correlation between location & annotation in the real world, in an augmented overlay, on a flat digital map, and on a Google Earth or Virtual World interface. This is an unprecedented coherency of virtual and real space.
The Earthmine demo is cool and the Flash API offers interesting ways to customize the street view with 2D & 3D annotations but the really killer thing is their dataset. As alluded to, they’re building an address space for the real world. So if you’re in San Francisco and you have an AR browser that uses the Earthmine API (rumors that Metaio are working on something here…) you can add an annotation to every STOP sign in The Mission so that a flashing text of “WAR” appears underneath. With the current GPS location strategy this would be impossible due to its relatively poor resolution (~3-5 meters at best). You could use markers but you’d need to stick one on every STOP sign. With Earthmine you can know almost exactly where in the real world you’re anchoring the annotation… and they can know whenever you click on one. Sound familiar?
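The GPS resolution problem is easy to demonstrate with a little arithmetic. In this sketch (coordinates and the 4-meter error figure are illustrative), two STOP signs sit on opposite corners of an intersection, roughly 5-6 meters apart; a phone's reported fix lands between them, and both signs fall inside the error radius, so GPS alone cannot tell which one you annotated:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

GPS_ERROR_M = 4.0  # a typical consumer GPS fix (~3-5 m at best)

# Two hypothetical STOP signs on opposite corners of a Mission intersection.
sign_a = (37.7599000, -122.4148000)
sign_b = (37.7599400, -122.4148400)  # roughly 5-6 m from sign_a

fix = (37.7599200, -122.4148200)  # the phone's reported position

# Every sign within the error radius is a plausible anchor target.
ambiguous = [s for s in (sign_a, sign_b)
             if haversine_m(*fix, *s) <= GPS_ERROR_M]
```

Both signs end up in `ambiguous`, which is exactly why a centimeter-scale point-cloud address space like Earthmine's changes what's possible.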
Augmented reality suggests the most significant shift in computation since the internet. As we craft our computers into smaller and smaller mobile devices, exponentially more powerful and connected, we’re now on the verge of beginning the visual and locational integration of the digital world with the analog world. We’ve digitized much of human culture, pasted it onto screens and given ourselves mirror identities to navigate, communicate, and share in this virtual space. Now we’re breaking open the box and drawing the cloud across the phenomenal world, teaching our machines to see what we see and inviting the world to be listed in the digital Yellow Pages.
So, yeah, now your AR experience of the world is covered in billboards, sloganeering, propaganda, and dancing dinosaurs all competing for your click-through AdSense rating. A big consideration, and a topic that came up again & again at AR DevCamp, is the overwhelming amount of data and the need to filter it to some meaningful subset, particularly with respect to spam and advertising. A glance across the current crop of iPhone AR apps reveals many design interface challenges, with piles of annotations occluding one another and your view of the world. Now imagine a world covered in layers, each with any number of annotations. UI becomes very important. Andrea Mangini & Julie Meridian led a session on design & usability considerations in AR that could easily be a conference of its own. How do you manage occlusion & sorting? Level of detail? What does simple & effective authoring of annotations on a mobile device look like? How do you design a small but visible environmental cue that an annotation exists? If the URL convention is underlined text, what is the AR convention for gently indicating that the fire hydrant you’re looking at has available layers & annotations? Discoverability of the digital links within the augmented world will be in tension with overwhelming the view of the world itself.
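One of the simplest clutter-control strategies discussed above can be sketched in a few lines: show only subscribed layers, drop anything out of range, sort nearest-first, and cap how many annotations get drawn at once. The function name, thresholds, and dict shape here are invented for illustration, not taken from any real AR browser:

```python
from math import dist

def visible_annotations(annotations, viewer_xy, subscribed_layers,
                        max_shown=5, max_range_m=100.0):
    """Return the annotations worth drawing: subscribed layers only,
    within range, nearest first, capped so they don't occlude the world."""
    candidates = [
        a for a in annotations
        if a["layer"] in subscribed_layers
        and dist(viewer_xy, a["xy"]) <= max_range_m
    ]
    # Nearest annotations win the limited screen real estate.
    candidates.sort(key=lambda a: dist(viewer_xy, a["xy"]))
    return candidates[:max_shown]
```

Even a crude distance-plus-subscription filter like this turns an unreadable pile of floating tags into a glanceable handful; real systems would also need spam scoring, occlusion testing against geometry, and level-of-detail rules.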
When we consider the seemingly-inevitable development of eyewear with digital heads-up display, occlusion can quickly move from helpful to annoying to dangerous. No matter how compelling the augmented world is, you still need to see when that truck is coming down the street. Again, proper design for human usability is perhaps even more critical in the augmented interface than in a typical screen interface. Marketing and business plans aside, we have to assume that the emergence of truly compelling and valuable technologies is ultimately in line with the deep evolutionary needs of the human animal. We’re certainly augmenting for fun and art and engagement and communication but my sense is that, underneath all these, we’re building this new augmented reality because the power & adaptive advantage mediated through the digital domain is so great that we need it to integrate seamlessly with our mobile, multi-tasking lives. It’s been noted by others – Kevin Kelly comes to mind – that we’re teaching machines to do many of the things we do, but better. And in the process we’re making them smaller and more natural and bringing them closer and closer to our bodies. Ponderings of transhumanity and cyborgian futures aside, our lives are being increasingly augmented and mediated by many such smart machines.
DARPA wasn’t at AR Dev Camp. Or at least if they were, they didn’t say so. There was a guy from NASA showing a really cool air traffic control system that watched aircraft in the sky, tagged them with data annotations, and tracked their movements. We were shown the challenges of effectively registering the virtual layer – the annotation – with the real object – a helicopter – when it’s moving rapidly. In other words, the virtual layer, mediated through a camera & a software layer, tended to lag behind the 80+ mph heli. But in lieu of DARPA’s actual attendance, it’s worth considering their Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis) program to develop a multimodal mobile computational system for coordinating tactical movements of patrol units. This program sees the near-future soldier as outfitted with a specialized AR comm system with a CPU worn on a belt, a HUD lens over one eye, a voice recognition mic, and a system to capture gestures. Military patrols rely heavily on intel coming from command and on coordinating movements through back-channel talk and line-of-sight gestures. AR HUDs offer simple wayfinding and identification of team mates. Voice commands can execute distributed programs and open or close comm channels. Gestures will be captured to communicate with units both in and out of line-of-sight and to initiate or capture datastreams. Cameras and GPS will track patrol movements and offer remote viewing through other soldiers’ cameras. But most importantly, this degree of interface will be simple, fluid, and effortless. It won’t get in your way. For better or for worse, maximizing pack hunting behaviors with technology will set the stage for the future of human-computer interaction.
After lunch provided by Qualcomm, Anselm Hook led an afternoon session at AR DevCamp titled simply “Hiking”. We convened in a dark and hot room, somewhat ironically called the “Sun Room” for its eastern exposure, to discuss nature and what, if any, role AR should play in our interface with the Great Outdoors. We quickly decided to move the meeting out into the parking lot where we shared our interests in both built and natural outdoor environments. A common theme that emerged in words and sentiment was the tension between experience & distraction. We all felt that the natural world is so rich and special in large part due to its increasing contrast to an urbanized and mechanized life. It’s remote and wild and utterly disconnected, inherently at peace in its unscripted and chaotic way. How is this value and uniqueness challenged by ubicomp and GPS and cellular networks? GPS & cellphone coverage can save lives but do we really need to Twitter from a mountain top? I make no judgement calls here and am plenty guilty myself but it’s worth acknowledging that augmented reality may challenge the direct experience of nature in unexpected ways and bring the capacity to overwrite even the remote corners of the world with human digital graffiti.
But remember that grove of trees I mentioned before, tagged with data annotations? Imagine the researchers viewing those trees through AR lenses, able to see a glance-able color index for each one showing CO2, O2, heavy metals, turgidity, growth, and age. Sensors, mesh nets, and AR can give voice to ecosystems, cities, communities, vehicles, and objects. Imagine that grove is one of thousands in the Brazilian rainforest reporting on its status regularly, contributing data to policy debates and regulatory bodies. What types of augmented experiences can reinforce our connection to nature and our role as caretakers?
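A glance-able index like that is just a reduction from many sensor readings to one traffic-light color per tree. Here is a toy version; the thresholds and field names are entirely invented to illustrate the idea, not drawn from any real forestry sensor spec:

```python
def tree_status_color(co2_ppm, heavy_metals_ppm, turgidity_pct):
    """Collapse a tree's sensor readings into one overlay color.
    Thresholds are illustrative placeholders only."""
    stress = 0
    stress += co2_ppm > 800            # hypothetical: poor air exchange
    stress += heavy_metals_ppm > 1.0   # hypothetical: contaminated soil
    stress += turgidity_pct < 60       # hypothetical: water stress
    # No stressed readings: green; one: yellow; more: red.
    return {0: "green", 1: "yellow"}.get(stress, "red")
```

An AR layer would then tint each tree's annotation with this color, letting a researcher sweep her gaze across a grove and spot the red outliers without reading a single number.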
On the other hand, what happens when you and the people around you are each having very different experiences of “reality”? What happens to the commons when there are 500 different augmented versions? What happens to community and society when the common reference point for everything – the very environment in which we exist – is malleable and fluid and gated by permissions and access layers or overrun with annotations competing for our attention? What social gaps could arise? What psychological ailments? Or perhaps more realistically, what happens when a small class of wealthy westerners begins to redraw the world around them? Don’t want to see other people? No problem! Just turn on the obfuscation layer. Ugly tenements ruining your morning commute? Turn on some happy music and set your iGlasses to the favela paintshop filter! Augmentation and enhancement with technology will inevitably proceed along economic lines. What is the proper balance between enjoying our technological luxuries and responsibly curating the world for those less fortunate? Technology often makes the symptoms look different but doesn’t usually eradicate the cause. In the rush to colonize the augmented reality, in the shadow of a wavering global economic system and deep revision of value and product, now is the best time and the most important time to put solutions ahead of products; to collaborate and cooperate on designing open, robust, and extensible systems; and, in the words of Tim O’Reilly, to “work on stuff that matters”.
At the end of the day, pizzas arrived (Thanks MakerLab!), beers were opened (Thanks Layar & Lightning Labs), and the buzzing brains of AR DevCamp mingled and shared their thoughts. Hearts alit, I’ll be forgiven some sentimentality to suggest that the Hacker Dojo had a soft, warm glow emanating from all the fine folks in attendance. Maybe it was like this around the Acid Tests in the ’60s (with more paisley). Or the heady days of Xerox PARC in the ’80s (with more ties). That growing inertia and sense of destiny at being at the right place at the right time just at the start of something exceptional…
Special thanks to Andrea Mangini for deep and ranging discussions about all this stuff, among many other things.
We’re running around getting all the ducks in line for our AR Dev Camp this Saturday, December 5th at the Hacker Dojo. I’ve been amazed at the number and caliber of folks signed up to attend & contribute to both the Mountain View event and the simultaneous New York City AR Dev Camp. I think we all understand the scale of opportunities and challenges in forging this new domain. This will be an opportunity to come together and flesh out the many considerations needed to build a broad, robust, and open architecture for augmented reality. We have the hindsight of the internet revolution to offer examples of pitfalls and best practices alike. Indeed, we’re not building a new internet nor terraforming new worlds. Augmented reality is simply the next logical interaction layer to the increasingly ubiquitous cloud of data & relationships permeating our lives, so it’s critical that we architect services & experiences that smoothly integrate across existing protocols.
Open interoperability across platforms, universal standards for markups & messaging, geospatial data representation, 2D & 3D rendering, identity & transaction management, strong security & encryption, structured data and portability, content & markup ownership, and solutions driven by design & user experience. All these considerations & more require tremendous coordination to converge on a set of platform specifications that enable a strong and extensible ecology of developers, users, and content creators. In the rush to plant flags and colonize the new AR domain, it’s critical that we balance competition and collaboration to avoid the walled-garden balkanization and impossible hypemachine expectations that sent virtual reality to an early grave.
So go to the signup page, add a topic on the Session Topics page, and come join us this weekend for heady, juicy, AR goodness! If you’re not in the SF Bay Area or NYC, check out the other AR Dev Camps listed or get some co-conspirators and plan your own.
[Cross-posted from Signtific Lab.]
While most would support using technology to allow paraplegics to walk again, to help the blind to see and the deaf to hear, how will society view those who electively enhance themselves through prosthetics & implants?
Consider the not-so-subtle marginalization of transhumanists who believe that technology should be readily integrated into human biology, experimenting with their own crude body modifications. Or the implications around personal security and privacy (not to mention religious fear) raised by those intrepid folks who are self-implanting RFIDs into their forearms to activate lighting & appliances when they enter their homes. Even the international debates over performance-enhancing drug use by athletes reinforce the cultural belief that a “natural” baseline range exists for human abilities and any “synthetic” modification beyond the accepted range is considered unfair.
From issues of fairness to those of security and trust, integrating more machinery into a programmable nervous system challenges many of the fundamental notions we have of what it means to be human. When a Marine returns from a warzone patched up with a cochlear implant, how will they be regarded when it’s revealed that they can hear you speaking from 3 blocks away? If that person then joins the police force, what issues of civil liberty and privacy might be confronted? How might we regard an employer that suggests each employee be programmed with software to bring them into the corporate Thinkmesh?
How does society’s regard for a technology change when that technology becomes part of our bodies? How does our relationship to people change if we know they are different? What competitive advantages are conferred by these technologies and how will they be reinforced by socioeconomic drivers? What gaps might arise between those able to afford augmentations and those who cannot?
And what becomes of the Platonic sense of one fundamental Reality when more & more people are seeing personalized variations of the world mediated by connected devices? Will the merging of technology & flesh enable a more cohesive & effective society or a more fragmented and divisive one?
Thus far humans have worked from a standard body map that allows us to understand ourselves and project that understanding onto all other classes of our species. We will likely bring both our sense of membership as well as our fear of otherness with us as we begin to internalize machines unevenly across cultures.
[See also 5 Dark Scenarios For Trans-humanity.]
I’ve been tweeting a lot more than writing lately. Here are my recent tweets on the Tehran situation, in order of posting:
- Iran SMS networks “mysteriously” fail right before elections http://bit.ly/nsjm3 (via @boingboing)
- “You cannot stop people any longer. You cannot control people any longer.” (Iran & Twitter) http://tinyurl.com/kwmh7g (via @mpesce)
- Tech-enabled urbanites push for change as country folk vote for stasis, even reversion. collaborative networks win over time
- Coordination of Tehran tech-savvy w/ international openinfo/progressive nodes shows leveling of global playing field, decline of statehood.
- Tehran: Ayatollah backs Ahmadi, police take Tehran University to shut down dissident comm nets. Power fears Change. Old fears New.
- University of Tehran held literary session on Saturday reviewing works by Woody Allen. http://bit.ly/Et7fa [Comedy, genius trumps religion.]
- @HiggsBoson23 Totally. The US must have people on the ground in Tehran working to open the comm channels.
- RT @robinsloan: #iranelection Giant photos. You are going to lose your mind: http://is.gd/12G72 [Tehran approaches civil war]
- Incredible to see instantaneous networking around control systems. No oppressor can hide their actions. Tehran: the future of Democracy.
- The events in Tehran are reinforcing the global identity of humanity in a way that directly challenges all oppressive regimes.
- What fascinates me most about Tehran is the empowerment of the tech-enabled to route around the State and reach across the globe.
- To me, the new democracy: granular representation; modernists using tech to challenge traditionalists; collectives taking power from states.
- No surprise that US elements might be encouraging/engineering the scene in Tehran. Via @NickHate: WSWS on NYT & Iran: http://bit.ly/H1s12
- Note: all Iranian candidates are pre-approved by the Ayatollah & Guardian Council. Resolution in favor of Moussavi will not bring freedom.
- Value lies in watching how empowerment of progressive voices impacts the strategies of rulership employed by the Iranian theocracy.
- Is Iranian dismissal of western media the prelude to a brutal smackdown on protests? Def not a sign of sudden openness…
- RT @m1k3y @DavidForbes: The State Department asked Twitter not to shut down yesterday. http://bit.ly/QQoyj #iranelection #awesome
- RT @TEDchris: Here’s Clay Shirky on the incredible role Twitter has played in #iranelection. “This is the big one” http://on.ted.com/zabout
- “Mousavi is no liberal reformer. But the principle of freedom of speech and fair elections and the desire for reform trump that.” @cshirky
- What you should know about the Iranian Cyberwar: http://bit.ly/2b2NL (via @GreatDismal) [History in the making.]
Here’s a selection of my tweets from the O’Reilly Emerging Technology Conference this past week. These are the ones I think grab the juicy nuggets from the speakers’ presentations. [In temporal order with the earliest (ie Monday eve) listed first.]
Tim O’Reilly: “We have greatness but have wasted it on so much.”
We have an unprecedented opportunity to build a digital commonwealth. #etech
Work on something that matters to you more than money. This is a robust strategy. #etech
Niall Kennedy: Energy Star rating for web apps? Thinking of clouds & programming like tuning a car for better gas mileage. #etech
Cloud computing: no reasonable expectation of privacy when data is not in your hands. Not protected by 4th amendment. #etech
Alex Steffen: Problems with water supply are based in part on our lack of beavers. #etech
Social media for human rights. http://hub.witness.org #etech
Gavin Starks – Your Energy Identity & Why You Should Care. see http://amee.com #etech
Maureen McHugh – Consider that technology may be evolving in ways that are not particularly interested in us. #etech
Becker, Muller: We have under-estimated the costs and over-estimated the value of our economy. #etech
Becker, Muller: We assume economic trade must be the primary framing of value in our lives. Why? #etech
Design Patterns for PostConsumerism: Free; Repair Culture; Reputation Scaled; Loanership Society; Virtual Production. #etech
NYT: emerging platforms, text reflow, multitouch, flexy displays, smart content, sms story updates, sensors, GPS localized content. #etech
Jeremy Faludi: Buildings & transport have the largest impact on climate change. Biggest bang for the buck in re-design. #etech
Jeremy Faludi – Biggest contributor to species extinction & habitat loss is encroachment & byproducts from agriculture. #etech
Jeremy Faludi – Best strategies to vastly reduce overpopulation: access to birth control & family planning, empowerment of women. #etech
Tom Raftery: Grid 1.0 can’t manage excess power from renewables. Solution: electric cars as distributed storage. #etech
Considering the impact of plugging AMEE (@agentGav) data into ERP systems for feedback to biz about supply chain impacts. BI meets NRG ID.
Mike Mathieu: Data becoming more important than code. Civic data is plentiful and largely untapped. Make civic apps! #etech
Mike Mathieu: Take 10 minutes today and pick your crisis. Figure out how to create software to help. #etech
What is #SantaCruz doing to make civic data available to service builders? We want to help SC be healthier & more productive.
Mark Frauenfelder: “I haven’t heard of anybody having great success with automatic chicken doors.” #etech [re-emerging technology]
Realities of energy efficiency: 1gallon of gasoline = ~1000hrs of human labor. #etech
Kevin Lynch: Adobe is saving over $1M annually just by managing energy. #etech
Designing backwards: Think about the destiny of the item before thinking about the initial use. (via Brian Dougherty) #etech
RealTimeCity: physical & digital space merges, people incorporate intelligent systems, cities react in accord w/needs of pub welfare. #etech
Oh my we’re being LIDAR’d while Zoe Keating plays live cello n loops. ZOMG!!!
zoe keating & live lidar is blowing my mind at #etech 1.3M points per sec!
Julian Bleecker cites David A. Kirby: “Diegetic prototypes have a major rhetorical advantage over true prototypes” #etech
Julian Bleecker: Stories matter when designing the future, eg. Minority Report. #etech
Julian Bleecker: “Think of Philip K. Dick as a System Administrator.” #etech
Rebecca MacKinnon: Which side are we helping, River Crabs or Grass Mud Horses? #etech
Kati London: How can we use games to game The System and how can they be used to solve civic problems? #etech
Nathan Wolfe: Trying to fight pandemics only at the viral human level ignores deep socioeconomic causes of animal-human transmission. #etech
Nathan Wolfe, re: viral jump from animal to human populations: “What happens in central Africa doesn’t stay in central Africa.”
Nathan Wolfe: need to work with % of population w/ hi freq of direct contact with animals for early detection of viral transmission.
Nathan Wolfe: Vast majority of biosphere is microscopic, mostly bacterial & viral. Humans: very small piece of life on Earth. #etech
[This is a reply I left recently to a Global Futures question about the near-future of the web. It goes a little off-topic at the end but such is the risk of systems analysis. Everything's connected.]
Within 10-15 years mobile devices will constantly interact with the world around us, analyzing objects, faces, signage, locations, and anything else their sensors can engage. Camera viewfinders will identify visual sources using algorithms to match them up with cloud data repositories. Bluetooth and GPS will interact on sub-channels silently exchanging relationships with embedded sensors across devices and objects. A user’s mobile device will become their IP address hosting much of their profile information and mediating relationships across social nets, commercial transactions, security clearances, and the array of increasingly smart objects and devices.
Cloud access and screen presence will be nearly ubiquitous further blurring the line between desktop, laptop, server, mobile devices, and the objects in our world. It will all be screens interfacing between data, objects, and humans. Amidst the overwhelming data/content glut we will outsource mathematical chores to cloud agents dedicated to scraping data and filtering the bits that are pertinent to our personalized affinities and needs. These data streams will be highly dynamic and cloud agents will send them to rich media layers that will render the results in comprehensible and meaningful displays.
The human sensorium and its interaction with reality will be highly augmented through mobile devices that layer rich information over the world around us. The digital world will move heavily into the natural analog world as the boundaries between the two further erode. This will be readily apparent in the increasing amount of communication we will receive from appliances, vehicles, storefronts, other people, animals, and even plants all wired to the cloud. Meanwhile, cloud agents will sort through vast amounts of human behavioral information creating smart profiles and socioeconomic and environmental systems models with incredible complexity and increasing predictive ability. The cloud itself will be made more intelligible to agents by the standardization of semantic web protocols implemented into most new sites and services. Agents will concatenate to tie services together into meta-functions, just as human collectives will be much more common as we move into increasingly multicellular functional bodies.
The sense of self and our philosophical paradigms will be iterating and revising on an almost weekly basis as we spread out across the cloud and innumerable virtual spaces connected through instantaneous communication. Virtual worlds themselves will be increasingly common but will break out of the walled-garden models of the present, allowing comm channels and video streams to move freely between them and the social web. World of Warcraft will have live video feeds from in-world out to device displays. Mobile GPS will report a user’s real-world location as well as their virtual location, mashing both into Google Maps and the SketchUp-enabled virtual map of the planet.
All of this abstraction will press back on the world and create even greater value for real face-to-face interactions. Familial bonds will be more and more cherished and local communities will take greater and greater control of their lives away from unreliable global supply chains and profit-driven corporate bodies. Most families will engage in some form of gardening to supplement their food supply. The state itself will be hollowed out through over-extended conflicts and insurgencies coupled with ongoing failures to manage domestic civic instabilities. Power outages and water failures will be common in large cities. This will of course further invigorate alternative energy technologies and shift civic responsibilities to local communities. US manufacturing will have partially shifted towards alternative energy capture and storage but much of the real successes will be in small progressive towns rallying around local resources, small-scale fab, and pre-existing economic successes.
All in all, the future will be a rich collage. Totally new and much the same as it has been.
Also known as Web 3.0. Here’s a linkdump from my Friday eve surfing:
Paul Miller: Cloud Computing is so much more than a computer in the Cloud
OpenCalais: Life in the Linked Data Cloud: Calais Release 4
Richard Cyganiak: Web of Data
Kevin Kelly: High Order Bit
Wikipedia: Linked Data
Wikipedia: Resource Description Framework (RDF)
Just a quick note (and props, kudos, & cheers) that Douglas Rushkoff is guestblogging at Boing Boing for the next week or so. From his intro:
The current culture wars, as I understand them, are between people who look at our circumstances as pre-existing conditions, and those who see them as largely of our own making. Those in the former camp prefer to see reality as confined by the operating system of a Creator, and the human role confined to behaving within the rule sets established by Him. Those in the latter camp recognize the function of evolution, and the opportunity (if not obligation) for human beings to participate in the ongoing construction of our world and its operating systems.
From O’Reilly Radar:
The “internet operating system” that I’m hoping to see evolve over the next few years will require developers to move away from thinking of their applications as endpoints, and more as re-usable components. For example, why does every application have to try to recreate its own social network? Shouldn’t social networking be a system service?
This isn’t just a “moral” appeal, but strategic advice. The first provider to build a reasonably open, re-usable system service in any particular area is going to get the biggest uptake. Right now, there’s a lot of focus on low level platform subsystems like storage and computation, but I continue to believe that many of the key subsystems in this evolving OS will be data subsystems, like identity, location, payment, product catalogs, music, etc. And eventually, these subsystems will need to be reasonably open and interoperable, so that a developer can build a data-intensive application without having to own all the data his application requires. This is what John Musser calls the programmable web.
As with much of the digital world, corporate transparency is greater now than it ever has been. Witness yesterday’s Adobe Analyst Meeting – a closed door, invite-only industry event at which analysts of all stripes were treated to Adobe’s financial strategy for the year to come. Within those exclusive walls, many industry agents were typing away on laptops and mobiles but they weren’t just live-blogging or recording notes for a report or article to be edited by their gatekeepers and published later. They were also broadcasting SMS messages to the masses in real-time through Twitter, micro-blogging their instantaneous thoughts, reactions, and sub-channel conversations to thousands of vicarious third-parties.
These raw feeds are perhaps a much more accurate representation of such events – or at least constitute a valuable nuance to the conversation – but their true merit is in their subversive tunneling to freedom through the garden walls, broadcast to the masses. I was annoyed that I couldn’t attend my own company’s briefing but then I got a lot of the meat from trolling the analyst tweets. This raises numerous issues. Should the company defend the tower and let me get the info second-hand through the emotional filters and bullshit detectors of the invitees? Or is it in their interest to include me and the rest of the public so they can at least have a better bet at controlling the message? Is there value in creating such walled gardens in the first place if anyone can breach your security with a simple 140 character message? Is it cost-effective? Do companies impose checkpoints to remove potentially threatening mobile devices? Can you trust people to stick to the talking points or do you allow that the genie is out of the bottle and the natural process of selection will actually help your company do a better job? Transparency and democratized digital broadcast are crowdsourced quality control. It’s a natural feedback mechanism for regulating the evolution of ideas.
These days, if an exclusionary body refuses to share beyond the in-crowd, at least one of those insiders will probably share it with the world. Information is free and the closed companies see their brand suffer as they try in vain to crush the dissenters on a global and very public stage. Their insular reporting hierarchies inevitably ensure that the same ideas and strategies eventually become recycled again and again, and that the truth is filtered through the instinct of self-preservation. Secrecy is like evolution in a vacuum or asexual reproduction. There is little pressure for real change beyond the cold, hard truth of the quarterly earnings report.
Is it even possible to keep secrets anymore? Do you remember all the conspiracy theories you read about in college? Have you noticed that most of them have now been recorded as historical fact? Have you considered that within 10 years the majority of elected officials will have public digital paper trails stretching across the fabled Information Superhighway? And there will be bands of savvy developers eager to crunch the data from those paper trails and render them in pretty visualizations that really show just exactly how honorable/charitable/pious/two-faced/depraved your future senator really is.
Even the analysts are known, willingly opting in to the public timeline of Twitter. All of their names are published at Sage Circle for anyone to see and follow. In fact, in order to really productively use many of the new open social tools & services, the user is highly incentivized to opt in to their own public transparency. Everyone who wants to speak with power enough to reach the masses (or at least a few handfuls of them) must embrace the open platform. And if you’re professional, you need to use your real name. Therein lies the rub: to be competitive businesses need to have their product managers, their evangelists, their analysts, idea makers and trend-setters all dialed in to the social web. Communication and sharing and an openness to take feedback from your users is becoming crucial for the corporate body to humanize and interact with the eyes of the world. Effective product development must include the people buying your product, otherwise you end up designing for imagined ghosts. Hence, the increasing migration of analysts and audiences to Twitter. Then as a company you end up with your intelligence agents working for you but writing to their audience. And you have an empowered audience that’s publicly-yet-privately back-channeling their loathing of your corporate shill right in front of them, like the now legendary and immediately ground-breaking SXSW smackdown of Tara Hunt.
Like journalists, analysts are no longer totally bound by an allegiance to their lords nor to the companies they scrutinize. They become like moonlighting Ronin. They broadcast to the world from a niche stardom and semi-famous personhood that carefully (or not-so-carefully) balances the party line and the ratings of the viewers. In the face of even limited fame and empowerment, how does company loyalty measure up to increased outsourcing and diminishing employee perks? All life, it seems, will bend towards the viewership, simultaneously revealed and true, yet inevitably influenced and state-shifted by 5 or 6 billion eyes and the inescapable quantal fact of Heisenberg’s Uncertainty. In a totally measured and watched world, is Truth just a state of observation, a sufficiently-probable collapsing of the waveform undergoing the formality of actually occurring, to paraphrase McKenna quoting Whitehead? The soul becomes visible as the mind manifests to all eyes.
Information – Truth, whether it exists fundamentally or is just a state of mind – indeed wants to be free and this fundamental law works through the human species and the technologies we extrude. We are still animals and our tools must help us adapt and thrive. This is more clear now than ever as our actions leave deeper and deeper footprints across the digital terrain we walk. We are being recorded and we are recording, capturing more and more facets of our human experiment written onto spinning platters like prayer wheels in the virtual breeze. The New Journalism will find even the most exclusive events, the narrowest niches, the darkest secrets and the most banal subcultures and capture them, radiating out to the digital world into the very Akashic Record of Our Times. Life is the new media, rich in all its texture, drama, subterfuge, and transcendence. As the military struggles with soldier bloggers, embedded third-party reporters, wired insurgencies, and the ever-present sat feeds waving down from far up above with just a passing glint of sunlight, the injustices and atrocities wrought by man & machine are cataloged equally alongside silly cat pictures, personal bios, frat videos, copyright violations, knowledge wikis, satellite imagery, and reams & reams of pornography. All acts are caught and surveyed by the one unblinking eye, like Sauron or the Illuminati or the gaze of God.
The world is getting much smaller and simultaneously incredibly huge and diverse. Global instability will be balanced by local resilience, and hierarchical corruption will struggle against networked transparency. CCTVs will merge with YouTube & reality TV and life will reveal itself on a scale never before known. The cloud is breaking out of the browser and out of our servers spreading to mobile devices and HUD overlays, objects & artifacts. Reality will be radically augmented, participatory, and unbounded. We will fragment and unite, solve et coagula. And tweeting as we go, televising & recording the revolution for all to witness.
From Mark Pesce’s recent presentation at Personal Democracy Forum 2008:
Hyperpolitics: American Style
It is as though we have all been shoved into the same room, a post-modern Panopticon, where everyone watches everyone else, can speak with everyone else, can work with everyone else. We can send out a call to “find the others,” for any cause, and watch in wonder as millions raise their hands. Any fringe (noble or diabolical) multiplied across three and a half billion adds up to substantial numbers. Amplified by the Human Network, the bonds of affinity have delivered us over to a new kind of mob rule.
…These newly disproportionate returns on the investment in altruism now trump the ‘virtue of selfishness.’
…Sharing is the threat. Not just a threat. It is the whole of the thing.
A photo snapped on my mobile becomes instantaneously and pervasively visible. No wonder she’s nervous: in my simple, honest and entirely human act of sharing, it becomes immediately apparent that any pretensions to control, or limitation, or the exercise of power have already collapsed into shell-shocked impotence.
Twitter has gotten a lot of mixed attention lately, both as a rising phenomenon but also for failing to fix its capacity issues as quickly as people seem to expect. The issue at hand, as expressed by Twitter Dev, is that the platform was not originally written as a messaging system. Indeed, it was built on a content management model.
Recall that Twitter was originally about posting what you are doing at the moment. As such, it was essentially constructed as a public microblog that happened to include mobile support. But very quickly the Twitter user community realized the power of broadcasting and co-opted this feature to grow a very large social network. Twitter became an extension of SMS and of all the new API clients that started popping up.
Now with almost 2 million users, many of whom are tweeting multiple times a day, the content management system is maxing out. Imagine if 2 million people were posting 160-char messages to Blogger daily… Frankly, it’s amazing that Twitter is doing as well as it is. So now the Twitter dev team is rebuilding every component from scratch to explicitly construct a robust global messaging system.
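The difference between the two models can be sketched in a few lines: a content-management approach stores each author’s posts and assembles a reader’s timeline by querying every followed author on each page view (fetch-on-read), while a messaging approach delivers each new post to follower inboxes as it arrives (fan-out-on-write). A toy sketch of that contrast, with all names hypothetical and no claim to resemble Twitter’s actual code:

```python
from collections import defaultdict, deque

class Timeline:
    """Toy contrast: CMS-style fetch-on-read vs. messaging-style fan-out-on-write."""

    def __init__(self):
        self.posts = defaultdict(list)     # author -> posts (CMS-style store)
        self.followers = defaultdict(set)  # author -> set of followers
        self.inboxes = defaultdict(deque)  # user -> pre-delivered messages

    def follow(self, follower, author):
        self.followers[author].add(follower)

    def tweet(self, author, text):
        text = text[:160]                  # the SMS-era length limit
        self.posts[author].append(text)    # CMS model: just store the content
        for f in self.followers[author]:   # messaging model: deliver on write
            self.inboxes[f].append((author, text))

    def timeline_cms(self, following):
        # Fetch-on-read: query every followed author's store per page view
        return [(a, p) for a in following for p in self.posts[a]]

    def timeline_messaging(self, user):
        # Read is cheap: the messages were already delivered
        return list(self.inboxes[user])

t = Timeline()
t.follow("alice", "bob")
t.tweet("bob", "what am I doing right now?")
print(t.timeline_messaging("alice"))  # [('bob', 'what am I doing right now?')]
```

The CMS-style read grows more expensive as follow lists grow, while the messaging-style read stays cheap at the cost of extra work on every write – one reason a system built on the former model strains under messaging-scale load.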
What’s really interesting is that the Twitter community has effectively turned Twitter into something it wasn’t intended to be. The desire to rapidly communicate with affiliates across the globe is so strong, and the power of broadcast is so compelling in the web2.0 era, that the very DNA of Twitter is being forced to mutate to support this demand. The spark of “what am I doing right now?” set flame to social media and the connection of communities. We want to know what’s going on with all the people we’re interested in. We want to know them professionally, philosophically, and personally. And we want to speak our mind and emotions and will to them.
I’m constantly taken by the casual intimacy of Twitter friends – people I’ve never met yet I know that they had a rough interview, or their cats are hungry, or they are giving a lecture tomorrow, or just saw a crazy person dancing on Wall St., or that they think Indiana Jones represents the Marxian class struggle. Normally you only get this spread of data about someone if you’re close friends and physically near them on a regular basis.
We want to socialize and share and we have an instinctual feeling, waking up from the haze of 50 years of corporate push-media, that life itself in all its minutiae is far more entertaining than anything Fox or NBC can throw at us. Or at least, it’s just as entertaining and engaging and, at its core, so much more real. The simulacrum cannot mess with us, à la The Real World, where we were sold scripted caricatures in the guise of “reality”. Twitter, YouTube, MySpace, Blogger, etc… These are the new reality media platforms and we’re all the new empowered content creators, scripted or real. Culture is going digital and the once-static web archive is waking up as a dynamic organism managing and sharing the very whims of its creators.
Through this process we’re getting to know each other and ourselves and our world very quickly as knowledge is distributed globally and minds are linked across worlds with zero lag. Culture is iterating faster than ever and we’re only at the very beginning of what is clearly becoming a huge revolution for all of humanity, whether or not each person is immediately touched by the wires. Life is virtualizing and the abstracted mental content of our world is increasingly archived and shared and commented upon and iterated on itself from all across the world. The power and reach of our minds is expanding out through our devices and the exocortical software agents we now have managing so many of our subroutines. We are cyber even without the implants and wetware. The individual is wiring into groups, like cells aggregating into functional bodies, towards greater communicative and iterative power.
The human species is beginning to truly know itself and grok its identity and function. As our eyes open up to perceive more and more of our world, we gaze at our creations and atrocities and the spark of soul sits in judgment, our conscience asserting itself. The democratization of media and the transparency of behavior is fundamentally altering the power balance away from the dominant elite towards the will of the people. In a very strange and sweet way, Twitter is part of this process of sharing and reinforcing the similarities between us all.
To briefly elaborate on an earlier post about Second Life… And specifically, ways in which I believe a modern 3d immersive world can leverage the new wave of cloud tech and create a truly compelling experience:
I want downtown billboards streaming Twitter feeds, rich dataviz, global network traffic, weather patterns, Flickr streams, and cycling media channels. I want to Dj from Traktor directly into a virtual club. I want interactive music and video remix tools that include the world as a substrate. I want to endow my avatar with metadata callouts, grouped in trust profiles, that display my affinities, affiliations, tag cloud, LinkedIn profile, sms number, twitter id, and credit accounts as appropriate to those I meet. I want to be free to re-purpose 3D assets from 3DSM, Maya, and Sketchup into my worldspace. I want a beautiful living homeworld that gathers the populace and inspires users and developers to create their own content elsewhere on distributed servers. I want to join friends on a virtual hilltop and watch the clouds drift past, watch the sun set, and the moons rise. I want to get lost in emergent behaviors, intelligent agents, and the beauty of physical dynamics. I want to easily find friends across multiple servers, across social nets, and out into mobile, gsm, and phone networks. I want an open-standard, opt-in, cloakable virtual ID that can be searched for and found across all dominant gaming and immersive networked worldspaces – and then when I find my friend I want to be able to join them wherever they are. I want peer-to-peer drop-boxes and back-channels that can address files to dominant industry and open-source applications, then back to in-world interfaces. I want an in-world, heads-up fly-out phone/sms/notepad/web-browser overlay that’s data synched to my mobile phone. I want to stumble into sinuous plotlines that sweep me away to distant parts of the virtual world. And yes, I want an SDK that allows EA to stick the Tony Hawk trick and physics model into a nice binary that can be purchased and installed into my client so I can skate around the place. And yes, I will try to grind your avatar if you have any linear edges sticking out.
I’m totally dreaming, I know. But dreams are what the future is built upon.
Alterculturalist, sonic datamasher, and cat lover, Wes Unruh has posted his latest work in the Philip K. Nixon project. Logosagogo! The Hyperstition of Philip K. Nixon is a Matrix-meets-machine-elf aural memescape that slices and dices the zeitgeist like television chopped into glass & oil. Somebody in Hollywood needs to hire this guy as an SFX developer…