Category: ghost in the machine

Running With Machine Herds

Continuing its annual tradition of walking the line between genuine social goodness and highfalutin’ techno-utopianism, the TED2013 conference kicked off this week in Long Beach. Gathering some of the brighter minds and more well-heeled benefactors, the conference invites attendees to tease apart the phase space of possibility and take a closer look at how we consciously examine and intentionally evolve our world. Among the many threads and themes, one in particular tugs deeply at both aspirational humanism and existential terror.


Machine Aesthetics Video – Robot Readable World

BERG creative director Timo Arnall has published a video collecting “found machine-vision footage”. In his words:

How do robots see the world? How do they gather meaning from our streets, cities, media and from us? This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye.

I think it gets particularly poignant about 4 minutes in, when the face tracking & recognition alphas turn human TV hosts into odd, simplified caricatures, at once de-humanizing the hosts and betraying the limited sophistication of the machines – like children trying to capture the world in colorful crayons. Bonus points for the creeping irony of machines learning about humans through TV.

Robot readable world from Timo on Vimeo.
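As a side note for the technically curious, the machinery behind footage like this is surprisingly accessible. Here’s a minimal sketch of naive face detection using OpenCV’s stock Haar cascade – the filenames are invented and it assumes the opencv-python package, but it shows how little a machine actually “sees” when it reduces a face to a rectangle:

```python
# Minimal machine vision sketch: reduce faces in a frame to bounding boxes.
# Assumes opencv-python; the image filenames are invented for illustration.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("tv_host.jpg")                 # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Everything the machine "knows" about the host is this rectangle.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("tv_host_seen.jpg", frame)
```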

A Few More Notes on Machine Aesthetics

Olympus glitch, from Year of the Glitch

Scott Smith has a nice article about Our Complicated Love-Hate Relationship With Robots, exploring how robots have been seeping into the public dialog of late. A couple of the links he cites are good reminders of previous work on the aesthetics of machine perception, notably Sensor-Vernacular from the fine folks at BERG and The New Aesthetic Tumblr by James Bridle.

If humanity is a reflection on the experience of perceiving and interacting with the world, what role does machine perception play in this experience? And if nature acts through our hands, to what ends are flocking drones and herds of autonomous machines? A taxonomy of machine perception seems necessary to understand the many ways in which the world can be experienced.

New Aesthetics of Machine Vision

I’ve grown fascinated by the technology of machine vision, but even more so by the haunting aesthetics captured through machine eyes. There’s something deeply enthralling and existentially disruptive about the emergence of autonomous machines into our shared world, watching us, learning about us, and inevitably interacting with each other. It’s as if a new inorganic branch of the evolutionary tree were coming into being. Anyway, two recent notes on this topic…

The first is this short series of images taken from a UAV and featured in the ACLU report, Protecting Privacy From Aerial Surveillance [PDF]. There’s a decent summary of the report at the New York Times.

Makes me think of Ian McDonald’s excellent novel, Brasyl, and the ad hoc induction of Our Lady of Perpetual Surveillance into the extended canon of casual Orishas.

The second item of note is this haunting video of a 3D Scanner wandering the streets of Barcelona. It’s not any sort of smart machine – it’s just a dumb handheld scanner hitching a ride on a creative human – but it again evokes the aesthetic of a world seen through eyes very different from our own. The video really grabs me about a minute in:

alley posts from James George on Vimeo.

It seems to show a bizarre ghost world or a glimpse from another dimension into ours. The aesthetic (and the tech) is similar to LIDAR, which I had the luck to play around with a couple years ago – and which Radiohead employed to a very interesting end:

In some ways, I want to see these visions as analogous to the view through a wolf’s eyes in the ’80s flick, Wolfen (at 0:24 in this trailer):

Seeing through the eyes of machines isn’t especially new, but it’s the awareness of the many adjacent, convergent technologies – pattern recognition, data analysis, biometrics, autonomous navigation, swarming algorithms, and AI – that adds pressure to the long-held notion that machines might someday walk our world of their own accord. That day seems much closer than ever before, so it’s fascinating to watch the new aesthetics of machine vision move into the popular domain.

Top Post Round-Up: OWS, Ubicomp, Hyperconnectivity, & Transhumanity

I’ve just returned from a very interesting workshop in Washington, D.C., about fast-moving change, asymmetric threats to security, and finding signals within the wall of noise thrown up by big data. These are tremendous challenges for governance, policy makers, and the intelligence community. I’ll have more to say on these topics in later posts but for now, here’s a round-up of the most popular posts on URBEINGRECORDED:

Occupy Wall Street – New Maps for Shifting Terrain – On OWS, gaps in governance, empowered actors, and opportunities in the shifting sands…

Getting to Know Your Ghost in the Machine – On the convergence of ubiquitous computation (ubicomp), augmented reality, and network identity…

The Transhuman Gap – On the challenges facing the transhuman movement…

The Realities of Coal in the Second Industrial Revolution – On the energy demand and resource availability for the developing world…

Meshnets, Freedom Phones, and the People’s Revolution – On the Arab Spring, hyperconnectivity, and ad hoc wireless networks…

And a few that I really like:

Back-casting from 2043 – On possible futures, design fictions, and discontinuity…

On Human Networks & Living Biosystems – On the natural patterns driving technology & human systems…

Outliers & Complexity – On non-linearity, outliers, and the challenges of using the past to anticipate the future…

Thanks to all my readers for taking the time to think about my various rantings & preoccupations. As always, your time, your participation, and your sharing are greatly appreciated!

My IFTF Tech Horizons Perspective on Neuroprogramming

IFTF has published the 2010 research for their Technology Horizons program – When Everything is Programmable: Life in a Computational Age. This arc explored how the computational metaphor is permeating almost every aspect of our lives. I contributed the perspective on Neuroprogramming [PDF], looking at the ways technology & computation are directly interfacing with our brains & minds.

From the overview for the Neuroprogramming perspective:

Advances in neuroscience, genetic engineering, imaging, and nanotechnology are converging with ubiquitous computing to give us the ability to exert greater and greater control over the functioning of our brain, leading us toward a future in which we can program our minds. These technologies are increasing our ability to modify behavior, treat disorders, interface with machines, integrate intelligent neuroprosthetics, design more capable artificial intelligence, and illuminate the mysteries of consciousness. With new technologies for modulating and controlling the mind, this feedback loop in our co-evolution with technology is getting tighter and faster, rapidly changing who and what we are.

I also contributed to the Combinatorial Manufacturing perspective with Jake Dunagan. This perspective explores advances in nano-assembly & programmable matter. From the overview:

Humans have always been makers, but the way humans manufacture is undergoing a radical transformation. Tools for computational programming are converging with material science and synthetic biology to give us the ability to actually program matter—that is, to design matter that can change its physical properties based on user input or autonomous sensing. Nanotechnology is allowing us to manipulate the atomic world with greater precision toward the construction of molecular assemblers. Researchers are designing “claytronics”: intelligent robots that will self-assemble, reconfigure, and respond to programmatic commands. And synthetic biologists are creating artificial organic machines to perform functions not seen in nature.

The Cybernetic Self

This is one of 50 posts about cyborgs – a project to commemorate the 50th anniversary of the coining of the term. Thanks to Tim Maly of Quiet Babylon for running such a great project!

she
CC image from mondi.

“He would see faces in movies, on T.V., in magazines, and in books. He thought that some of these faces might be right for him…”

The word “cybernetic” derives from the Greek kybernetes, meaning “steersman” or “governor”. A cybernetic process is a control system that uses feedback about its actions in an environment to better adapt its behavior. The cybernetic organism, or “cyborg”, is a class of cybernetic systems that have converged with biological organisms. In this increasingly mythologized form, the cyborg embodies the ongoing dialectic between humanity & technology, and is an aspirational figure onto which we project our superhuman fantasies. While it offers security, enhancement, and corporeal salvation, the cyborg also presents an existential threat to the self and to our cherished notions of being uniquely human.
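For the technically inclined, the kybernetes metaphor reduces to a very small loop: sense, compare against a goal, correct, repeat. A toy sketch of a proportional controller – a thermostat, essentially – with every number invented for illustration:

```python
# A toy cybernetic process: sense the state, compare it to a target,
# and feed the error back as a correction. All values are invented.
def feedback_loop(state, target, gain=0.3, steps=10):
    for step in range(steps):
        error = target - state      # feedback: how far off are we?
        state += gain * error       # adapt behavior to shrink the error
        print(f"step {step:2d}: state = {state:.3f}")
    return state

feedback_loop(state=15.0, target=21.0)  # e.g., room temperature toward a setpoint
```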

It’s a gamble but we don’t seem able to leave the table. As we offload more of our tasks into technology we enhance our adaptability while undermining our own innate resilience as animals. We wrap ourselves in extended suits of shelter, mobility, health, and communications. We distribute our senses through a global network of hypermedia, augmenting our brains with satellites & server farms & smart phones. Increasingly, our minds & bodies are becoming the convergence point for both the real & the virtual, mediated through miniaturization, dematerialization, and nano-scale hybridization. Our ability to craft the world around us is quickly advancing to give us the ability to craft our bodies & our selves.

“And through the years, by keeping an ideal facial structure fixed in his mind… Or somewhere in the back of his mind… That he might, by force of will, cause his face to approach those of his ideals…”

Computation is miniaturizing, distributing, and becoming more powerful & efficient. It’s moving closer & closer to our bodies while ubiquitizing & dematerializing all around us. The cybernetic process has refined this most adaptive capacity in little more than 50 years to be right at hand, with us constantly, connected to a global web of people, places, things, information, and knowledge. We are co-evolving with our tools, or what Kevin Kelly refers to as the Technium – the seemingly-intentional kingdom of technology. As Terence McKenna suggested, we are like coral animals embedded in a technological reef of extruded psychic objects. By directly illustrating how our own fitness & bio-survival becomes bound to the survival of our technology, the cyborg is a fitting icon for this relationship.

mirror
CC image from PhotoDu.de.

Technology has historically been regarded as something we cast into the world separate from ourselves, but it’s worth considering the symbiosis at play and how this relationship is changing the very nature of humanity. As we venture deeper & deeper into the Technium, we lend ourselves to its design. By embracing technology as part of our lives, as something we rely upon and depend on, we humanize it and wrap it in affection. We routinely fetishize & sexualize cool, flashy tech. In doing so we impart emotional value to the soulless tools of our construction. We give them both life & meaning. By tying our lives to theirs, we agree to guarantee their survival. This arrangement is a sort of alchemical wedding between human & machine, seeking to yield gold from this mixture of blood & metal, uncertain of the outcome but almost religiously compelled to consummate.

“The change would be very subtle. It might take ten years or so. Gradually his face would change its shape. A more hooked nose. Wider, thinner lips. Beady eyes. A larger forehead…”

In the modern world, our identities include the social networks & affinity groups in which we participate, the digital media we capture & create & upload, the avatars we wear, and the myriad other fragments of ourselves we leave around the web. Who we are as individuals reflects the unique array of technologies through which we engage the world, at times instantiated through multiple masks of diverse utility, at other times fractured & dis-integrated – too many selves with too many virtual fingers picking at them. Our experience of life is increasingly composed of data & virtual events, cloudy & intangible yet remote-wired into our brains through re-targeted reward systems. A Twitter re-tweet makes us happy, a hostile blog comment makes us angry, the real-time web feeds our addiction to novelty. Memories are offloaded to digital storage mediums. Pictures, travel videos, art, calendars, phone numbers, thoughts & treatises… So much of who we are and who we have been is already virtualized & invested in cybernetic systems. All those tweets & blog posts cast into the cloud as digital moments captured & recorded. Every time I share a part of me with the digital world I become copied, distributed, more than myself yet… in pieces.

broken
CC image from Alejandro Hernandez.

It can be said that while we augment & extend our abilities through machines, machines learn more about the world through us. The web 2.0 social media revolution and the semantic web of structured data that is presently intercalating into it has brought machine algorithms into direct relationship with human behavior, watching our habits and tracking our paths through the digital landscape. These sophisticated marketing and research tools are learning more and more about what it means to be human, and the extended sensorium of the instrumented world is giving them deep insight into the run-time processes of civilization & nature. The spark of self-awareness has not yet animated these systems but there is an uneasy agreement that we will continue to assist in their cybernetic development, modifying their instructions to become more and more capable & efficient, perhaps to the point of being indistinguishable from, or surpassing, their human creators.

“He imagined that this was an ability he shared with most other people. They had also molded their faces according to some ideal. Maybe they imagined that their new face would better suit their personality. Or maybe they imagined that their personality would be forced to change to fit the new appearance…”

In Ridley Scott’s Blade Runner, the young Tyrell Corporation assistant, Rachael, reflects on her childhood memories while leafing through photographs of her youth. These images are the evidence of her past she uses to construct her sense of self. Memories provide us with continuity and frame the present & future by reminding us of our history – critical for a species so capable of stepping out of time. Rachael’s realization that she is a replicant, that her memories are false implants deliberately created to make her believe she’s human, precipitates an existential crisis that even threatens Harrison Ford’s character, Rick Deckard, surrounded as he is by photos of his own supposed past. This subtle narrative trick suggests that replicants will be more human-like if they don’t know they’re replicants. But it also invokes another query: if memories are (re-)writable, can we still trust our own past?

Yet both characters do appear quite human. They laugh and cry and love and seem driven by the same hopes and fears we all have. Ridley Scott’s brilliance – and by extension, Philip K. Dick’s – is to obscure the nature of the self and of humanity by challenging our notions of both. Is Rachael simply another mannequin animated by advanced cybernetics or is she more than that? Is she human enough? When the Tyrell bio-engineer J.F. Sebastian sees the Nexus 6 replicants, Pris and Roy Batty, he observes “you’re perfect”, underlining again the aspirational notion that through technology we can be made even better, becoming perhaps “more human than human”. This notion of intelligent artificial beings raises deep challenges to our cherished notions of humanity, as many have noted. But the casual fetishization of technology, as it gets nearer & friendlier & more magical, is perhaps just as threatening to our deified specialness in its subtle insinuation into our hands & hearts & minds.

mannequin
CC image from Photo Monkey.

In Mamoru Oshii’s anime classic, Ghost in the Shell, the female protagonist – a fully-engineered and functional robotic human named Kusanagi – decries those who resist augmentation, suggesting that “your effort to remain as you are is what limits you”, even as she embarks on a quest to determine whether there might be more to her than what has been programmed. She celebrates her artifice as a supreme achievement in overcoming the constraints of biological evolution while also seeking evidence that she is possessed of that most mysterious spark: the god-like ingression of being that enters and animates the human shell. Oshii’s narrative suggests that robots that achieve a sufficient level of complexity and self-awareness will, just like their human creators, seek to see themselves as somehow divinely animated. Perhaps it’s a way to defend the belief in human uniqueness, but those writing the modern myths of cybernetics seem to imply that while humans aspire to the abilities of machines, machines aspire to the soulfulness of humans.

harlequin
CC image from Alaskan Dude.

“This is why first impressions are often correct…”

Chalk it up to curiosity, the power of design fictions, and an innate need to realize our visions, but if we can see it with enough resolution in our mind’s eye, we’ll try to bring it to life. The Ghost in the Shell & the Ghost in the Machine both intuit the ongoing merger between humanity & technology, and the hopes & fears that attend this arranged and seemingly-unavoidable alchemical wedding. As animals we are driven to adapt. As humans, we are compelled to create.

“Although some people might have made mistakes. They may have arrived at an appearance that bears no relationship to them. They may have picked an ideal appearance based on some childish whim or momentary impulse. Some may have gotten half-way there, and then changed their minds…”

Humans are brilliant & visionary but also impetuous, easily distracted, fascinated by shiny things, and typically ill-equipped to divine the downstream consequences of our actions. We extrude technologies at a pace that far outruns our ability to understand their impacts on the world, much less how they change who we are. As we reach towards AI, the cyborg, the singularity, and beyond, our cybernetic fantasies may necessarily pass through the dark night of the soul on the way to denouement. What is birthed from the alchemical marriage often necessitates the destruction of the wedding party.

cyborg
CC image from WebWizzard.

“He wonders if he too might have made a similar mistake.” – David Byrne, Seen & Not Seen

Are we working up some Faustian bargain promising the heights of technological superiority only for the meager sacrifice of our Souls? Or is this fear a reflection of our Cartesian inability to see ourselves as an evolving process, holding onto whatever continuity we can but always inevitably changing with the world in which we are embedded? As we offload more and more of our selves to our digital tools, we change what it means to be human. As we evolve & integrate more machine functionality we modify our relationship to the cybernetic process and re-frame our self-identity to accommodate our new capacities.

Like the replicants in Blade Runner and the animated cyborgs of Ghost in the Shell we will very likely continue to aspire to be more human than human, no matter how hard it may be to defend our ideals of what this may mean to the very spark of humanity. What form of cyborg we shall become, what degree of humanity we retain in the transaction, what unforeseen repercussions may be set in motion… The answers are as slippery as the continuum of the self and the ever-changing world in which we live. Confrontation with the existential Other – the global mind mediated through ubiquitous bio-machinery – and the resulting annihilation of the Self that will necessarily attend such knowledge, may very well yield a vastly different type of humanity than what we expect.

A Few Recent Developments in Brain-Computer Interface

BCI technology and the convergence of mind & machine are on the rise. Wired Magazine just published an article by Michael Chorost discussing advances in optogenetic neuromodulation. Of special interest, he notes the ability of optogenetics to both read & write information across neurons.

In theory, two-way optogenetic traffic could lead to human-machine fusions in which the brain truly interacts with the machine, rather than only giving or only accepting orders. It could be used, for instance, to let the brain send movement commands to a prosthetic arm; in return, the arm’s sensors would gather information and send it back.
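One crude way to picture that two-way traffic in code: a single channel object that can both read an intended movement out of the brain and write the arm’s sensor data back in. Every class, method, and value below is hypothetical – a sketch of the loop Chorost describes, not any real API:

```python
# Hypothetical two-way neural channel: decode a motor command going out,
# stimulate sensory feedback coming back in. Purely illustrative.
import random

class OptoChannel:
    def read_intent(self):
        # stand-in for decoding a movement command from neural activity
        return {"joint": "elbow", "angle": random.uniform(0, 90)}

    def write_feedback(self, pressure):
        # stand-in for photostimulating sensory neurons with touch data
        print(f"stimulating sensory afferents: pressure = {pressure:.2f}")

channel = OptoChannel()
command = channel.read_intent()           # brain -> prosthetic arm
print(f"moving {command['joint']} to {command['angle']:.0f} degrees")
pressure = random.uniform(0.0, 1.0)       # the arm's touch sensor reading
channel.write_feedback(pressure)          # prosthetic arm -> brain
```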

In another article featured at IEEE Spectrum, researchers at Brown University have developed a working microchip implant that can wirelessly transmit neural signals to a remote sensor. This advance suggests that brain-computer interface technologies will evolve past the need for wired connections.

Wireless neural implants open up the possibility of embedding multiple chips in the brain, enabling them to read more and different types of neurons and allowing more complicated thoughts to be converted into action. Thus, for example, a person with a paralyzed arm might be able to play sports.

MindHacks discusses the recent video of a touch-sensitive prosthetic hand. This is a Holy Grail of sorts for brain-machine interface: the hope that an amputee could regain functionality through a fully-articulated, touch-sensitive, neurally-integrated robotic hand. Such an accomplishment would indeed be a huge milestone. Of note, the MindHacks appraisal focuses on the brain’s ability to re-draw its body maps (perhaps due to its plasticity).

There’s an interesting part of the video where the patient says “When I grab something tightly I can feel it in the finger tips, which is strange because I don’t have them anymore”.

Finally, ScienceDaily notes that researchers have demonstrated rudimentary brain-to-brain communication mediated by non-invasive EEG.

[The] experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person whose computer receives the digits and transmits them to the second user’s brain by flashing an LED lamp… You can watch Dr James’ BCI experiment on YouTube.
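The “series of binary digits” is the least exotic part of that pipeline, and worth seeing plainly. A sketch of just the encoding step, with the neural reading and LED stimulation stubbed out entirely:

```python
# Encode a message as the bitstream such an experiment would transmit,
# then decode it on the receiving side. The BCI hardware is stubbed out.
def to_bits(text):
    return "".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits):
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = to_bits("hi")     # what the sender's BCI would emit, bit by bit
print(bits)              # -> 0110100001101001
print(from_bits(bits))   # what the receiver's flashing LED would spell out
```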

One can imagine a not too distant future where the brain is directly transacting across wireless networks with machines, sensor arrays, and other humans.

The Co-Evolution of Neuroscience & Computation


Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.
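To make the graph-theory angle concrete, here’s a sketch of the kind of measure involved, using networkx: progressively rewiring an ordered lattice toward randomness changes its clustering and path length, the sort of signature one could compare against patient networks. The graphs below are synthetic stand-ins, not EEG data:

```python
# Compare ordered vs. randomized networks with two standard graph measures.
# Synthetic Watts-Strogatz graphs stand in for real brain networks.
import networkx as nx

for p in (0.0, 0.1, 1.0):  # rewiring probability: lattice -> small-world -> random
    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=p)
    print(f"p={p:.1f}  clustering={nx.average_clustering(G):.3f}  "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")
```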

The interdependence of the two fields has been most recently illustrated by the announcement of the Blue Brain Project, which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain, which will, in turn, yield roadmaps toward designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain-computer interface. As the two sciences draw closer and closer together, they will inevitably interact more directly and powerfully, as each domain adds value to the other and the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality, dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative toward programming our bodies, enabling us to exercise unprecedented levels of control and augmentation over our physical form, and further reveal the fabric of mind.

The Transhuman Gap

[Cross-posted from Signtific Lab.]

While most would support using technology to allow paraplegics to walk again, to help the blind see and the deaf hear, how will society view those who electively enhance themselves through prosthetics & implants?

Consider the not-so-subtle marginalization of transhumanists who believe that technology should be readily integrated into human biology, experimenting with their own crude body modifications. Or the implications for personal security and privacy (not to mention religious fear) raised by those intrepid folks who are self-implanting RFIDs into their forearms to activate lighting & appliances when they enter their homes. Even the international debates over performance-enhancing drug use by athletes reinforce the cultural belief that a “natural” baseline range exists for human abilities, and that any “synthetic” modification beyond the accepted range is unfair.

From issues of fairness to those of security and trust, integrating more machinery into a programmable nervous system challenges many of our fundamental notions of what it means to be human. When a Marine returns from a warzone patched up with a cochlear implant, how will they be regarded when it’s revealed that they can hear you speaking from 3 blocks away? If that person then joins the police force, what issues of civil liberty and privacy might be confronted? How might we regard an employer that suggests each employee be programmed with software to bring them into the corporate Thinkmesh?

How does society’s regard for a technology change when that technology becomes part of our bodies? How does our relationship to people change if we know they are different? What competitive advantages are conferred by these technologies and how will they be reinforced by socioeconomic drivers? What gaps might arise between those able to afford augmentations and those who cannot?

And what becomes of the Platonic sense of one fundamental Reality when more & more people are seeing personalized variations of the world mediated by connected devices? Will the merging of technology & flesh enable a more cohesive & effective society or a more fragmented and divisive one?

Thus far humans have worked from a standard body map that allows us to understand ourselves and project that understanding onto all other members of our species. We will likely bring both our sense of membership and our fear of otherness with us as we begin to internalize machines unevenly across cultures.

[See also 5 Dark Scenarios For Trans-humanity.]

Direct Brain-Computer Interface Will Require a New Language of Interaction

[Cross-posted from Signtific.]

When Apple Computer recently released the 3.0 version of its iPhone OS, one of the most anticipated new features was Cut & Paste. This simple task has been a staple of computing since GUIs first became part of the OS, so why did it take Apple until its 3rd OS version to implement the feature for the iPhone?

As Apple tells it, there was considerable deliberation over how best to design the user experience. This is, after all, the first and only fully multi-touch mobile computing device. Apple has been meticulously developing and patenting the gestural language through which users interact with the device. Every scroll and pinch, zoom and drag is a consciously designed gesture adding to Apple’s growing lexicon of multi-touch interface. Implementing Cut & Paste meant designing the most accessible gestural commands within the narrow real estate of the mobile screen – a substantial challenge.

Now, consider interacting with the same content types available on an iPhone or anywhere in the cloud, but remove the device interface and replace it with a HUD or direct brain interface. If the content is readily visible, either as an eyeglass overlay or directly registered in the visual cortex, how do we give a UI element focus? How do you make a selection? How do you scroll and zoom? How do you invoke, execute, and dismiss programs? Can you speak internally to type text? How might a back-channel voice be distinguished from someone standing behind you? How do you manage focus changes between digital content and the visual content of the real world when both are superimposed?

The fields of Human Computer Interaction and User Interface & Experience Design address these challenges for interacting with digital content and processes, but what new interaction modalities may be developed to better interface humans and computers? As we internalize computation and interaction, the disciplines of HCI & BCI will begin to interpenetrate in ways that may radically alter the conventions of the Information Age.
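One way to frame that interpenetration in code is to separate the user’s intent from the channel that expresses it, so a pinch, a gaze dwell, or a decoded neural signal can all invoke the same action. A hypothetical sketch – every name here is invented:

```python
# Hypothetical sketch: decouple interaction intents from input devices.
# A touch gesture, an eye tracker, or a BCI decoder would all emit Intents.
from enum import Enum, auto

class Intent(Enum):
    FOCUS = auto()
    SELECT = auto()
    SCROLL = auto()
    DISMISS = auto()

HANDLERS = {
    Intent.FOCUS:   lambda target: print(f"focus -> {target}"),
    Intent.SELECT:  lambda target: print(f"select -> {target}"),
    Intent.SCROLL:  lambda target: print(f"scroll -> {target}"),
    Intent.DISMISS: lambda target: print(f"dismiss -> {target}"),
}

def dispatch(intent, target):
    HANDLERS[intent](target)

dispatch(Intent.SELECT, "paragraph 3")    # from a multi-touch tap today...
dispatch(Intent.DISMISS, "notification")  # ...or from a neural decoder tomorrow
```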

Bangkok & the Future

I’ll be checking out for a few weeks while traveling in Asia (w00t!). I may Twitter occasionally but will likely do no blogging (going analog – Moleskine). When I return in July I’ll be working with the Institute for the Future as a Visiting Researcher. In this capacity I’ll be contributing to their 2009 Technology Horizons Research Program. More info on that to come but suffice it to say I am extremely stoked on this development. A second w00t!

Best to all!

Chris

DARPA Thinkbots Talk Data Stories

This is the most interesting thing I’ve read in a while. DARPA is using smart-agent algorithms to crunch heavy data sets and convert them into human-grokable narratives. Before long such agents will be living on our desktops, mobile devices, cars, and appliances, actively interpreting innumerable datastreams rendered to transparent screens and spoken through earbuds.

“Like people,” Darpa notes, such a story-telling system would be able to “retrieve and reuse stories to construct an appropriate interpretation of events… because they convey the aspects of a situation that are most important in determining a decision.”

Darpa hopes to have this Experience-based Narrative Memory (EN-Mem) system make “complex situations… simple, understandable, and solvable.”

…Making sense of a complex situation is like understanding a story; one must construct, impose and extract an interpretation. This interpretation weaves a commonly understood narrative into the information in a way that captures the basic interactions of characters and the dynamics of their motivations while filling in details not explicitly mentioned in the input stream. It uses story lines with which we all have experience as analogies, and it simplifies the detail in order to communicate the crucial aspects of a situation. The story lines it uses are those the decision maker should be reminded of, because they are similar to the current situation based upon what the decision maker is trying to do.
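The retrieval idea at the core of this is old and simple, even if DARPA’s actual system is surely far more sophisticated. A toy version of “retrieve and reuse stories” – score stored narratives against the current situation by word overlap and reuse the closest match:

```python
# Naive case-based retrieval: reuse the stored story that best overlaps
# the current situation. A toy illustration, not DARPA's method.
stories = {
    "supply convoy ambushed at river crossing": "reroute and screen the flanks",
    "market crowd grows agitated near checkpoint": "reduce posture, open dialog",
    "power outage precedes coordinated raids": "harden depots, watch the grid",
}

def retrieve(situation):
    words = set(situation.lower().split())
    return max(stories, key=lambda s: len(words & set(s.split())))

current = "crowd agitated at the market checkpoint"
best = retrieve(current)
print(f"reminds me of: {best!r} -> {stories[best]}")
```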

Twittering Analysts Invoke the Singularity. News at 11.

As with much of the digital world, corporate transparency is greater now than it ever has been. Witness yesterday’s Adobe Analyst Meeting – a closed-door, invite-only industry event at which analysts of all stripes were treated to Adobe’s financial strategy for the year to come. Within those exclusive walls, many industry agents were typing away on laptops and mobiles, but they weren’t just live-blogging or recording notes for a report or article to be edited by their gatekeepers and published later. They were also broadcasting SMS messages to the masses in real-time through Twitter, micro-blogging their instantaneous thoughts, reactions, and sub-channel conversations to thousands of vicarious third-parties.

These raw feeds are perhaps a much more accurate representation of such events – or at least constitute a valuable nuance to the conversation – but their true merit is in their subversive tunneling to freedom through the garden walls, broadcast to the masses. I was annoyed that I couldn’t attend my own company’s briefing, but then I got a lot of the meat from trolling the analyst tweets. This raises numerous issues. Should the company defend the tower and let me get the info second-hand through the emotional filters and bullshit detectors of the invitees? Or is it in their interest to include me and the rest of the public so they at least have a better shot at controlling the message? Is there value in creating such walled gardens in the first place if anyone can breach your security with a simple 140-character message? Is it cost-effective? Do companies impose checkpoints to remove potentially threatening mobile devices? Can you trust people to stick to the talking points, or do you allow that the genie is out of the bottle and that the natural process of selection will actually help your company do a better job? Transparency and democratized digital broadcast are crowdsourced quality control – a natural feedback mechanism for regulating the evolution of ideas.

These days, if an exclusionary body refuses to share beyond the in-crowd, at least one of those insiders will probably share it with the world. Information is free and the closed companies see their brand suffer as they try in vain to crush the dissenters on a global and very public stage. Their insular reporting hierarchies inevitably ensure that the same ideas and strategies eventually become recycled again and again, and that the truth is filtered through the instinct of self-preservation. Secrecy is like evolution in a vacuum or asexual reproduction. There is little pressure for real change beyond the cold, hard truth of the quarterly earnings report.

Is it even possible to keep secrets anymore? Do you remember all the conspiracy theories you read about in college? Have you noticed that most of them have now been recorded as historical fact? Have you considered that within 10 years the majority of elected officials will have public digital paper trails stretching across the fabled Information Superhighway? And there will be bands of savvy developers eager to crunch the data from those paper trails and render them in pretty visualizations that show just exactly how honorable/charitable/pious/two-faced/depraved your future senator really is.

Even the analysts are known, willingly opting in to the public timeline of Twitter. All of their names are published at Sage Circle for anyone to see and follow. In fact, in order to really productively use many of the new open social tools & services, the user is highly incentivized to opt in to their own public transparency. Everyone who wants to speak with enough power to reach the masses (or at least a few handfuls of them) must embrace the open platform. And if you’re a professional, you need to use your real name. Therein lies the rub: to be competitive, businesses need to have their product managers, their evangelists, their analysts, idea makers and trend-setters all dialed in to the social web. Communication, sharing, and an openness to feedback from your users are becoming crucial for the corporate body to humanize and interact with the eyes of the world. Effective product development must include the people buying your product, otherwise you end up designing for imagined ghosts. Hence the increasing migration of analysts and audiences to Twitter. Then, as a company, you end up with your intelligence agents working for you but writing to their audience. And you have an empowered audience that’s publicly-yet-privately back-channeling their loathing of your corporate shill right in front of them, like the now-legendary and immediately ground-breaking SXSW smackdown of Tara Hunt.

Like journalists, analysts are no longer totally bound by allegiance to their lords nor to the companies they scrutinize. They become like moonlighting Ronin. They broadcast to the world from a niche stardom and semi-famous personhood that carefully (or not-so-carefully) balances the party line against the ratings of the viewers. In the face of even limited fame and empowerment, how does company loyalty measure up to increased outsourcing and diminishing employee perks? All life, it seems, will bend toward the viewership, simultaneously revealed and true, yet inevitably influenced and state-shifted by 5 or 6 billion eyes and the inescapable quantal fact of Heisenberg’s Uncertainty. In a totally measured and watched world, is Truth just a state of observation, a sufficiently-probable collapsing of the waveform undergoing the formality of actually occurring, to paraphrase McKenna quoting Whitehead? The soul becomes visible as the mind manifests to all eyes.

Information – Truth, whether it exists fundamentally or is just a state of mind – indeed wants to be free, and this fundamental law works through the human species and the technologies we extrude. We are still animals and our tools must help us adapt and thrive. This is clearer now than ever as our actions leave deeper and deeper footprints across the digital terrain we walk. We are being recorded and we are recording, capturing more and more facets of our human experiment written onto spinning platters like prayer wheels in the virtual breeze. The New Journalism will find even the most exclusive events, the narrowest niches, the darkest secrets and the most banal subcultures and capture them, radiating out to the digital world into the very Akashic Record of Our Times. Life is the new media, rich in all its texture, drama, subterfuge, and transcendence. As the military struggles with soldier bloggers, embedded third-party reporters, wired insurgencies, and the ever-present satellite feeds waving down from far up above with just a passing glint of sunlight, the injustices and atrocities wrought by man & machine are cataloged equally alongside silly cat pictures, personal bios, frat videos, copyright violations, knowledge wikis, satellite imagery, and reams & reams of pornography. All acts are caught and surveyed by the one unblinking eye, like Sauron or the Illuminati or the gaze of God.

The world is getting much smaller and simultaneously incredibly huge and diverse. Global instability will be balanced by local resilience, and hierarchical corruption will struggle against networked transparency. CCTVs will merge with YouTube & reality TV, and life will reveal itself on a scale never before known. The cloud is breaking out of the browser and out of our servers, spreading to mobile devices and HUD overlays, objects & artifacts. Reality will be radically augmented, participatory, and unbounded. We will fragment and unite, solve et coagula. And we will tweet as we go, televising & recording the revolution for all to witness.

Second Life Avatar Controlled By Thoughts of Paraplegic

I have a lot of issues with Second Life – mostly because I’m frustrated by their potential and their seeming inability to act on it – but it’s nevertheless an interesting sandbox to explore the greater frontiers of virtual immersion and social ontology. To this end, Japanese researchers have wired up a Second Life avatar to respond to the thoughts of a paraplegic.

…he wore headgear with three electrodes monitoring brain waves related to his hands and legs. Even though he cannot move his legs, he imagined that his character was walking.

He was then able to have a conversation with the other character using an attached microphone, said the researchers at Japan’s Keio University.

…”In the near future, they would be able to stroll through Second Life shopping malls with their brain waves… and click to make a purchase,” Ushiba said.
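The machinery described is roughly: estimate power in a movement-related frequency band from the electrodes, then threshold it into an avatar command. A synthetic-signal sketch of that idea – the frequencies, threshold, and sample rate are all invented, and real motor-imagery decoding is considerably subtler:

```python
# Toy motor-imagery decoder: measure band power in a synthetic EEG trace
# and threshold it into an avatar command. All parameters are invented.
import numpy as np

fs = 256                                   # samples per second
t = np.arange(fs) / fs                     # one second of signal
eeg = 0.8 * np.sin(2 * np.pi * 11 * t) + np.random.normal(0, 0.5, fs)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
band_power = spectrum[(freqs >= 8) & (freqs <= 13)].mean()   # mu band
baseline = spectrum.mean()

command = "walk" if band_power > 2 * baseline else "stand"
print(f"band power {band_power:.0f} vs baseline {baseline:.0f} -> {command}")
```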

Convergence and Continuity Across Virtual Worlds

In games, immersive worlds, forums, social networks, and blogs we inhabit multiple selves. In most cases, these virtual spaces are walled islands with little relation between them. Increasingly it’s becoming apparent that continuity is necessary to resolve these fractured selves and to open up channels of communication between the diversity of online containers. This can be seen in the new wave of web 2.0 aggregators like FriendFeed and Plaxo that aim to collate our myriad profiles, friends, and content streams into a single portal. Now, Technology Review reports that several companies are working to enable avatars to move between virtual worlds.

More and more, such affordances will move into virtual spaces. 2D content streams and communication pipelines will feed into and across immersive worlds. A WoW player should be able to call up a HUD console in the game and locate their friends across all of the virtual worlds they’re currently in. They should then be able to communicate with them through IM or VoIP and subsequently transport to join them in another world. GTA4 has announced a feature to allow users to call each other in-world using the game cell phone. Shouldn’t this extend across game worlds and out into real-world mobiles? APIs could evolve to mine user communications (Twitter in WoW?) and chart locations on world maps. In the age of digital society, findability is key.

The vast amounts of personal profiling we’re building up around ourselves in MySpace, Facebook, blogs, and other forums should be accessible through our avatars and from all places we inhabit, virtually and in reality. It should be present in our devices and our profiles. As avatars, it should follow us like a digital skin (secured and opt-in, of course) layered in transaction-appropriate trust profiles that fly out on mouseover. My avatars should contain more information than just polygons and scripted motions. Social transactions are information exchanges. My LinkedIn profile should be accessible to anyone, in 2D and 3D, if I so desire.
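Sketching that “digital skin” as a data structure makes the opt-in, trust-tiered idea concrete. Everything below – field names, tiers, identifiers – is hypothetical:

```python
# Hypothetical portable avatar profile: one identity, with fields revealed
# in tiers according to the viewer's relationship to the owner.
avatar_profile = {
    "id": "urn:avatar:chris23",            # an open, cross-world identifier
    "tiers": {
        "public":  {"handle": "chris23", "tag_cloud": ["futures", "ubicomp"]},
        "contact": {"twitter": "@chris23", "sms": "+1-555-0100"},
        "trusted": {"linkedin": "example.com/in/chris23", "credit_ref": "…"},
    },
}

def visible_fields(profile, relationship):
    """Return the union of tiers the viewer is entitled to see."""
    order = ["public", "contact", "trusted"]
    allowed = order[: order.index(relationship) + 1]
    return {k: v for tier in allowed for k, v in profile["tiers"][tier].items()}

print(visible_fields(avatar_profile, "public"))    # what a stranger sees
print(visible_fields(avatar_profile, "contact"))   # what an acquaintance sees
```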

The realness of immersive worlds should leverage the fundamental reality of our digital profiles and interests. If these platforms are going to become truly compelling, they must work to integrate the APIs, content streams, and communication channels of the web 2.0 revolution. We’re in the midst of a completely unprecedented historical shift as all of our cultural and intellectual content goes digital, made manifest in searchable, findable, and persistent datalogs. The profiles we create around our virtual selves are growing larger and larger, and they are being recorded and left open for many eyes to see. Imagine the political candidates running 10 or 15 years from now. So much of their lives will be a matter of public record, easily searchable and graphed out to show affiliations, donations, histories and contradictions. So much of who they are will live online like a shadow. So much of who we all are.

Virtual worlds are poised to engage directly in this shift and draw culture and identity into their domain. Instead of closed platforms, worlds like Second Life must open up and grow to become contiguous spaces whose character arises from the types of people that choose to gather there by affiliation, interest, and intention. MMORPGs like WoW will continue to offer highly crafted narratives, specialized social groups and hierarchies, and bleeding-edge rendering tech, but will acknowledge the tremendous personal content within each player distributed across their digital and analog lives.

Of course, if virtual platforms become more open, their business models will inevitably shift toward advertising. Space is space, whether 2D, 3D, or 4D, and eyes are eyes, especially when they gather in great enough concentration. As in the real world, the exchange of goods and services will always be of great value in any domain, so the shift toward continuity will be a shift toward reality. Virtual worlds have the unique proposition of creating fantasy within the world of life, so the shift toward reality in the context of a realized fantasy brings both closer together. It is part of the alchemical formula of bringing spirit into matter. It is the power of gods to create in an unlimited universe. It is the movement of the ghost in the machine as our real selves grow more and more to include virtual, digital, non-local aspects of identity and presence. Who am I but the sum of my transactions with the world? These words I’m writing and posting on the global billboard become preserved bits of my self. Your interactions with them extend my identity into the virtual world. All my words are facets of my expanding digital identity. My self-reflection extends from my body, my deeds, and my actions toward others around me to include the ideas and statements I leave online, the avatars I inhabit, and the webs of disembodied people I associate with. In 100 years I may roll up in bits under some social anthropologist’s data-mining PhD, nudging their graphs this way or that with my tweets and posts.

Aggregation of social data serves the very practical role of making it easier for us to manage an increasingly vast amount of data, but it also serves the larger role of helping us defragment our sense of self as it fractures across so many new digital domains rising and falling daily. If we’re to walk like new gods through worlds both real and virtual, shouldn’t we do so with as much wholeness as possible? In a world that’s made it so challenging to have a fully integrated psyche, it’s imperative that we lay down a strong foundation of holism and continuity as we move into the unfettered vastness of the digital noosphere. As strong, cohesive selves we can better wear the masks of avatars and wield the power of virtual gods.

A Little Virtual Spice Please

To briefly elaborate on an earlier post about Second Life… And specifically, ways in which I believe a modern 3D immersive world can leverage the new wave of cloud tech and create a truly compelling experience:

I want downtown billboards streaming Twitter feeds, rich dataviz, global network traffic, weather patterns, Flickr streams, and cycling media channels. I want to DJ from Traktor directly into a virtual club. I want interactive music and video remix tools that include the world as a substrate. I want to endow my avatar with metadata callouts, grouped in trust profiles, that display my affinities, affiliations, tag cloud, LinkedIn profile, SMS number, Twitter ID, and credit accounts as appropriate to those I meet. I want to be free to re-purpose 3D assets from 3DSM, Maya, and Sketchup into my worldspace. I want a beautiful living homeworld that gathers the populace and inspires users and developers to create their own content elsewhere on distributed servers. I want to join friends on a virtual hilltop and watch the clouds drift past, watch the sun set, and the moons rise. I want to get lost in emergent behaviors, intelligent agents, and the beauty of physical dynamics. I want to easily find friends across multiple servers, across social nets, and out into mobile, GSM, and phone networks. I want an open-standard, opt-in, cloakable virtual ID that can be searched for and found across all dominant gaming and immersive networked worldspaces – and then when I find my friend I want to be able to join them wherever they are. I want peer-to-peer drop-boxes and back-channels that can address files to dominant industry and open-source applications, then back to in-world interfaces. I want an in-world, heads-up fly-out phone/SMS/notepad/web-browser overlay that’s data-synched to my mobile phone. I want to stumble into sinuous plotlines that sweep me away to distant parts of the virtual world. And yes, I want an SDK that allows EA to stick the Tony Hawk trick and physics model into a nice binary that can be purchased and installed into my client so I can skate around the place. And yes, I will try to grind your avatar if you have any linear edges sticking out.

I’m totally dreaming, I know. But dreams are what the future is built upon.

Second Life CEO Rosedale to Step Down

Second Life creator and CEO Philip Rosedale announced he will cease his role as CEO of Linden Lab. He states that he will replace Mitch Kapor as chairman and stay committed to SL full-time as its primary visionary. No word on Kapor’s next move.

Rosedale is definitely more suited to the new role, as Second Life has thus far failed to capitalize on its hype and advance its platform. The world is dated and has been unable to realize its own visionary goals. They’ve generated a decent amount of revenue but have not used the income to grow the platform in any truly compelling way. Their fundamental model – a grave failing point for many people eager to move their endeavors into 3D – assumes that people would rather do everything in an immersive world. But the simple fact is that chat, business meetings, online learning, and ecommerce are all far more functional on the flat 2D web. Even advertising loses its appeal when your virtual world only supports 100 or so avatars in any one space at a time.

For SL to succeed I believe they need to do the following:

1) Completely re-engineer the scenegraph to catch up with the immersion and realism of modern gaming platforms.
2) Hire content developers whose sole task is to create a rich, detailed, and compelling world.
3) Rewrite the entire UI, highlighting basic navigation, rich user profiles, and social affordances.
4) Focus on user affordances. An avatar should essentially be a living MySpace/Facebook/LinkedIn object.
5) Create engaging narratives that users can easily and unexpectedly slip into. Imagine ARGs being played out in SL.
6) Break the walls of the Second Life by wiring it up to the First. Avatars should be able to easily send and respond to SMS and email. If I buy a new jacket at G-Star, I should also get a virtual copy for my avatar. Cross-channel communication and cross-promotional opportunities.
7) Scale down the virtual economy. The WoW economy is an emergent property of life in the Warcraft world. It should be the same for SL, not the primary business model.

The most compelling possibilities of immersive worlds are socialization, narrative, and realism, not trade and property ownership. Linden has sacrificed the former for the latter, in my opinion.

Of course, the obvious move will be for Google to buy SL and port it into Google Earth. This may be exactly what the Linden investors are hoping for by bringing in a new CEO. Or more likely, they will move further down the road of monetization through in-world advertising.

[Update] One of the primary third-party developers for SL, Electric Sheep, has laid off 22 of its SL content creators. Blood in the water?