Category: ghost in the machine

FastCo – swarming robotics and autonomous systems


From a new article of mine over at Fast Company – Inside The Near-Future World Where All Our Data And Machines Are In Constant Communication

This near-future is already unfolding—and it’s enabled by the convergence of a suite of technologies that have become cheap enough and powerful enough to work their way into the hardware of our lives. High-speed LTE wireless networks are nearly ubiquitous in most developed regions, connecting smart objects to each other and to remote services. These networks, combined with GPS and beacons, enable precise telemetry—the sharing of location, trajectories, and waypoints across transportation networks. Sensors have become much more sophisticated, miniaturized, and affordable, enabling devices at the edges of networks to scan and capture reality with tremendous fidelity. They pair with powerful computation riding the seemingly endless arc of Moore’s Law to crunch volumes of real-time data and turn it into analytics, predictive models, and algorithmic corrections.

This is how the brains of the Industrial Internet are forming, leveraging data from networks and sensors to model the world, evaluate contexts, predict outcomes, and respond and adapt to feedback. Now, these young capabilities are beginning to animate vehicles and ships, aircraft and robots. And, as we’ll see, they’re starting to socialize and collaborate.

Algorithms are smart but they’re nowhere near intelligent

Image from iRobot.

Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, “read” is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.

A recent comment from Douglas Hofstadter regarding the current state of AI.

We don’t yet understand how brains work, so we can’t build one.

Jaron Lanier

Of course, it may be that our anthropomorphic maps for sentience and intelligence will prevent us from spotting a different kind of networked machine intelligence…

Cybernetic jurisdictions and the Things of Internets

She got a TV eye on me…

Walled gardens are jurisdictions that exercise control over behaviors. Facebook determines what constitutes acceptable speech. Apple determines what applications are fit for public consumption. Google determines who has access to your data exhaust. When we each accept their TOS, we effectively opt in to their legal system, yielding to further arbitration whenever their lawyers or marketing teams or data scientists change the writ again. Most of us don’t even read the fine print. And yet, as long as we’re within the garden, we’re bound by its laws.

The Internet of Things is much more than just a buzzword and it’s instructive to consider what it means for these digital jurisdictions. The walled gardens are pushing into physicality, where they’ll likely further encircle us with their control structures [and I’m trying to use the term “control” in the cybernetic sense but it’s hard not to see the political angle as well]. Platform owners will be able to govern not just in digital gardens but across the physical world. Wearables, embedded systems, and the emerging realm of machine perception/learning empower these gardens to grow across the landscape – watching, mediating, and correcting.

That’s not to say we’re headed for ruin (I’m too much of an optimist), but it raises important considerations when the devices we carry register us on innumerable invisible networks through which we pass, and those networks analyze us and provision our relationships to the digital and physical world and to the many stakeholders focused on our behaviors. It’s not hard to imagine how geofencing becomes actual fencing, for example, revoking access based on whatever data transactions are happening between us and the many voices in the cloud. Soon enough, context and prediction will rise as the next wave of cybernetics, granting greater agency to the algorithms deputized on our behalf.
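To make the geofencing worry concrete, here's a minimal, hypothetical sketch (the function names, fence coordinates, and "permitted" policy flag are all mine for illustration, not any platform's actual API): a fence is just a center point and a radius, and access becomes a boolean gate on where you are plus what some stakeholder's policy says about you.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def gate_allows(device_lat, device_lon, fence, permitted):
    """A fence is (lat, lon, radius_m); access is granted only if the
    device is inside the fence AND the stakeholder's policy permits it."""
    lat, lon, radius = fence
    return haversine_m(device_lat, device_lon, lat, lon) <= radius and permitted

fence = (37.7749, -122.4194, 100.0)  # hypothetical 100 m fence

print(gate_allows(37.7749, -122.4194, fence, permitted=True))   # inside: True
print(gate_allows(37.7849, -122.4194, fence, permitted=True))   # ~1.1 km away: False
```

The unsettling part isn't the geometry, which is trivial; it's the `permitted` flag, which some remote Terms of Service gets to flip.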

We typically want our applications to be smarter and to better assist us but these things take on very different characteristics when they begin to interpenetrate with the physical, beyond our direct reach. When they’re matched to machine vision and learning systems and robotics and actuators, and when they invite platform owners and stakeholders to encode laws and Terms of Service into the built environment, when they’re always-on in the background of our lives, watching – these are no longer applications that we invoke. They’re the fabric in which we live.

Of course, it will still be a battle of jurisdictions, of subsets and super-sets, of laws and contracts. And there will probably be algorithms whose sole task is to arbitrate between them all. But it’s an odd thought to imagine how the platform wars might engage with us and our market share when their gardens are growing in our cities.


MySciFi: The Emissary of Nommo

Maintaining a watchmaker’s delicate precision, Moseek fiddles with the joint under Nassam’s wing, humming to himself as he loosens it just enough to expose the port. He slips the tube in with a slight hermetic squeak and then initiates the feeder pump.

Nassam shivers and lets out a sort of gurgling squawk. The falcon is used to this but doesn’t particularly enjoy the process as the cold metallic carbonation of sange works its way through his vasculature. Exhaling a volume of musty smoke, Moseek puts down the old stained pipe, wipes his dark, wrinkled hands on a cloth, and rubs a bump behind his left ear to initiate the pairing sequence.

It’s easier with his eyes closed. The periphery narrows and sharpens into impossible detail, the colors shifted and slightly muted across a much wider visual spectrum showing him parts of the world occulted by typical human sight. Nassam shares odd bird thoughts with his friend, memories of flight and the desire to hunt, the pairing allowing them to join in this internal space, each self still individuated and yet overlapping in a cold, slightly-prosthetic intimacy.

After their brief inner greeting Moseek initiates the tuning kit. His view of Nassam’s optic feed blurs behind an array of alpha transparencies representing the sange interface. He moves through a set of viz showing various physical stats, runtime exceptions, and waypoint logs now streaming from the bird. Opening a new module, he uploads the package to its container. His humming returns, rising with intensity through the tonal melodies, something old and sad and vast. He binds the package, extracts its contents, and executes the program.

Nassam begins to shake erratically, loosening small feathers into the dimly lit air of the hut. With the sudden shifting of Moseek’s feet, puffs of dust stir in the narrow sunbeams cutting through cracks in the mud walls. The sweat beading his brow is running muddy and tan. Now panting uncontrollably, Nassam lets out a guttural squawk followed by a very unsettled droning. The bird of prey is scared and losing control. Moseek fights back his own autonomic response as his breath quickens and his hands begin to shake. His heart is pounding so loud it seems to boom in the space between them. Through the shared cascade of hormones and adrenaline he struggles to maintain the interface, rapidly adjusting parameters to combat Nassam’s stress while modifying the properties of the new program binding directly to the falcon’s nervous system. In the hut his hands wave in furious gestures grabbing at invisible objects. The humming breaks free of Moseek’s lips and rises into full-throated vocalization of the ancient songs passed to him by the ancestors, their movements and intonations now paired with macro functions driving the constructs. Like a conductor, he works the virtual interface running on Nassam’s wetware with deliberate passion and a divine providence born of faith and faith alone.

The great bird is still shaking but he’s finding a rhythm as the upgrade settles in and seeks homeostasis. The rush of user interface begins to subside showing only a few fundamental metrics. Their small mud hut resolves finely in Nassam’s optic channel as Moseek hums the bird’s name calmly and tenderly, placing his hand softly on the back of his wet, feathered neck.

For a moment of eternity they merge souls and fall into emptiness together through the shared un-space of self.

Running With Machine Herds

Continuing its annual tradition of walking the lines between genuine social goodness and highfalutin’ techno-utopianism, the TED2013 conference kicked off this week in Los Angeles. Gathering together some of the brighter minds and more well-heeled benefactors, attendees come to tease apart the phase space of possibility and to take a closer look at how we consciously examine and intentionally evolve our world. Among the many threads and themes, one in particular tugs deeply at both aspirational humanism and existential terror.


Machine Aesthetics Video – Robot Readable World

BERG creative director Timo Arnall has published a video collecting “found machine-vision footage”. In his words:

How do robots see the world? How do they gather meaning from our streets, cities, media and from us? This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye.

I think it gets particularly poignant about 4 minutes in, when the face tracking & recognition alphas make human TV hosts into odd, simplified caricatures, at once de-humanizing the hosts while betraying the limited sophistication of machines, like children trying to capture the world in colorful crayons. Bonus points for the creeping irony of machines learning about humans through TV.

Robot readable world from Timo on Vimeo.

A Few More Notes on Machine Aesthetics

Olympus glitch, from Year of the Glitch

Scott Smith has a nice article about Our Complicated Love-Hate Relationship With Robots, exploring how robotics have been seeping into the public dialog of late. A couple of the links he cites were good reminders of previous work looking at the aesthetics of machine perception, notably Sensor-Vernacular from the fine folks at BERG and The New Aesthetic Tumblr by James Bridle.

If humanity is a reflection on the experience of perceiving and interacting with the world, what role does machine perception play in this experience? And if nature acts through our hands, to what ends are flocking drones and herds of autonomous machines? A taxonomy of machine perception seems necessary to understand the many ways in which the world can be experienced.

New Aesthetics of Machine Vision

I’ve grown fascinated by the technology of machine vision, but even more so with the haunting aesthetics captured through their eyes. There’s something deeply enthralling and existentially disruptive about the emergence of autonomous machines into our shared world, watching us, learning about us, and inevitably interacting with each other. It’s like a new inorganic branch of taxonomy is evolving into being. Anyway, two recent notes on this topic…

The first is this short series of images taken from a UAV and featured in the ACLU report, Protecting Privacy From Aerial Surveillance [PDF]. There’s a decent summary of the report at the New York Times.

Makes me think of Ian McDonald’s excellent novel, Brasyl, and the ad hoc induction of Our Lady of Perpetual Surveillance into the extended canon of casual Orishas.

The second item of note is this haunting video of a 3D Scanner wandering the streets of Barcelona. It’s not any sort of smart machine – it’s just a dumb handheld scanner hitching a ride on a creative human – but it again evokes the aesthetic of a world seen through eyes very different from our own. The video really grabs me about a minute in:

alley posts from James George on Vimeo.

It seems to show a bizarre ghost world or a glimpse from another dimension into ours. The aesthetic (and the tech) is similar to LIDAR, which I had the luck to play around with a couple years ago – and which Radiohead employed to a very interesting end:

In some ways, I want to see these visions as analogous to the view through a wolf’s eyes in the 80’s flick, Wolfen (at 0:24 in this trailer):

Seeing through the eyes of machines isn’t especially new but it’s the awareness of the many adjacent, convergent technologies of pattern recognition, data analysis, biometrics, autonomous navigation, swarming algorithms, and AI that adds pressure to the long-held notion that machines might someday walk our world of their own accord. It seems much closer than ever before so it’s fascinating to watch the new aesthetics of machine vision move into the popular domain.

Top Post Round-Up: OWS, Ubicomp, Hyperconnectivity, & Transhumanity

I’ve just returned from a very interesting workshop in Washington, D.C. about fast-moving change, asymmetric threats to security, and finding signals within the wall of noise thrown up by big data. These are tremendous challenges to governance, policy makers, and the intelligence community. I’ll have more to say on these topics in later posts but for now, here’s a round-up of the most popular posts on URBEINGRECORDED in order of popularity:

Occupy Wall Street – New Maps for Shifting Terrain – On OWS, gaps in governance, empowered actors, and opportunities in the shifting sands…

Getting to Know Your Ghost in the Machine – On the convergence of ubiquitous computation (ubicomp), augmented reality, and network identity…

The Transhuman Gap – On the challenges facing the transhuman movement…

The Realities of Coal in the Second Industrial Revolution – On the energy demand and resource availability for the developing world…

Meshnets, Freedom Phones, and the People’s Revolution – On the Arab Spring, hyperconnectivity, and ad hoc wireless networks…

And a few that I really like:

Back-casting from 2043 – On possible futures, design fictions, and discontinuity…

On Human Networks & Living Biosystems – On the natural patterns driving technology & human systems…

Outliers & Complexity – On non-linearity, outliers, and the challenges of using the past to anticipate the future…

Thanks to all my readers for taking the time to think about my various rantings & pre-occupations. As always, your time, your participation, and your sharing is greatly appreciated!

My IFTF Tech Horizons Perspective on Neuroprogramming

IFTF has published the 2010 research for their Technology Horizons program – When Everything is Programmable: Life in a Computational Age. This arc explored how the computational metaphor is permeating almost every aspect of our lives. I contributed the perspective on Neuroprogramming [PDF], looking at the ways technology & computation are directly interfacing with our brains & minds.

From the overview for the Neuroprogramming perspective:

Advances in neuroscience, genetic engineering, imaging, and nanotechnology are converging with ubiquitous computing to give us the ability to exert greater and greater control over the functioning of our brain, leading us toward a future in which we can program our minds. These technologies are increasing our ability to modify behavior, treat disorders, interface with machines, integrate intelligent neuroprosthetics, design more capable artificial intelligence, and illuminate the mysteries of consciousness. With new technologies for modulating and controlling the mind, this feedback loop in our co-evolution with technology is getting tighter and faster, rapidly changing who and what we are.

I also contributed to the Combinatorial Manufacturing perspective with Jake Dunagan. This perspective explores advances in nano-assembly & programmable matter. From the overview:

Humans have always been makers, but the way humans manufacture is undergoing a radical transformation. Tools for computational programming are converging with material science and synthetic biology to give us the ability to actually program matter—that is, to design matter that can change its physical properties based on user input or autonomous sensing. Nanotechnology is allowing us to manipulate the atomic world with greater precision toward the construction of molecular assemblers. Researchers are designing “claytronics”: intelligent robots that will self-assemble, reconfigure, and respond to programmatic commands. And synthetic biologists are creating artificial organic machines to perform functions not seen in nature.

The Cybernetic Self

This is one of 50 posts about cyborgs – a project to commemorate the 50th anniversary of the coining of the term. Thanks to Tim Maly of Quiet Babylon for running such a great project!

CC image from mondi.

“He would see faces in movies, on T.V., in magazines, and in books. He thought that some of these faces might be right for him…”

The word “cybernetic” derives from the Greek kybernetes, meaning “steersman” or “governor”. A cybernetic process is a control system that uses feedback about its actions in an environment to better adapt its behavior. The cybernetic organism, or “cyborg”, is a class of cybernetic systems that have converged with biological organisms. In this increasingly mythologized form, the cyborg embodies the ongoing dialectic between humanity & technology, and is an aspirational figure onto which we project our superhuman fantasies. While it offers security, enhancement, and corporeal salvation, the cyborg also presents an existential threat to the self and to the cherished notions of being uniquely human.
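That definition can be made concrete with the simplest possible toy: a hypothetical proportional controller, the classic cybernetic "governor". It senses the gap between the environment and a goal, acts to shrink it, and repeats (the thermostat framing and gain value here are mine, purely for illustration).

```python
def thermostat_step(temp, setpoint, gain=0.3):
    """One cybernetic feedback cycle: sense the error between the
    environment and the goal, then act to reduce it (proportional control)."""
    error = setpoint - temp
    return temp + gain * error  # the action nudges the environment toward the goal

temp, setpoint = 12.0, 21.0
for _ in range(20):
    temp = thermostat_step(temp, setpoint)
print(round(temp, 2))  # after 20 feedback cycles: very close to the 21.0 setpoint
```

Every system discussed in this post, from a rudder to a drone swarm, elaborates on this same loop; the cyborg is what happens when the loop closes through living tissue.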

It’s a gamble but we don’t seem able to leave the table. As we offload more of our tasks into technology we enhance our adaptability while undermining our own innate resilience as animals. We wrap ourselves in extended suits of shelter, mobility, health, and communications. We distribute our senses through a global network of hypermedia, augmenting our brains with satellites & server farms & smart phones. Increasingly, our minds & bodies are becoming the convergence point for both the real & the virtual, mediated through miniaturization, dematerialization, and nano-scale hybridization. Our ability to craft the world around us is quickly advancing to give us the ability to craft our bodies & our selves.

“And through the years, by keeping an ideal facial structure fixed in his mind… Or somewhere in the back of his mind… That he might, by force of will, cause his face to approach those of his ideals…”

Computation is miniaturizing, distributing, and becoming more powerful & efficient. It’s moving closer & closer to our bodies while ubiquitizing & dematerializing all around us. The cybernetic process has refined this most adaptive capacity in little more than 50 years to be right at hand, with us constantly, connected to a global web of people, places, things, information, and knowledge. We are co-evolving with our tools, or what Kevin Kelly refers to as the Technium – the seemingly-intentional kingdom of technology. As Terence McKenna suggested, we are like coral animals embedded in a technological reef of extruded psychic objects. By directly illustrating how our own fitness & bio-survival becomes bound to the survival of our technology, the cyborg is a fitting icon for this relationship.

CC image from

Technology has historically been regarded as something we cast into the world separate from ourselves, but it’s worth considering the symbiosis at play and how this relationship is changing the very nature of humanity. As we venture deeper & deeper into the Technium, we lend ourselves to its design. By embracing technology as part of our lives, as something we rely upon and depend on, we humanize it and wrap it in affection. We routinely fetishize & sexualize cool, flashy tech. In doing so we impart emotional value to the soul-less tools of our construction. We give them both life & meaning. By tying our lives to theirs, we agree to guarantee their survival. This arrangement is a sort of alchemical wedding between human & machine, seeking to yield gold from this mixture of blood & metal, uncertain of the outcome but almost religiously compelled to consummate.

“The change would be very subtle. It might take ten years or so. Gradually his face would change its shape. A more hooked nose. Wider, thinner lips. Beady eyes. A larger forehead…”

In the modern world, our identities include the social networks & affinity groups in which we participate, the digital media we capture & create & upload, the avatars we wear, and the myriad other fragments of ourselves we leave around the web. Who we are as individuals reflects the unique array of technologies through which we engage the world, at times instantiated through multiple masks of diverse utility, at other times fractured & dis-integrated – too many selves with too many virtual fingers picking at them. Our experience of life is increasingly composed of data & virtual events, cloudy & intangible yet remote-wired into our brains through re-targeted reward systems. A Twitter re-tweet makes us happy, a hostile blog comment makes us angry, the real-time web feeds our addiction to novelty. Memories are offloaded to digital storage mediums. Pictures, travel videos, art, calendars, phone numbers, thoughts & treatises… So much of who we are and who we have been is already virtualized & invested in cybernetic systems. All those tweets & blog posts cast into the cloud as digital moments captured & recorded. Every time I share a part of me with the digital world I become copied, distributed, more than myself yet… in pieces.

CC image from Alejandro Hernandez.

It can be said that while we augment & extend our abilities through machines, machines learn more about the world through us. The web 2.0 social media revolution and the semantic web of structured data that is presently intercalating into it has brought machine algorithms into direct relationship with human behavior, watching our habits and tracking our paths through the digital landscape. These sophisticated marketing and research tools are learning more and more about what it means to be human, and the extended sensorium of the instrumented world is giving them deep insight into the run-time processes of civilization & nature. The spark of self-awareness has not yet animated these systems but there is an uneasy agreement that we will continue to assist in their cybernetic development, modifying their instructions to become more and more capable & efficient, perhaps to the point of being indistinguishable from, or surpassing, their human creators.

“He imagined that this was an ability he shared with most other people. They had also molded their faces according to some ideal. Maybe they imagined that their new face would better suit their personality. Or maybe they imagined that their personality would be forced to change to fit the new appearance…”

In Ridley Scott’s Blade Runner, the young Tyrell Corporation assistant, Rachel, reflects on her childhood memories while leafing through photographs of her youth. These images are evidence of her past she uses to construct her sense of self. Memories provide us with continuity and frame the present & future by reminding us of our history – critical for a species so capable of stepping out of time. Rachel’s realization that she is a replicant, that her memories are false implants deliberately created to make her believe she’s human, precipitates an existential crisis that even threatens Harrison Ford’s character, Rick Deckard, surrounded as he is by photos of his own supposed past. This subtle narrative trick suggests that replicants will be more human-like if they don’t know they’re replicants. But it also invokes another query: If memories are (re-)writable, can we still trust our own past?

Yet both characters do appear quite human. They laugh and cry and love and seem driven by the same hopes and fears we all have. Ridley Scott’s brilliance – and by extension, Philip K. Dick’s – is to obscure the nature of the self and of humanity by challenging our notions of both. Is Rachel simply another mannequin animated by advanced cybernetics or is she more than that? Is she human enough? When the Tyrell bio-engineer J.F. Sebastian sees the Nexus 6 replicants, Pris and Roy Batty, he observes “you’re perfect”, underlining again the aspirational notion that through technology we can be made even better, becoming perhaps “more human than human”. This notion of intelligent artificial beings raises deep challenges to our cherished notions of humanity, as many have noted. But the casual fetishization of technology, as it gets nearer & friendlier & more magical, is perhaps just as threatening to our deified specialness in its subtle insinuation into our hands & hearts & minds.

CC image from Photo Monkey.

In Mamoru Oshii’s anime classic, Ghost in the Shell, the female protagonist – a fully-engineered and functional robotic human named Kusanagi – at once decries those who resist augmentation, suggesting that “your effort to remain as you are is what limits you”, while simultaneously becoming engaged in a quest to determine if there might be more to her than just what has been programmed. She celebrates her artifice as a supreme achievement in overcoming the constraints of biological evolution while also seeking to find evidence that she is possessed of that most mysterious spark: the god-like ingression of being that enters and animates the human shell. Oshii’s narrative suggests that robots that achieve a sufficient level of complexity and self-awareness will, just like their human creators, seek to see themselves as somehow divinely animated. Perhaps it’s a method to defend the belief in human uniqueness but those writing the modern myths of cybernetics seem to imply that while humans aspire to the abilities of machines, machines aspire to the soulfulness of humans.

CC image from Alaskan Dude.

“This is why first impressions are often correct…”

Chalk it up to curiosity, the power of design fictions, and an innate need to realize our visions, but if we can see it with enough resolution in our mind’s eye, we’ll try to bring it to life. The Ghost in the Shell & the Ghost in the Machine both intuit the ongoing merger between humanity & technology, and the hopes & fears that attend this arranged and seemingly-unavoidable alchemical wedding. As animals we are driven to adapt. As humans, we are compelled to create.

“Although some people might have made mistakes. They may have arrived at an appearance that bears no relationship to them. They may have picked an ideal appearance based on some childish whim or momentary impulse. Some may have gotten half-way there, and then changed their minds…”

Humans are brilliant & visionary but also impetuous, easily distracted, fascinated by shiny things, and typically ill-equipped to divine the downstream consequences of our actions. We extrude technologies at a pace that far outruns our ability to understand their impacts on the world, much less how they change who we are. As we reach towards AI, the cyborg, the singularity, and beyond, our cybernetic fantasies may necessarily pass through the dark night of the soul on the way to denouement. What is birthed from the alchemical marriage often necessitates the destruction of the wedding party.

CC image from WebWizzard.

“He wonders if he too might have made a similar mistake.” – David Byrne, Seen & Not Seen

Are we working up some Faustian bargain promising the heights of technological superiority only for the meager sacrifice of our Souls? Or is this fear a reflection of our Cartesian inability to see ourselves as an evolving process, holding onto whatever continuity we can but always inevitably changing with the world in which we are embedded? As we offload more and more of our selves to our digital tools, we change what it means to be human. As we evolve & integrate more machine functionality we modify our relationship to the cybernetic process and re-frame our self-identity to accommodate our new capacities.

Like the replicants in Blade Runner and the animated cyborgs of Ghost in the Shell we will very likely continue to aspire to be more human than human, no matter how hard it may be to defend our ideals of what this may mean to the very spark of humanity. What form of cyborg we shall become, what degree of humanity we retain in the transaction, what unforeseen repercussions may be set in motion… The answers are as slippery as the continuum of the self and the ever-changing world in which we live. Confrontation with the existential Other – the global mind mediated through ubiquitous bio-machinery – and the resulting annihilation of the Self that will necessarily attend such knowledge, may very well yield a vastly different type of humanity than what we expect.

A Few Recent Developments in Brain-Computer Interface

BCI technology and the convergence of mind & machine are on the rise. Wired Magazine just published an article by Michael Chorost discussing advances in optogenetic neuromodulation. Of special interest, he notes the ability of optogenetics to both read & write information across neurons.

In theory, two-way optogenetic traffic could lead to human-machine fusions in which the brain truly interacts with the machine, rather than only giving or only accepting orders. It could be used, for instance, to let the brain send movement commands to a prosthetic arm; in return, the arm’s sensors would gather information and send it back.

In another article featured at IEEE Spectrum, researchers at Brown University have developed a working microchip implant that can wirelessly transmit neural signals to a remote sensor. This advance suggests that brain-computer interface technologies will evolve past the need for wired connections.

Wireless neural implants open up the possibility of embedding multiple chips in the brain, enabling them to read more and different types of neurons and allowing more complicated thoughts to be converted into action. Thus, for example, a person with a paralyzed arm might be able to play sports.

MindHacks discusses the recent video of a touch-sensitive prosthetic hand. This is a Holy Grail of sorts for brain-machine interface: the hope that an amputee could regain functionality through a fully-articulable, touch-sensitive, neural-integrated robotic hand. Such an accomplishment would indeed be a huge milestone. Of note, the MindHacks appraisal focuses on the brain’s ability to re-image body maps (perhaps due to its plasticity).

There’s an interesting part of the video where the patient says “When I grab something tightly I can feel it in the finger tips, which is strange because I don’t have them anymore”.

Finally, ScienceDaily notes that researchers have demonstrated rudimentary brain-to-brain communication mediated by non-invasive EEG.

[The] experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person whose computer receives the digits and transmits them to the second user’s brain through flashing an LED lamp… You can watch Dr James’ BCI experiment at YouTube.

One can imagine a not too distant future where the brain is directly transacting across wireless networks with machines, sensor arrays, and other humans.

The Co-Evolution of Neuroscience & Computation

Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.
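The graph metrics such network analyses lean on can be sketched in a few lines of plain Python: mean clustering coefficient and average shortest-path length, the ingredients of "small-world" network measures (more clustered and shorter-pathed than random graphs). This is an illustrative toy on a ring lattice, assuredly not the VU team's actual pipeline.

```python
from itertools import combinations
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set(neighbors)}."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbors contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length, via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# A tightly clustered ring lattice: each node linked to 2 neighbors on each side.
n = 12
ring = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n} for i in range(n)}
print(clustering(ring))       # → 0.5 for this lattice (highly clustered)
print(avg_path_length(ring))  # ≈ 1.91 hops
```

A "randomized and disconnected" network of the kind described in the dementia findings would score much lower on clustering while its path structure degrades, which is exactly what metrics like these make quantifiable.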

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.
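The graph measures behind findings like de Haan's can be illustrated with a toy comparison. The study's networks were derived from real neural recordings; the sketch below only demonstrates the underlying idea in pure Python, with invented parameters: a Watts–Strogatz-style "small-world" network keeps high clustering, while full rewiring produces the kind of randomized topology reported in the dementia patients.

```python
# Toy comparison of clustering in a small-world network vs. a randomized one.
# All parameters (200 nodes, degree 8, rewiring probabilities) are illustrative.
import random
from itertools import combinations

def ring_lattice(n: int, k: int) -> dict[int, set[int]]:
    """Ring of n nodes, each linked to its k nearest neighbors (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k // 2 + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rewire(adj: dict[int, set[int]], p: float, rng: random.Random) -> dict[int, set[int]]:
    """Watts-Strogatz rewiring: each edge is moved to a random endpoint with
    probability p. Small p preserves clustering; p = 1.0 destroys it."""
    n = len(adj)
    for u in range(n):
        for v in sorted(adj[u]):
            if v > u and rng.random() < p:
                w = rng.randrange(n)
                if w != u and w not in adj[u]:
                    adj[u].discard(v)
                    adj[v].discard(u)
                    adj[u].add(w)
                    adj[w].add(u)
    return adj

def avg_clustering(adj: dict[int, set[int]]) -> float:
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    total = 0.0
    for nbrs in adj.values():
        pairs = list(combinations(nbrs, 2))
        if pairs:
            total += sum(1 for a, b in pairs if b in adj[a]) / len(pairs)
    return total / len(adj)

small_world = rewire(ring_lattice(200, 8), p=0.1, rng=random.Random(1))
randomized = rewire(ring_lattice(200, 8), p=1.0, rng=random.Random(2))
print(f"small-world clustering: {avg_clustering(small_world):.3f}")
print(f"randomized clustering:  {avg_clustering(randomized):.3f}")
```

The clustering coefficient drops by an order of magnitude after full rewiring, which is the signature of the "disturbed functional organization" the researchers describe.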

The interdependence of the two fields has been most recently illustrated with the announcement of the Blue Brain Project, which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain, which will, in turn, yield roadmaps towards designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain-computer interface. As the two sciences draw closer together, they will inevitably interact more directly and powerfully, as each domain adds value to the other and the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality, dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative towards programming our bodies, enabling us to exercise unprecedented control and augmentation over our physical form and to further reveal the fabric of mind.


The Transhuman Gap

[Cross-posted from Signtific Lab.]

While most would support using technology to allow paraplegics to walk again, to help the blind to see and the deaf to hear, how will society view those who electively enhance themselves through prosthetics & implants?

Consider the not-so-subtle marginalization of transhumanists who believe that technology should be readily integrated into human biology, experimenting with their own crude body modifications. Or the implications around personal security and privacy (not to mention religious fear) raised by those intrepid folks who are self-implanting RFIDs into their forearms to activate lighting & appliances when they enter their homes. Even the international debates over performance-enhancing drug use by athletes reinforce the cultural belief that a “natural” baseline range exists for human abilities and that any “synthetic” modification beyond the accepted range is unfair.

From issues of fairness to those of security and trust, integrating more machinery into a programmable nervous system challenges many of the fundamental notions we have of what it means to be human. When a Marine returns from a warzone patched up with a cochlear implant, how will they be regarded when it’s revealed that they can hear you speaking from three blocks away? If that person then joins the police force, what issues of civil liberty and privacy might be confronted? How might we regard an employer that suggests each employee be programmed with software to bring them into the corporate Thinkmesh?

How does society’s regard for a technology change when that technology becomes part of our bodies? How does our relationship to people change if we know they are different? What competitive advantages are conferred by these technologies and how will they be reinforced by socioeconomic drivers? What gaps might arise between those able to afford augmentations and those who cannot?

And what becomes of the Platonic sense of one fundamental Reality when more & more people are seeing personalized variations of the world mediated by connected devices? Will the merging of technology & flesh enable a more cohesive & effective society or a more fragmented and divisive one?

Thus far humans have worked from a standard body map that allows us to understand ourselves and project that understanding onto all other classes of our species. We will likely bring both our sense of membership as well as our fear of otherness with us as we begin to internalize machines unevenly across cultures.

[See also 5 Dark Scenarios For Trans-humanity.]

Direct Brain-Computer Interface Will Require a New Language of Interaction

[Cross-posted from Signtific.]

When Apple Computer recently released the 3.0 version of its iPhone OS, one of the most anticipated new features was Cut & Paste. This simple task has been a staple of computing since the earliest GUIs, so why did it take Apple until its third OS version to implement the feature for the iPhone?

As Apple tells it, there was incredible deliberation over how best to design the user experience. This is, after all, the first and only fully multi-touch mobile computing device. Apple has been meticulously developing and patenting the gestural language through which users interact with the device. Every scroll and pinch, zoom and drag is a consciously designed gesture adding to Apple’s growing lexicon of multi-touch interface. Implementing Cut & Paste posed a substantial design challenge: creating the most accessible gestural commands within the narrow real estate of the mobile screen.

Now, consider interacting with the same content types available on an iPhone or anywhere in the cloud, but remove the device interface and replace it with a HUD or direct brain interface. If the content is readily visible, either as an eyeglass overlay or directly registered in the visual cortex, how do we give a UI element focus? How do you make a selection? How do you scroll and zoom? How do you invoke, execute, and dismiss programs? Can you speak internally to type text? How might a back-channel voice be distinguished from someone standing behind you? How do you manage focus changes between digital content and the visual content of the real world when the two are superimposed?

The fields of Human Computer Interaction and User Interface & Experience Design address these challenges for interacting with digital content and processes, but what new interaction modalities may be developed to better interface humans and computers? As we internalize computation and interaction, the disciplines of HCI & BCI will begin to interpenetrate in ways that may radically alter the conventions of the Information Age.

Bangkok & the Future

I’ll be checking out for a few weeks while traveling in Asia (w00t!). I may Twitter occasionally but will likely do no blogging (going analog – Moleskine). When I return in July I’ll be working with the Institute for the Future as a Visiting Researcher. In this capacity I’ll be contributing to their 2009 Technology Horizons Research Program. More info on that to come, but suffice it to say I am extremely stoked on this development. A second w00t!

Best to all!


DARPA Thinkbots Talk Data Stories

This is the most interesting thing I’ve read in a while. DARPA is using smart agent algorithms to crunch heavy data sets and convert them to human-grokable narratives. Before long such agents will be living on our desktops, mobile devices, cars, and appliances actively interpreting innumerable datastreams rendered to transparent screens and spoken through earbuds.

“Like people,” Darpa notes, such a story-telling system would be able to “retrieve and reuse stories to construct an appropriate interpretation of events… because they convey the aspects of a situation that are most important in determining a decision.”

Darpa hopes to have this Experience-based Narrative Memory (EN-Mem) system make “complex situations… simple, understandable, and solvable.”

…Making sense of a complex situation is like understanding a story; one must construct, impose and extract an interpretation. This interpretation weaves a commonly understood narrative into the information in a way that captures the basic interactions of characters and the dynamics of their motivations while filling in details not explicitly mentioned in the input stream. It uses story lines with which we all have experience as analogies, and it simplifies the detail in order to communicate the crucial aspects of a situation. The story lines it uses are those the decision maker should be reminded of, because they are similar to the current situation based upon what the decision maker is trying to do.
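The retrieval step this passage describes — being "reminded of" the stored story most like the current situation — can be sketched in miniature. The corpus, tokenization, and cosine similarity below are invented stand-ins, not DARPA's actual EN-Mem method:

```python
# Toy case-based story retrieval: match a current situation against a
# memory of past "stories" and return the most similar one. The story
# corpus and the bag-of-words similarity measure are illustrative only.
import math
from collections import Counter

def bag(text: str) -> Counter:
    """Bag-of-words representation of a story or situation."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word bags."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny invented "narrative memory".
stories = [
    "convoy ambushed at river crossing after supply delay",
    "market crowd dispersed peacefully after negotiation",
    "patrol rerouted around flooded road before ambush",
]

def recall(situation: str) -> str:
    """Retrieve the stored story most similar to the current situation."""
    return max(stories, key=lambda s: cosine(bag(s), bag(situation)))

print(recall("supply convoy ambushed near river"))
```

A real system would of course need far richer representations than word overlap — characters, motivations, and causal structure, as the quote emphasizes — but the retrieve-and-reuse skeleton is the same.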

Twittering Analysts Invoke the Singularity. News at 11.

As with much of the digital world, corporate transparency is greater now than it ever has been. Witness yesterday’s Adobe Analyst Meeting – a closed-door, invite-only industry event at which analysts of all stripes were treated to Adobe’s financial strategy for the year to come. Within those exclusive walls, many industry agents were typing away on laptops and mobiles, but they weren’t just live-blogging or recording notes for a report or article to be edited by their gatekeepers and published later. They were also broadcasting SMS messages to the masses in real-time through Twitter, micro-blogging their instantaneous thoughts, reactions, and sub-channel conversations to thousands of vicarious third-parties.

These raw feeds are perhaps a much more accurate representation of such events – or at least constitute a valuable nuance to the conversation – but their true merit is in their subversive tunneling to freedom through the garden walls, broadcast to the masses. I was annoyed that I couldn’t attend my own company’s briefing but then I got a lot of the meat from trolling the analyst tweets. This raises numerous issues. Should the company defend the tower and let me get the info second-hand through the emotional filters and bullshit detectors of the invitees? Or is it in their interest to include me and the rest of the public so they can at least have a better bet at controlling the message? Is there value in creating such walled gardens in the first place if anyone can breach your security with a simple 140-character message? Is it cost-effective? Do companies impose checkpoints to remove potentially threatening mobile devices? Can you trust people to stick to the talking points or do you allow that the genie is out of the bottle and the natural process of selection will actually help your company do a better job? Transparency and democratized digital broadcast are crowdsourced quality control – a natural feedback mechanism for regulating the evolution of ideas.

These days, if an exclusionary body refuses to share beyond the in-crowd, at least one of those insiders will probably share it with the world. Information is free and the closed companies see their brand suffer as they try in vain to crush the dissenters on a global and very public stage. Their insular reporting hierarchies inevitably ensure that the same ideas and strategies eventually become recycled again and again, and that the truth is filtered through the instinct of self-preservation. Secrecy is like evolution in a vacuum or asexual reproduction. There is little pressure for real change beyond the cold, hard truth of the quarterly earnings report.

Is it even possible to keep secrets anymore? Do you remember all the conspiracy theories you read about in college? Have you noticed that most of them have now been recorded as historical fact? Have you considered that within 10 years the majority of elected officials will have public digital paper trails stretching across the fabled Information Superhighway? And there will be bands of savvy developers eager to crunch the data from those paper trails and render them in pretty visualizations that show just how honorable/charitable/pious/two-faced/depraved your future senator really is.

Even the analysts are known, willingly opting in to the public timeline of Twitter. All of their names are published at Sage Circle for anyone to see and follow. In fact, in order to really productively use many of the new open social tools & services, the user is highly incentivized to opt in to their own public transparency. Everyone who wants to speak with power enough to reach the masses (or at least a few handfuls of them) must embrace the open platform. And if you’re professional, you need to use your real name. Therein lies the rub: to be competitive, businesses need to have their product managers, their evangelists, their analysts, idea makers and trend-setters all dialed in to the social web. Communication, sharing, and an openness to feedback from your users are becoming crucial for the corporate body to humanize and interact with the eyes of the world. Effective product development must include the people buying your product, otherwise you end up designing for imagined ghosts. Hence the increasing migration of analysts and audiences to Twitter. Then, as a company, you end up with your intelligence agents working for you but writing to their audience. And you have an empowered audience that’s publicly-yet-privately back-channeling their loathing of your corporate shill right in front of them, like the now legendary and immediately ground-breaking SXSW smackdown of Tara Hunt.

Like journalists, analysts are no longer totally bound by allegiance to their lords or to the companies they scrutinize. They become like moonlighting Ronin. They broadcast to the world from a niche stardom and semi-famous personhood that carefully (or not-so-carefully) balances the party line and the ratings of the viewers. In the face of even limited fame and empowerment, how does company loyalty measure up to increased outsourcing and diminishing employee perks? All life, it seems, will bend towards the viewership, simultaneously revealed and true, yet inevitably influenced and state-shifted by 5 or 6 billion eyes and the inescapable quantal fact of Heisenberg’s Uncertainty. In a totally measured and watched world, is Truth just a state of observation, a sufficiently probable collapsing of the waveform undergoing the formality of actually occurring, to paraphrase McKenna quoting Whitehead? The soul becomes visible as the mind manifests to all eyes.

Information – Truth, whether it exists fundamentally or is just a state of mind – indeed wants to be free, and this fundamental law works through the human species and the technologies we extrude. We are still animals and our tools must help us adapt and thrive. This is clearer now than ever as our actions leave deeper and deeper footprints across the digital terrain we walk. We are being recorded and we are recording, capturing more and more facets of our human experiment written onto spinning platters like prayer wheels in the virtual breeze. The New Journalism will find even the most exclusive events, the narrowest niches, the darkest secrets and the most banal subcultures and capture them, radiating out to the digital world into the very Akashic Record of Our Times. Life is the new media, rich in all its texture, drama, subterfuge, and transcendence. As the military struggles with soldier bloggers, embedded third-party reporters, wired insurgencies, and the ever-present sat feeds waving down from far up above with just a passing glint of sunlight, the injustices and atrocities wrought by man & machine are cataloged equally alongside silly cat pictures, personal bios, frat videos, copyright violations, knowledge wikis, satellite imagery, and reams & reams of pornography. All acts are caught and surveyed by the one unblinking eye, like Sauron or the Illuminati or the gaze of God.

The world is getting much smaller and simultaneously incredibly huge and diverse. Global instability will be balanced by local resilience, and hierarchical corruption will struggle against networked transparency. CCTVs will merge with YouTube & reality TV, and life will reveal itself on a scale never before known. The cloud is breaking out of the browser and out of our servers, spreading to mobile devices and HUD overlays, objects & artifacts. Reality will be radically augmented, participatory, and unbounded. We will fragment and unite, solve et coagula. And tweeting as we go, televising & recording the revolution for all to witness.

Second Life Avatar Controlled By Thoughts of Paraplegic

I have a lot of issues with Second Life – mostly because I’m frustrated by its potential and its seeming inability to act on it – but it’s nevertheless an interesting sandbox for exploring the greater frontiers of virtual immersion and social ontology. Case in point: Japanese researchers have wired up a Second Life avatar to respond to the thoughts of a paraplegic.

…he wore headgear with three electrodes monitoring brain waves related to his hands and legs. Even though he cannot move his legs, he imagined that his character was walking.

He was then able to have a conversation with the other character using an attached microphone, said the researchers at Japan’s Keio University.

…”In the near future, they would be able to stroll through Second Life shopping malls with their brain waves… and click to make a purchase,” Ushiba said.
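The control loop implied by the three-electrode setup can be sketched as a simple mapping from per-channel motor-imagery activation to avatar commands. The channel names, threshold, and command set below are invented for illustration; the Keio system's actual signal processing is not described in the article.

```python
# Toy mapping of motor-imagery activation to avatar commands, loosely
# modeled on the three-electrode rig described above. Channel names,
# the threshold, and the command table are all hypothetical.

def classify(power: dict[str, float], threshold: float = 0.5) -> str:
    """Pick the most active channel (e.g. band-power change over the
    motor cortex) and map it to an avatar command, or idle if no
    channel clears the threshold."""
    channel, level = max(power.items(), key=lambda kv: kv[1])
    if level < threshold:
        return "idle"
    return {
        "legs": "walk",
        "left_hand": "turn_left",
        "right_hand": "turn_right",
    }[channel]

# Imagined walking dominates the "legs" channel -> avatar walks.
print(classify({"legs": 0.9, "left_hand": 0.2, "right_hand": 0.1}))
# No channel clears the threshold -> avatar stays idle.
print(classify({"legs": 0.1, "left_hand": 0.2, "right_hand": 0.3}))
```

Even this crude winner-take-all scheme captures the key point of the demo: the user never moves, yet imagined movement alone is enough to drive the avatar.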