Tagged: BCI

Running With Machine Herds

Continuing its annual tradition of walking the line between genuine social goodness and highfalutin’ techno-utopianism, the TED2013 conference kicked off this week in Los Angeles. Gathering some of the brighter minds and more well-heeled benefactors, it invites attendees to tease apart the phase space of possibility and to take a closer look at how we consciously examine and intentionally evolve our world. Among the many threads and themes, one in particular tugs deeply at both aspirational humanism and existential terror.


My IFTF Tech Horizons Perspective on Neuroprogramming

IFTF has published the 2010 research for its Technology Horizons program – When Everything is Programmable: Life in a Computational Age. This arc explored how the computational metaphor is permeating almost every aspect of our lives. I contributed the perspective on Neuroprogramming [PDF], looking at the ways technology & computation are directly interfacing with our brains & minds.

From the overview for the Neuroprogramming perspective:

Advances in neuroscience, genetic engineering, imaging, and nanotechnology are converging with ubiquitous computing to give us the ability to exert greater and greater control over the functioning of our brain, leading us toward a future in which we can program our minds. These technologies are increasing our ability to modify behavior, treat disorders, interface with machines, integrate intelligent neuroprosthetics, design more capable artificial intelligence, and illuminate the mysteries of consciousness. With new technologies for modulating and controlling the mind, this feedback loop in our co-evolution with technology is getting tighter and faster, rapidly changing who and what we are.

I also contributed to the Combinatorial Manufacturing perspective with Jake Dunagan. This perspective explores advances in nano-assembly & programmable matter. From the overview:

Humans have always been makers, but the way humans manufacture is undergoing a radical transformation. Tools for computational programming are converging with material science and synthetic biology to give us the ability to actually program matter—that is, to design matter that can change its physical properties based on user input or autonomous sensing. Nanotechnology is allowing us to manipulate the atomic world with greater precision toward the construction of molecular assemblers. Researchers are designing “claytronics”: intelligent robots that will self-assemble, reconfigure, and respond to programmatic commands. And synthetic biologists are creating artificial organic machines to perform functions not seen in nature.

Direct Brain-Computer Interface Will Require a New Language of Interaction

[Cross-posted from Signtific.]

When Apple Computer recently released version 3.0 of its iPhone OS, one of the most anticipated new features was Cut & Paste. This simple task has been a staple of computing since the earliest GUIs, so why did it take Apple until its third OS release to implement the feature on the iPhone?

As Apple tells it, there was incredible deliberation over how best to design the user experience. This is, after all, the first and only fully multi-touch mobile computing device. Apple has been meticulously developing and patenting the gestural language through which users interact with the device. Every scroll and pinch, zoom and drag is a consciously designed gesture adding to Apple’s growing lexicon of multi-touch interaction. Implementing Cut & Paste posed a substantial challenge: designing the most accessible gestural commands within the narrow real estate of the mobile screen.

Now, consider interacting with the same content types available on an iPhone or anywhere in the cloud, but remove the device interface and replace it with a HUD or direct brain interface. If the content is readily visible, either as an eyeglass overlay or directly registered in the visual cortex, how do we give a UI element focus? How do you make a selection? How do you scroll and zoom? How do you invoke, execute, and dismiss programs? Can you speak internally to type text? How might a back-channel voice be distinguished from someone standing behind you? How do you manage focus changes between the digital content and the visual content of the real world when both are superimposed in some state?
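To make these questions a little more concrete, here is a purely illustrative sketch (in TypeScript) of the kind of intent vocabulary such an interaction layer would have to define before any of those questions could even be answered. Every type, field, and function name below is hypothetical; it does not describe any real BCI API, only the shape of the problem.

```typescript
// Hypothetical sketch only: naming the interaction primitives the questions
// above point at (focus, selection, scrolling, invocation, dictation).
type BciIntent =
  | { kind: "focus"; targetId: string }          // give a UI element focus
  | { kind: "select"; targetId: string }         // make a selection
  | { kind: "scroll"; dx: number; dy: number }   // scroll / pan the overlay
  | { kind: "zoom"; factor: number }             // zoom in or out
  | { kind: "invoke"; program: string }          // launch a program
  | { kind: "dismiss"; targetId: string }        // dismiss a program or overlay
  | { kind: "dictate"; text: string };           // internally spoken text entry

// A decoder would have to map noisy neural signals to intents, with a
// confidence score so ambiguous readings are confirmed rather than executed.
interface DecodedIntent {
  intent: BciIntent;
  confidence: number; // 0..1
}

function dispatch(decoded: DecodedIntent, threshold = 0.8): void {
  if (decoded.confidence < threshold) {
    console.log("Ambiguous signal; ask the user to confirm:", decoded.intent.kind);
    return;
  }
  console.log("Executing intent:", decoded.intent);
}

// Example: a high-confidence selection vs. a low-confidence dictation.
dispatch({ intent: { kind: "select", targetId: "message-42" }, confidence: 0.93 });
dispatch({ intent: { kind: "dictate", text: "reply later" }, confidence: 0.41 });
```

Even in this toy form, the hard problems show through: the gestural lexicon Apple spent years refining for a touchscreen has no obvious analogue here, and the confidence threshold stands in for an entire unsolved discipline of distinguishing intention from stray thought.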

The fields of Human-Computer Interaction and User Interface & Experience Design address these challenges for interacting with digital content and processes, but what new interaction modalities may be developed to better interface humans and computers? As we internalize computation and interaction, the disciplines of HCI & BCI will begin to interpenetrate in ways that may radically alter the conventions of the Information Age.