Category: cool tech

Quick Riffs on Autonomous Vehicles

This tweet got me riffing on potential outcomes & exploits available when autonomous vehicles become common:

I also “like” (or “find interesting”, in the Chinese proverbial sense) the idea of rogue agents seizing control over vehicular fleets to direct and coordinate their movements towards some sort of goal, e.g. assembling to bust a road barricade or defend a bank heist. Interesting times, indeed…

[Apologies/nods to Scott Smith.]


Valve signals hardware is the future of distribution

Gabe Newell, the co-founder and managing director at PC gaming powerhouse Valve Software, recently spoke with Kotaku about the shifting landscape of games distribution and his company’s move into the living room.

Ten years ago Valve established Steam as a primary distribution channel for its titles and add-on content. Just this month they’ve released Big Picture, establishing a foothold in the living room by essentially porting the Valve experience to the TV. With a new controller and interface, users can play games, stream content, and access Steam through Big Picture’s front-end.

Speaking to Kotaku, Newell suggested that Valve and other competitors will release custom branded hardware solutions for the living room within the next year. Users would be able to buy an official Valve gaming console (likely to be a lightweight PC or Linux device) and plug it into their TV. While this may seem surprising to many who have suggested that console gaming is in decline, Newell let slip the compelling hook for game developers.

“Well certainly our hardware will be a very controlled environment… If you want more flexibility, you can always buy a more general purpose PC. For people who want a more turnkey solution, that’s what some people are really gonna want for their living room.”

As content has dematerialized and gotten loose and slippery, content houses have been trying to figure out how to put the genie back in the bottle and retain control over their IP. Hardware offers such a controlled environment and, thanks in large part to Apple, hardware manufacturing is easier than it’s ever been. It wouldn’t be too surprising if, a few years down the road, Valve decides to lock down distribution completely by shunting all its users onto a low-priced piece of branded hardware. Plug it into your TV, launch Steam, and pull content direct from the Valve server farm.

Now imagine if they release Half Life 3 and you can only buy it through their hardware…

[Related: Hardware, the ugly stepchild of Venture Capital, is having a glamor moment]


Recent Notes on Reality Capture & 3D Printing

It may be symptomatic of our times but the delta between weak signal & fast-moving trend seems to be getting shorter & shorter. Compelling innovations are bootstrapped rapidly into full-fledged solutions, enabling a highly-efficient lab-to-home ecosystem. While it’s been percolating for years, the emergence of consumer 3D printing really only landed on the hype cycle in the past 12 months or so but in this time there have been considerable advances.

Continue reading

Me, in the Kinect point cloud (video still).

my home project: kinect hacking

Over the weekend I bought a Kinect and wired it to my Mac. I’m following the O’Reilly/Make book, Making Things See, by Greg Borenstein (who kindly tells me I’m an amateur machine vision researcher). With the book I’ve set up Processing, a Java-lite coding environment, and the open source Kinect libraries, SimpleOpenNI & NITE. I’ve spent a good chunk of the weekend reading a hundred pages into the book and working through the project tutorials, and I’ve generated some interesting interactions and imagery. There’s also a ton of tutorial vids on YouTube, natch, to help cut through the weeds and whatnot.
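For a flavor of what those early tutorials do: several of the first projects boil down to thresholding the depth image so the person standing in front of the sensor pops out of the background. Here’s a stand-alone Python sketch of that idea (the book’s projects use Processing with SimpleOpenNI; the fake depth frame and millimeter band below are my own toy numbers, not from the book):

```python
# Toy version of the depth thresholding used in early Kinect projects:
# keep only pixels whose depth (in mm) falls inside a near/far band,
# which isolates a person standing in front of the sensor.

def segment_user(depth_frame, near=500, far=1500):
    """Return a binary mask: 1 where depth is inside [near, far) mm."""
    return [[1 if near <= d < far else 0 for d in row] for row in depth_frame]

# A fake 3x4 "depth frame": 0 = no reading, large values = far wall.
frame = [
    [0,    800,  900, 3000],
    [3000, 750,  820, 3000],
    [0,   2600, 2700, 3000],
]

mask = segment_user(frame)  # 1s mark the "user" in the middle distance
```

The same mask, applied back to the color stream, is what produces those floating-silhouette point cloud images.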

Continue reading

The State of Augmented Reality – ARE2012

Last week I attended and spoke at the Wednesday session of ARE2012, the SF Bay Area’s largest conference on augmented reality. This is the 3rd year of the conference and both the maturity of the industry and the cooling of the hype were evident. Attendance was lower than previous years, content was more focused on advertising & marketing examples, and there was a notable absence of platinum sponsors and top-tier enterprise attendees. On the surface this could be read as a general decline of the field but this is not the case.

A few things are happening to ferry augmented reality across the Trough of Disillusionment. This year there were more headset manufacturers than ever before. The need for AR to go hands-free is becoming more & more evident [my biases]. I saw a handful of new manufacturers I’d never even heard of before. And there they were with fully-functional hardware rendering annotations on transparent surfaces. In order for AR to move from content to utility it has to drive hardware development into HUDs. Google sees this, as does any other enterprise player in the mobile game. Many of the forward-looking discussions effectively assume a heads-up experience.

At the algorithmic level, things are moving quickly, especially in the domains of edge detection, face tracking, and registration. I saw some really exceptional mapping that overlaid masks on people’s faces in realtime, responding to movement & expressions without flickers or registration errors (except for the occasional super-cool New Aesthetic glitch when the map blurred off the user’s face if they moved too quickly). Machine vision is advancing at a strong pace, and there was an ongoing thread throughout the conference about the challenges the broader industry faces in moving facial recognition technology into the mobile stack. It’s already there and it works, but the ethical and civil liberty issues are forcing a welcome pause for consideration.
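To make the machinery underneath all this tracking a little less abstract: edge detection often starts with something as simple as a Sobel operator sliding over the image, with the resulting gradients feeding the fancier registration stages. A minimal, purely illustrative Python version (nothing like the optimized pipelines the vendors actually ship):

```python
# A 3x3 Sobel kernel estimating the horizontal intensity gradient --
# the kind of primitive that edge detection and registration build on.

SOBEL_X = [(-1, 0, 1),
           (-2, 0, 2),
           (-1, 0, 1)]

def sobel_x(img):
    """Horizontal gradient of a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 10, 10]] * 4
grad = sobel_x(img)  # strong response where the brightness jumps
```

Interior pixels straddling the dark-to-bright step light up with a large gradient value; flat regions and the untouched border stay at zero.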

Qualcomm was the sole platinum sponsor, promoting its Vuforia AR platform. Sony had a booth showing some AR games (Pong!?) on their PlayStation Vita device. But pretty much everyone in the enterprise tier stayed home, back in the labs and product meetings and design reviews, slowly & steadily moving AR into their respective feature stacks. Nokia is doing this, Google of course, Apple has been opening up the camera stream and patenting eyewear, HP is looking at using AR with Autonomy, even Pioneer has a Cyber Navi AR GPS solution. The same players that were underwriting AR conferences in exchange for marketing opportunities and the chance to poach young developers are now integrating the core AR stack into their platforms. This is both good & bad for the industry: good because it will drive standardization and put a lot of money behind innovation; bad because it will rock the world of the Metaios & Layars who have been tilling this field for years. Typically, as a young technology starts to gain traction and establish value, there follows a great period of consolidation as the big fish eat the little ones. Some succeed, many fall, and a new culture of creators emerges to develop for the winners.

So here we are. Augmented reality is flowing in three streams: content and marketing grab eyeballs and easy money while conditioning the market to expect these experiences; developers extend the software stack towards real-time, pixel-perfect recognition & mapping, enabling the solutions to actually, um, solve problems; and hardware manufacturers labor to bring AR into the many transparent surfaces through which we interact with the world, freeing our hands and augmenting our views with ubiquitous networked data. Across these domains sit content shops, emerging start-ups, the leading innovators à la Layar & Metaio, and the big-fish enterprise companies that have had a piece of the tech game for years & years and aren’t going to miss out if AR goes supernova. The market is a bit shaky and very much uncertain for the SMBs but it’s certainly maturing with the technology.

My sense is that everybody gets that AR is here to stay and has deep intrinsic value to the future of mobility and interface. How this will impact the many passionate folks curating & cultivating the field from the bottom-up remains to be seen.

GRASP, General Robotics, & Swarming Drones

This truly fantastic video went around the interwebs last week, featuring the work of U Penn robotics lab, GRASP. Notably, the video shows groups of small quadrotors flying in formation, following paths, and generally exhibiting both autonomy and collective behavior.

Flush with defense moneys, the GRASP team are doing some pretty amazing work. A survey of their current projects could be the basis for hundreds of scifi novels yet many of them are right on the edge of reality. Research includes:

Haptography – haptic photography
Reasoning in reduced information space
HUNT – heterogenous unmanned networked teams
Omni-directional vision
SUBTLE – situation understanding bot through language & environment
Modular robotics lab
SWARMS – scalable swarms of autonomous robots and mobile sensors

Peep their published lab papers for even deeper FutureNow goodness, e.g. “Multi-vehicle path planning in dynamically changing environments”.
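The collective behavior in the video can be caricatured in a few lines: a consensus rule where each vehicle repeatedly steps toward the flock centroid makes the group converge on a rendezvous point, which is roughly what the formation-assembly clips show. This Python sketch is my own toy model, emphatically not GRASP’s controllers:

```python
# Consensus rendezvous: each quadrotor steps a fraction of the way
# toward the swarm centroid on every tick. Because the centroid is
# invariant under this update, the whole group converges to it.

def consensus_step(positions, gain=0.5):
    """One update tick for a list of (x, y) positions."""
    n = len(positions)
    centroid = (sum(p[0] for p in positions) / n,
                sum(p[1] for p in positions) / n)
    return [(x + gain * (centroid[0] - x),
             y + gain * (centroid[1] - y)) for x, y in positions]

swarm = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]  # centroid is (2, 2)
for _ in range(20):
    swarm = consensus_step(swarm)
# After 20 ticks every vehicle sits within microns of (2, 2).
```

Swap the fixed centroid for a moving waypoint and you get the path-following behavior; the real research adds collision avoidance, dynamics, and distributed sensing on top.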

Amon Tobin ISAM – Mixed-Media Sound & Projection Mapping

I saw Amon Tobin’s ISAM project a week ago at The Warfield theater in San Francisco. Literally jaw-dropping.

Visualizing ISAM from Leviathan on Vimeo.

Leviathan worked with frequent collaborator and renowned VJ Vello Virkhaus on groundbreaking performance visuals for electronic musician Amon Tobin, creating ethereal CG narratives and engineering the geometry maps for an entire stage of stacked cube-like structures. Taking the performance further, the Leviathan team also developed a proprietary projection alignment tool to ensure quick and accurate setup for the show, along with custom Kinect control & visualization utilities for Amon to command.

Signals, Challenges, & Horizons for Hands-Free Augmented Reality – ARE2011

Here’s the slidedeck from my recent talk at Augmented Reality Event 2011. I hope to post a general overview of the event soon, including some of the key trends that stood out for me in the space.

Augmented Reality Development Camp 2010 – Dec. 4th GAFFTA


I’m excited to announce that the second annual Bay Area Augmented Reality Developers Camp will be held on Saturday, December 4th, 2010, at the Gray Area Foundation For The Arts in San Francisco! This will be a free, open format, all-day unconference looking at the many aspects of Augmented Reality. We welcome hackers, developers, designers, product folks, biz devs, intellectuals, philosophers, tinkerers & futurists – anyone interested in this fascinating and revolutionary new technology.

If you’re interested please take a moment to sign up at the AR Dev Camp wiki.

Humanity+ Conference in Los Angeles, Dec. 4 & 5 at Caltech

If you like to live on the edge and gaze deeply into the future, a great collection of like-minded folks will be gathering to discuss the possible futures of humanity – and how to make them. From their site:

The Humanity+ at Caltech program will be divided into four main themed sessions. These sessions are:

* Re-Imagining Humans: Mind, Media and Methods
* Radically Increasing the Human Healthspan
* Redefining Intelligence: Artificial Intelligence, Intelligence Enhancement and Substrate-Independent Minds
* Business and Economy in the Era of Radical Technomorphosis

A wide range of interesting, professional speakers, from both the for-profit and non-profit worlds, will address each of the four themes.

Click through for the full program.

Is AR Ready for the Trough of Disillusionment?

hype cycle 2010

Gartner, one of the most trusted market analysis firms in the technology industry, just released its 2010 Hype Cycle Special Report. According to their research, augmented reality has entered the Peak of Inflated Expectations and will begin its slide down into the Trough of Disillusionment within the next year. To get a more visceral sense of what this means we can see that, again according to Gartner, public virtual worlds are just now at the bottom of the Trough, looking for innovations and revenues to claw their way up onto the Slope of Enlightenment, having plummeted with the meteoric rise-and-fall of Second Life. The correlations between the VR curve & the AR curve are not lost on those of us who’ve been tracking both.

Looking at augmented reality it’s clear that much of the hype, especially here in the US, has been driven by the relentless need of marketers to grab eyes in a world of on-rushing novelty, coupled with the embryonic rush of a young developer community trying to prove it can be done. And to the credit of the developers they’ve indeed demonstrated the basic concept and shown that AR works and has a future, but many implementations have entered the public marketplace as advertising gimmicks & hokey markups constrained by the limits of this nascent technology. While truly valuable & interesting work is happening in AR, particularly among university researchers, European factories, and the Dutch, the public mind only sees the gimmicks and the hype. As with virtual reality (now subtly re-branded as “virtual worlds”, as if they’re embarrassed by those heady days of hope & hype), augmented reality cannot possibly live up to all the expectations set for it in time to meet the immediate gratification needs of the marketplace. Evangelists, pundits, marketers, and advertisers all feed the hype cycle while the developers & strategists keep their heads down, toiling to plumb the foundations.

So if we accept that AR will necessarily pass through a PR & financial “Dark Night of the Soul” before reaching enlightenment, what then are the present challenges to the technology? Perhaps the largest barrier is the hardware. Using a cell phone to interrogate your surroundings is clearly of great value but it remains unclear which use cases benefit from rendering the results on the camera stream. Efforts like the Yelp Monocle are fun at first but the novelty quickly becomes overwhelmed by the challenges of the heavy-handed & occluding UI, the human interface (e.g. having to hold up your phone and “look” through it), and the need to have refined search, sort, and waypoint capabilities. Let’s not forget that the defining mythologies of AR – the sci-fi & cyberpunk visions of our expected futures – show augmented markups drawn heads-up on cool eyewear, in the near-term, and dance off of nanobots directly bonded to the optic stream in the far-term. Hyperbolic perhaps, but a fully-realized augmented reality must be a seamless, minimally intrusive and personally informative overlay on the world. AR will climb the Slope of Enlightenment with the help of a successful heads-up eyewear device capable of attracting significant market adoption. This is a challenge that cannot be met by the AR industry alone but depends on a Great White Hope like iShades or whatever offering the Jobs juggernaut may extrude in the next 3-5 years.

Hardware aside (there are challenges with cloud latency, GPS accuracy, and battery life, among others), the augmented reality stack has a ways to go before we can get to the type of standardization necessary to draw serious development. The current environment is as expected for such a young domain: balkanized platforms vying to become the first standard and fragmented developers playing with young SDKs or just building their own kits. There’s a lot of sandboxing and minimal coordination & collaboration. This is one of the reasons public virtual worlds went into decline, in my opinion. When you’re dealing with reality – or its virtual approximation – walls tend to present a lot of general problems while offering only a few very select solutions. When architecting augmented reality platforms it should be paramount that the open internet is the core model. AR is simply a way to draw the net out onto the phenomenal world. As such it needs a common set of standards. For example, the AR markup object is a fundamental component that will be used by all AR applications. How do you make it searchable, location-aware, federated, and able to share structured metadata? AR must work to enumerate the taxonomy of its architecture & experience models in order to begin working towards best practices & standards (some are already doing this, such as the ARML standard proposed by Mobilizy). This is the only way that experience & interface will be broadly adopted, and it’s the only way that a large enough development community will emerge to support the industry.
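To make the markup-object argument concrete, here’s a hypothetical Python sketch of such an object: geo-anchored, carrying structured metadata, with a naive location-aware search over a collection of them. Every class name and field here is invented for illustration; this is not ARML or any shipping spec:

```python
import math

class ARMarkup:
    """A hypothetical geo-anchored AR annotation with open metadata."""
    def __init__(self, title, lat, lon, meta=None):
        self.title = title
        self.lat, self.lon = lat, lon
        self.meta = meta or {}  # structured, shareable key/value metadata

def nearby(markups, lat, lon, radius_km):
    """Naive location-aware search: great-circle distance filter."""
    def dist(m):
        # Haversine formula, Earth radius ~6371 km.
        p1, p2 = math.radians(lat), math.radians(m.lat)
        dphi = math.radians(m.lat - lat)
        dlmb = math.radians(m.lon - lon)
        a = (math.sin(dphi / 2) ** 2 +
             math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))
    return [m for m in markups if dist(m) <= radius_km]

pois = [ARMarkup("Ferry Building", 37.7955, -122.3937),
        ARMarkup("Golden Gate Bridge", 37.8199, -122.4783)]
hits = nearby(pois, 37.7949, -122.3994, radius_km=1.0)
```

The point of standardization is that the searchable/location-aware/federated behavior would be defined once, by the spec, rather than re-invented (incompatibly) inside every AR browser.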

For augmented reality to make it through the Trough of Disillusionment it must formalize & standardize the core components for the visual, blended web. To this end companies like Layar and Metaio would be well-served by establishing strong partnerships and continuing to work with industry and civic bodies to understand exactly how AR can meet their needs. Likewise, working with the likes of IBM to build a visualization layer for the Smarter Planet. The marketing money will dry up, so it’s imperative that the young platform companies collaborate to coordinate the standards under the hood, freeing them up to differentiate by the unique experiences & services they build on top. This may seem inevitable (or impossible, depending on your half-cup disposition) but look at virtual worlds – another technology that might be stronger if there were common standards & open movement across experiences. How Second Life, for example, has survived is by the soft & hard science work of unaffiliated university & corporate researchers trying to push the platform to be more than just a fancy chat experience. (Notably, the present heartbeat of Second Life does not appear to be the result of its management efforts.) AR would benefit by seeding this type of humanities and scientific work as much as possible, anchoring the technology in the very real needs of our world. To work on stuff that matters, to crib from Tim O’Reilly.

Gartner has generally been correct in its Hype Cycle prognostications. The timeframe is debatable, of course, but the report is instructional, provocative, and often impacts the degree of funding that moves into tech. Virtual worlds are a valuable model for augmented reality. The emergent AR players would do well to study both its decline into the Trough and its eventual, hopeful rise to enlightenment. The good news (and the freaky news) is that Gartner’s 2010 Hype Cycle indicates that human augmentation & brain-computer interface are making headway up the Technology Trigger curve, suggesting that both will show significant market presence within 5 years. So it’s likely that the dream of augmented reality will come to be, perhaps carried on the back of these even younger and more ambitious technologies.

For whatever failings or false starts the pundits may heap on augmented reality, it’s just too useful to be left behind. We want to see the world for what it is, rich with data & paths & affinities & memory. Those of us invested in its success would do well to work together to curate its passage through the Dark Night of the Hype Cycle.

[UPDATE: Marc Slocum over at O'Reilly RADAR (greatest horizon-scanning name evar!) elicited a very interesting comment from Layar CEO, Raimo van der Klein: "So we don't see AR as virtual Points of Interests around you. We see it as the most impactful mobile content out there." In some ways this challenges my assumption that AR is about visualizing the net & blending it with the hard world.]

IPTV Will Legitimize Web Video

[Cross-posted from a piece I wrote for Hukilau.]

While discussing the recent success of indie web video shop Happy Little Guillotine Films in securing a million-dollar tie-in with 7-Eleven, Marc Hustvedt at Tubefilter notes the relative obscurity still visited upon the web series genre. Even Joss Whedon’s Dr. Horrible’s Sing-Along Blog scraped by on a $340,000 budget, he laments. Web TV, it seems, just can’t get enough investors excited about producing content.

But the problem isn’t a lack of compelling content. It’s that web video hasn’t been integrated into the primary consumption channel for serialized video entertainment. Viewership is scattered, fleeting, and uncertain. IPTV is going to change this. Yesterday’s announcement of the new Google web TV device heralds the onrushing age of internet-enabled television currently being built out by Google, Sony, Samsung, Philips and many others ready to grab video from YouTube, Hulu, Google, (Hukilau!) etc… and bring it right to your living room. Imagine Dr. Horrible in HD on your widescreen LCD with live IM chat, twitter feed overlay, and mobile alerts for new episodes, fan contests, and transmedia spin-offs, back-ended with analytics, sentiment analysis, and ad-profiling, cut up with on-the-fly capture & remixing… You get the idea.

While traditional TV networks struggle to get into the social media persuasion game, internet producers were born & bred leveraging social networks to grab eyes and build engaged fan bases. They’ll have a natural advantage in the set-top convergence.

Within 5 years many households will have upgraded to IPTV hardware and the browsing workflows will have been integrated. Viewers will more effectively search, filter, & share across the new media landscape, from traditional networks out into the long-tail of the web. Digital convergence in the wired living room will give web TV a huge lift in steady viewership and draw out increased investments in compelling, engaging, and ambitious stories from independent producers. IPTV invites the legions of independent talent to bring their stories & creations to the television audience. This will be incredibly disruptive.

[As an aside, keep an eye on Adobe's deal-making to get Flash as the standard interface layer for IPTVs.]

[Mike Elgan has a good post looking at some of the social opportunities with Smart TV.]

[Engadget notes how the competition has reacted to the Google TV announcement.]

[Also from Engadget, a really good overview - Google TV: Everything you wanted to know...]

[Seriously. Keep an eye on Adobe partnerships with cable co's...]

Apple’s iPad Offers Salvation to Beleaguered Media Publishers

While the chorus of hand-picked pre-release iPad reviewers has pretty roundly declared it just as magical as Steve Jobs told us it would be, and how the interface sweetly beckons the user into its experience before gently disappearing to reveal some new oddly-posthuman machine love affair, not a whole lot is being said about what this device means to content publishers. The naysayers deride, among oh so many niggling things, its flat file system, lack of HDMI output, no USB, no Flash support, and virtual uselessness as an authoring platform but, clearly, that’s not what it’s really meant for. As many have noted, the iPad is a device designed primarily for consumption.

More specifically (and more importantly to the publishing & distribution biz), the iPad is a shiny, friendly, closed & gated, DRM’d device for finding, purchasing, and consuming new media, all managed by the secure & reliable iTunes Store. The user gets what is arguably a faster, more intuitive, and compelling experience that will probably have them throwing gobs of money at the next generation of digital media. Publishers get a delivery target that is a de facto store with all the innate moral understanding about payment and value and theft that comes with that context. And consumers get the ability to search, find, purchase, and consume media in one single, engaging mobile device.

In the iPad frontier, it’s explicitly OK for publishers to charge users for content. They have a whole new platform in which to innovate experiences that upsell users from the last generation’s content. You loved The Beatles remasters? Well now you can get The Beatles remasters with HD multimedia interactive album copy & studio videos for only $22.95 an album!

It’s no wonder that Disney, ABC, the Wall Street Journal, Netflix, Conde Nast, Harper Collins, Simon & Schuster, Penguin, Marvel, and many, many others have rushed to the new platform to plant their flags and set up shop. Marvel basically set up its own comic store on the device, as Netflix has done with video. The Wall Street Journal has the perfect premium gateway for their subscription model. News & magazine publishers barely breathing after the beating they’ve taken since the web forced them to give away all their content for free must be droooooling over the opportunity to create the next generation of news experiences in a gated platform. Likewise for the book publishers finally reaching the new frontier of interactive digital content more compelling than paper books now lining so many remainder shelves like dusty word bricks. And arguably, the planet may be at least partially relieved of some of the paper and ink waste bloating landfills (we’ll overlook the as-of-yet unresolved energetic/carbon burden of dematerializing into electronic containers…).

While many of us have been beckoning the new era of open content, the major media publishers have been begging for the lockdown offered by the iPad. To them, the device promises both a new platform for innovating compelling content, extending their business opportunities into the future landscape at a time when they’ve been so stuck in the past, and it offers the security of a trusted gate for managing purchases and IP protection. It’s more of a nightmare for a lot of people but for the majors it has to be a dream come true. I can only assume that Steve et al worked closely with these interests to make sure they help build an impressive content catalog and a massive hype machine to drive as many new buyers to the iPad as they can. Apple knows that it sells a lot more product when it has the major distributors on its side and, at this point, the Old Media houses are pretty much powerless in Steve’s patented Reality Distortion Field.

Questions remain, of course. They’ve already sold over a million units in pre-sale but will the price point hold enough momentum to herald the new age of digital content consumption? Fanboys and early adopters are not enough to sustain a publishing revolution. Apple will probably drop the entry level price in another year or so after it’s stacked up a solid catalog of content. Will the content be good enough to merit the costs? The Wall Street Journal thinks people will pay $17 a month for their service. I wonder if more news sites will follow the lead of the Wall Street Journal and start locking down their web content..? And how long until all the content houses push back and want to extend distribution to the next gen of iPad competitors? Well, it hasn’t been much of a problem for iTunes & the iPod so far. That ecosystem, with plenty of would-be competitors, has kept music publishers pretty happy in a time of otherwise dismal CD returns. Will Apple’s DRM solution be enough to stem the blood loss from file sharing? Face it kids, piracy is a problem for the industry. And face it, industry: your recycled, top-40, tent pole, hedge fund, bloated, over-managed content production models are done. Get used to the long tail of compelling new media niche content that costs half as much as it used to.

Whatever you think about Apple, however much you hate them for being so good at manipulating the public narrative in their favor, however much you detest-and-secretly-admire their obsessive design principles, their ability to dismiss seemingly obvious functionality, their iron-fisted distribution management, and their cavalier “we don’t really worry about the business side” attitude towards their shareholders… Whatever. Apple has lined up pretty much the entire content industry, pointed them at a new playground, and guaranteed them a financial return on their efforts. Will it be enough to save their business in the face of the democratized world of free user content? The industry will abide and do its best to make compelling new content that’s only available on this very compelling new device.

[For a much more user-centered take, see Cory Doctorow's impassioned piece, Why I Won't Buy an iPad and Think You Shouldn't Either. Also see Joel Johnson's similarly impassioned counterpoint.]

[Andrew Keen summed it up nicely in this tweet: "my prediction: iPad will formalize chasm between Apple's high-end paid content model & Google's low-end free model. Adieu to mass media."]

[Quinn Norton discusses the Elephant in the room: the iPad is simply too expensive for most people.]

[Investor Howard Lindzon shows off the NASDAQ app w/ StockTwits support. Lovely UI!]

[Round-up of media brands currently on the iPad.]

Subcycle Multi-Touch Instrument Experiment

multi-touch the storm – interactive sound visuals – subcycle labs from christian bannister on Vimeo.

Christian Bannister, Subcycle Labs: “Things are starting to sound more song-like and I can really appreciate that. In previous builds everything sounded more like an experiment or a demo. Now I have something more akin to an experimental song.”

The Co-Evolution of Neuroscience & Computation

Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.
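One example of the graph-theoretic toolkit at work: the local clustering coefficient, which measures how interconnected a node’s neighbors are and helps distinguish organized networks from the more randomized, disconnected ones the dementia study describes. A minimal Python version over an adjacency dict (illustrative only, not the VU team’s pipeline):

```python
# Local clustering coefficient: of all possible links among a node's
# neighbors, what fraction actually exist? 1.0 means the neighborhood
# is fully interconnected; randomized networks tend toward low values.

def clustering(adj, node):
    """Clustering coefficient of `node` in an undirected graph given
    as an adjacency dict {node: set_of_neighbors}."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs
                if u < v and v in adj[u])
    return 2 * links / (k * (k - 1))

# A triangle (0-1-2) plus a pendant node 3 hanging off node 0.
g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Node 1’s neighbors (0 and 2) are connected to each other, so its coefficient is 1.0; node 0’s neighborhood includes the unconnected pendant, dragging its score down to 1/3. Averaged over a whole brain network, shifts in this kind of measure are what “disturbed functional organization” quantifies.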

The interdependence of the two fields has been most recently illustrated with the announcement of the Blue Brain Project which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain which will, in turn, yield roadmaps towards designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain computer interface. As the two sciences get closer and closer to each other, they will inevitably interact more directly and powerfully, as each domain adds value to the other and the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality by dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative towards programming our bodies, enabling us to exercise unprecedented levels of control and augmentation over our physical form, and to further reveal the fabric of mind.


Brain-Computer Interface

In my present tenure as a Visiting Researcher at the Institute for the Future I’ve been posting a lot of Signals pertinent to Brain-Computer Interface over at the Signtific open source research site. My Signals are listed under the tag “ProgrammableEverything”.

Check ‘em out if you’re interested in the fascinating & accelerating field of BCI. Also feel free to add your own Signals you see in the world or are engaging in your professional research.


Companies to Watch: IBM & SAP

In a time of monumental change it’s important to look at how the big players are adapting. Their moves are typically the most heavily researched and financed attempts at divining the underlying currents and capitalizing on the shifting technological marketplace. It’s especially interesting when conservative tech stalwarts like IBM & SAP suddenly start looking cool.

Both IBM & SAP are moving quickly into 3 of the most powerful trends in computing, each of which is driven by the enormous amounts of data being captured across all domains: business intelligence & modeling, stream computing, and sustainable systems analysis.

IBM’s new initiative A Smarter Planet states succinctly, “the planet will be instrumented, interconnected, intelligent.” This is a powerful statement from one of the largest and most technologically advanced companies in the world. They’re not just talking about business. IBM CEO Sam Palmisano speaks to the really large-scale planetary challenges in creating smart infrastructures for energy, water, transport, and data.

A key component is the recently announced System S project for supporting so-called Stream Computing.

System S is designed to perform real-time analytics using high-throughput data streams… to host applications that turn heterogeneous data streams into actionable intelligence… System S applications are able to take unstructured raw data and process it in real time.

“This is about what’s going to happen,” explains [director of high performance stream computing at IBM] Nagui Halim. “The thesis is that there are many signals that foreshadow what will occur if we have a system that is smart enough to pick them up and understand them. We tend to think it’s impossible to predict what’s going to happen; and in many cases it is. But in other cases there is a lot of antecedent information in the environment that strongly indicates what’s likely to be occurring in the future.”

With enough data you can start to create connections and patterns. With patterns you can derive meaning and ultimately be better enabled to make more accurate predictions. Since humans aren’t very well-adapted to processing large data sets, we build tools to handle the heavy lifting. Whether Wall Street indexes, ERP scenarios, government accounting, energy grid analysis, or dynamic climate models, serious hardware & software are required to process operational data into meaningful determinations and prescriptions.
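Halim’s point about antecedent signals can be made concrete with a toy example. The sketch below is purely illustrative (the function name and threshold logic are my own, not System S, which operates at vastly greater scale and sophistication): it watches a stream of readings and flags any value that breaks sharply from the recent pattern.

```python
from collections import deque

def stream_signal(events, window=5, threshold=2.0):
    """Toy sliding-window analytic: flag readings that deviate sharply
    from the recent average. A crude stand-in for detecting the
    'antecedent information' in a data stream that Halim describes."""
    recent = deque(maxlen=window)   # rolling window of past values
    alerts = []
    for t, value in events:
        if len(recent) == window:
            avg = sum(recent) / window
            if abs(value - avg) > threshold:
                alerts.append((t, value))  # pattern broken: a signal
        recent.append(value)
    return alerts

readings = [(0, 10.0), (1, 10.2), (2, 9.9), (3, 10.1), (4, 10.0),
            (5, 14.5), (6, 10.1)]
print(stream_signal(readings))  # → [(5, 14.5)]
```

The real work, of course, lies in doing this across heterogeneous, high-throughput streams in real time rather than over a list of floats.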

SAP has introduced the Clear New World initiative built on their Business Objects service architecture. Again, the notion is that businesses, enterprises, and even governments can run more efficiently when there is a free-flow of data and a suite of integrated services to crunch and render the info into meaningful contexts.

It’s time to build greater visibility, transparency, and accountability into the way your organization works. Because being clear allows timely and relevant information to be available when and where it is needed. Clarity demonstrates that your company is willing and able to stay accountable to key stakeholders. Clarity helps call out inefficiencies, reveal your best customers, create credible sustainability, and give your business the flexibility needed to anticipate and respond to a complex, ever-changing, global environment.

[See James Governor's recent post for more on how SAP & IBM are tackling enterprise sustainability.]

Note the statements about accountability to stakeholders & creating credible sustainability. Clear data & clear reporting. Now consider the latest announcement about SAP for Public Sector “to support the management and reporting of economic stimulus funds”. As a plugin to their Business Objects suite, this utility drafts on the trends towards open accountability and government transparency, often termed Gov 2.0, to provide support for determining just how stimulus money is being spent.

Both IBM and SAP have the power to execute effectively on these strategies, though it remains to be seen how enterprise spending will move to implement these services or if the companies will offer flexible licensing to LLCs working on the really challenging non-profit global issues. Likewise, SAP has suffered usability problems for years and their core object architecture is old and slow. They will need more than just branding and plugins to make a more transparent world.

Finally, it’s worth noting the branding for these projects. “A Smarter Planet” is a global posture indicating agency and identity on a planetary scale. This hints at the real deep trend across the human species towards a global sense of purpose and strategy. “Clear New World” acknowledges both the occlusions under which human endeavor has marched thus far and the great clarity of visibility we’re now gaining across all domains & enterprises, while admitting that indeed everything is changing and we are moving into a New World. The technology is stepping forward to help us more effectively manage the present and navigate into the unknown future. But of course like all foresight, it remains to be seen whether individuals will choose to act appropriately with the knowledge they come to possess…

Modeling & Superstructing

A core human competency is the capacity to model outcomes. This predictive ability has contributed to our successful growth as a species and provided the stage from which we extrude our technologies. We observe our world, log our experiences, and use this information to envision & plan our future possibilities. In the rush into tomorrow we’ve deputized machines to assist in our scenario modeling as our plans grow ever greater in scope.

Today we have tremendous amounts of data available about any system we wish to model. Drive platters are bulging into the terabytes just to store all of the information gathered by sensors, services, and empowered humans. Whether we study business networks, financial models, or natural systems, our awareness of their complexity has grown exponentially. Things are far wider and more interconnected than we could have imagined even 20 years ago.

All systems are sets of nodes with properties & variables that govern their behavior, coupled together by relational rules governing their interaction. The more complex a system, the more unique nodes and the more interconnections between nodes. Given the human constraint of holding only 6 or 7 unique objects in mind at any given time, it’s clear that we’re overwhelmed by even the relatively simple task of understanding, for example, a mid-size business structure well enough to predict its future, especially when you consider the business system itself as a single node embedded in a much larger global socio-economic system. Imagine the difficulties climate modelers face trying to document global circulatory systems…
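The nodes-plus-relational-rules view above can be sketched in a few lines of code. This is a minimal illustration under my own assumed names (`Node`, `couple`, `step`; nothing here comes from a real modeling library): each node carries local state, and one relational rule pulls coupled nodes toward the mean of their neighbors.

```python
class Node:
    """A system element with local state and couplings to other nodes."""
    def __init__(self, name, value):
        self.name = name
        self.value = value      # node-local property
        self.neighbors = []     # coupled nodes

def couple(a, b):
    # Relational rule: these two nodes now influence each other.
    a.neighbors.append(b)
    b.neighbors.append(a)

def step(nodes, rate=0.5):
    """One simulation tick: each node moves toward its neighbors' mean.
    Targets are computed first so update order doesn't matter."""
    targets = {n: sum(m.value for m in n.neighbors) / len(n.neighbors)
               for n in nodes if n.neighbors}
    for n, t in targets.items():
        n.value += rate * (t - n.value)

a, b = Node("a", 0.0), Node("b", 10.0)
couple(a, b)
step([a, b])
print(a.value, b.value)  # → 5.0 5.0
```

Two nodes are trivial; the point is that the node count and coupling density grow combinatorially, which is exactly why a mid-size business, let alone a climate, outruns unaided human modeling.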

One emerging strategy for modeling complex systems looks to software and the floating-point wonders enabled by Moore’s Law. Computers are phenomenally capable of managing the inconceivable numbers of operations necessary to begin modeling dynamic systems. Yet, until very recently one needed to book time on a supercomputer cluster to run weather models or robust behavioral analysis. Even today’s bleeding-edge hardware strains under the weight of such complexity. Research institutions have pursued natural systems modeling for some time and the business world has been paying attention. SAP now offers modeling capabilities with its business intelligence ERP solutions, enabling executives to run scenarios and envision possible outcomes of strategic decisions. Oracle recently acquired Hyperion, adding “performance management” to their suite of BI tools. You can bet these technologies will work their way into government & geopolitical protocols, as well as social & personal behavioral engineering as we increasingly track & model our lives.

Effectively, this pattern emulates the deeper shift from individual enterprise to collective collaborations. You can only model a complex system with another sufficiently complex system. However, even the most interesting algorithms are encumbered by the impositions of their logic: they can only be as creative as their authors wrote them to be. A second emerging strategy for modeling complex systems looks to deputize humans as processing nodes, crowdsourcing future possibilities across infinitely creative sets of minds. The Institute for the Future has taken this approach with its Signtific Lab and the Superstruct platform, leveraging the principles of gameplay to engage massive participation in envisioning scenarios.

The Superstruct games have drawn in thousands of players offering their thoughts & dreams of the future. Players become processing nodes for the chosen subject (e.g. “when augmented reality is everywhere”, or “when personal satellites are as easy to deploy as websites”) iterating across large sets of potential outcomes. From these inputs, patterns emerge showing trends with greater frequency & momentum among the collective. Perhaps even more interesting – and where the Superstruct method is more flexible than computational modeling – are the outliers that emerge from players. Many of the most compelling signals of the future are those that completely break from current patterns. Indeed, one of the most fundamental prevailing shifts in the global paradigm is that change is accelerating in ways we cannot even imagine.
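The trend-versus-outlier distinction above reduces, at its simplest, to a frequency tally. The sketch below is a toy reduction of the idea, not IFTF’s actual methodology (the function name and sample signals are invented for illustration): repeated submissions reveal collective momentum, while the singletons are the pattern-breaking outliers.

```python
from collections import Counter

def aggregate_signals(submissions):
    """Tally crowd-submitted future signals: frequent ones show
    collective momentum; one-offs are the outliers that may matter
    most precisely because they break from current patterns."""
    counts = Counter(submissions)
    trends = [s for s, c in counts.most_common() if c > 1]
    outliers = [s for s, c in counts.items() if c == 1]
    return trends, outliers

subs = ["AR everywhere", "personal satellites", "AR everywhere",
        "AR everywhere", "synthetic biology kitchens"]
trends, outliers = aggregate_signals(subs)
print(trends)    # the collective's high-momentum patterns
print(outliers)  # the break-from-pattern signals
```

Note the asymmetry with computational modeling: an algorithm can only rank what it was told to count, whereas a human crowd generates the categories themselves.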

These two approaches both consider complex systems & scenario modeling from architectures that themselves are complex, object-oriented systems. The programmatic approach brings heavyweight number-crunching to dynamic data streams, while the Superstructing approach offers wide-reaching creativity and human sensing. Augmenting one approach with the other will mark the next phase of predictive analysis necessary to safely navigate civilization through the future. Envisioning these scenarios and building compelling narratives around them will inevitably draw them into becoming.

Our lives are more & more complex and our enterprises & collaborations are commonly reaching global scales. The need to effectively model & predict is a fundamental human trait, reinforced in the face of escalating complexity in a hyper-connected, Read-Write world.