Category: cool tech

Adaptive, composable pools of compute – Gigaom Structure

Gigaom Structure

[My top-level notes from the Gigaom Structure conference (events.gigaom.com/structure-2014)…]

The big picture – affordable and easy
The Structure conference focused on the evolving territory of cloud infrastructure, highlighting some fundamental shifts in the industry. First, the enterprise has been challenged to overcome the cost, deployment, and management overhead of adoption. However, many emerging businesses are serving this need by making it easier to deploy and run these services. Now pretty much all enterprises understand the value of moving into either a private on-premise or public multi-tenant cloud (and there was much discussion about when to co-lo and when to go public). Adoption is further enabled by the price war between Amazon, Microsoft, and soon Google that has driven public cloud services to become more affordable.

… agile & elastic
The second big shift is in making networks more flexible, elastic, and agile. Services are now more easily deployed across abstraction layers like virtual machines, or modularized into containers. Both VMware and Docker had a strong presence at Structure and most talks had some refrain about the relative merits of one versus the other. Network hardware is softening or virtualizing altogether into SDN and NFV solutions. It’s much easier and cheaper to update software than it is to update hardware. In parallel, more machine intelligence is displacing both hardware and human IT resources, enabling efforts in self-optimizing networks (SON). All of this makes for networks that are sensing and responding to constantly changing conditions.

…composable pools of compute
Third, compute power has become a distributed commodity that is disaggregated, addressable, and composable from anywhere on the network. Hypervisors and containers become the means for addressing compute pools, with services stretched across these hardware-agnostic abstraction layers. Notably, there was much talk about how the Internet of Things will force a reconfiguration of networks as billions of devices come online, some of which require very low latency for their control loops. Pushing compute out to the edges where it’s needed for industrial IoT will spare the core from being overburdened by compute requests.

The Big Picture is starting to show a world awash in pools of computation and heterogeneous networks that are becoming more intelligent and adaptive.

Unicorns, Startups, and Giants: The new billion dollar dynamics of the digital landscape

In my day job I help companies navigate the roiling seas of change kicked up by digital transformation. My team at Orange Silicon Valley has just released a large report looking at billion-dollar valuations in tech, the strategic opportunities in pursuing adjacencies as an adaptive posture, and a forecast of tech sectors and macro trends unfolding in the next 6 years or so. I was the lead researcher and writer on this one – I’m especially fond of the sector overviews and macro forecast.

Here’s the press release from my parent company, Orange.

A new report from Orange Silicon Valley called Unicorns, Startups, and Giants: The New Billion Dollar Dynamics of the Digital Landscape shows that tech ‘Unicorns’ are becoming more than just billion-dollar start-up superstars or fodder for talk of bubbles. They are the new engines of disruption reshaping the competitive landscape.

And here’s the report – a map for business to navigate rapid change and align with the fundamentals: Unicorns, Startups, and Giants: The new billion dollar dynamics of the digital landscape

Quick Riffs on Autonomous Vehicles

This tweet got me riffing on potential outcomes & exploits available when autonomous vehicles become common:

I also “like” (or “find interesting”, in the Chinese proverbial sense) the idea of rogue agents seizing control over vehicular fleets to direct and coordinate their movements towards some sort of goal, e.g. assembling to bust a road barricade or defend a bank heist. Interesting times, indeed…

[Apologies/nods to Scott Smith.]


Valve signals hardware is the future of distribution

Gabe Newell, the co-founder and managing director at PC gaming powerhouse Valve Software, recently spoke with Kotaku about the shifting landscape of games distribution and his company’s move into the living room.

Ten years ago Valve established Steam as a primary distribution channel for its titles and add-on content. Just this month they’ve released Big Picture, establishing a foothold in the living room by essentially porting the Valve experience to the TV. With a new controller and interface, users can play games, stream content, and access Steam through Big Picture’s front-end.

Speaking to Kotaku, Newell suggested that Valve and other competitors will release custom branded hardware solutions for the living room within the next year. Users would be able to buy an official Valve gaming console (likely to be a lightweight PC or Linux device) and plug it into their TV. While this may seem surprising to many who have suggested that console gaming is in decline, Newell let slip the compelling hook for game developers.

“Well certainly our hardware will be a very controlled environment… If you want more flexibility, you can always buy a more general purpose PC. For people who want a more turnkey solution, that’s what some people are really gonna want for their living room.”

As content has dematerialized and gotten loose and slippery, content houses have been trying to figure out how to put the genie back in the bottle and retain control over their IP. Hardware offers such a controlled environment and, thanks in large part to Apple, hardware manufacturing is easier than it’s ever been. It wouldn’t be too surprising if, a few years down the road, Valve decides to lock down distribution completely by shunting all its users onto a low-priced piece of branded hardware. Plug it into your TV, launch Steam, and pull content direct from the Valve server farm.

Now imagine if they release Half Life 3 and you can only buy it through their hardware…

[Related: Hardware, the ugly stepchild of Venture Capital, is having a glamor moment]


Recent Notes on Reality Capture & 3D Printing

It may be symptomatic of our times, but the delta between weak signal & fast-moving trend seems to be getting shorter & shorter. Compelling innovations are bootstrapped rapidly into full-fledged solutions, enabling a highly efficient lab-to-home ecosystem. While it’s been percolating for years, consumer 3D printing only really landed on the hype cycle in the past 12 months or so, but in that time there have been considerable advances.


Me, in the Kinect point cloud (video still).

my home project: kinect hacking

Over the weekend I bought a Kinect and wired it to my Mac. I’m following the O’Reilly/Make book, Making Things See, by Greg Borenstein (who kindly tells me I’m an amateur machine vision researcher). With the book I’ve set up Processing, a Java-like coding environment, and the open source Kinect libraries, SimpleOpenNI & NITE. I’ve spent a good chunk of the weekend reading a hundred pages into the book, working through the project tutorials, and generating some interesting interactions and imagery. There’s also a ton of tutorial vids on YouTube, natch, to help cut through the weeds and whatnot.
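
For the curious, the first exercise in the book boils down to something like the sketch below – a minimal example written from memory, assuming the SimpleOpenNI library is installed in Processing, so treat it as illustrative rather than verbatim from the book:

```java
// Minimal Processing sketch: draw the Kinect's depth map to the screen.
// Assumes the SimpleOpenNI library has been added through Processing's library manager.
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();              // turn on the depth camera stream
}

void draw() {
  kinect.update();                   // pull the latest frames from the device
  image(kinect.depthImage(), 0, 0);  // render the grayscale depth image
}
```

From there the tutorials build up to point clouds and skeleton tracking, which is where the interesting interactions start.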


The State of Augmented Reality – ARE2012

Last week I attended and spoke at the Wednesday session of ARE2012, the SF Bay Area’s largest conference on augmented reality. This is the 3rd year of the conference and both the maturity of the industry and the cooling of the hype were evident. Attendance was lower than previous years, content was more focused on advertising & marketing examples, and there was a notable absence of platinum sponsors and top-tier enterprise attendees. On the surface this could be read as a general decline of the field but this is not the case.

A few things are happening to ferry augmented reality across the Trough of Disillusionment. This year there were more headset manufacturers than ever before. The need for AR to go hands-free is becoming more & more evident [my biases]. I saw a handful of new manufacturers I’d never even heard of before. And there they were with fully-functional hardware rendering annotations on transparent surfaces. In order for AR to move from content to utility it has to drive hardware development into HUDs. Google sees this, as does any other enterprise player in the mobile game. Many of the forward-looking discussions effectively assume a heads-up experience.

At the algorithmic level, things are moving quickly, especially in the domains of edge detection, face tracking, and registration. I saw some really exceptional mapping that overlaid masks on people’s faces in realtime, responding to movement & expressions without flickers or registration errors (except for the occasional super-cool New Aesthetic glitch when the map blurred off the user’s face if they moved too quickly). Machine vision is advancing at a strong pace, and there was an ongoing thread throughout the conference about the challenges the broader industry faces in moving facial recognition technology into the mobile stack. It’s already there and works, but the ethical and civil liberty issues are forcing a welcome pause for consideration.

Qualcomm was the sole platinum sponsor, promoting its Vuforia AR platform. Sony had a booth showing some AR games (Pong!?) on their PlayStation Vita device. But pretty much everyone in the enterprise tier stayed home, back in the labs and product meetings and design reviews, slowly & steadily moving AR into their respective feature stacks. Nokia is doing this, Google of course, Apple has been opening up the camera stream and patenting eyewear, HP is looking at using AR with Autonomy, even Pioneer has a Cyber Navi AR GPS solution. The same players that were underwriting AR conferences in exchange for marketing opportunities and the chance to poach young developers are now integrating the core AR stack into their platforms. This is both good & bad for the industry: good because it will drive standardization and put a lot of money behind innovation; bad because it will rock the world of the Metaios & Layars who have been tilling this field for years. Typically, as a young technology starts to gain traction and establish value, there follows a great period of consolidation as the big fish eat the little ones. Some succeed, many fall, and a new culture of creators emerges to develop for the winners.

So here we are. Augmented reality is flowing in three streams: content and marketing grab eyeballs and easy money while conditioning the market to expect these experiences; developers extend the software stack towards real-time, pixel-perfect recognition & mapping, enabling the solutions to actually, um, solve problems; and hardware manufacturers labor to bring AR into the many transparent surfaces through which we interact with the world, freeing our hands and augmenting our views with ubiquitous networked data. Across these domains sit content shops, emerging start-ups, the leading innovators à la Layar & Metaio, and the big-fish enterprise companies that have had a piece of the tech game for years & years and aren’t going to miss out if AR goes supernova. The market is a bit shaky and very much uncertain for the SMBs, but it’s certainly maturing with the technology.

My sense is that everybody gets that AR is here to stay and has deep intrinsic value to the future of mobility and interface. How this will impact the many passionate folks curating & cultivating the field from the bottom-up remains to be seen.

GRASP, General Robotics, & Swarming Drones

This truly fantastic video went around the interwebs last week, featuring the work of U Penn robotics lab, GRASP. Notably, the video shows groups of small quadrotors flying in formation, following paths, and generally exhibiting both autonomy and collective behavior.

Flush with defense moneys, the GRASP team are doing some pretty amazing work. A survey of their current projects could be the basis for hundreds of scifi novels yet many of them are right on the edge of reality. Research includes:

Haptography – haptic photography
Reasoning in reduced information space
HUNT – heterogeneous unmanned networked teams
Omni-directional vision
SUBTLE – situation understanding bot through language & environment
Modular robotics lab
SWARMS – scalable swarms of autonomous robots and mobile sensors

Peep their published lab papers for even deeper FutureNow goodness, e.g. “Multi-vehicle path planning in dynamically changing environments”.

Amon Tobin ISAM – Mixed-Media Sound & Projection Mapping

I saw Amon Tobin’s ISAM project a week ago at The Warfield theater in San Francisco. Literally jaw-dropping.

Visualizing ISAM from Leviathan on Vimeo.

Leviathan worked with frequent collaborator and renowned VJ Vello Virkhaus on groundbreaking performance visuals for electronic musician Amon Tobin, creating ethereal CG narratives and engineering the geometry maps for an entire stage of stacked cube-like structures. Taking the performance further, the Leviathan team also developed a proprietary projection alignment tool to ensure quick and accurate setup for the show, along with custom Kinect control & visualization utilities for Amon to command.

Signals, Challenges, & Horizons for Hands-Free Augmented Reality – ARE2011

Here’s the slidedeck from my recent talk at Augmented Reality Event 2011. I hope to post a general overview of the event soon, including some of the key trends that stood out for me in the space.

Augmented Reality Development Camp 2010 – Dec. 4th GAFFTA

ardc2010

I’m excited to announce that the second annual Bay Area Augmented Reality Developer’s Camp will be held on Saturday, December 4th, 2010, at the Gray Area Foundation For The Arts in San Francisco! This will be a free, open format, all-day unconference looking at the many aspects of Augmented Reality. We welcome hackers, developers, designers, product folks, biz devs, intellectuals, philosophers, tinkerers & futurists – anyone interested in this fascinating and revolutionary new technology.

If you’re interested please take a moment to sign up at the AR Dev Camp wiki.

Humanity+ Conference in Los Angeles, Dec. 4 & 5 at Caltech

If you like to live on the edge and gaze deeply into the future, a great collection of like-minded folks will be gathering to discuss the possible futures of humanity – and how to make them. From their site:

The Humanity+ at Caltech program will be divided into four main themed sessions. These sessions are:

* Re-Imagining Humans: Mind, Media and Methods
* Radically Increasing the Human Healthspan
* Redefining Intelligence: Artificial Intelligence, Intelligence Enhancement and Substrate-Independent Minds
* Business and Economy in the Era of Radical Technomorphosis

A wide range of interesting, professional speakers, from both the for-profit and non-profit worlds, will address each of the four themes.

Click through for the full program.

Is AR Ready for the Trough of Disillusionment?

hype cycle 2010

Gartner, one of the most trusted market analysis firms in the technology industry, just released its 2010 Hype Cycle Special Report. According to their research, augmented reality has entered the Peak of Inflated Expectations and will begin its slide down into the Trough of Disillusionment within the next year. To get a more visceral sense of what this means we can see that, again according to Gartner, public virtual worlds are just now at the bottom of the Trough, looking for innovations and revenues to claw their way up onto the Slope of Enlightenment, having plummeted with the meteoric rise-and-fall of Second Life. The correlations between the VR curve & the AR curve are not lost on those of us who’ve been tracking both.

Looking at augmented reality, it’s clear that much of the hype, especially here in the US, has been driven by the relentless need of marketers to grab eyes in a world of on-rushing novelty, coupled to the embryonic rush of a young developer community trying to prove it can be done. And to the credit of the developers, they’ve indeed demonstrated the basic concept and shown that AR works and has a future, but much of the implementation has entered the public marketplace as advertising gimmicks & hokey markups constrained by the limits of this nascent technology. While truly valuable & interesting work is happening in AR, particularly among university researchers, European factories, and the Dutch, the public mind only sees the gimmicks and the hype. As with virtual reality (now subtly re-branded as “virtual worlds”, as if they’re embarrassed by those heady days of hope & hype), augmented reality cannot possibly live up to all the expectations set for it in time to meet the immediate gratification needs of the marketplace. Evangelists, pundits, marketers, and advertisers all feed the hype cycle while the developers & strategists keep their heads down, toiling to plumb the foundations.

So if we accept that AR will necessarily pass through a PR & financial “Dark Night of the Soul” before reaching enlightenment, what then are the present challenges to the technology? Perhaps the largest barrier is the hardware. Using a cell phone to interrogate your surroundings is clearly of great value, but it remains unclear which use cases benefit from rendering the results on the camera stream. Efforts like the Yelp Monocle are fun at first, but the novelty quickly becomes overwhelmed by the challenges of the heavy-handed & occluding UI, the human interface (e.g. having to hold up your phone and “look” through it), and the need for refined search, sort, and waypoint capabilities. Let’s not forget that the defining mythologies of AR – the sci-fi & cyberpunk visions of our expected futures – show augmented markups drawn heads-up on cool eyewear in the near term, and rendered by nanobots bonded directly to the optic stream in the far term. Hyperbolic perhaps, but a fully-realized augmented reality must be a seamless, minimally intrusive, and personally informative overlay on the world. AR will climb the Slope of Enlightenment with the help of a successful heads-up eyewear device capable of attracting significant market adoption. This is a challenge that cannot be met by the AR industry alone but depends on a Great White Hope like iShades or whatever offering the Jobs juggernaut may extrude in the next 3-5 years.

Hardware aside (there are challenges with cloud latency, GPS accuracy, and battery life, among others), the augmented reality stack has a ways to go before we can get to the type of standardization necessary to draw serious development. The current environment is as expected for such a young domain: balkanized platforms vying to become the first standard, and fragmented developers playing with young SDKs or just building their own kits. There’s a lot of sandboxing and minimal coordination & collaboration. This is one of the reasons public virtual worlds went into decline, in my opinion. When you’re dealing with reality – or its virtual approximation – walls tend to present a lot of general problems while offering only a few very select solutions. When architecting augmented reality platforms it should be paramount that the open internet is the core model. AR is simply a way to draw the net out onto the phenomenal world. As such it needs a common set of standards. For example, the AR markup object is a fundamental component that will be used by all AR applications. How do you make it searchable, location-aware, federated, and able to share structured metadata? AR must work to enumerate the taxonomy of its architecture & experience models in order to begin working towards best practices & standards (some are already doing this, such as the ARML standard proposed by Mobilizy). This is the only way that experience & interface will be broadly adopted, and it’s the only way that a large enough development community will emerge to support the industry.
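
To make the “markup object” idea concrete, here is a purely hypothetical sketch – not ARML, and not any existing vendor’s schema, just an illustration of the handful of fields a shared standard would need to carry for an annotation to be searchable, location-aware, and federated:

```java
import java.util.Map;

// Hypothetical illustration of a common AR "markup object" -- the kind of
// minimal, shareable record a cross-platform standard might define.
public class ARMarkupObject {
    String id;            // globally unique, so annotations can be federated & deduplicated
    String publisher;     // who serves/owns the annotation (supports trust & filtering)
    double latitude;      // geo-anchor: where the annotation attaches to the world
    double longitude;
    double altitude;
    String contentUri;    // what to render: an image, 3D model, text card, etc.
    Map<String, String> metadata;  // structured metadata: tags, license, expiry, search terms
}
```

Whatever the eventual serialization (XML, JSON, or something else), the point is that every browser and platform should be able to index, filter, and render the same object.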

For augmented reality to make it through the Trough of Disillusionment it must formalize & standardize the core components for the visual, blended web. To this end, companies like Layar and Metaio would be well-served by establishing strong partnerships and continuing to work with industry and civic bodies to understand exactly how AR can meet their needs. Likewise, working with the likes of IBM to build a visualization layer for the Smarter Planet. The marketing money will dry up, so it’s imperative that the young platform companies collaborate to coordinate the standards under the hood, freeing them up to differentiate by the unique experiences & services they build on top. This may seem inevitable (or impossible, depending on your half-cup disposition), but look at virtual worlds – another technology that might be stronger if there were common standards & open movement across experiences. Second Life, for example, has survived largely through the soft & hard science work of unaffiliated university & corporate researchers trying to push the platform to be more than just a fancy chat experience. (Notably, the present heartbeat of Second Life does not appear to be the result of its management’s efforts.) AR would benefit by seeding this type of humanities and scientific work as much as possible, anchoring the technology in the very real needs of our world. To work on stuff that matters, to crib from Tim O’Reilly.

Gartner has generally been correct in its Hype Cycle prognostications. The timeframe is debatable, of course, but the report is instructional, provocative, and often impacts the degree of funding that moves into tech. Virtual worlds are a valuable model for augmented reality. The emergent AR players would do well to study both its decline into the Trough and its eventual, hopeful rise to enlightenment. The good news (and the freaky news) is that Gartner’s 2010 Hype Cycle indicates that human augmentation & brain-computer interface are making headway up the Technology Trigger curve, suggesting that both will show significant market presence within 5 years. So it’s likely that the dream of augmented reality will come to be, perhaps carried on the back of these even younger and more ambitious technologies.

For whatever failings or false starts the pundits may heap on augmented reality, it’s just too useful to be left behind. We want to see the world for what it is, rich with data & paths & affinities & memory. Those of us invested in its success would do well to work together to curate its passage through the Dark Night of the Hype Cycle.

[UPDATE: Marc Slocum over at O'Reilly RADAR (greatest horizon-scanning name evar!) elicited a very interesting comment from Layar CEO, Raimo van der Klein: "So we don't see AR as virtual Points of Interests around you. We see it as the most impactful mobile content out there." In some ways this challenges my assumption that AR is about visualizing the net & blending it with the hard world.]

IPTV Will Legitimize Web Video

[Cross-posted from a piece I wrote for Hukilau.]

While discussing the recent success of indie web video shop Happy Little Guillotine Films in securing a million-dollar tie-in with 7-Eleven, Marc Huvstedt at TubeFilter notes the relative obscurity still visited upon the web series genre. Even Joss Whedon’s Dr. Horrible’s Sing-Along Blog scraped by on a $340,000 budget, he laments. Web TV, it seems, just can’t get enough investors excited about producing content.

But the problem isn’t a lack of compelling content. It’s that web video hasn’t been integrated into the primary consumption channel for serialized video entertainment. Viewership is scattered, fleeting, and uncertain. IPTV is going to change this. Yesterday’s announcement of the new Google web TV device heralds the onrushing age of internet-enabled television currently being built out by Google, Sony, Samsung, Philips, and many others ready to grab video from YouTube, Hulu, Google, (Hukilau!) etc… and bring it right to your living room. Imagine Dr. Horrible in HD on your widescreen LCD with live IM chat, twitter feed overlay, and mobile alerts for new episodes, fan contests, and transmedia spin-offs, back-ended with analytics, sentiment analysis, and ad-profiling, cut up with on-the-fly capture & remixing… You get the idea.

While traditional TV networks struggle to get into the social media persuasion game, internet producers were born & bred leveraging social networks to grab eyes and build engaged fan bases. They’ll have a natural advantage in the set-top convergence.

Within 5 years many households will have upgraded to IPTV hardware and the browsing workflows will have been integrated. Viewers will more effectively search, filter, & share across the new media landscape, from traditional networks out into the long-tail of the web. Digital convergence in the wired living room will give web TV a huge lift in steady viewership and draw out increased investments in compelling, engaging, and ambitious stories from independent producers. IPTV invites the legions of independent talent to bring their stories & creations to the television audience. This will be incredibly disruptive.

[As an aside, keep an eye on Adobe's deal-making to get Flash as the standard interface layer for IPTVs.]

[Mike Elgin has a good post looking at some of the social opportunities with Smart TV.]

[Engadget notes how the competition has reacted to the Google TV announcement.]

[Also from Engadget, a really good overview - Google TV: Everything you wanted to know...]

[Seriously. Keep an eye on Adobe partnerships with cable co's...]

Apple’s iPad Offers Salvation to Beleaguered Media Publishers

While the chorus of hand-picked pre-release iPad reviewers has pretty roundly declared it just as magical as Steve Jobs told us it would be, and how the interface sweetly beckons the user into its experience before gently disappearing to reveal some new oddly-posthuman machine love affair, not a whole lot is being said about what this device means to content publishers. The naysayers deride, among oh so many niggling things, its flat file system, lack of HDMI output, no USB, no Flash support, and virtual uselessness as an authoring platform but, clearly, that’s not what it’s really meant for. As many have noted, the iPad is a device designed primarily for consumption.

More specifically (and more importantly to the publishing & distribution biz), the iPad is a shiny, friendly, closed & gated, DRM’d device for finding, purchasing, and consuming new media, all managed by the secure & reliable iTunes Store. The user gets what is arguably a faster, more intuitive, and compelling experience that will probably have them throwing gobs of money at the next generation of digital media. Publishers get a delivery target that is a de facto store with all the innate moral understanding about payment and value and theft that comes with that context. And consumers get the ability to search, find, purchase, and consume media in one single, engaging mobile device.

In the iPad frontier, it’s explicitly OK for publishers to charge users for content. They have a whole new platform in which to innovate experiences that upsell users from the last generation’s content. You loved The Beatles remasters? Well now you can get The Beatles remasters with HD multimedia interactive album copy & studio videos for only $22.95 an album!

It’s no wonder that Disney, ABC, the Wall Street Journal, Netflix, Conde Nast, Harper Collins, Simon & Schuster, Penguin, Marvel, and many, many others have rushed to the new platform to plant their flags and set up shop. Marvel basically set up its own comic store on the device, as Netflix has done with video. The Wall Street Journal has the perfect premium gateway for their subscription model. News & magazine publishers, barely breathing after the beating they’ve taken since the web forced them to give away all their content for free, must be droooooling over the opportunity to create the next generation of news experiences in a gated platform. Likewise for the book publishers finally reaching the new frontier of interactive digital content more compelling than the paper books now lining so many remainder shelves like dusty word bricks. And arguably, the planet may be at least partially relieved of some of the paper and ink waste bloating landfills (we’ll overlook the as-yet unresolved energetic/carbon burden of dematerializing into electronic containers…).

While many of us have been beckoning the new era of open content, the major media publishers have been begging for the lockdown offered by the iPad. To them, the device promises a new platform for innovating compelling content, extending their business opportunities into the future landscape at a time when they’ve been so stuck in the past, and it offers the security of a trusted gate for managing purchases and IP protection. It’s more of a nightmare for a lot of people, but for the majors it has to be a dream come true. I can only assume that Steve et al. worked closely with these interests to make sure they help build an impressive content catalog and a massive hype machine to drive as many new buyers to the iPad as they can. Apple knows that it sells a lot more product when it has the major distributors on its side and, at this point, the Old Media houses are pretty much powerless in Steve’s patented Reality Distortion Field.

Questions remain, of course. They’ve already sold over a million units in pre-sale but will the price point hold enough momentum to herald the new age of digital content consumption? Fanboys and early adopters are not enough to sustain a publishing revolution. Apple will probably drop the entry level price in another year or so after it’s stacked up a solid catalog of content. Will the content be good enough to merit the costs? The Wall Street Journal thinks people will pay $17 a month for their service. I wonder if more news sites will follow the lead of the Wall Street Journal and start locking down their web content..? And how long until all the content houses push back and want to extend distribution to the next gen of iPad competitors? Well, it hasn’t been much of a problem for iTunes & the iPod so far. That ecosystem, with plenty of would-be competitors, has kept music publishers pretty happy in a time of otherwise dismal CD returns. Will Apple’s DRM solution be enough to stem the blood loss from file sharing? Face it kids, piracy is a problem for the industry. And face it, industry: your recycled, top-40, tent pole, hedge fund, bloated, over-managed content production models are done. Get used to the long tail of compelling new media niche content that costs half as much as it used to.

Whatever you think about Apple, however much you hate them for being so good at manipulating the public narrative in their favor, however much you detest-and-secretly-admire their obsessive design principles, their ability to dismiss seemingly obvious functionality, their iron-fisted distribution management, and their cavalier “we don’t really worry about the business side” attitude towards their shareholders… Whatever. Apple has lined up pretty much the entire content industry, pointed them at a new playground, and guaranteed them a financial return on their efforts. Will it be enough to save their business in the face of the democratized world of free user content? The industry will abide and do its best to make compelling new content that’s only available on this very compelling new device.

[For a much more user-centered take, see Cory Doctorow's impassioned piece, Why I Won't Buy an iPad and Think You Shouldn't Either. Also see Joel Johnson's similarly impassioned counterpoint.]

[Andrew Keen summed it up nicely in this tweet: "my prediction: iPad will formalize chasm between Apple's high-end paid content model & Google's low-end free model. Adieu to mass media."]

[Quinn Norton discusses the Elephant in the room: the iPad is simply too expensive for most people.]

[Investor Howard Lindzon shows off the NASDAQ app w/ StockTwits support. Lovely UI!]

[Round-up of media brands currently on the iPad.]

Subcycle Multi-Touch Instrument Experiment

multi-touch the storm – interactive sound visuals – subcycle labs from christian bannister on Vimeo.

Christian Bannister, Subcycle Labs: “Things are starting to sound more song-like and I can really appreciate that. In previous builds everything sounded more like an experiment or a demo. Now I have something more akin to an experimental song.”

The Co-Evolution of Neuroscience & Computation


Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.
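
As a toy illustration of the kind of graph-theoretic measure involved (this is not the researchers' code, just a sketch of one common metric): a network's average clustering coefficient captures how tightly each node's neighbors are interconnected, and it is exactly the sort of number that drops as a network becomes more randomized.

```java
import java.util.ArrayList;
import java.util.List;

// Toy example: average clustering coefficient of an undirected graph,
// one standard measure for comparing "ordered" vs. "randomized" networks.
public class ClusteringDemo {

    static double averageClustering(boolean[][] adj) {
        int n = adj.length;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            List<Integer> nbrs = new ArrayList<>();
            for (int j = 0; j < n; j++)
                if (i != j && adj[i][j]) nbrs.add(j);   // collect neighbors of node i
            int k = nbrs.size();
            if (k < 2) continue;                        // coefficient treated as 0
            int links = 0;
            for (int a = 0; a < k; a++)
                for (int b = a + 1; b < k; b++)
                    if (adj[nbrs.get(a)][nbrs.get(b)]) links++;  // edges among the neighbors
            sum += (2.0 * links) / (k * (k - 1));
        }
        return sum / n;
    }

    public static void main(String[] args) {
        // Tiny 4-node network: a triangle (0-1-2) with node 3 attached to node 0.
        boolean[][] adj = {
            {false, true,  true,  true },
            {true,  false, true,  false},
            {true,  true,  false, false},
            {true,  false, false, false}
        };
        System.out.println("average clustering = " + averageClustering(adj));
    }
}
```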

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.

The interdependence of the two fields has been most recently illustrated with the announcement of the Blue Brain Project which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain which will, in turn, yield roadmaps towards designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain computer interface. As the two sciences get closer and closer to each other, they will inevitably interact more directly and powerfully, as each domain adds value to the other and the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality by dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative towards programming our bodies, enabling us to exercise unprecedented levels of control and augmentation over our physical form, and to further reveal the fabric of mind.


Brain-Computer Interface

In my present tenure as a Visiting Researcher at the Institute for the Future I’ve been posting a lot of Signals pertinent to Brain-Computer Interface over at the Signtific open source research site. My Signals are listed under the tag “ProgrammableEverything”.

Check ‘em out if you’re interested in the fascinating & accelerating field of BCI. Also feel free to add your own Signals you see in the world or are engaging in your professional research.

Cheers!