
Running With Machine Herds

Continuing its annual tradition of walking the line between genuine social goodness and highfalutin’ techno-utopianism, the TED2013 conference kicked off this week in Los Angeles. Gathering some of the brighter minds and better-heeled benefactors, the conference invites attendees to tease apart the phase space of possibility and take a closer look at how we consciously examine and intentionally evolve our world. Among the many threads and themes, one in particular tugs deeply at both aspirational humanism and existential terror.


Quick Riffs on Autonomous Vehicles

This tweet got me riffing on potential outcomes & exploits available when autonomous vehicles become common:

I also “like” (or “find interesting”, in the Chinese proverbial sense) the idea of rogue agents seizing control over vehicular fleets to direct and coordinate their movements towards some sort of goal, e.g. assembling to bust a road barricade or defend a bank heist. Interesting times, indeed…

[Apologies/nods to Scott Smith.]


GRASP, General Robotics, & Swarming Drones

This truly fantastic video went around the interwebs last week, featuring the work of U Penn robotics lab, GRASP. Notably, the video shows groups of small quadrotors flying in formation, following paths, and generally exhibiting both autonomy and collective behavior.

Flush with defense moneys, the GRASP team is doing some pretty amazing work. A survey of their current projects could seed hundreds of scifi novels, yet many of them sit right on the edge of reality. Research includes:

Haptography – haptic photography
Reasoning in reduced information space
HUNT – heterogeneous unmanned networked teams
Omni-directional vision
SUBTLE – situation understanding bot through language & environment
Modular robotics lab
SWARMS – scalable swarms of autonomous robots and mobile sensors

Peep their published lab papers for even deeper FutureNow goodness, e.g. “Multi-vehicle path planning in dynamically changing environments”.
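For a taste of how compact the core of collective motion can be, here is a minimal boids-style flocking sketch in Python, with cohesion, separation, and alignment rules producing group behavior. It’s an illustration of the genre, not GRASP’s actual algorithms, and every parameter is invented:

```python
# Minimal boids-style flocking: cohesion, separation, alignment.
# All radii and weights below are illustrative, not from any GRASP paper.
import numpy as np

N = 20
rng = np.random.default_rng(1)
pos = rng.random((N, 2)) * 10.0      # positions on a 10x10 plane
vel = rng.normal(size=(N, 2)) * 0.1  # small random initial velocities

def step(pos, vel, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]                        # vectors to all agents
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < 2.0)            # neighbors within radius 2
        if not near.any():
            continue
        cohesion = offsets[near].mean(axis=0)         # pull toward local center
        alignment = vel[near].mean(axis=0) - vel[i]   # match neighbors' heading
        crowd = (dists > 0) & (dists < 0.5)           # push away from crowding
        separation = -offsets[crowd].sum(axis=0) if crowd.any() else np.zeros(2)
        new_vel[i] += dt * (0.5 * cohesion + 1.5 * separation + 0.3 * alignment)
    return pos + dt * new_vel, new_vel

for _ in range(200):   # run the flock forward
    pos, vel = step(pos, vel)
```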

Machine Aesthetics Video – Robot Readable World

BERG creative director, Timo Arnall, has published a video collecting “found machine-vision footage”. In his words:

How do robots see the world? How do they gather meaning from our streets, cities, media and from us? This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye.

I think it gets particularly poignant about 4 minutes in, when the face tracking & recognition alphas turn human TV hosts into odd, simplified caricatures, at once de-humanizing the hosts and betraying the limited sophistication of the machines, like children trying to capture the world in colorful crayons. Bonus points for the creeping irony of machines learning about humans through TV.

Robot readable world from Timo on Vimeo.
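For a hands-on sense of where that crayon-sketch aesthetic comes from, here is a minimal face-tracking sketch (mine, not Arnall’s) using OpenCV’s stock Haar-cascade detector; the input clip name is a placeholder:

```python
# Run OpenCV's stock frontal-face Haar cascade over a video and draw the
# boxy, simplified "robot eye" view. The input file is hypothetical.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("tv_hosts.mp4")   # stand-in clip name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for candidate faces
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("robot eye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```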

A Few More Notes on Machine Aesthetics

Olympus glitch, from Year of the Glitch

Scott Smith has a nice article about Our Complicated Love-Hate Relationship With Robots, exploring how robotics has been seeping into the public dialog of late. A couple of the links he cites were good reminders of previous work looking at the aesthetics of machine perception, notably Sensor-Vernacular from the fine folks at BERG and The New Aesthetic Tumblr by James Bridle.

If humanity is a reflection on the experience of perceiving and interacting with the world, what role does machine perception play in this experience? And if nature acts through our hands, to what ends are flocking drones and herds of autonomous machines? A taxonomy of machine perception seems necessary to understand the many ways in which the world can be experienced.

New Aesthetics of Machine Vision

I’ve grown fascinated by the technology of machine vision, but even more so by the haunting aesthetics captured through machine eyes. There’s something deeply enthralling and existentially disruptive about the emergence of autonomous machines into our shared world, watching us, learning about us, and inevitably interacting with each other. It’s as if a new inorganic branch of the taxonomic tree is evolving into being. Anyway, two recent notes on this topic…

The first is this short series of images taken from a UAV and featured in the ACLU report, Protecting Privacy From Aerial Surveillance [PDF]. There’s a decent summary of the report at the New York Times.

Makes me think of Ian McDonald’s excellent novel, Brasyl, and the ad hoc induction of Our Lady of Perpetual Surveillance into the extended canon of casual Orishas.

The second item of note is this haunting video of a 3D Scanner wandering the streets of Barcelona. It’s not any sort of smart machine – it’s just a dumb handheld scanner hitching a ride on a creative human – but it again evokes the aesthetic of a world seen through eyes very different from our own. The video really grabs me about a minute in:

alley posts from James George on Vimeo.

It seems to show a bizarre ghost world or a glimpse from another dimension into ours. The aesthetic (and the tech) is similar to LIDAR, which I had the luck to play around with a couple years ago – and which Radiohead employed to a very interesting end:

In some ways, I want to see these visions as analogous to the view through a wolf’s eyes in the ’80s flick, Wolfen (at 0:24 in this trailer):

Seeing through the eyes of machines isn’t especially new, but it’s the awareness of the many adjacent, convergent technologies of pattern recognition, data analysis, biometrics, autonomous navigation, swarming algorithms, and AI that adds pressure to the long-held notion that machines might someday walk our world of their own accord. That day seems much closer than ever before, so it’s fascinating to watch the new aesthetics of machine vision move into the popular domain.
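For the technically curious, the geometry behind that ghost-world look is simple back-projection: every depth pixel gets pushed through the pinhole camera model into 3D space. A toy sketch, with made-up camera intrinsics and a random depth map standing in for a real scan:

```python
# Back-project a depth image into a 3D point cloud via the pinhole model,
# roughly what handheld scanners and LIDAR-style rigs do to produce that
# ghost-world geometry. Intrinsics and depth data here are invented.
import numpy as np

fx = fy = 525.0          # assumed focal lengths (pixels)
cx, cy = 320.0, 240.0    # assumed principal point

depth = np.random.rand(480, 640) * 5.0   # stand-in depth map, meters

v, u = np.indices(depth.shape)           # pixel row/col coordinate grids
z = depth
x = (u - cx) * z / fx                    # back-project through intrinsics
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # N x 3 point cloud
```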

Failed Reality & Drone Ethnography

Two of the most interesting articles I’ve read this past week:

Reality as failed state

I believe part of the meta-problem is this: people no longer inhabit a single reality.

Collectively, there is no longer a single cultural arena of dialogue.

What many techno-scientists fail to understand – and thus find most frustrating – about dealing with climate change deniers is that the denier has no real interest in engaging at the scientist’s level of reality.

The point, for the climate denier, is not that the truth should be sought with open-minded sincerity – it is that he has declared the independence of his corner of reality from control by the overarching, techno-scientific consensus reality. He has withdrawn from the reality forced upon him and has retreated to a more comfortable, human-sized bubble.

…And all this is but one example of the ways in which the traditional ideological blocs of the Cold War have fragmented into complex multipartite civil reality wars.

Reality, you might say, as failed state; its interior collapsing into permanent conflict under the convergent pressures of deviant globalisation, its coasts predated upon by new mutant forms of memetic pirates.

Drone Ethnography

All of us that use the internet are already practicing Drone Ethnography. Look at the features of drone technology: Unmanned Aerial Vehicles (UAV), Geographic Information Systems (GIS), Surveillance, Sousveillance. Networks of collected information, over land and in the sky. Now consider the “consumer” side of tech: mapping programs, location-aware pocket tech, public-sourced media databases, and the apps and algorithms by which we navigate these tools. We already study the world the way a drone sees it: from above, with a dozen unblinking eyes, recording everything with the cold indecision of algorithmic commands honed over time, affecting nothing—except, perhaps, a single, momentary touch, the momentary awareness and synchronicity of a piece of information discovered at precisely the right time. An arc connecting two points like the kiss from an air-to-surface missile.

The Co-Evolution of Neuroscience & Computation


Image from Wired Magazine.

[Cross-posted from Signtific Lab.]

Researchers at VU University Medical Center in Amsterdam have applied the analytic methods of graph theory to analyze the neural networks of patients suffering from dementia. Their findings reveal that brain activity networks in dementia sufferers are much more randomized and disconnected than in typical brains. "The underlying idea is that cognitive dysfunction can be illustrated by, and perhaps even explained by, a disturbed functional organization of the whole brain network", said lead researcher Willem de Haan.

Of perhaps deeper significance, this work shows the application of network analysis algorithms to the understanding of neurophysiology and mind, suggesting a similarity in functioning between computational networks and neural networks. Indeed, the research highlights the increasing feedback between computational models and neural models. As we learn more about brain structure & functioning, these understandings translate into better computational models. As computation is increasingly able to model brain systems, we come to understand their physiology more completely. The two modalities are co-evolving.
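For a flavor of the approach (my sketch, not the VU team’s code), the canonical small-world measures are the clustering coefficient and the characteristic path length, computed here with networkx on synthetic stand-in networks:

```python
# Compare graph-theoretic measures on a clustered "small-world" network
# versus an edge-matched random graph. Both networks are synthetic.
import networkx as nx

brain = nx.watts_strogatz_graph(n=64, k=6, p=0.1)          # structured stand-in
randomized = nx.gnm_random_graph(64, brain.number_of_edges(), seed=1)

for name, g in [("structured", brain), ("randomized", randomized)]:
    if not nx.is_connected(g):
        # path length is only defined on a connected graph; use the
        # largest component as a simple workaround
        g = g.subgraph(max(nx.connected_components(g), key=len))
    C = nx.average_clustering(g)            # local clustering coefficient
    L = nx.average_shortest_path_length(g)  # characteristic path length
    print(f"{name}: clustering={C:.3f}, path length={L:.3f}")
```

Randomizing the edges drives clustering down, which is the qualitative signature the dementia study reports.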

The interdependence of the two fields has been most recently illustrated by the announcement of the Blue Brain Project, which aims to simulate a human brain within 10 years. This ambitious project will inevitably drive advanced research & development in imaging technologies to reveal the structural complexities of the brain, which will, in turn, yield roadmaps towards designing better computational structures. This convergence of computer science and neuroscience is laying the foundation for an integrative language of brain-computer interface. As the two sciences draw closer together, they will inevitably interact more directly and powerfully, each domain adding value to the other as the barriers to integration erode.

This feedback loop between computation and cognition is ultimately bringing the power of programming to our brains and bodies. The ability to create programmatic objects capable of executing tasks on our behalf has radically altered the way we extend our functionality, dematerializing technologies into more efficient, flexible, & powerful virtual domains. This shift has brought an unprecedented ability to iterate information and construct hyper-technical objects. The sheer adaptive power of these technologies underwrites the imperative towards programming our bodies, enabling us to exercise unprecedented levels of control and augmentation over our physical form, and further revealing the fabric of mind.


The Revolution is Being Twittered – Tehran is Connected


Image by .faramarz.

“The purpose of this guide is to help you participate constructively in the Iranian election protests through twitter.” So opens the #iranelection cyber war guide for beginners, just posted today and widely distributed across the web through Twitter. The guide continues with precise information about which behaviors and syntaxes on Twitter are now being watched by the Iranian security apparatus; which hashtags are legitimate and which are state honey pots used to identify and block IPs; how to pass new open proxies to those within the Tehran resistance; and smart guidelines for those considering launching denial-of-service attacks on state websites. The author has compiled a succinct guidebook to help non-Iranians across the globe better help those in Iran who are trying to ensure that these events are not hidden from the eyes of the world.

The guide closes with: “Please remember that this is about the future of the Iranian people, while it might be exciting to get caught up in the flow of participating in a new meme, do not lose sight of what this is really about.” To me, this is about the future of all people.

As Clay Shirky noted, the events in Tehran mark a hugely important historic moment. Under an old theocratic and belligerent rulership, the modernist progressives of Iran’s urban center, Tehran, are using mobile communications and social networks to bypass the State and reach out to the world. Ahmadi-nejad’s swiftly imposed net blackout has failed against the ingenuity of tech-enabled university students and the eagerness of sympathetic geeks across the world to help fight The Man (in this case, the authoritarian and repressive regime of the Ayatollah, the Guardian Council, and President Ahmadi-nejad). This marks a large state change in global power dynamics. In an age moving rapidly towards ubiquitous networked mobile computing, transparency and representation are the emerging foundations of civilization, simultaneously empowering the principles of Democracy while de-legitimizing the very notion of the State.

Perhaps even more surprising is the critical role of Twitter as the de facto global, real-time, open communication and collaboration channel. Using SMS, every mobile phone user on the planet has the ability to message Twitter and reach out to a global network. Twitter’s architecture guarantees an exponential distribution of information, and its lack of public shareholders allows it to take a more humanitarian posture. Protesters in Tehran were getting messages to high-value nodes like Stephen Fry, John Perry Barlow, and William Gibson, who then retweeted the messages to hundreds of thousands of their followers. By Monday #iranelection was the #1 trending term across Twitter and has stayed there since. Twitter is the primary channel for information coming in and out of Tehran regarding the contested election of its president – in a critical Middle Eastern Islamic nation, oil-rich, with an aggressive posture towards the US and its allies, and poised on the brink of becoming a fully nuclear state. The out-of-left-field social networking phenomenon has been so valuable to the goals of US interests in Iran that the U.S. State Department requested that Twitter postpone its scheduled service downtime.
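That exponential claim is easy to sanity-check as a simple branching process: if each retweet reaches r followers, each of whom retweets with probability p, the audience multiplies by r*p at every hop. A back-of-the-envelope sketch with invented numbers:

```python
# Expected retweet-cascade size as a branching process. The values of
# r, p, and hops are invented purely for illustration.
def cascade_size(r=200, p=0.01, hops=6):
    reached, wave = 0.0, 1.0
    for _ in range(hops):
        wave *= r * p        # expected retweeters in the next wave
        reached += wave
    return reached

print(cascade_size())        # r*p = 2 per hop -> ~126 expected retweeters
```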

The regime is now evicting reporters from Iran. The challenger, Moussavi, is likely not much different from Ahmadi-nejad. Both are pre-approved by the Ayatollah and Guardian Council. The pro-Moussavi population wants to see voting irregularities investigated and their “moderate” candidate approved as president. Tehran’s tech-savvy are redefining the fundamental relationship between people and governments. All power structures should be watching the events in Tehran and across the web. The people are getting smarter and bolder.

This is the age of empowered collectives striding across a globalized, hyper-connected world. In a virtualized information space, borders are less meaningful and countries are loose contextual buckets through which people interact. The swift assistance provided by western techies is not really about the US helping Iran, it’s about good, aspirational people trying to help other good, aspirational people. The playing field is leveling as humanity learns more and more about itself, overcoming fear and stereotypes and ignorance simply by communicating more effectively.

There will be a reaction as states work to retain power, upping their game to adapt to the new tech. And there will be darker consequences of these new tools as the All-Seeing Digital Eye rises over the land. We struggle now to free information, but the next big struggle may be to secure it. All coins have two sides and all technologies will be bent to human will. Hopefully we’re all getting a little bit better at cooperating with each new day.

***This was written in a bit of a rush before I jet. Here are a couple more links:
Here’s a list of good info links.
Lyn Jeffery of IFTF writes Field Notes from the Iran Twitter Stream.
SF Gate article: SF Techie Stirs Iranian Protests.
Jamais Cascio: The Dark Side of Twittering a Revolution.
And Hillary Clinton Defends Twitter Efforts for Iran.

Another Rant: On the Cloud, Augmented Reality, & the Networked World

[This is a reply I left recently to a Global Futures question about the near-future of the web. It goes a little off-topic at the end but such is the risk of systems analysis. Everything's connected.]

Within 10-15 years mobile devices will constantly interact with the world around us, analyzing objects, faces, signage, locations, and anything else their sensors can engage. Camera viewfinders will identify visual sources using algorithms to match them up with cloud data repositories. Bluetooth and GPS will interact on sub-channels silently exchanging relationships with embedded sensors across devices and objects. A user’s mobile device will become their IP address hosting much of their profile information and mediating relationships across social nets, commercial transactions, security clearances, and the array of increasingly smart objects and devices.

Cloud access and screen presence will be nearly ubiquitous, further blurring the line between desktop, laptop, server, mobile devices, and the objects in our world. It will all be screens interfacing between data, objects, and humans. Amidst the overwhelming data/content glut, we will outsource mathematical chores to cloud agents dedicated to scraping data and filtering the bits that are pertinent to our personalized affinities and needs. These data streams will be highly dynamic, and cloud agents will send them to rich media layers that will render the results in comprehensible and meaningful displays.
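A toy sketch of one of these filtering agents, just to make the idea concrete; the affinity profile, weights, and feed below are all invented:

```python
# A "cloud agent" reduced to its essence: score each item in a datastream
# against a personal affinity profile and pass along only the pertinent bits.
AFFINITIES = {"drones": 2.0, "swarm": 1.5, "lidar": 1.0, "espresso": 0.5}

def score(item: str) -> float:
    words = item.lower().split()
    return sum(w for k, w in AFFINITIES.items() if k in words)

def filter_stream(stream, threshold=1.0):
    for item in stream:
        if score(item) >= threshold:
            yield item   # hand the pertinent bits on to a display layer

feed = ["swarm of drones over the bay", "cat photo", "lidar art in Barcelona"]
for hit in filter_stream(feed):
    print(hit)
```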

The human sensorium and its interaction with reality will be highly augmented through mobile devices that layer rich information over the world around us. The digital world will move heavily into the natural analog world as the boundaries between the two further erode. This will be readily apparent in the increasing amount of communication we will receive from appliances, vehicles, storefronts, other people, animals, and even plants all wired to the cloud. Meanwhile, cloud agents will sort through vast amounts of human behavioral information creating smart profiles and socioeconomic and environmental systems models with incredible complexity and increasing predictive ability. The cloud itself will be made more intelligible to agents by the standardization of semantic web protocols implemented into most new sites and services. Agents will concatenate to tie services together into meta-functions, just as human collectives will be much more common as we move into increasingly multicellular functional bodies.

The sense of self and our philosophical paradigms will be iterating and revising on an almost weekly basis as we spread out across the cloud and innumerable virtual spaces connected through instantaneous communication. Virtual worlds themselves will be increasingly common but will break out of the walled-garden models of the present, allowing comm channels and video streams to move freely between them and the social web. World of Warcraft will have live video feeds from in-world out to device displays. Mobile GPS will report a user’s real-world location as well as their virtual location, mashing both into Google Maps and the SketchUp-enabled virtual map of the planet.

All of this abstraction will press back on the world and create even greater value for real face-to-face interactions. Familial bonds will be more and more cherished and local communities will take greater and greater control of their lives away from unreliable global supply chains and profit-driven corporate bodies. Most families will engage in some form of gardening to supplement their food supply. The state itself will be hollowed out through over-extended conflicts and insurgencies coupled with ongoing failures to manage domestic civic instabilities. Power outages and water failures will be common in large cities. This will of course further invigorate alternative energy technologies and shift civic responsibilities to local communities. US manufacturing will have partially shifted towards alternative energy capture and storage but much of the real successes will be in small progressive towns rallying around local resources, small-scale fab, and pre-existing economic successes.

All in all, the future will be a rich collage. Totally new and much the same as it has been.

twitchboard.net: the rise of personal cloud agents

The folks over at Twitchboard.net have the right idea. From their site:

TwitchBoard listens to your twitter account, and forwards messages on to other internet services based on what it hears. Our first service will automatically save any links you tweet to the del.icio.us bookmarking service. We’re working on connections to many other services — stay tuned!

This simple tool is a software agent built on the web platform. It lives on a server as a script watching your personal datastream – Twitter, in this case. The initial service notices when you have put a URL in your tweet, grabs it, and passes it along to your del.icio.us account as a bookmark. It effectively concatenates two web services to optimize your workflow and eliminate the need to double-post. It extends the function of Twitter to include the function of del.icio.us, recapitulating the phylogenetic evolution from unicellular to multicellular function. Twitterl.icio.us!
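The mechanics are likely not much more involved than this sketch: a regex pulls URLs out of the tweet text and an HTTP POST forwards each one to a bookmarking API. The endpoint and credentials below are placeholders, not Twitchboard’s actual code:

```python
# Watch tweet text for URLs and forward them to a bookmarking service.
# The endpoint, credentials, and payload shape are schematic stand-ins.
import re
import requests

URL_RE = re.compile(r"https?://\S+")

def forward_links(tweet_text):
    for url in URL_RE.findall(tweet_text):
        # del.icio.us exposed a posts/add call; treat this as illustrative
        requests.post(
            "https://bookmarks.example.com/posts/add",
            data={"url": url, "description": tweet_text},
            auth=("user", "api-token"),
        )

forward_links("reading https://www.grasp.upenn.edu, amazing swarm work")
```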

Twitchboard represents the emerging class of cloud agents that will help us sort and search the massive volumes of data we interact with regularly. Our connections are getting too dense and the data we’re working with is growing far too big for us humans to handle manually. We need subroutines customized to our interests, affiliations, businesses, and collaborations that can do the heavy data lifting for us while we focus on the meaningful expressions these agents will create for us from the noise.

Increasingly we’ll have swarms of such agents running across our digital lives doing our bidding and the bidding of numerous marketing and security agencies as well. These tools will have particular value across the enterprise where they will monitor workflows & financial movements, gather market data from clouds, and sift through productivity metrics to formulate valuable business intel. Agents will tell us about our lives and our health delivering colorful abstracts with pretty animated datasets showing how much we drove this week, how many miles we walked, tasks completed vs. outstanding, and much more feedback based on an array of scripts & sensors.

Twitchboard is using the fertile comm grounds of Twitter and its API to watch the datastream for keywords that can drive additional services. You can bet they’re also deriving all sorts of interesting meta-patterns from the Twitter feed that will be plugged into further modular mashups and visualizations. Through its popularity and the openness of its API, Twitter is lighting a roadmap towards the semantic web. Groups like Twitchboard are building the services reading the machine web and helping us better manage the mountains of data piling up, meanwhile giving rise to a class of autonomous agents moving across devices, sensors, cameras, and clouds.

[Kudos to Sarah Perez of ReadWriteWeb for mentioning me & this post in her column!]

Cat Cam Finally Within Reach

Via Gadget Lab:

The budget needed for an at-home surveillance system has just been slashed to a couple of Jeffersons. The eyeCam Micro Wireless camera, a plug-and-play with a wireless transmission range of 450 ft., is now down to $40, making it one of the most affordable spy video gadgets out there.

Click through for sample video – spy cam attached to a Dragonfly remote heli, i.e. a personal neighborhood surveillance drone. Federal laws may apply.

Do You Like Our Owl?

The Tyrell Corporation:

Based in Los Angeles in the year 2019, Tyrell is named after its founder Dr. Eldon Tyrell and is a high-tech biocorp primarily concerned with the production of life-like androids called replicants. Tyrell’s slogan is “More human than human”. The headquarters for the corporation is over 700 stories tall. The Tyrell corporation is the only outfit making Nexus-6 replicants which are so human-like that the only way L.A.P.D Blade Runner Units can identify them is to sit suspects down and go through an exhausting empathy test called the “Voight-Kampff Scale.”

The Tyrell Corporation was also involved in the exporting of replicant labor to the outer space colonies for situations deemed too dangerous and degrading for regular humans such as military operations, high risk industrial work, prostitution and slave labor. One could call it interstellar commerce or just growing an army of slaves.

Monkeys Taught to Control Robots. Humanity On the Run.

In a staggering breach of the public interest, U of Pitt researchers have taught a couple of rhesus macaque monkeys to control a robotic prosthesis with their minds. No word on when the lab fires will be extinguished and the rampaging robo-monkeys will be defeated.

Two monkeys have managed to use brain power to control a robotic arm to feed themselves. The feat marks the first time a brain-controlled prosthetic limb has been wielded to perform a practical task.

Previous demonstrations in monkeys and humans have tapped into the brain to control computer cursors and virtual worlds, and even to clench a robot hand. But complicated physical activities like eating are “a completely different ball game”, says Andrew Schwartz, a neurological engineer at the University of Pittsburgh…

Yeah, right. Those robo-monkeys will be running the Pentagon within days.
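For the curious, the classic technique behind this line of work is population-vector decoding: each motor-cortex neuron fires most strongly for a preferred movement direction, and a rate-weighted sum of those directions predicts the arm’s motion. A toy reconstruction, not the Pittsburgh lab’s actual pipeline:

```python
# Toy population-vector decoder: cosine-tuned cells with random preferred
# directions, decoded by a rate-weighted sum. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100
preferred = rng.normal(size=(n_cells, 3))                  # random unit vectors
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

def firing_rates(movement_dir):
    # cosine tuning: rate peaks when movement aligns with preferred direction
    return np.maximum(0, preferred @ movement_dir)

def decode(rates):
    # population vector: preferred directions weighted by firing rate
    pv = (rates[:, None] * preferred).sum(axis=0)
    return pv / np.linalg.norm(pv)

true_dir = np.array([1.0, 0.0, 0.0])
print(decode(firing_rates(true_dir)))   # should point roughly along x
```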

Militarized Robotic Biomimics Coming Soon

In a disturbing-but-not-surprising move, the U.S. military is contracting the development of small robotic biomimics for field deployment. Equipped with sensors and networked relays, these robocritters will likely end up scurrying through apartment complexes at home and abroad, à la Minority Report. Expect swarming behaviors, social intelligence, and networked biometrics.

Everybody freeze for the spiders…

British defence giant BAE Systems is creating a series of tiny electronic spiders, insects and snakes that could become the eyes and ears of soldiers on the battlefield, helping to save thousands of lives [ed note: the video shows bugs being used to target a building for rocket attack].

Prototypes could be on the front line by the end of the year, scuttling into potential danger areas such as booby-trapped buildings or enemy hideouts to relay images back to troops safely positioned nearby.

Soldiers will carry the robots into combat and use a small tracked vehicle to transport them closer to their targets.

Then they would swarm into the building and relay images back to the soldiers’ hand-held or wrist-mounted computers, warning them of any threats inside.

BAE Systems has just signed a £19million contract to develop the robots for the US Army.

Parting Notes on ETech

This was a great conference and the most consistent collection of speakers and topics I’ve ever experienced. Very fun and inspiring. Lots of hip 30-somethings trying to dream up tomorrow and make it real. It was a very balanced yet cutting-edge lineup aimed at an eager (and surprisingly mixed-gender) crowd. I noticed that most folks were using Mac laptops – this part of the edge seems to prefer Apple – and it was fascinating to watch many who were blogging the talks while pulling up references dropped by the speakers, tweeting out to Twitter, and snapping/downloading/posting photos in real-time. As speakers dropped references I was pulling them up on my laptop and dropping links into my blog notes.

In the lobby a team was showing off a data viz video mapping real-time communications connecting NYC to the rest of the world. Andrea noticed that a surprising number were with an Italian city called Perugia. Maybe next year they could map the live feed of all web traffic from ETech. Imagine the bitstreams rising off such a gathering of digiterati.

Maybe it was just the Sudafed coursing through our virus-ridden veins (thank you, Portland), but ETech was a total intellectual turn-on, ranging from ambient objects, Asian mobile media, green policy and sustainability, hardware hacking & drone building, and Austrian post-Situationists to neuroengineering and the digital salvation of Democracy itself.

I hope I can go back next year!

Open Source Hardware (Limor Fried & Philip Torrone) – ETech08

Hardware is much easier to copy now. Hardware & software are blurring – e.g., firmware updates.
Speed of hardware hacking is remarkable.

Why open source hardware? Contribute to the pool of knowledge; freedom to pursue software/hardware creativity; community development and quality; excitement about building things; education.

Layers:
- Hardware/mechanical diagrams: 2D models, vector, DXF or AI (KiCAD)
- Schematics & circuit diagrams: PDF, BMP, GIF, PNG
- Parts list (Bill of Materials): data sheets (x0xb0x TB303)
- Layout diagrams: physical map of parts
- Core/Firmware: on-board source code
- Software/API
Like most developers, they don’t mention the human interface layer.

Roomba has an open API. Companies that release open platforms find much greater value (and mindshare) from user mods.
Ambient Orb publishes schematics and parts list. Neuros OSD publishes schematics (semi-open but falls short).
Hardware is mostly based on patents, not copyright. Licensing: CC, GPL, BSD, MIT
Chumby: programmable data portal.

Other open source hardware resources (business models): Fab@Home, Daisy MP3 player, Adafruit, Arduino open-source electronics prototyping platform. See also Make magazine & the Maker Faire.

Cool stuff: Twittering plants with Arduino – plants that call you and say they need to be watered (Twitter as SMS bridge); Open prosthetics; Minty Boost open source USB charger;
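The host side of a twittering plant can be about this simple. A hedged sketch assuming an Arduino that prints raw soil-moisture readings over serial; the port name, threshold, and messaging hook are all stand-ins:

```python
# Poll an Arduino's soil-moisture readings over serial and nag when dry.
# Port, baud rate, and threshold are assumptions; calibrate per sensor.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # wherever the Arduino enumerates
DRY = 300               # raw ADC threshold below which the pot is "thirsty"

conn = serial.Serial(PORT, 9600, timeout=2)
while True:
    line = conn.readline().decode(errors="ignore").strip()
    if line.isdigit() and int(line) < DRY:
        # the talk's version pushed this out as a tweet via an SMS bridge
        print("water me!")
    time.sleep(60)
```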

Ed note: Imagine an online repository of mechanical diagrams for DIY desktop fab/rep…