Tagged: cloudagents

How to keep PDF relevant? Flash and semantics.

My thoughts, submitted to the Adobe Reader Blog in response to the post Take the Adobe Reader Survey. As a former Adobe employee who worked on Acrobat & PDF, I have a lot of personal interest in seeing the format grow and evolve.

The growing public perception is that PDF is too bulky and increasingly too opaque for the networked world, because PDFs have not kept up with the prevailing trends of transparency, findability, and collaboration. PDF is important as a container with certain rights & privileges (DigSig, Security, Markup, Forms), but the data inside a PDF is far more important. Currently, PDFs are way too opaque, too bloated, and do not clearly convey value to most users. This is especially true on mobile (why would I choose to view a PDF on mobile unless an enterprise I need to engage with requires it?). For most enterprises and customers, PDF is a cloud of data more than a display standard. Its value is no longer in the consistent display of fonts and formatting; it’s in the data within the millions of PDFs that the IRS holds, for example. Even as a forms front-end, it’s difficult to see why Reader/Acrobat is a better solution than a robust, customizable Flash interface. The Flash-based Portfolios feature is a step in this direction.

How can Reader add value to the massive volumes of archival PDF that already exist? Answer: 1) replace Reader with a robust, customizable Flash front-end, and 2) engineer semantic data* into new & existing PDFs so that cloud agents can sift through the documents and return meaningful results. Both of these strategies should focus heavily on supporting LiveCycle for both the distilling and the evaluation of PDFs.
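To make the second point concrete, here’s a minimal sketch of the kind of sifting a cloud agent might do over a folder of archival PDFs, assuming a Python environment with the pypdf library; the directory, keywords, and output shape are illustrative stand-ins, not a real product design.

```python
# Sketch: a minimal "cloud agent" that sifts a folder of archival PDFs,
# pulling out whatever metadata and text is already exposed and flagging
# documents that match a keyword of interest. Paths and keywords are
# placeholders; pypdf is assumed to be installed.
from pathlib import Path
from pypdf import PdfReader

KEYWORDS = {"invoice", "audit", "1040"}   # hypothetical terms an agent cares about

def sift(archive_dir: str):
    hits = []
    for pdf_path in Path(archive_dir).glob("*.pdf"):
        reader = PdfReader(pdf_path)
        meta = reader.metadata or {}
        # Concatenate whatever text the PDF exposes; scanned or untagged PDFs
        # may return little or nothing, which is exactly the opacity problem.
        text = " ".join((page.extract_text() or "") for page in reader.pages)
        matched = {k for k in KEYWORDS if k in text.lower()}
        if matched:
            hits.append({
                "file": pdf_path.name,
                "title": meta.get("/Title"),
                "author": meta.get("/Author"),
                "matched": sorted(matched),
            })
    return hits

if __name__ == "__main__":
    for record in sift("./archive"):   # "./archive" is a placeholder path
        print(record)
```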

The static viewer model is dying. People need to be able to search, sort, find, annotate, and share. Reader is already too heavy to be of value in a browser, much less on a mobile device. Any mobile solution must disaggregate formatting from data and dynamically reconfigure the display to present only the important data/form elements to the mobile user. At the very least, PDFs need some serious reformatting before they can be of any real value on the mobile platform; there’s just not enough real estate. Furthermore, any PDF-mobile solution must begin with the realization that mobile = personal, collaborative, locative.
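To make “disaggregate formatting from data” concrete, here’s a minimal sketch that reduces a form-bearing PDF to a plain data payload a lightweight mobile front-end could restyle however it likes, again assuming pypdf; the file name and output shape are hypothetical.

```python
# Sketch: separate a PDF form's data from its page formatting by exporting
# only the field names and values as JSON, leaving layout behind entirely.
# "expenses_form.pdf" is a placeholder; pypdf is assumed to be installed.
import json
from pypdf import PdfReader

def form_to_payload(pdf_file: str) -> str:
    reader = PdfReader(pdf_file)
    fields = reader.get_fields() or {}      # AcroForm fields, if any exist
    payload = {
        name: field.get("/V")               # current value (None if unfilled)
        for name, field in fields.items()
    }
    return json.dumps(payload, indent=2, default=str)

print(form_to_payload("expenses_form.pdf"))
```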

If Adobe doesn’t do this, you can bet there will be lucrative opportunities for others who understand that the value of data is no longer in its formatting; it’s in accessibility and structured reporting. Frankly, any enterprise whose business intelligence solution doesn’t address the growing heap of PDFs sitting on its servers will fail to really leverage its own data effectively.

* I think I’m starting to use the term “semantic” a bit loosely. Essentially, I’m suggesting that Acrobat should actively create RDF structures inside the PDF COS object tree and as header info. PDFLib should be extended to support both writing & reading of this framework. Likewise, top-down text analysis should spider both the document text and the COS structure to construct relevant metadata (RDF & taxonomies) written into the PDF file header. The point is to make PDFs as transparent & searchable as possible to those actors & agents with access rights.
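As one possible shape for this, here’s a minimal sketch of writing RDF-style metadata into a PDF’s XMP packet, assuming a Python environment with the pikepdf library; the titles, subjects, and file names are hypothetical stand-ins for what a real text-analysis pass would generate.

```python
# Sketch: embed RDF/XMP metadata into an existing PDF so downstream agents
# can query it without parsing page content. File names and subject terms
# are placeholders; pikepdf is assumed to be installed.
import pikepdf

def tag_pdf(src: str, dst: str, subjects: list[str]) -> None:
    with pikepdf.open(src) as pdf:
        # open_metadata() exposes the document's XMP packet, which is
        # serialized as RDF/XML inside the PDF.
        with pdf.open_metadata() as meta:
            meta["dc:title"] = "Quarterly filing"          # hypothetical title
            meta["dc:description"] = "Auto-tagged by a text-analysis pass"
            meta["dc:subject"] = subjects                   # keyword/taxonomy terms
        pdf.save(dst)

tag_pdf("filing.pdf", "filing_tagged.pdf", ["tax", "2008", "form-1040"])
```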

Another Rant: On the Cloud, Augmented Reality, & the Networked World

[This is a reply I left recently to a Global Futures question about the near-future of the web. It goes a little off-topic at the end but such is the risk of systems analysis. Everything’s connected.]

Within 10-15 years, mobile devices will constantly interact with the world around us, analyzing objects, faces, signage, locations, and anything else their sensors can engage. Camera viewfinders will identify visual sources, using algorithms to match them up with cloud data repositories. Bluetooth and GPS will interact on sub-channels, silently exchanging relationships with embedded sensors across devices and objects. A user’s mobile device will become their IP address, hosting much of their profile information and mediating relationships across social nets, commercial transactions, security clearances, and the array of increasingly smart objects and devices.

Cloud access and screen presence will be nearly ubiquitous, further blurring the line between desktop, laptop, server, mobile devices, and the objects in our world. It will all be screens interfacing between data, objects, and humans. Amidst the overwhelming data/content glut, we will outsource the mathematical chores to cloud agents dedicated to scraping data and filtering out the bits that are pertinent to our personalized affinities and needs. These data streams will be highly dynamic, and cloud agents will send them to rich media layers that render the results in comprehensible and meaningful displays.

The human sensorium and its interaction with reality will be highly augmented through mobile devices that layer rich information over the world around us. The digital world will move heavily into the natural analog world as the boundaries between the two further erode. This will be readily apparent in the increasing amount of communication we will receive from appliances, vehicles, storefronts, other people, animals, and even plants all wired to the cloud. Meanwhile, cloud agents will sort through vast amounts of human behavioral information creating smart profiles and socioeconomic and environmental systems models with incredible complexity and increasing predictive ability. The cloud itself will be made more intelligible to agents by the standardization of semantic web protocols implemented into most new sites and services. Agents will concatenate to tie services together into meta-functions, just as human collectives will be much more common as we move into increasingly multicellular functional bodies.

The sense of self and our philosophical paradigms will be iterating and revising on an almost weekly basis as we spread out across the cloud and innumerable virtual spaces connected through instantaneous communication. Virtual worlds themselves will be increasingly common but will break out of the walled-garden models of the present, allowing comm channels and video streams to move freely between them and the social web. World of Warcraft will have live video feeds from in-world out to device displays. Mobile GPS will report a user’s real-world location as well as their virtual location, mashing both into Google Maps and the SketchUp-enabled virtual map of the planet.

All of this abstraction will press back on the world and create even greater value for real face-to-face interactions. Familial bonds will be more and more cherished, and local communities will take greater and greater control of their lives away from unreliable global supply chains and profit-driven corporate bodies. Most families will engage in some form of gardening to supplement their food supply. The state itself will be hollowed out through over-extended conflicts and insurgencies coupled with ongoing failures to manage domestic civic instabilities. Power outages and water failures will be common in large cities. This will of course further invigorate alternative energy technologies and shift civic responsibilities to local communities. US manufacturing will have partially shifted towards alternative energy capture and storage, but much of the real success will be in small progressive towns rallying around local resources, small-scale fab, and pre-existing economic successes.

All in all, the future will be a rich collage. Totally new and much the same as it has been.

A Brief Rant on Cloud Agents and Business Intelligence

From a comment I left over at ReadWriteWeb about What’s Next After Web 2.0?:

Business Intelligence. The enterprise will increasingly use cloud agents and semantic analytics to better understand its customers, markets, finances, and internal workflows. Companies will engage in behavioral modeling and web meme profiling more aggressively. With workforce resources diminished by budgetary constraints, increased investment in automation and intelligent software will give businesses more information and feedback without requiring as many large paychecks. Electronic business workflows, services, and applications will evolve to write more intelligent metadata and semantic subtext into file formats while reporting usage analytics out to dynamic data streams. All of this data will be sorted by cloud agents, filtered, parsed, and then rendered to rich media layers (e.g. Flash) for practical visualization and analysis. All documents and file types will evolve to carry more legacy information: who created the file and how, when & where it was created, who has access rights and to what degree, who has reviewed it and what comments have been attached. Such intelligent files will enable greater and greater usage by both human and cloud agents.
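As a loose illustration, here’s a minimal sketch of what such an “intelligent” provenance record and a trivial agent-side filter might look like; the record shape, field names, and sample values are all invented for the example.

```python
# Sketch: a hypothetical provenance/rights record attached to a document,
# plus a trivial "cloud agent" filter that surfaces only the documents a
# given reviewer has commented on. All field names and values are invented.
from dataclasses import dataclass, field

@dataclass
class DocRecord:
    name: str
    created_by: str
    created_where: str
    access_rights: dict[str, str] = field(default_factory=dict)  # user -> level
    reviews: list[str] = field(default_factory=list)             # reviewer comments

def reviewed_by(records: list[DocRecord], reviewer: str) -> list[DocRecord]:
    # The kind of filter pass an agent might run before handing results
    # off to a rich media display layer.
    return [r for r in records if any(reviewer in c for c in r.reviews)]

docs = [
    DocRecord("q3_forecast.pdf", "m.chen", "Chicago office",
              {"m.chen": "owner", "j.ortiz": "comment"},
              ["j.ortiz: revise the revenue table"]),
    DocRecord("vendor_contract.pdf", "legal-svc", "cloud workflow",
              {"legal-svc": "owner"}, []),
]

for doc in reviewed_by(docs, "j.ortiz"):
    print(doc.name)
```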