We are only a few weeks into the year and it already seems clear that one of the major trends will be integration activity to “orchestrate” information sources – to create lenses to MASH and focus information for our Personal Information Environments.
Activity Stream integration
Social networks were a big factor in 2008, and social networkers were among the first out of the blocks in 2009 to catch my attention, with a meeting on January 9th at the offices of Six Apart to discuss standards for activity streams. People belong to different social networks but cannot easily (if at all) communicate between them – solving this problem will be like the day when users on different email systems could finally email each other.
News Stream Integration
Another early set of activities that caught my attention was the discussion about RSS overload and the need to deal with it somehow. RSS is an essential tool for pulling information into your environment, but with the dramatic growth of the web even RSS has trouble coping. Michael Kowalchik describes how our feed readers, and our use of them, are based on the older email paradigm of inboxes and a "must read all items" attitude. He argues that both feed readers and our attitudes to information need to change – "people will increasingly want to experience information, not be slaves to it". Kowalchik also describes Dave Winer's "River of News" concept, which informed many news aggregators including Grazr – "the name 'grazr' comes from grazing information, not drowning in it."
Activity and News Stream MASHING
Another item that caught my attention was the way on-line social media responded to the Hudson River Plane Crash. There have been many stories of news breaking first on social networks, and of the major news corporations making use of material from people's camera phones, but what caught my attention this time was the way in which social media itself could offer coverage. Kevin Sablan's Almighty Link used storytlr to gather feeds from Twitter, Flickr, YouTube and Vimeo to create an aggregated, real-time "story". He describes how "the hard part was editing, or what Tim Windsor calls curating, the approximately 700 bits of information into some semblance of a disjointed story". The result was "a stream of moments captured by individual storytellers, the 'lifestream' not of a person, but an event." There was also a Hudsonplane FriendFeed room which could be regarded as a "web2.0 viralism mashup" equivalent of a newsroom for the event.
Beyond Google – The Real Time Web
Writing for RWW Bernard Lunn uses the web 2.0 response to the Hudson Plane Crash to illustrate the way in which the web has moved from IBM (mainframe) to Microsoft (client-server) to Google (on-line) and is now moving beyond Google’s grasp and into real time. He argues that “It’s the Real-Time Web that will unseat Google. This idea has been percolating for a while, but it took a plane landing in the Hudson River to make it obvious. Google cannot be real-time. It indexes the historical web, and it does it better and faster than anyone else.”
PIE and MASH: a Lens for a Semantic Web
With all the activity and news streams flooding into my on-line environment, my river of news feels more like a rapid – I want something to pre-process the streams and present me with a river instead of a torrent. I want to be able to search for and define sources, aggregate them, and sort their presentation according to my own criteria. For example, I would like to pull in items on Cloud Computing from Twitter, YouTube, blogs, and traditional news sources and web sites. The part that I think will develop this year is the difficult next step of pre-processing the information sources. Quantitative pre-processing tools already exist – tools like PostRank examine social bookmarking statistics, blog hits, referrals and comment counts to rank feeds – but what I would like is some form of qualitative pre-processing. This is the difficult part, for what do I mean by qualitative? At the moment my qualitative assessment of information is associated with people and recommendations. To find news I check Twitter first to see what my network is talking about, then I check the RSS feeds of RWW, Mashable and the like. To capture quality I would need a way to weight feeds according to mentions of trusted sources and people – not just numbers of hits.
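To make the idea concrete, here is a minimal sketch of what "weighting by mentions of people, not just hits" might look like. Everything here is hypothetical – the `FeedItem` structure, the hand-curated `TRUSTED` set, and the scoring blend are illustrative assumptions, not any existing tool's API:

```python
from collections import namedtuple

# Hypothetical feed items; in practice these would come from RSS, Twitter, etc.
FeedItem = namedtuple("FeedItem", ["title", "text", "hits"])

# Assumed hand-curated list of trusted people and sources (my "network").
TRUSTED = {"readwriteweb", "mashable", "tim o'reilly"}

def qualitative_score(item, trusted=TRUSTED):
    """Weight an item by how many trusted names it mentions,
    letting raw popularity (hits) only break ties."""
    text = (item.title + " " + item.text).lower()
    mentions = sum(1 for name in trusted if name in text)
    # Mentions dominate the score; hits are a minor tie-breaker.
    return mentions * 1000 + item.hits

items = [
    FeedItem("Cloud computing roundup", "via mashable and readwriteweb", hits=40),
    FeedItem("Random cloud post", "no known sources cited", hits=900),
]
ranked = sorted(items, key=qualitative_score, reverse=True)
print([i.title for i in ranked])
# -> ['Cloud computing roundup', 'Random cloud post']
```

The point of the blend is that a post recommended by my network outranks a merely popular one, which is roughly what a quantitative-only tool cannot do.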
In order to apply qualitative criteria to information sources, either the sources must carry additional information (metadata like tags, statistics, Microformats and RDFa) or a tool must be able to extract data from the context of the information source – how it is associated in the web, how richly it is associated, and with what. I seem to be talking about the semantic web, and this is not surprising, as semantics (meaning) is largely about associations and relationships between things: the more meaningful something is, the more deeply and richly it is associated with other things and meanings.
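The "metadata" route is already tractable today. For instance, the rel-tag microformat marks up tags as `<a rel="tag" href="...">tagname</a>`, and a tool can harvest those with nothing more than a standard HTML parser. A minimal sketch, using Python's stdlib `html.parser` (the sample markup is invented for illustration):

```python
from html.parser import HTMLParser

class RelTagParser(HTMLParser):
    """Collect tags marked up with the rel-tag microformat:
    <a rel="tag" href="...">tagname</a>."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self._in_tag_link = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel may hold several space-separated values, e.g. rel="tag nofollow".
        if tag == "a" and "tag" in attrs.get("rel", "").split():
            self._in_tag_link = True

    def handle_data(self, data):
        if self._in_tag_link and data.strip():
            self.tags.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_tag_link = False

html = ('<p>Posted in <a rel="tag" href="/t/cloud">cloud computing</a> '
        'and <a rel="tag" href="/t/semweb">semantic web</a></p>')
p = RelTagParser()
p.feed(html)
print(p.tags)  # -> ['cloud computing', 'semantic web']
```

Harvested tags like these are exactly the kind of machine-readable hook a qualitative pre-processor could match against a reader's interests.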
People are getting used to Personal Information Environments (PIE) – systems like iGoogle or Netvibes where you can suck in various information sources and display them in various ways. However, PIE tools look set for a revolution in 2009 if Marc Canter's DiSo Dashboard proposal gains traction. By implementing the DiSo Dashboard proposals, popular PIEs could extend and integrate across social networks and lifestream activity as well as RSS mega-aggregation.
I'm hoping that tools will become smarter in 2009 and help me manage my information sources more meaningfully – I will be keeping an eye on the DiSo project in general and the DiSo Dashboard idea in particular.