Martin’s Weblog

The Harlem Shake: The Network Event Horizon

The Harlem Shake signals a new era and a new type of “mass” production

Traditional “modern” economics, culture, media and identity have been shaped and expressed within a framework of industrialised, standardised mass production and consumption – we see this everywhere, from fashion, dance, music, politics and food to education – it has become pervasive – it has become culture.

However, the Internet changes everything. With so many people so well connected, anyone who is connected can be heard globally and contribute – ideas travel further and faster than ever before. In “Apocalypse: The Network Event Horizon” I describe how the Internet has let “the Genie out of the bottle”, become “Too Big To Know”, is “Ruining Everything” and is helping a “generation to find its voice”. We are approaching a point of no return – a network event horizon – a Web Squared Technium where scale, scope and the self-reinforcing social and technology power laws of a technology-mediated connectivist memetic (“cemetic”) culture generate a Cambrian explosion of diversity, uncertainty and non-linear, emergent, viral, exponential change.

Gangnam Style represented a crossover point – it was a traditionally produced official product with which people connected and which they copied – it was heavily choreographed and eminently reproducible. The crossover showed in its viral spread through the Internet and the way it was remixed in thousands of parodies and different contexts.

The Harlem Shake is the crossover – there is no official video; instead there is a simple framework for people to make their own video. In the Harlem Shake we have a signal of a new type of “mass” production and consumption. Instead of a standard item being manufactured and consumed at scale, we have differentiated and unique items being manufactured and consumed in scope. The Harlem Shake represents a shift from the old economies of scale to a new economy of scope.

The Harlem Shake signals the “Network Event Horizon”: a shift to a cemetic, long-tail, maker/hacker economy of scope – creative, non-linear, differentiated and personalised peer production and consumption.

It will be interesting to see how politics, economics, education, media and identity play out as the network event horizon approaches.


February 17, 2013 Posted by | culture, education, IT and society, media, social media, society

PIE and MASH: a Lens For a Semantic Web

We are only a few weeks into the year and it already seems clear that one of the major trends will be integration activity to “orchestrate” information sources – to create lenses to MASH and focus information for our Personal Information Environments.

Activity Stream integration

Social networks were a big factor in 2008, and social networkers were among the first off the blocks in 2009 to catch my attention, with a meeting on January 9th at the offices of Six Apart to discuss standards for activity streams. People belong to different social networks but cannot easily (if at all) communicate between these networks – solving this problem will be like the day when email users on different email systems could first email each other.

News Stream Integration

Another early set of activities that caught my attention was the discussion about RSS overload and the need to deal with it somehow. RSS is an essential tool for pulling information into your environment, but with the dramatic growth of the web even RSS has trouble coping. Michael Kowalchik describes how our feed readers, and our use of them, are based on the older email paradigm of inboxes and a “must read all items” attitude. Kowalchik says that both feed readers and our attitudes to information need to change – “people will increasingly want to experience information, not be slaves to it”. Kowalchik describes Dave Winer’s “River of News” concept, which informed many news aggregators including Grazr – “the name ‘grazr’ comes from grazing information, not drowning in it”.
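As a rough illustration of the river-of-news idea, here is a minimal Python sketch that merges items from several RSS 2.0 feeds into one reverse-chronological stream. The feed contents and names are invented for illustration; a real reader would fetch feeds over HTTP rather than from strings.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two tiny, hypothetical RSS 2.0 feeds, inlined for the sketch.
FEED_A = """<rss version="2.0"><channel><title>Feed A</title>
<item><title>Plane lands on Hudson</title>
<pubDate>Thu, 15 Jan 2009 20:40:00 GMT</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Feed B</title>
<item><title>DiSo Dashboard proposal</title>
<pubDate>Fri, 16 Jan 2009 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def river(*feeds):
    """Merge items from several RSS feeds into one newest-first stream."""
    items = []
    for xml in feeds:
        channel = ET.fromstring(xml).find("channel")
        source = channel.findtext("title")
        for item in channel.iter("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "when": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest first: experience the stream rather than inbox it.
    return sorted(items, key=lambda i: i["when"], reverse=True)

for entry in river(FEED_A, FEED_B):
    print(entry["when"].date(), entry["source"], "-", entry["title"])
```

The point of the sketch is simply that a river reader sorts by time across sources and lets old items flow past, instead of counting them as unread.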

Activity and News Stream MASHING

Another item that caught my attention was the way on-line social media responded to the Hudson River plane crash. There have been many stories of the way news breaks first on social networks, and of how the major news corporations make use of material from people’s camera phones, but what caught my attention this time was the way in which social media itself could offer coverage. Kevin Sablan’s Almighty Link used storytlr to gather feeds from Twitter, Flickr, YouTube and Vimeo to create an aggregated, real-time “story”. He describes how “the hard part was editing, or what Tim Windsor calls curating, the approximately 700 bits of information into some semblance of a disjointed story”. The result was “a stream of moments captured by individual storytellers, the ‘lifestream’ not of a person, but an event.” There was also a Hudsonplane FriendFeed room, which could be regarded as a “web 2.0 viral mashup” equivalent of a newsroom for the event.

Beyond Google – The Real Time Web

Writing for RWW Bernard Lunn uses the web 2.0 response to the Hudson Plane Crash to illustrate the way in which the web has moved from IBM (mainframe) to Microsoft (client-server) to Google (on-line) and is now moving beyond Google’s grasp and into real time. He argues that “It’s the Real-Time Web that will unseat Google. This idea has been percolating for a while, but it took a plane landing in the Hudson River to make it obvious. Google cannot be real-time. It indexes the historical web, and it does it better and faster than anyone else.”

PIE and MASH: a Lens For a Semantic Web

With all the activity and news streams flooding into my on-line environment, my river of news feels more like a rapid – I want something to pre-process the streams and present me with a river instead of a torrent. I want to be able to search for and define sources, aggregate them and sort their presentation according to my own criteria. For example, I would like to pull in items on cloud computing from Twitter, YouTube, blogs, traditional news sources and web sites. The part that I think will develop this year is the difficult next step of pre-processing the information sources.

Quantitative pre-processing tools exist – tools like PostRank will examine social bookmarking statistics, blog hits, referrals and comment counts to rank feeds – but what I would like is some form of qualitative pre-processing. This is the difficult part, for what do I mean by qualitative? At the moment my qualitative assessment of information is associated with people and recommendations. To find news I check Twitter first and see what my network is talking about, then I check RWW, Mashable etc. for RSS feeds. To judge quality I would need a way to weight feeds according to mentions of sources and people – not just numbers of hits.
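To make that concrete, here is a toy Python sketch of what such qualitative weighting might look like. The trust list, weights and items are all invented for illustration; the idea is only that mentions of trusted people and sources count for much more than raw hit numbers.

```python
# Hypothetical trust weights for sources and people I rate highly.
TRUST = {"readwriteweb": 3.0, "mashable": 2.0, "timoreilly": 2.5}

def score(item, hits_weight=0.01):
    """Qualitative score: weighted mentions of trusted names,
    plus only a small quantitative nudge from raw hit counts."""
    text = item["text"].lower()
    qualitative = sum(w for name, w in TRUST.items() if name in text)
    return qualitative + hits_weight * item["hits"]

items = [
    {"text": "ReadWriteWeb on the real-time web, via @timoreilly", "hits": 40},
    {"text": "Ten generic cloud computing tips", "hits": 200},
]

# The well-connected item outranks the merely popular one.
ranked = sorted(items, key=score, reverse=True)
```

Here the first item wins despite far fewer hits, because it mentions two trusted names – a crude stand-in for ranking by who is talking, not just how many.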

In order to apply qualitative criteria to information sources, either the information sources must carry additional information (metadata such as tags, statistics, Microformats and RDFa) or a tool must be able to extract data from the context of the information source – how it is associated in the web, how richly it is associated, and with what. I seem to be talking about the semantic web, and this is not surprising, as semantics (meaning) is largely about associations and relationships between things – the more meaningful something is, the more deeply and richly it is associated with other things and meanings.
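One small, concrete example of metadata that pages already carry is the rel="tag" microformat. A minimal Python sketch (the HTML snippet is invented) shows how a tool could pull such tags out of a page as qualitative signals:

```python
from html.parser import HTMLParser

class TagExtractor(HTMLParser):
    """Collect the text of links marked with the rel="tag" microformat."""

    def __init__(self):
        super().__init__()
        self.tags = []
        self._in_tag_link = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel can hold several space-separated values, e.g. rel="tag nofollow".
        if tag == "a" and "tag" in attrs.get("rel", "").split():
            self._in_tag_link = True

    def handle_data(self, data):
        if self._in_tag_link:
            self.tags.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_tag_link = False

html = ('<p>Filed under <a rel="tag" href="/t/semweb">semantic web</a> '
        'and <a rel="tag" href="/t/rss">RSS</a>.</p>')
parser = TagExtractor()
parser.feed(html)
# parser.tags now holds the page's self-declared topics.
```

A smarter aggregator could feed tags like these, alongside link context, into the kind of qualitative weighting discussed above.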

People are getting used to Personal Information Environments (PIE) – systems like iGoogle or Netvibes where you can suck in various information and display it in various ways. However, PIE tools look set for a revolution in 2009 if Marc Canter’s DiSo Dashboard proposal gains traction. By implementing the DiSo Dashboard proposals, popular PIEs could extend and integrate across social networks and lifestream activity as well as RSS mega-aggregation.


I’m hoping that tools will become smarter in 2009 and help me manage my information sources more meaningfully – I will be keeping an eye on the DiSo project in general and the DiSo Dashboard idea in particular.

January 18, 2009 Posted by | media, semantic web, web 2

I Never make Predictions and Never Will: Media Predictions For 2009

Many comment that our on-line activities will leave little trace for the future; however, it is certainly true that our recent past is better documented than ever before. You can access my Twitter activity from this time last year (here is my first tweet, for example), and you can see all my past blog posts, including my predictions for 2008 – to see how good, bad or ugly they were.

William Gibson’s quote “The Future is Already Here – It’s Just Not Evenly Distributed” is a powerful and practical idea for working out what is going to happen in the short term – extrapolate the current edge and current trends. Things usually get smaller, lighter, easier, cheaper, more functional and more commonplace – computers and telephones are good examples of this. We must also beware of technological determinism – we have to consider the complex interaction of contextual factors (economy, culture etc.) that can change the “trajectory” of any extrapolation.

Using my crystal ball to throw the light of the recent past into the near future – it all seems quite cloudy to me, and everything I see is on-line. The big theme for the year will be on-line everything, and as more goes on-line, network effects will cause still more to go on-line, resulting in an explosive growth in on-line activity. Despite (or maybe even because of) the economic problems, 2009 could be a significant year for the information age – the year when many 20th-century physical, industrial activities move on-line.

Let’s try to focus some of this.


The industrial processes of the 20th century for representing, distributing and consuming information will continue to disappear – information is intangible anyway, and so is ideal for on-line virtualisation.

Audio: Audio set the example of how information can move away from the physical. The stories of Napster and iTunes are now history, and the Nokia music service of 12 months of unlimited, free, perpetual but protected downloads takes this model almost as far as it can go. With music content “infinitely” copyable and accessible, monetisation has to move to an incidental model of distribution deals and sponsorship. The SeeqPod service indicates that on-line streaming/access could be a significant development. It will always be useful to have off-line copies of your favourite music, but the advantages of streaming are there for everyone. For consumers there can be an “infinite” range to listen to on demand, without management – just search. For the industry, on-line listens can be instrumented (pun intended) and monetised with “incidental” and direct marketing. There is of course better control – music can be published free at source (e.g. from Sony), with the publisher monetising through “incidental” services but also allowing 3rd-party API and streaming access for downstream services. I like the concept of on-line downstream “radio stations” such as Last.fm and Pandora. Indeed, Pandora offers an excellent example of the benefits of the on-line model in driving further interest by what could be described as audio surfing.

Video: YouTube and the iPlayer provide good examples of what we might expect in video. iPlayer has many advantages (on-demand viewing), especially with the BBC “transmitting” live on the net via iPlayer. YouTube in particular has become a mainstream distribution method – standard TV channels, organisations, political parties and of course individuals are all there (e.g. the BBC, Channel 4, Obama, Google – and have a look at the Governator).

Words: Newspapers and magazines all have a good on-line presence, and for many their on-line activity is increasingly necessary and important. Pew research finds that the Internet has overtaken newspapers as a source of news for many people, and that for young people the Net is the main source of news. All the newspapers now offer excellent RSS feeds and various incidental services such as reader and journalist blogs, podcasts and interfaces to systems such as Facebook and Twitter. The New York Times indicates how “newspapers” may develop, with the news of their API development program to “make the NYT programmable”. To start 2009 the NYT released Represent – it mashes geographical information with various web data to present information about the politicians who represent geographical areas in New York. Books, I feel, will also succumb eventually – the physicality of a book (cover, typeface etc.) is much like the physicality of old vinyl records. During 2008 e-book readers became a lot better, and their advantages for industry and consumers over paper became tangible. Although the e-book reader really is usable now, it is another purchase and another item to carry around and look after – I would prefer to access books on my smartphone or laptop/netbook. I think that e-books will break into the market before consolidating onto standard equipment – a prediction for 2010, I reckon.

MASH Media

I can’t resist it, but in 2009 “the medium is the message” (sorry). Possibly the biggest development in media will be the way it all gets mixed up – once all media share the same medium, they can mix and match. 2008 saw the start of this and it is becoming increasingly common – 2009 will see a lot more. Already in 2008 we see newspapers with plenty of additional media content – the Guardian Tech Weekly, for example, has audio, video, blogs, Facebook, Twitter and text – and the Bivings Report indicates just how active newspapers are on the web – e.g. all have RSS feeds, 75% accept comments on articles and most now have free access. The cross-platform media outlet Current indicates how media may develop – in their US election coverage they combined video coverage with input from Twitter, Digg and 12seconds.tv.

All this and I haven’t even covered how YouTube is going live and HD.

The message is – if you want media you need to get on-line.

January 4, 2009 Posted by | ICT, media, predictions