In 2004 ‘web feeds’ were becoming extremely popular in the tech community. People were keen to label ‘web pages’ as old, obsolete, clumsy and resource ‘heavy’. It was the time of ‘Web 2.0’, the time when web ‘surfers’ were gradually getting rid of Internet Explorer 6 and Ajax was starting to appear in more and more web applications.
Suddenly everyone started expecting feeds. Everywhere. Feeds for everything any site had in store: calendar/event information, news, media, archives, categories, tags, software updates etc. Feeds were demanded (and almost exclusively found) in loosely defined, quasi-machine-readable formats like RSS and Atom: immature syndication formats ‘abused’ and tasked with providing functionality their authors never envisioned. Functionality that people ‘wanted now’, that was tangible, contrary to the elusive dream of a Semantic Web, an abstract notion that perhaps only Tim Berners-Lee might try to explain. From a tech-only convenience, feeds became mainstream.
Feeds gradually became the ‘de facto’ medium through which millions of people around the world found and consumed information — a use well beyond their original purpose (syndication and notification of new or updated content, not consumption of said content). Many companies promoted RSS support in their products and services in 2005. Among them Apple, whose Steve Jobs, in his typical used-car-salesman fashion, touted Safari’s support for RSS. The incorporation of RSS into desktop applications and the browser never worked for most of us; web aggregators and feed readers soon became the dominant medium through which feeds were accessed. Among all feed readers, Google Reader rapidly became the most popular: part of the daily routine for countless people and, for many, the main source of their text media consumption.
The years since 2005 brought many changes to the web: the proliferation of ‘standards’, the rise of social networking, the increased centralisation of the internet. A few days ago paidContent published an article titled “The Death of the RSS Reader”, arguing that people have gradually moved away from RSS feeds, and that the reason for this was the increased use of Facebook and twitter.
I have long stated my belief that web feeds, like so many internet and web technologies before them, were a good thing. I was always sorry to see the standards abused and stagnation reign, both in innovation on the core web technologies and in the applications built on them — innovation that would enable us to have a richer, more open and more sophisticated internet experience. Still, ‘web feeds’ were good and a move forward, despite their drawbacks.
On the other hand, I have never — personally — found any meaning to twitter and Facebook as media consumption/notification mechanisms; they definitely do not replace RSS as a content syndication medium; they most assuredly do not provide a (superior) content notification mechanism (let alone a presentation mechanism); what those social networking sites do, and arguably do better than any alternative, is provide social context. They let me see what friends and acquaintances ‘like’. They let me express whether I like or dislike something. And that’s very different to what RSS does.
What Tartakoff’s article fails to explain are the reasons why people seem to be moving away from RSS readers. First, it is — in my opinion — inevitable for the RSS reader model to saturate one’s reading experience; reading text in a feed reader is dull and tiring. It is æsthetically mediocre and ergonomically flawed. Then there’s interaction: in 2010 people online seem to read ‘fewer’ coherent texts, but interact more through smaller text snippets or status updates.
The fundamental issue behind those trends has nothing to do with RSS. It has to do with consumption of media. With information overload. Social networking sites provide a bizarre way out; the ‘nuclear option’: resigning. It is my impression that people just stopped using feed readers, just stopped reading. I know several people who have done exactly that; and it is sad. When thinking of information overload, the logical next step would be to consider content filtering and recommendation. Picking needles in a haystack of articles, posts, status updates, news and editorials. Picking the right needles.
This is something that people have been trying to do for a long time. Think of Slashdot, one of the early attempts at community aggregation of interesting content online. Google itself shyly included some social functionality in Google Reader. Then there’s quasi-algorithmic selection; this is where things become harder. There has been no winner in this field, although many have tried — one example is Techmeme, a popular technology news aggregator that used to rely on computers to do the picking, but has now turned to human editors. Perhaps this is one area where a great technical and usable implementation, backed by sufficient funding, might result in a killer web application.
In an ideal world, social networking sites like Facebook and twitter would never have affected people’s habits of consuming and producing meaningful information online. They might even be considered grotesque centralised monstrosities; attempts by corporations to control expression and mine people’s lives purely for monetary gain. HTML, FOAF, RSS, OpenID, (a working) OAuth (and their successors) and a whole lot of other three to five letter acronyms, all names for open technologies that power the ‘web’, would provide the necessary social context, semantic content representation, notification and delivery of information in a distributed, open, usable and æsthetically pleasing manner. Beyond the walled garden of any one megacorp.
Sadly we don’t live in an ideal world. And while most of the internet, the web and a number of related technologies were created with a fully decentralised model in mind, today we depend on an extremely small number of companies for an increasingly large part of our everyday experience. And that’s sad, not only because these companies have never proven that they are worthy of our trust, not only because they provide no guarantees for their service, not merely because they are not regulated or controlled in any way, but — most importantly of all — because those companies provide extremely little control to us over our own data.
The ‘net is much more than any single company or group of companies. More than Google, Facebook or twitter. That’s how it started and that’s how it still is. In the eleven years since its creation, RSS and its descendants have come to be found almost everywhere: from blogs like this one to complex web services. That feeds have stopped being the current ‘fad’, the ephemeral buzzword of this month or year, is irrelevant. They still matter and they are still important; what people should be concerned about is not ‘the death of RSS’ — for it is not dead — but how they’ve moved from ‘open’ RSS and blogging to ‘closed’ Facebook and twitter in the short time of five years. Was it worth it?