[Infowarrior] - We're heading Straight for AOL 2.0
Richard Forno
rforno at infowarrior.org
Wed Aug 5 18:11:28 CDT 2015
We're heading Straight for AOL 2.0
August 5, 2015
http://jacquesmattheij.com/aol-20
Before ‘HTTP’, whenever a new kind of application was invented (say ‘file sharing’, or ‘address book’, or ‘messaging’), someone would sit down with a bunch of others and discuss the problem at some length. Then they’d draft a document describing the problem they intended to solve and the protocol they had come up with to address it. That document would then be sent out to the various parties that might have an interest in using the protocol, and they would supply their feedback; as a result, a more complete version of the original document would be released. And so on, until the resulting protocol was considered mature enough for implementation. This process, centered around documents called ‘RFCs’, is what got us IP (the internet protocol), TCP (the transmission control protocol), HTTP (the world wide web), SMTP (email), the DNS (the domain name system), FTP (the file transfer protocol) and many other extremely useful building blocks for the modern internet.
The implementation of these protocols and their integration into applications was then left to the rest of the world; the standards body had - beyond maybe a reference implementation - no interest in that stage of the proceedings, and certainly no commercial interest. Protocols came, were adopted, and were eventually replaced by something better, or died due to a lack of adoption.
And then something strange and, for the most part, unexpected happened. Where before all of these protocols were layered on top of the Transmission Control Protocol (or UDP in some cases) - TCP in computer programmer lingo - a protocol was invented that was so successful that it in turn became a transport layer all by itself (not without associated problems). The reason is that instead of delivering software to end users as executables for the various platforms, each implementing the protocol, HTTP made it possible to deliver both the visual part of the application (the user interface) and (eventually) the rest of the client portion of the application in one go. The existence of firewalls (which block many of the ports otherwise usable for peer-to-peer and client-server computing) accelerated this further, to the point where, instead of drafting RFCs for publicly available and open protocols, companies now deliver one half of their application plus some custom protocol over HTTP, never mind interoperability with other services or playing nice.
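To make that concrete, here is a minimal sketch of what ‘a custom protocol over HTTP’ tends to look like in practice. Everything in it - the endpoint, the header names, the payload fields, the send_message helper - is hypothetical and invented for illustration; each vendor defines its own variant, readable only by that vendor’s own client:

# A hypothetical 'custom protocol over HTTP': an application-specific
# JSON payload POSTed to a vendor's private endpoint. Nothing here is a
# real API; every silo invents its own incompatible variant.
import json
import urllib.request

def send_message(token, recipient, body):
    """POST a vendor-specific message object to a made-up endpoint."""
    payload = json.dumps({
        "v": 3,             # vendor-private protocol version
        "to": recipient,    # an identifier only meaningful inside this silo
        "body": body,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.example-silo.com/v3/messages",   # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,       # account bound to the silo
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

Contrast that with SMTP or NNTP, where any conforming client or server can talk to any other because the protocol itself, not just the plumbing underneath it, is published.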
The end result of all that is that we’re rapidly moving from an internet where computers are ‘peers’ (equals) to one where there are consumers and ‘data owners’: silos of end-user data that work as hard as they can to stop you from communicating with other, similar silos.
Imagine an internet where every protocol except the ones most closely related to ‘plumbing’ (TCP/IP/UDP/DNS) is no longer open but closed. That may sound far-fetched, but even though the number of RFCs is still growing, the last RFC with an article in the Wikipedia list of RFCs is the iCalendar specification (RFC 5545), and it dates from 2009. Since then there has been a lot of movement on the web application front, but none of it has resulted in an open protocol that more than one vendor (or open source project) could implement. One explanation is that we now have all the protocols we need; another is that more and more protocols are layered on top of HTTP in a much more proprietary manner.
This is a dangerous development, the end-game of which is an internet that is about as closed as it could get: all interoperability removed and replaced with custom, incompatible protocols over HTTP, with maybe the occasional server talking to another server in the background.
Email will probably be the last to go, when its last user finally gives up and moves to Gmail so they can continue to communicate with their contacts, or perhaps gives up entirely. RSS (an open content syndication protocol on top of HTTP, and I think a nice illustration that it is possible to use HTTP as a layer and play nice at the same time) is already an endangered species. XMPP support is slowly but surely being removed (just imagine a phone system where every number you call might require a different telephone). NNTP has been ‘mostly dead’ for years (though it still sees some use, the real replacement of Usenet for discussion purposes appears to be Reddit and mailing lists). And so on. The only open protocols still being developed nowadays are typically related to plumbing (moving bits of data around), not application-level protocols, which determine how a whole class of applications built around a similar theme can talk to each other.
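RSS is worth dwelling on for a moment, because it shows the opposite approach: the format is openly specified, so any client can fetch and read any feed with nothing more than an HTTP GET and an XML parser. A minimal sketch using only the Python standard library (the feed URL and the fetch_titles name below are just placeholders of my own):

# Consuming RSS, an open syndication format carried over plain HTTP.
# Any conforming feed will do; the URL is only an example.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_titles(feed_url):
    """Fetch an RSS 2.0 feed and return the titles of its items."""
    with urllib.request.urlopen(feed_url) as resp:
        tree = ET.parse(resp)
    # RSS 2.0 nests items as <rss><channel><item>...</item></channel></rss>
    return [item.findtext("title", default="")
            for item in tree.getroot().findall("./channel/item")]

if __name__ == "__main__":
    for title in fetch_titles("https://example.com/feed.xml"):
        print(title)

The point is not the dozen lines of code but that they work against every publisher’s feed, which is exactly the interoperability that closed APIs over HTTP give up.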
The biggest internet players count users as their users, not as users in general. Interoperability is a detriment to such plays for dominance, so there are clear financial incentives to move away from a more open and decentralized internet to one that is much more centralized. Facebook would like its users to see Facebook as ‘the internet’, Google wouldn’t mind if its users did the same thing, and so on. They’re their users, after all. But users are not to be owned by any one company, and the whole power of the internet and the world wide web is that they are peer to peer: in principle, all computers connected to them are each other’s equals, servers one moment, clients the next.
If the current trend persists, we’re heading straight for AOL 2.0, only now with a slicker user interface, a couple more features and more users. I personally had higher hopes for the world wide web when it launched. Wouldn’t it be ironic if the end-run the WWW did around AOL - possible only because the WWW was open and inclusive - ended up with different players simply re-implementing the AOL we already had, the one we got rid of because it was not the full internet?
So, if you’re going to design a web app and you wish to help reverse this trend (assuming that is still possible): please open up your protocols, commit to keeping them open, and publish a specification. And please never do what Twitter did (start open, then close up as soon as you gain traction).
Posted by Jacques Mattheij August 5, 2015
--
It's better to burn out than fade away.