With all the interest in WebSockets lately, it would be easy to write off HTTP long-polling as a less capable, legacy mechanism. The truth is that long-polling is completely sufficient for modern web applications, and it enjoys many benefits not found in WebSockets or other mechanisms. Here’s why you might want to implement HTTP long-polling on purpose in the present day:

It’s just as fast

Anyone reading this blog likely already knows this, but for completeness I want to address this possible myth. Long-polling delivers live updates as fast as any other TCP-based mechanism, such as XMPP or WebSockets. Don’t let the word “poll” in there throw you off. Yes, high-throughput applications (more than about 1 message/sec) may experience added latency, and of course there is extra latency if a message needs to be delivered between polls (~1% chance of hitting perhaps a 300ms penalty), but for most applications these conditions are completely acceptable.
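To make that penalty concrete, here's the back-of-envelope arithmetic using the rough numbers above (the 1% miss chance and 300ms gap are the estimates from the text, not measurements):

```python
# Expected extra latency from a message arriving between polls:
# ~1% of messages hit the gap, each paying roughly a 300ms penalty.
miss_probability = 0.01   # chance a message lands between two poll requests
gap_penalty_ms = 300      # time until the client's next poll arrives

expected_extra_ms = miss_probability * gap_penalty_ms
print(expected_extra_ms)  # 3.0 -- about 3ms of added latency on average
```

A few milliseconds of average overhead is lost in the noise for a typical web app.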

Simple clients

Reliable publish-subscribe systems usually work by having a “best effort” data updating mechanism paired with a request/response interface for retrieving data history. XEP-0060 is an example of this (best effort event message stanzas, with iq interface to backstore), as would be anything based on PubSubHubbub (best effort webhooks, with HTTP RSS/Atom backstore). Since the data updates are not 100% reliable, the backstore should be periodically queried by receivers to ensure data has not been missed.

Even reliable connection-oriented streams (such as XEP-0198, or any other kind of resumable stream protocol based on TCP or WebSockets) are really no simpler or more resilient than the above-mentioned pubsub systems when you get down to the details. A sender will try its best to deliver data to the other side, but if this fails (say, a TCP send failure or timeout), then it is up to the receiver to renegotiate a session resumption, which is essentially a query for data history. To recognize an invalid connection as early as possible and ensure data has not been missed, the receiver should periodically ping the sender.

Long-polling simplifies this on the client side by rolling together the notification mechanism and the backstore query/ping into a single request. Simply performing the polling request on a regular interval gives you a very reliable realtime system. You might think that having to repeatedly poll a resource every minute is excessive, but the fact is that non-long-polling approaches need a similar amount of repeated polls/pings to be reliable.
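The client-side pattern described above really is just a single loop. Here is a minimal sketch in Python; `fetch` stands in for the actual HTTP GET (e.g. something like `GET /events?cursor=N&wait=55` against a hypothetical endpoint), and the cursor-tracking scheme is an assumption, not a prescribed protocol:

```python
def poll_loop(fetch, handle, cursor=None):
    """Repeatedly long-poll for events.

    `fetch(cursor)` blocks (server-side) until new data or a timeout,
    then returns (items, next_cursor). Because every poll re-queries
    from the last known cursor, the same request serves as the live
    notification, the backstore catch-up query, and the liveness ping.
    Raises StopIteration from `fetch` to stop (for this sketch).
    """
    while True:
        try:
            items, cursor = fetch(cursor)
        except StopIteration:
            return cursor
        for item in items:
            handle(item)
```

Note there is no separate resumption path: reconnecting after a failure is the same code as a normal poll, which is exactly the simplicity being claimed.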

The primary drawback to long-polling is that once the sender has sent data, it cannot send further data until a new poll request arrives. Also, since a new poll is made after each bit of data is received, this can actually result in excessive polling if the data rate is high enough. So, there is a trade to be made here. If the average message rate per recipient is 1 message per minute (likely quite high for most applications, if you really think about it), then long-polling should not be any more latent or resource-heavy than any other approach and is probably worth the trade for client simplicity. If your only concern is latency, then your message rate could in fact be something like 1 per second without long-polling giving you any trouble. However, if even that isn’t good enough, say because you’re implementing a fast-paced game, then yes, long-polling is wrong for the job. I dare say, though, that for a typical realtime web app, long-polling is plenty fine.

Strong connectivity

IP addresses change. Gone are the days of fixed, stable, wired links for all. Now we have Wi-Fi and mobile users, with their associated roaming issues. Long-lived connection protocols used by typical consumers need ways of detecting failure quickly and resuming sessions to avoid data loss. In the present day, your access mechanism is broken if it is not capable of transcending IP addresses.

Fortunately, and quite possibly by accident, HTTP is generally immune to IP address changes. Requests are short lived and state is stored independently. Long-polling is easy and “just works” in today’s hostile networking environments.

APIs

If you want to offer a realtime API today, long-polling is a good way to go. You can be RESTful, and as discussed earlier it’s easy to work with and developers are less likely to screw up while using it. You could use HTTP streaming instead, or another pubsub approach, but just know that it will require more effort from the developer to handle resumption scenarios.
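On the server side, the blocking half of a long-poll endpoint can be as simple as a wait on a condition variable. This is a hypothetical sketch (the class and method names are invented for illustration, not taken from any framework); a real deployment would handle per-client channels, persistence, and eviction:

```python
import threading

class EventChannel:
    """Minimal in-memory buffer backing a long-poll endpoint (a sketch).

    A request handler calls wait_for_events(cursor, timeout): it returns
    immediately if events past `cursor` already exist (the backstore
    catch-up case), otherwise it blocks until publish() is called or the
    timeout expires (the live notification case).
    """
    def __init__(self):
        self._events = []
        self._cond = threading.Condition()

    def publish(self, event):
        with self._cond:
            self._events.append(event)
            self._cond.notify_all()   # wake every waiting poll request

    def wait_for_events(self, cursor, timeout=30.0):
        with self._cond:
            if len(self._events) <= cursor:
                # Single wait for simplicity; a robust version would
                # loop to guard against spurious wakeups.
                self._cond.wait(timeout)
            new = self._events[cursor:]
            return new, len(self._events)   # (events, next cursor)
```

An HTTP handler would call `wait_for_events` with the client-supplied cursor and return the result as JSON; on timeout it simply returns an empty list and the client polls again.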

Also, nobody produces APIs via WebSockets. So, even if you have some realtime delivery code for your website today using WebSockets, when it comes time to offer an API to developers you’ll most likely want to design a separate interface using something more commonplace (e.g. something HTTP based). This isn’t to say someone couldn’t offer an API over WebSockets, it’s just that it isn’t a trendy thing to do yet. I am looking forward to this changing though, so please let me know if you’ve seen a WebSockets-based API out there. UPDATE: It seems people are starting to do this. For example, see Blockchain.

Compatible

I save the most uninteresting point for last. It goes without saying that long-polling works with every language, HTTP library, and browser.

Conclusion

In conclusion, long-polling isn’t that bad after all, and you should be proud to implement it intentionally without anyone looking at you funny.