In serverless architectures, application logic is often powered by short-lived execution environments. But what does this mean for long-lived connections? After all, many applications today depend on persistent WebSocket or SSE connections in order to exchange data in realtime.


The answer is unsurprising: either the execution limitations must be tolerated, or long-lived connections must be handled by a separate component. We’ll go over these approaches below.


First, let’s establish terms. In this article, and often in the context of push/streaming technology, connection doesn’t merely mean TCP connection. A load balancer may maintain TCP connections with clients and servers, but the existence of TCP connections on either side of the load balancer doesn’t imply there’s an active end-to-end channel between the client and server.

Instead, connection has a higher-level meaning, such as a WebSocket session or an HTTP stream that goes all the way to the server. A long-lived connection is simply a connection that lasts for a long time, giving the server a way to push data to the client at any moment.

When to use long-lived connections

If you have a set of microservices that communicate using known, stable addresses (e.g. by sending HTTP POSTs to each other), then you may not need long-lived connections. However, if you need to stream data to a dynamic group of 0 to N peers whose addresses may change, then it makes sense to use long-lived connections.

Long-lived connections are particularly useful for streaming data to end-user clients, such as browsers or mobile devices. They may also be useful inside your network when you don’t want to make certain receivers addressable, such as job processors.

The FaaS problem

Most serverless execution environments are function-based, or “function-as-a-service”. Examples of FaaS services include AWS Lambda, Cloudflare Workers, and Fly. Typically, FaaS services bill by execution time and enforce a maximum time limit per call. Attempting to maintain a long-lived connection from within a FaaS function is usually impractical: you’ll typically be billed for idle time, and the connection will be regularly severed when the execution limit is reached.

Server-side approach 1: delegate connection management

One approach to supporting long-lived connections is to delegate their management to a capable gateway/proxy. This way, the client can establish a long-lived connection (e.g. an HTTP stream or a WebSocket) with the gateway, and the gateway and backend can interact using normal HTTP requests, including notifying the gateway when there is data to push to the client. Since the backend only ever has to process short-lived requests, such gateways work well with serverless backends.
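As a concrete sketch of this delegation pattern, here is roughly what a serverless handler might look like when paired with a GRIP-style proxy (the convention used by Fanout Cloud and the open source Pushpin). The handler returns a short-lived response whose headers instruct the proxy to hold the client's connection open; later, the backend publishes data by POSTing a payload to the proxy's publish API. The header names and payload shape follow the GRIP convention; adapt them to your gateway.

```python
# Sketch: a serverless HTTP handler instructing a GRIP-style proxy
# to hold the client's connection open on its behalf.

def handle_subscribe(channel: str):
    """Return a short-lived response telling the proxy to keep the
    client's connection open as a stream, subscribed to `channel`."""
    headers = {
        "Content-Type": "text/plain",
        "Grip-Hold": "stream",    # hold the connection open as a stream
        "Grip-Channel": channel,  # channel to receive pushed data on
    }
    return 200, headers, "stream opened\n"


def build_publish_payload(channel: str, data: str) -> dict:
    """Build the JSON body for a publish call to the proxy. The backend
    POSTs this whenever it has data to push; the proxy forwards the
    content to all clients subscribed to the channel."""
    return {
        "items": [
            {"channel": channel, "formats": {"http-stream": {"content": data}}}
        ]
    }
```

The key property: both the subscribe response and the publish call are ordinary, short-lived HTTP exchanges, which is exactly what a FaaS backend is good at.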

What’s particularly powerful about this delegation approach is that it’s transparent. Clients don’t need to be aware of your decision to go serverless. For example, you could migrate a stateful backend running on normal servers to a FaaS backend with connection delegation, without having to change client code.

Examples of connection delegation services: Fanout Cloud, AWS API Gateway, and Streamdata.

Server-side approach 2: delegate your API

Another approach to supporting long-lived connections is to have clients connect to an independent service capable of relaying data between multiple parties. This could be a traditional message broker service or a realtime database service. A client can connect to the service to listen for data, and data can be pushed to the client by making an API call to the service (e.g. to send a message or change database data). Making such API calls is easy to do from a serverless backend.
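From the serverless side, pushing data through such a service boils down to a single authenticated HTTP call. The sketch below constructs (without sending) a publish request against a hypothetical broker endpoint; the URL, auth scheme, and body shape are assumptions for illustration, since each provider (Pusher, PubNub, etc.) defines its own API.

```python
import json
import urllib.request

# Hypothetical broker publish endpoint, for illustration only.
BROKER_URL = "https://broker.example.com/publish"

def make_publish_request(channel: str, message: dict, api_key: str):
    """Construct (but don't send) the HTTP request that publishes
    `message` to everyone listening on `channel`."""
    body = json.dumps({"channel": channel, "data": message}).encode()
    return urllib.request.Request(
        BROKER_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

In practice you'd pass the result to `urllib.request.urlopen` (or use the provider's SDK, which wraps exactly this kind of call).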

Messaging and realtime database services not only manage the client connections, they also manage the API that the clients communicate with and may provide SDKs. This can be convenient if you don’t mind clients depending on a specific third-party SDK or messaging protocol.

Examples of message broker services: Fanout Cloud (Bayeux), Pusher, and PubNub.

Examples of realtime database services: Appbase and Google Firebase.

Server-side approach 3: long-lived execution

Finally, you can simply be a rebel and let your functions run long. This may be workable if you can live within the limitations of your FaaS service. Each service has different limitations, so you’ll need to look at the fine print.

For example, you can technically implement HTTP long-polling (aka hanging responses) with most FaaS environments, as long as you set your long-polling timeout to be less than the execution timeout. Of course, you may be billed for idle time.
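The pattern above can be sketched in a few lines: wait for data up to a timeout chosen to sit safely below the platform's execution limit, then respond with either the data or an empty response so the client can immediately poll again. The 30-second limit here is an assumed, platform-dependent value.

```python
import queue

EXECUTION_LIMIT = 30                 # seconds; platform-dependent (assumed)
POLL_TIMEOUT = EXECUTION_LIMIT - 5   # leave headroom to respond cleanly

def long_poll(events: queue.Queue, timeout: float = POLL_TIMEOUT):
    """Block until an event arrives or the timeout elapses.
    Returns (status, body): 200 with data, or 204 with no content,
    signaling the client to issue a fresh poll request."""
    try:
        data = events.get(timeout=timeout)
        return 200, data
    except queue.Empty:
        return 204, None
```

The client's contract is simple: on a 200, consume the data; on a 204, re-poll immediately.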

Not all FaaS services bill for time the same way, though. Cloudflare Workers bills by CPU time rather than wall-clock time, meaning you don’t get charged while your function is idle. There is still a maximum time limit, but it applies to CPU time, so you can likely push a lot of data before being forcibly disconnected.

If you decide to manage long-lived connections from within your functions, bear in mind that each function invocation can only manage a single client connection. Further, if the goal is to push data to clients, then each invocation may need its own outbound connection to a data source in order to know when there is data to push (1:1 ratio of client to data source connections). This may have scalability/overhead implications.

Client-side approaches

We are not aware of any cloud services capable of managing long-lived outbound connections on behalf of serverless backends. This is certainly an open problem though. We’ve talked to developers who wish they could consume WebSocket APIs from AWS Lambda.

That said, we (Fanout) have produced an open source project called Zurl that comes close. It can maintain long-lived outbound HTTP and WebSocket connections to servers, but it is controlled using ZeroMQ rather than HTTP, and of course you need to run it yourself.


Handling long-lived connections in a serverless architecture is indeed possible today with the right tools, at least from the server-side. Looking ahead, it will be interesting to see how the limitations of FaaS services evolve, and whether cloud services for handling outbound connections will become available.