WebSockets vs Long Polling vs SSE

HTTP is request-response — the client always initiates. Real-time features (notifications, live feeds, chat) need the server to push data to the client. Three patterns exist, each with different tradeoffs.

The Evolution of Server Push

Short Polling          Long Polling            SSE                    WebSocket
                                           (HTTP stream)          (protocol upgrade)

C ──GET /poll──► S     C ──GET /poll──► S    C ──GET /events──► S   C ──Upgrade──► S
C ◄── 204 ────── S     S holds open          S ◄────────────────    ◄══════════════►
(repeat every Ns)      S ◄── 200 (data) ─    S sends chunks         (full duplex)
                       C ──GET /poll──► S     as they arrive         continuously
                       (reconnect)

Short polling wastes requests — the server almost always has nothing new. Long polling and SSE are HTTP-based; WebSocket is a separate protocol.

Long Polling

The client sends a request. The server holds the connection open until it has data to send (or a timeout fires), then responds. The client immediately sends another request.

Client                          Server
  │──── GET /notifications ────►│
  │                             │ (holds open — 29s elapsed)
  │◄─── 200 {"msg": "new order"}│
  │──── GET /notifications ────►│ (immediately reconnects)
  │                             │ (timeout — 30s)
  │◄─── 204 No Content ─────────│
  │──── GET /notifications ────►│

Key properties:

  • Standard HTTP — works through every proxy, firewall, load balancer
  • One in-flight request per client at all times
  • Reconnect overhead: each cycle pays TCP/TLS setup (unless keep-alive reuses the connection)
  • Server must correlate the reconnecting client back to its state

Where long polling is still used: as a fallback transport in realtime libraries (Socket.IO falls back to it when WebSockets fail), and in environments where WebSockets are blocked by corporate proxies or firewalls.
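The hold-then-respond cycle above can be sketched as the server side of one long-poll request: block on a per-client event queue until data arrives or the timeout fires. A minimal sketch — `long_poll`, the queue, and the timeout values are illustrative, not from any particular framework:

```python
import queue
import threading

def long_poll(events: "queue.Queue[str]", timeout: float = 30.0):
    """One long-poll cycle: block until an event arrives or the
    timeout fires, then answer 200 (data) or 204 (nothing new)."""
    try:
        return 200, events.get(timeout=timeout)
    except queue.Empty:
        return 204, None  # timeout: client reconnects immediately

# Simulate a waiting client while another thread produces an event.
inbox: "queue.Queue[str]" = queue.Queue()
threading.Timer(0.05, inbox.put, args=['{"msg": "new order"}']).start()

status, body = long_poll(inbox, timeout=1.0)  # returns as soon as the event lands
```

The client-side loop is the mirror image: issue the request, handle 200 or 204, and immediately issue the next one.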

Server-Sent Events (SSE)

The client makes a single HTTP GET. The server responds with Content-Type: text/event-stream and keeps the response body open, writing events as they occur. The connection stays open indefinitely; the client simply reads events as they stream in.

GET /events HTTP/1.1
Accept: text/event-stream

HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache

data: {"type":"price_update","symbol":"AAPL","price":189.42}\n\n

event: alert\n
data: {"msg":"order filled"}\n\n

: heartbeat\n\n

Wire format:

Field       Meaning
data:       Event payload (one line per field; a blank line terminates the event)
event:      Named event type (client uses addEventListener('alert', ...))
id:         Event ID; the browser resends it as the Last-Event-ID header on reconnect
retry:      Tells the client how many ms to wait before reconnecting
: comment   Heartbeat / keep-alive (ignored by the client; prevents proxy timeouts)

Auto-reconnect: The browser’s EventSource API reconnects automatically with Last-Event-ID, allowing the server to resume from where the stream left off.

HTTP/2 advantage: Each SSE subscription is one HTTP/2 stream — many subscriptions share a single TCP connection. Under HTTP/1.1, browsers cap connections at 6 per origin, limiting concurrent SSE streams.

WebSockets

WebSocket starts as HTTP, then upgrades to a persistent full-duplex TCP connection. Either side can send frames at any time.

Upgrade handshake:

GET /ws HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101, the connection is no longer HTTP. The server and client exchange frames, not requests/responses.
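The Sec-WebSocket-Accept value in the handshake above isn't arbitrary: RFC 6455 specifies that the server append a fixed GUID to the client's key, SHA-1 hash the result, and base64-encode the digest. This reproduces the exchange shown above:

```python
import base64
import hashlib

# Fixed GUID defined in RFC 6455 — the same for every handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def ws_accept(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header from the client's key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

ws_accept("dGhlIHNhbXBsZSBub25jZQ==")  # → "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

The derivation proves to the client that the server actually speaks WebSocket rather than blindly echoing headers.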

Frame types:

Opcode      Purpose
0x1 Text    UTF-8 payload (JSON messages)
0x2 Binary  Raw bytes (Protobuf, MessagePack, audio)
0x8 Close   Graceful shutdown with status code
0x9 Ping    Keepalive probe
0xA Pong    Keepalive reply
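Frame headers are what make per-message overhead so low: 2 bytes for payloads under 126 bytes, growing only for extended lengths. A minimal sketch of the unmasked (server-to-client) header; client-to-server frames additionally carry a 4-byte masking key:

```python
import struct

def frame_header(opcode: int, payload_len: int, fin: bool = True) -> bytes:
    """Build an unmasked WebSocket frame header (server → client)."""
    first = (0x80 if fin else 0x00) | opcode  # FIN bit + opcode
    if payload_len < 126:
        return bytes([first, payload_len])                      # 2 bytes total
    if payload_len < 2 ** 16:
        return bytes([first, 126]) + struct.pack("!H", payload_len)  # 4 bytes
    return bytes([first, 127]) + struct.pack("!Q", payload_len)      # 10 bytes

frame_header(0x1, 2)           # b'\x81\x02' — tiny text frame, 2-byte header
len(frame_header(0x2, 70000))  # 10 — the worst-case header size
```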

Key properties:

  • Full-duplex — server and client both push without waiting
  • Binary support — efficient for audio, video, game state
  • No built-in auto-reconnect — application must implement
  • Each connection is a stateful, persistent TCP socket (important for scaling)
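Since the protocol provides no reconnection, applications typically layer exponential backoff with jitter on top. A sketch; the parameter values are illustrative:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0,
                  jitter: bool = True) -> float:
    """Delay before reconnect attempt N: 1s, 2s, 4s, ... capped at 30s.
    Full jitter spreads out reconnects so a server restart doesn't
    trigger a thundering herd of simultaneous reconnections."""
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay) if jitter else delay

[backoff_delay(n, jitter=False) for n in range(7)]
# [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```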

Side-by-Side Comparison

                           Long Polling                    SSE                         WebSocket
Direction                  Server → Client                 Server → Client             Bidirectional
Protocol                   HTTP                            HTTP                        ws:// / wss://
Persistent connection      No (reconnects each cycle)      Yes                         Yes
Browser API                fetch / XMLHttpRequest          EventSource                 WebSocket
Auto-reconnect             Manual                          ✅ Built-in (EventSource)   Manual
HTTP/2 multiplexing        ✅                              ✅                          ❌ (separate TCP)
Binary support             ✅ (response body)              ❌ (text only)              ✅
Proxy / firewall friendly  ✅ (plain HTTP)                 ✅ (plain HTTP)             Sometimes blocked
Load balancer support      ✅                              ✅                          Requires sticky sessions
Overhead per message       High (HTTP headers each cycle)  Low (chunked stream)        Very low (2–10 byte frame header)

Scaling WebSocket Connections

WebSocket connections are stateful — a persistent TCP socket exists between the client and a specific server process. This breaks horizontal scaling assumptions.

Problem: message fan-out across instances

Client A ──── WS ──── Server 1 ┐
Client B ──── WS ──── Server 1 │  If Client C sends a message,
Client C ──── WS ──── Server 2 │  Server 2 must notify Server 1
Client D ──── WS ──── Server 3 ┘  to push to Clients A and B

Solution: pub/sub bus behind the servers

Client A ──── WS ──── Server 1 ────► Redis Pub/Sub ◄──── Server 2 ──── WS ──── Client C
Client B ──── WS ──── Server 1 ◄──── (subscribed)       Server 2 ──── WS ──── Client D

Each server subscribes to Redis (or Kafka, NATS) channels. When a message arrives on any server, it publishes to the bus; all other servers deliver it to their connected clients.
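The fan-out path can be simulated in-process with a toy bus standing in for Redis/Kafka/NATS. Everything here — `Bus`, `WsServer`, the `chat` channel, inboxes in place of real sockets — is illustrative, not a real client library:

```python
from collections import defaultdict

class Bus:
    """Toy stand-in for Redis Pub/Sub: synchronous and in-process."""
    def __init__(self):
        self._subs = defaultdict(list)
    def subscribe(self, channel, handler):
        self._subs[channel].append(handler)
    def publish(self, channel, message):
        for handler in self._subs[channel]:
            handler(message)

class WsServer:
    """One server instance; inboxes stand in for open WebSocket sockets."""
    def __init__(self, bus: Bus, channel: str = "chat"):
        self.inboxes = {}  # client_id -> messages delivered to that client
        self.bus, self.channel = bus, channel
        bus.subscribe(channel, self._fan_out)
    def connect(self, client_id: str):
        self.inboxes[client_id] = []
    def receive(self, message: str):
        # A connected client sent a frame: publish so every instance sees it.
        self.bus.publish(self.channel, message)
    def _fan_out(self, message: str):
        for inbox in self.inboxes.values():
            inbox.append(message)

bus = Bus()
s1, s2 = WsServer(bus), WsServer(bus)
s1.connect("A"); s1.connect("B"); s2.connect("C")
s2.receive("hello from C")  # arrives on Server 2, reaches A and B via the bus
```

In production, `publish` crosses the network and `_fan_out` writes frames to real sockets, but the topology is the same.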

Sticky sessions: Without pub/sub, the load balancer must route a client to the same server every time (sticky by IP or session cookie). This creates uneven load and complicates deploys.

⚠️ A single WebSocket server process typically handles 10k–100k concurrent connections before hitting file descriptor limits or memory pressure. Plan connection counts early — a chat app with 1M online users needs ~10–100 WebSocket server processes.

SSE scaling is simpler: SSE is stateless from the load balancer’s perspective — any server can serve an SSE stream as long as it can subscribe to the same event source (database, message bus). No sticky sessions required.

When to Use Each

Use case                                         Best fit             Reason
Live notifications (new email, order update)     SSE                  Server→client only; HTTP/2 multiplexing; auto-reconnect
Live dashboard / stock ticker                    SSE                  Continuous server push; no client→server messages needed
Chat / collaborative editing                     WebSocket            Bidirectional — client and server both send frequently
Multiplayer game state                           WebSocket            Binary frames, low overhead, low latency
Live sports scores                               SSE                  Broadcast to many clients; server push only
Presence indicators ("X is typing")              WebSocket            Client must push events to server
Environment blocks WebSockets (corporate proxy)  SSE or Long Polling  HTTP-based protocols bypass WS restrictions
IoT telemetry ingest                             WebSocket            Binary, persistent, bidirectional for command & control