The delay between when a video frame is captured or sent and when it appears on the viewer's screen.
Video latency is the delay between the moment video leaves its source and the moment it plays back on the viewer's device. For live streaming, this means the gap between an event happening in real life and a remote viewer seeing it. For on-demand video, latency refers to the time between clicking play and the first frame appearing. High latency means noticeable delays; low latency means the experience feels instant. Think of latency as the lag in a phone call: a small delay is imperceptible, but a long one makes conversation awkward.
Latency in video delivery has multiple sources. For live streaming, the pipeline includes capture, encoding, packaging into segments, distribution to CDN edge servers, download to the viewer's device, buffering, and decoding. Each step adds delay.
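The pipeline above can be thought of as a latency budget: total glass-to-glass delay is roughly the sum of the per-stage delays. A minimal sketch, where every stage name and value is an illustrative assumption rather than a measurement:

```python
# Hypothetical per-stage latency budget (seconds) for a live HLS pipeline;
# the stage names and values below are illustrative assumptions.
LIVE_PIPELINE_S = {
    "capture": 0.1,
    "encoding": 0.5,
    "packaging": 6.0,       # a full segment must be complete before publishing
    "cdn_distribution": 0.5,
    "download": 0.5,
    "player_buffer": 18.0,  # e.g. three 6-second segments buffered before play
    "decoding": 0.1,
}

def glass_to_glass_latency(stages: dict) -> float:
    """Total latency is roughly the sum of the per-stage delays."""
    return sum(stages.values())
```

In a budget like this, segmentation and buffering dominate, which is why reducing segment size is the main lever for lowering live latency.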
HLS, the most common streaming protocol, introduces latency primarily through segmentation. Standard HLS uses 6-second segments and recommends buffering three segments before playback starts, adding roughly 18-30 seconds of latency. Low-Latency HLS (LL-HLS) reduces this to 2-3 seconds by using partial segments and blocking playlist requests, which let the server deliver media as soon as it is produced.
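The arithmetic behind these numbers is simple: the player's baseline latency from segmentation alone is the buffered unit duration multiplied by how many units it buffers. A sketch (the function name and default are assumptions for illustration):

```python
def hls_baseline_latency(unit_duration_s: float, buffered_units: int = 3) -> float:
    """Minimum latency contributed by segmentation alone: the player waits
    until it has buffered `buffered_units` units of media before starting."""
    return unit_duration_s * buffered_units

# Standard HLS: three 6-second segments -> 18 seconds before other delays.
standard = hls_baseline_latency(6)
# LL-HLS: the buffered unit shrinks to a partial segment, e.g. four
# 0.5-second parts -> 2 seconds.
low_latency = hls_baseline_latency(0.5, buffered_units=4)
```

This is why LL-HLS attacks latency by shrinking the unit of delivery rather than the number of units buffered.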
WebRTC achieves sub-second latency because it skips segmentation entirely, sending video frames directly between peers. However, it does not scale to large audiences without expensive infrastructure.
For on-demand (pre-recorded) video, latency primarily means startup time — how quickly the first frame appears after clicking play. This depends on CDN proximity, player initialization speed, and the size of the initial buffer. A well-configured CDN and player can achieve sub-second startup times for on-demand content.
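The same budget idea applies to startup time. A sketch of how the components listed above might break down, where every figure is an illustrative assumption:

```python
# Illustrative on-demand startup budget in milliseconds; every value here
# is an assumption, not a measurement of any real player or CDN.
STARTUP_BUDGET_MS = {
    "player_init": 150,          # player and decoder initialization
    "manifest_fetch": 80,        # fast when served from a nearby CDN edge
    "initial_buffer_fill": 400,  # smaller initial buffer -> faster first frame
    "first_frame_decode": 50,
}

total_startup_ms = sum(STARTUP_BUDGET_MS.values())
```

Under assumptions like these the total stays comfortably under one second, which is why CDN proximity and a small initial buffer matter most for startup.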
The acceptable latency depends entirely on the use case. Live auctions and sports betting need sub-second latency. Live events can tolerate 5-10 seconds. On-demand video just needs fast startup. Understanding latency helps you choose the right technology stack and set appropriate expectations with stakeholders who may not realize that "live" does not always mean "instant."
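The rules of thumb above can be expressed as a simple decision helper. The thresholds are illustrative guidelines drawn from this section, not hard protocol limits:

```python
def choose_delivery(max_latency_s: float) -> str:
    """Map a latency requirement to a delivery approach; the thresholds
    are illustrative rules of thumb, not hard limits."""
    if max_latency_s < 1:
        return "WebRTC"        # auctions, betting, two-way interaction
    if max_latency_s <= 10:
        return "LL-HLS"        # live events with modest delay tolerance
    return "Standard HLS"      # latency-insensitive live streams
```

For example, a live auction with a half-second budget lands on WebRTC, while a concert stream that can tolerate 30 seconds is fine on standard HLS.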
host.video optimizes on-demand video startup time through multi-CDN delivery with over 700 Tbps of capacity, ensuring that the first frame appears as quickly as possible regardless of where the viewer is located.