I'm trying to learn more about simulcast and I'm a little confused about how it relates to WebRTC.
Simulcast is often used interchangeably with streaming - but in the WebRTC world, these are two different things!
Streaming is a one-way live communication setup where a speaker (or a set of speakers) broadcasts their video/audio to an audience of viewers -- e.g. going “live” on YouTube or Instagram.
This is facilitated by a number of underlying protocols like RTMP, HLS, or WebRTC - you can choose any one of these depending on the desired level of interactivity and latency between the speaker and the viewers.
Simulcast is a technique used in WebRTC calls - and it addresses a fundamental need of video delivery: network conditions vary, so quality has to adapt.
To explain Simulcast - first, imagine you’re watching a live stream of your favorite sport. The game is live and you want to stay as in sync as possible with what’s happening on the ground. The catch: you’re on a choppy internet connection. Now - which would you prefer? 1) Wait for the live stream to buffer and watch it at a delay, or 2) keep watching the live stream but at a lower quality.
You’d likely choose option 2. And that's what usually happens - you'll notice the quality is lowered but the stream does not stop.
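That “degrade, don’t stop” behavior can be sketched as a tiny layer-selection routine. The layer names (`rid` values) and bitrate thresholds below are illustrative assumptions, not part of any WebRTC API:

```typescript
// Hypothetical simulcast layers, highest bitrate first.
// The rid names and bitrates are assumptions for illustration.
const layers = [
  { rid: "f", bitrateKbps: 1500 }, // full resolution
  { rid: "h", bitrateKbps: 500 },  // half resolution
  { rid: "q", bitrateKbps: 150 },  // quarter resolution
];

// Pick the highest-quality layer that fits the estimated bandwidth,
// falling back to the lowest layer instead of stopping playback.
function pickLayer(estimatedKbps: number): string {
  const fit = layers.find((l) => l.bitrateKbps <= estimatedKbps);
  return fit ? fit.rid : layers[layers.length - 1].rid;
}

console.log(pickLayer(2000)); // plenty of bandwidth -> "f"
console.log(pickLayer(300));  // choppy connection -> "q"
```

In a real call, a media server (SFU) typically runs logic like this per receiver, using its bandwidth estimates to choose which layer to forward.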
This happens for two-way video calls too - because the needs are the same: you want to stay in sync with the other participants in the call and you don’t want interruptions.
Simulcast is the WebRTC technique that enables this behavior: the sender encodes the same video at multiple quality ‘layers’ (different resolutions and bitrates) and sends them all simultaneously, so each receiver can be switched between layers as its connection changes. With Simulcast, every participant in a call is capable of sending and receiving these different layers, allowing the client application to control video quality programmatically.
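Here’s a minimal sketch of how a sender can declare those layers using the standard `sendEncodings` option of `RTCPeerConnection.addTransceiver`. The `rid` names, scale factors, and bitrate caps are common conventions chosen for illustration, not requirements of the spec:

```typescript
// Three simulcast encodings for one video track.
// rid names and bitrate caps here are illustrative choices.
const sendEncodings: RTCRtpEncodingParameters[] = [
  { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 },   // quarter res
  { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 },   // half res
  { rid: "f", scaleResolutionDownBy: 1, maxBitrate: 1_500_000 }, // full res
];

// In a browser, attach the encodings when adding the camera track.
// (Defined but not called here, since RTCPeerConnection only exists
// in a browser environment.)
async function publishWithSimulcast(): Promise<void> {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: "sendonly",
    sendEncodings, // all three layers are encoded and sent simultaneously
  });
}
```

The media server then forwards whichever layer best fits each receiver, and the client can also toggle layers on the sender side later via `RTCRtpSender.setParameters`.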
Simulcast is a crucial building block for any live video use case.
All in all - Simulcast enables building resilient, reliable, and high-quality video calls with WebRTC.