Can someone help me understand what a "codec" is and how it relates to WebRTC?
I've heard this term in the past, and while I think I have a general idea of what it is, I don't feel confident talking about it.
In WebRTC we send video and audio between peers. How does it actually send video, for example? Let's start with what a digital image is: basically a grid of pixels with a certain width and height. This is what we call a raw image (the pixels in a raw image can actually be represented in several ways, such as RGB or YUV, but let's set that aside for now). A video, then, is just a sequence of raw images, one after the other.
A high-resolution image could be 1280 pixels wide and 720 pixels tall, which is 921,600 pixels, and each pixel uses some bits to represent its color. Now we have to send all those bits across the network (or even store them in a file). That's where video encoding comes in: it compresses those raw images to get the best trade-off between size and quality, and it uses a codec to do so.
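To get a feel for why encoding matters, here's a rough back-of-the-envelope calculation of how much data raw 720p video would take. The 3 bytes per pixel (plain RGB) and 30 frames per second are assumptions for illustration; real raw formats like YUV 4:2:0 actually use fewer bits per pixel.

```python
# Rough size of one raw 720p frame and the resulting uncompressed bitrate.
WIDTH, HEIGHT = 1280, 720
BYTES_PER_PIXEL = 3  # assumption: 8 bits for each of R, G, B

pixels = WIDTH * HEIGHT                       # 921,600 pixels per frame
frame_bytes = pixels * BYTES_PER_PIXEL        # ~2.76 MB for a single frame

FPS = 30  # assumption: a common frame rate
raw_bits_per_second = frame_bytes * FPS * 8   # ~663 Mbit/s, uncompressed

print(pixels, frame_bytes, raw_bits_per_second)
```

So sending uncompressed 720p video would need hundreds of megabits per second, which is far beyond what a typical connection can sustain.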
There are different codecs for video (H.264, VP8, VP9...) and for audio (Opus, for example), because video and audio are represented in different ways.
So, the video and audio sent with WebRTC are encoded with those codecs, which lets us keep good image and sound quality without sending all the raw pixels or audio samples, saving a lot of network bandwidth.
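To put a number on that saving: comparing the raw 720p/30fps bitrate from the earlier calculation with an assumed encoded bitrate gives the compression factor. The 2.5 Mbit/s figure is just a ballpark I'm assuming for 720p H.264; real targets vary with content and settings.

```python
# Compare raw 720p/30fps bitrate with an assumed encoded bitrate.
raw_bps = 1280 * 720 * 3 * 8 * 30   # ~663.5 Mbit/s uncompressed (from above)
encoded_bps = 2_500_000             # assumption: ~2.5 Mbit/s encoded 720p

compression_ratio = raw_bps / encoded_bps
print(f"roughly {compression_ratio:.0f}x less data")
```

That's on the order of a few hundred times less data for video that still looks good, which is what makes real-time video over ordinary connections practical at all.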
Hopefully that makes sense!