IETF Week: Media and QUIC (the future in 5 or so years)
We closely track, experiment with, and play with ideas that are moving the real-time media delivery field forward. This post is a high-level summary of Microsoft's observations; kudos to the team -- the links within show the details. If you have thoughts or comments, feel free to share them below.
Before I dive into the results, a quick explanation of the setup. The sender sent media captured by getUserMedia, encoded it, and transmitted it over webtransport. The application used the ideas presented in rtp over quic and the default QUIC congestion control (there is more discussion on congestion control in the github thread). The receiver received the QUIC packets over webtransport, reconstructed the video frames, and played them back.
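To make the "reconstructs the video frame" step concrete, here is a minimal pure-logic sketch of how an encoded frame might be split into MTU-sized packets for transmission and reassembled on the receive side. The types and function names are illustrative only -- the actual demo uses getUserMedia/WebTransport, which are deliberately omitted here so the logic stands alone:

```typescript
// Hypothetical packet format: one encoded frame split into ordered chunks.
interface Packet {
  frameId: number;   // which frame this chunk belongs to
  seq: number;       // chunk index within the frame
  last: boolean;     // true for the final chunk of the frame
  payload: Uint8Array;
}

// Split one encoded frame into MTU-sized chunks.
function packetize(frameId: number, frame: Uint8Array, mtu: number): Packet[] {
  const packets: Packet[] = [];
  for (let off = 0; off < frame.length; off += mtu) {
    packets.push({
      frameId,
      seq: packets.length,
      last: off + mtu >= frame.length,
      payload: frame.slice(off, off + mtu),
    });
  }
  return packets;
}

// Reassemble a frame once every chunk has arrived; returns null if incomplete.
function reassemble(packets: Packet[]): Uint8Array | null {
  const sorted = [...packets].sort((a, b) => a.seq - b.seq);
  if (sorted.length === 0 || !sorted[sorted.length - 1].last) return null;
  if (sorted.some((p, i) => p.seq !== i)) return null; // gap: a chunk is missing
  const total = sorted.reduce((n, p) => n + p.payload.length, 0);
  const frame = new Uint8Array(total);
  let off = 0;
  for (const p of sorted) {
    frame.set(p.payload, off);
    off += p.payload.length;
  }
  return frame;
}
```

If each frame maps to its own QUIC stream, the transport delivers the chunks of that stream in order, which is consistent with the "no re-ordering" observation below.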
Summarising their findings:
- Glass-to-Glass (G2G) latency was considerably higher than the frame latency. For example, with 1 Mbps Full HD video at 30 FPS, the G2G latency averaged 630 ms, while the frame latency was 100 ms.
- They did not observe any frame re-ordering at the receiver. (Interesting... we will need to delve deeper into the mapping of video frames to QUIC streams.)
- Bandwidth utilisation was application-limited, i.e., not enough video packets were generated. (My initial thought is to fill the unutilised bandwidth with padding, by repeating the last fragment of video data, or by resending some part of the existing I-Frame/Golden Frame.)
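The padding idea from the last bullet can be sketched as a small calculation: per pacing interval, compare the bytes the encoder actually produced against the congestion controller's budget and pad the difference, keeping the sender network-limited instead of application-limited. This is a hypothetical sketch -- none of these names come from a real API:

```typescript
// Hypothetical pacing budget derived from the congestion controller.
interface PacingBudget {
  bytesPerSecond: number; // rate the congestion controller currently allows
  intervalMs: number;     // pacing interval
}

// How many padding bytes to send this interval, given the media bytes
// the encoder actually produced. Zero when media already fills the budget.
function paddingBytes(budget: PacingBudget, mediaBytesSent: number): number {
  const allowed = Math.floor((budget.bytesPerSecond * budget.intervalMs) / 1000);
  return Math.max(0, allowed - mediaBytesSent);
}
```

For example, at 1 Mbps (125,000 bytes/s) with a 40 ms pacing interval, the per-interval budget is 5,000 bytes; if the encoder only produced 3,200 bytes, the sender would pad with 1,800 bytes.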
While the latencies are high and not ideal for real-time, they can be tuned and improved. Specifically, for real-time use, the congestion control implemented in webtransport is not ideal. Ergo, the first improvement for webrtc-related real-time use cases would be to tune both the application and the transport. Nonetheless, the vanilla results looked pretty good.
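One application-level knob a tuned real-time sender might use is choosing, per frame, between reliable QUIC streams and unreliable datagrams -- a trade-off that comes up repeatedly in the rtp-over-quic discussions. A hypothetical policy sketch (the names and the policy itself are illustrative, not from the experiment):

```typescript
type Transport = "stream" | "datagram";

// Hypothetical per-frame transport policy: key frames are costly to lose,
// so send them on a reliable stream; delta frames can tolerate loss, so
// send them as datagrams and avoid head-of-line blocking on retransmits.
function chooseTransport(isKeyFrame: boolean, datagramsSupported: boolean): Transport {
  if (isKeyFrame || !datagramsSupported) return "stream";
  return "datagram";
}
```

The design intuition: retransmitting a lost delta frame often arrives too late to be useful, so reliability buys nothing but added latency -- exactly the kind of transport-level tuning the paragraph above calls for.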