We are playing around with filters and backgrounds, but whenever we enable them in testing, our GPU usage skyrockets. Is there a way around this?
In general, real-time processing of video or audio tracks in a web browser uses a lot of CPU, GPU, or both. There isn't always a way around this.
Here's a WebRTC sample showing how to process video data in a worker thread.
I do not believe this is available yet in Safari.
+1 to everything Kwindla said 👆️
Unfortunately, some additional CPU + GPU load is inevitable, as each video frame must be run through the segmentation model (in the case of backgrounds) for inference. Various factors can influence this load, including the video frame size, the video frame rate, and how frequently inference is run.
While additional CPU + GPU load is unavoidable, there are some approaches you can use to optimize it. At Daily, we did a lot of testing on this. Reducing the video frame size is one way to lower the load, though it compromises the end user's experience. Reducing the video frame rate is a similar optimization.
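As a rough sketch of the frame size / frame rate approach: in the browser you can request lower capture settings with `MediaStreamTrack.applyConstraints()`. The helper below (a hypothetical name, not part of any library) just builds the constraints object; the `track` variable and the scale factor are assumptions for illustration.

```javascript
// Build reduced capture constraints: scale down the resolution and cap the
// frame rate. `scale` is a fraction of the original dimensions (e.g. 0.5).
function reducedConstraints(width, height, frameRate, scale) {
  return {
    width: { ideal: Math.round(width * scale) },
    height: { ideal: Math.round(height * scale) },
    frameRate: { ideal: frameRate },
  };
}

// In the browser you would then apply them to the local video track:
// await track.applyConstraints(reducedConstraints(1280, 720, 15, 0.5));
```

Using `ideal` rather than `exact` lets the browser fall back gracefully if the camera can't satisfy the request.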
We found that one of the best approaches was to decouple the frame rates of the video and the ML inference, so you can control how frequently a new inference is calculated without compromising the video frame rate as visible to the end user.
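One way to sketch that decoupling (an assumption on my part, not Daily's actual implementation): run the segmentation model at its own rate while the render loop composites every frame, reusing the most recent mask on skipped frames. `runSegmentation` and `composite` are hypothetical functions standing in for your model call and your drawing code.

```javascript
// Decides, once per video frame, whether a new inference should run,
// based on a target inference rate that is independent of the video fps.
class InferenceScheduler {
  constructor(inferenceFps) {
    this.minIntervalMs = 1000 / inferenceFps;
    this.lastInferenceMs = -Infinity;
  }
  // Returns true when enough time has passed to run the model again.
  shouldInfer(nowMs) {
    if (nowMs - this.lastInferenceMs >= this.minIntervalMs) {
      this.lastInferenceMs = nowMs;
      return true;
    }
    return false;
  }
}

// Per-frame loop in the browser would look roughly like:
//   if (scheduler.shouldInfer(performance.now())) {
//     mask = await runSegmentation(frame); // expensive, runs at inferenceFps
//   }
//   composite(frame, mask); // cheap, runs at the full video frame rate
```

So with 30 fps video and the scheduler set to 10 fps, the model runs on roughly every third frame while the background effect still updates smoothly on screen.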
Hope this is useful.