Virtual Background with WebRTC in Android

What's this blog post about?

In this blog post, we discuss implementing a virtual background in Android with WebRTC using ML Kit selfie segmentation. The feature works best under uniform lighting and requires a high-performance device for a smooth user experience. We explore several approaches to achieving virtual backgrounds, such as updating the WebRTC MediaStream, creating a second virtual video source from the camera source, and using the Android CameraX APIs; however, we find that processing each VideoFrame directly is necessary for our use case. The most challenging part is getting every VideoFrame out of WebRTC for processing, which we achieve with the setVideoProcessor API available on VideoSource. After segmentation, we use PorterDuff modes to composite the segmented output with the background image on a Canvas and create an updated VideoFrame. The full pipeline takes approximately 40-50 ms per frame at 360p resolution, as measured on a OnePlus 6.
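The summary above names the key hook: VideoSource.setVideoProcessor, which lets a custom VideoProcessor intercept each captured VideoFrame before it reaches the encoder. A minimal sketch of that interception point is below; the class name VirtualBackgroundProcessor and the commented compositing steps are illustrative, not the author's actual implementation, and the block assumes the org.webrtc Android SDK is on the classpath.

```kotlin
import org.webrtc.VideoFrame
import org.webrtc.VideoProcessor
import org.webrtc.VideoSink
import org.webrtc.VideoSource

// Hypothetical processor name; the VideoProcessor interface itself is
// part of the org.webrtc Android SDK.
class VirtualBackgroundProcessor : VideoProcessor {
    private var sink: VideoSink? = null

    // WebRTC hands us the sink that feeds the rest of the pipeline.
    override fun setSink(sink: VideoSink?) {
        this.sink = sink
    }

    override fun onCapturerStarted(success: Boolean) {}
    override fun onCapturerStopped() {}

    // Called once per captured frame.
    override fun onFrameCaptured(frame: VideoFrame) {
        // Sketch of the per-frame work described in the post:
        // 1. Convert the VideoFrame's buffer to a Bitmap.
        // 2. Run ML Kit selfie segmentation to obtain a person mask.
        // 3. On a Canvas, use PorterDuff modes (e.g. SRC_IN to keep only
        //    the person, DST_OVER to place the background image behind)
        //    to composite person and background.
        // 4. Wrap the composited Bitmap back into a VideoFrame.
        val processed = frame // placeholder for the composited frame
        sink?.onFrame(processed)
    }
}

// Wiring it into the pipeline (videoSource is an org.webrtc.VideoSource):
// videoSource.setVideoProcessor(VirtualBackgroundProcessor())
```

Forwarding the (possibly modified) frame to the sink is what keeps the rest of the WebRTC pipeline unchanged; dropping the call to onFrame would stall the stream.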


Date published
Oct. 21, 2022

Ashish Kumar Verma



By Matt Makai. 2021-2024.