Integration with WebRTC applications


Can we integrate the SDK into a WebRTC session?

When our SDK is integrated into an application, it takes control of the device's camera the moment the application activates it.

Since the SDK needs the camera to capture the frames required to extract the user's vital signs, the camera cannot be used by any parallel process (such as WebRTC) while a measurement is in progress.


Is there any way to use the SDK within a WebRTC app?

Yes, there are two ways to integrate the SDK into a native WebRTC application.

  1. A waiting room before a session begins:

    Before the WebRTC session begins, the patient enters a "waiting room" and takes a measurement using our SDK.

    Once the vital signs have been extracted, you can start the WebRTC session and share the results with a doctor via a dedicated API.
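The waiting-room flow above can be sketched as follows. The SDK calls (`startMeasurement`, `deactivate`, `shareResults`) and the `rtcClient` object are hypothetical placeholders, not the actual API; substitute the real SDK entry points and your own signaling layer.

```javascript
// Sketch of approach 1: measure in a "waiting room", release the camera,
// then start the WebRTC call. All sdk.* and rtcClient.* names are
// hypothetical placeholders for illustration only.
async function waitingRoomFlow(sdk, rtcClient) {
  // 1. Measure in the waiting room: the SDK owns the camera here.
  const vitals = await sdk.startMeasurement();

  // 2. Deactivate the SDK so the camera is free before WebRTC needs it.
  await sdk.deactivate();

  // 3. Share the results with the doctor via the dedicated API.
  await sdk.shareResults(vitals);

  // 4. Only now start the WebRTC session, so getUserMedia() can
  //    acquire the camera without contention.
  await rtcClient.startSession();
  return vitals;
}
```

The key design point is the strict ordering: the SDK is fully deactivated before the WebRTC session requests the camera, so the two never compete for the device.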

  2. Closing the video stream while the WebRTC session continues:

    This scenario is not supported and must be validated by the client.

    The WebRTC protocol controls the audio and video inputs.

    This means that once the WebRTC session has started, you can shut down the video input on demand and then measure the patient's vital signs using the SDK.

    When the measurement is finished, you can share the results via a dedicated API, deactivate the SDK, and re-enable the video for the WebRTC session.
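The pause-measure-resume sequence can be sketched as below. `setVideoEnabled` uses the standard `MediaStream.getVideoTracks()` / `MediaStreamTrack.enabled` WebRTC APIs; the `sdk.*` calls are hypothetical placeholders for the vendor SDK.

```javascript
// Sketch of approach 2: pause the outgoing video, measure, then resume.
// setVideoEnabled works on anything shaped like a MediaStream.
function setVideoEnabled(stream, enabled) {
  // Disabling a track keeps the WebRTC connection (and audio) alive
  // while it stops sending camera frames -- unlike track.stop(),
  // which permanently ends that track.
  for (const track of stream.getVideoTracks()) {
    track.enabled = enabled;
  }
}

async function measureMidCall(localStream, sdk) {
  setVideoEnabled(localStream, false); // stop sending video frames
  // NOTE (assumption): on platforms where the SDK needs exclusive
  // camera access, merely disabling the track may not be enough; you
  // may instead have to stop() the track and re-acquire it with
  // getUserMedia() after the measurement.
  const vitals = await sdk.startMeasurement(); // hypothetical call
  await sdk.shareResults(vitals);              // hypothetical call
  await sdk.deactivate();                      // hypothetical call
  setVideoEnabled(localStream, true);          // resume video
  return vitals;
}
```

Disabling rather than stopping the track is what lets the doctor's audio connection survive the measurement; whether that frees the camera for the SDK is platform-dependent and is part of what the client must validate.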


