🛰️ WebRTC Android is a pre-compiled version of Google's WebRTC library for Android, maintained by Stream. It tracks recent WebRTC protocol updates to facilitate real-time video chat using functional UI components, Kotlin extensions for Android, and Compose.
Since Google has not maintained the WebRTC library for Android for many years (and JCenter, where it was hosted, has been shut down, so the library is no longer available), we decided to build our own pre-compiled WebRTC core library that tracks recent WebRTC commits and adds some improvements.
👉 Check out who's using WebRTC Android.
You can see the use cases of this library in the repositories below:
- stream-video-android: 📲 An official Android Video SDK by Stream, which consists of versatile Core + Compose UI component libraries that allow you to build video calling, audio room, and live streaming apps based on WebRTC, running on Stream's global edge network.
- webrtc-in-jetpack-compose: 📱 This project demonstrates the WebRTC protocol for real-time video communication with Jetpack Compose.
If you want a better grasp of how WebRTC works, including its basic concepts, relevant terminology, and how to establish a peer-to-peer connection and communicate with a signaling server on Android, check out the articles below:
- Building a Video Chat App: WebRTC on Android (Part 1)
- Building a Video Chat App: WebRTC in Jetpack Compose (Part 2)
- HTTP, WebSocket, gRPC or WebRTC: Which Communication Protocol is Best For Your App?
- WebRTC Protocol: What is it and how does it work?
Stream Video SDK for Compose is the official Android SDK for Stream Video, a service for building video calls, audio rooms, and live-streaming applications. Stream's versatile Video SDK has been built with this webrtc-android library, and you can check out the tutorials below for more information.
Add the dependency below to your module's `build.gradle` file:

```groovy
dependencies {
    implementation "io.getstream:stream-webrtc-android:1.1.1"
}
```
See how to import the snapshot
Snapshots of the current development version of WebRTC Android are available, which track the latest versions.
To import snapshot versions into your project, add the code snippet below to your Gradle file:
```groovy
repositories {
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
}
```
Next, add the dependency below to your module's `build.gradle` file:

```groovy
dependencies {
    implementation "io.getstream:stream-webrtc-android:1.1.2-SNAPSHOT"
}
```
Once you import this library, you can use all of the `org.webrtc` package functions, such as `org.webrtc.PeerConnection` and `org.webrtc.VideoTrack`. For more information, you can check out the API references for WebRTC packages.
Here are the most commonly used APIs in the WebRTC library, and you can reference the documentation below:
- PeerConnection: Provides methods to create and set an SDP offer/answer, add ICE candidates, potentially connect to a remote peer, monitor the connection, and close the connection once it's no longer needed.
- PeerConnectionFactory: Creates a `PeerConnection` instance.
- EglBase: Holds EGL state and utility methods for handling an EGL 1.0 `EGLContext`, an `EGLDisplay`, and an `EGLSurface`.
- VideoTrack: Manages multiple `VideoSink` objects, which receive a stream of video frames in real-time, and allows you to control the `VideoSink` objects, such as adding, removing, enabling, and disabling them.
- VideoSource: Used to create video tracks and add a `VideoProcessor`, which is a lightweight abstraction for an object that can receive video frames, process them, and pass them on to another object.
- AudioTrack: Manages multiple `AudioSink` objects, which receive a stream of audio frames in real-time, and allows you to control the `AudioSink` objects, such as adding, removing, enabling, and disabling them.
- AudioSource: Used to create audio tracks.
- MediaStreamTrack: Java wrapper for a C++ `MediaStreamTrackInterface`.
- IceCandidate: Representation of a single ICE candidate, mirroring `IceCandidateInterface` in the C++ API.
- SessionDescription: Description of an RFC 4566 session. SDPs are passed as serialized Strings in Java-land and are materialized to `SessionDescriptionInterface` as appropriate in the JNI layer.
- SurfaceViewRenderer: Displays the video stream on a `SurfaceView`.
- Camera2Capturer: Provides video frames for a `VideoTrack` (typically local) from the provided camera ID. `Camera2Capturer` must be run on devices with `Build.VERSION_CODES.LOLLIPOP` or higher.
- Camera2Enumerator: Enumerates the camera devices available through the Camera2 API.
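To see how several of these APIs fit together, here is a minimal sketch that bootstraps a `PeerConnectionFactory` and creates a `PeerConnection` using only standard `org.webrtc` APIs. The helper name `buildPeerConnection` and the public STUN server URL are illustrative assumptions, not part of this library:

```kotlin
import android.content.Context
import org.webrtc.DefaultVideoDecoderFactory
import org.webrtc.DefaultVideoEncoderFactory
import org.webrtc.EglBase
import org.webrtc.PeerConnection
import org.webrtc.PeerConnectionFactory

// Hypothetical helper that wires the pieces above together.
fun buildPeerConnection(context: Context, observer: PeerConnection.Observer): PeerConnection? {
    // Initialize the native WebRTC library once per process.
    PeerConnectionFactory.initialize(
        PeerConnectionFactory.InitializationOptions.builder(context)
            .createInitializationOptions()
    )

    // Shared EGL context used for hardware-accelerated encoding and decoding.
    val eglBase = EglBase.create()

    val factory = PeerConnectionFactory.builder()
        .setVideoEncoderFactory(DefaultVideoEncoderFactory(eglBase.eglBaseContext, true, true))
        .setVideoDecoderFactory(DefaultVideoDecoderFactory(eglBase.eglBaseContext))
        .createPeerConnectionFactory()

    // A public STUN server is used here purely as an illustrative placeholder.
    val iceServers = listOf(
        PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer()
    )
    return factory.createPeerConnection(PeerConnection.RTCConfiguration(iceServers), observer)
}
```

The factory is typically created once and reused for every track and connection, and the returned `PeerConnection` is nullable because native creation can fail; the `observer` receives ICE candidates and connection state changes that you forward to your signaling layer.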
If you want to learn more about building a video chat application for Android using WebRTC, check out the blog post below:
Stream WebRTC Android supports some useful UI components for WebRTC, such as `VideoTextureViewRenderer`. First, add the dependency below to your module's `build.gradle` file:

```groovy
dependencies {
    implementation "io.getstream:stream-webrtc-android-ui:$version"
}
```
`VideoTextureViewRenderer` is a custom `TextureView` that implements `VideoSink` and `SurfaceTextureListener`.
Usually, you can use `SurfaceViewRenderer` to display real-time video streams in a layout if you need a simple video call screen that doesn't overlay one video on top of another. However, it may not work as you expect if you need to design a complex video call screen, such as one where a video call layout must overlay another, as in the example below:
In this case, we recommend using `VideoTextureViewRenderer`, like the example below:
```xml
<io.getstream.webrtc.android.ui.VideoTextureViewRenderer
    android:id="@+id/participantVideoRenderer"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    />
```
You can attach the renderer to a `VideoTrack` as a sink, or detach it, like the example below:

```kotlin
videoTrack.addSink(participantVideoRenderer)
videoTrack.removeSink(participantVideoRenderer)
```
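To give the renderer something to display, here is a hedged sketch that produces a local `VideoTrack` from the front-facing camera with `Camera2Enumerator` and `Camera2Capturer`, again using only standard `org.webrtc` APIs; the helper name, track ID, and capture parameters are illustrative assumptions:

```kotlin
import android.content.Context
import org.webrtc.Camera2Capturer
import org.webrtc.Camera2Enumerator
import org.webrtc.EglBase
import org.webrtc.PeerConnectionFactory
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoTrack

// Hypothetical helper: creates a local VideoTrack backed by the first front-facing camera.
fun createLocalVideoTrack(
    context: Context,
    factory: PeerConnectionFactory,
    eglBase: EglBase,
): VideoTrack {
    val enumerator = Camera2Enumerator(context)
    val frontCameraId = enumerator.deviceNames.first { enumerator.isFrontFacing(it) }
    val capturer = Camera2Capturer(context, frontCameraId, null)

    // The capturer delivers frames to the VideoSource through a SurfaceTexture.
    val surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBase.eglBaseContext)
    val videoSource = factory.createVideoSource(capturer.isScreencast)
    capturer.initialize(surfaceTextureHelper, context, videoSource.capturerObserver)
    capturer.startCapture(1280, 720, 30) // width, height, fps: illustrative values

    return factory.createVideoTrack("local-video", videoSource)
}
```

You could then call `createLocalVideoTrack(context, factory, eglBase).addSink(participantVideoRenderer)` to show the local camera feed in the renderer above.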
Stream WebRTC Android supports some Jetpack Compose components for WebRTC, such as `VideoRenderer` and `FloatingVideoRenderer`. First, add the dependency below to your module's `build.gradle` file:

```groovy
dependencies {
    implementation "io.getstream:stream-webrtc-android-compose:$version"
}
```
`VideoRenderer` is a composable function that renders a single video track in Jetpack Compose.
```kotlin
VideoRenderer(
    videoTrack = remoteVideoTrack,
    modifier = Modifier.fillMaxSize(),
    eglBaseContext = eglBaseContext,
    rendererEvents = rendererEvents
)
```
You can observe rendering state changes by providing a `RendererEvents` interface, like the example below:
```kotlin
val rendererEvents = object : RendererEvents {
    override fun onFirstFrameRendered() { .. }
    override fun onFrameResolutionChanged(videoWidth: Int, videoHeight: Int, rotation: Int) { .. }
}
```
`FloatingVideoRenderer` represents a floating item that features a participant's video, usually the local participant. You can use this composable function to overlay a single video track on another one, and users can move the floating video around with drag interactions.
You can use `FloatingVideoRenderer` with `VideoRenderer` like the example below:
```kotlin
var parentSize: IntSize by remember { mutableStateOf(IntSize(0, 0)) }

if (remoteVideoTrack != null) {
    VideoRenderer(
        videoTrack = remoteVideoTrack,
        modifier = Modifier
            .fillMaxSize()
            .onSizeChanged { parentSize = it },
        eglBaseContext = eglBaseContext,
        rendererEvents = rendererEvents
    )
}

if (localVideoTrack != null) {
    FloatingVideoRenderer(
        modifier = Modifier
            .size(width = 150.dp, height = 210.dp)
            .clip(RoundedCornerShape(16.dp))
            .align(Alignment.TopEnd),
        videoTrack = localVideoTrack,
        parentBounds = parentSize,
        paddingValues = PaddingValues(0.dp),
        eglBaseContext = eglBaseContext,
        rendererEvents = rendererEvents
    )
}
```
Stream WebRTC Android supports some useful extensions for WebRTC based on Kotlin Coroutines. First, add the dependency below to your module's `build.gradle` file:

```groovy
dependencies {
    implementation "io.getstream:stream-webrtc-android-ktx:$version"
}
```
`addRtcIceCandidate` is a suspend function that allows you to add a given `IceCandidate` to a `PeerConnection`, so you can add an `IceCandidate` to a `PeerConnection` in Coroutines style instead of callback style:
```kotlin
pendingIceMutex.withLock {
    pendingIceCandidates.forEach { iceCandidate ->
        connection.addRtcIceCandidate(iceCandidate)
    }
    pendingIceCandidates.clear()
}
```
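For context, `pendingIceCandidates` and `pendingIceMutex` in the snippet above belong to the surrounding application code. A hedged sketch of that buffering pattern, where candidates arriving before the remote description is set are queued and flushed later, might look like this (the class and property names are illustrative assumptions; the import for `addRtcIceCandidate` from this ktx artifact is omitted):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import org.webrtc.IceCandidate
import org.webrtc.PeerConnection

// Hypothetical buffer: queues remote ICE candidates until the remote description is set.
// addRtcIceCandidate is the suspending extension from stream-webrtc-android-ktx.
class IceCandidateBuffer(
    private val connection: PeerConnection,
    private val scope: CoroutineScope,
) {
    private val pendingIceMutex = Mutex()
    private val pendingIceCandidates = mutableListOf<IceCandidate>()
    private var remoteDescriptionSet = false

    // Called whenever the signaling channel delivers a remote candidate.
    fun onRemoteIceCandidate(candidate: IceCandidate) {
        scope.launch {
            pendingIceMutex.withLock {
                if (remoteDescriptionSet) {
                    connection.addRtcIceCandidate(candidate)
                } else {
                    pendingIceCandidates += candidate
                }
            }
        }
    }

    // Called once setRemoteDescription has succeeded: flush the queued candidates.
    suspend fun onRemoteDescriptionSet() {
        pendingIceMutex.withLock {
            remoteDescriptionSet = true
            pendingIceCandidates.forEach { connection.addRtcIceCandidate(it) }
            pendingIceCandidates.clear()
        }
    }
}
```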
You can also create a `SessionDescription` with a Coroutines-style API that delegates to an `SdpObserver`:
```kotlin
suspend fun createAnswer(): Result<SessionDescription> {
    return createSessionDescription { sdpObserver ->
        connection.createAnswer(sdpObserver, mediaConstraints)
    }
}
```
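Because `createAnswer()` returns a Kotlin `Result`, you can handle success and failure without callbacks. Here is a hedged usage sketch in which `scope` and `sendAnswer` are illustrative assumptions standing in for your coroutine scope and signaling code:

```kotlin
scope.launch {
    createAnswer()
        .onSuccess { answer ->
            // Apply the answer locally (via the callback-based setLocalDescription,
            // or a similar suspending wrapper), then ship it to the remote peer
            // over your signaling channel.
            sendAnswer(answer.description)
        }
        .onFailure { error ->
            Log.e("Call", "Failed to create an answer", error)
        }
}
```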
These are instructions for setting up the Chromium dev tools if you need to compile the WebRTC core library yourself with this project.
- You need to set up depot_tools to build and fetch the Chromium codebase.
- You should fetch the Chromium WebRTC repository from Google's repository against HEAD commits.
Note: Chromium WebRTC core libraries can be built only on Linux. Every step takes time depending on your machine specs and internet speed, so make sure every step completes without interruption.
You need to set up an AWS instance with the following prerequisites:
- Ubuntu 14.04 LTS (Trusty, EOL April 2022)
- 8 GB of RAM
- At least 50 GB of HDD/SSD storage
To compile the pre-built WebRTC library for Android, you must follow the steps below: