Camera & Microphone
If you want to see the device management API in action, you can check out the sample app.
Camera management
The SDK does its best to make working with the camera easy. We expose the following objects on the call:
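// the camera manager and its reactive state
call.camera;
call.camera.state;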
Start-stop camera
await call.camera.toggle();
// or
await call.camera.enable();
await call.camera.disable();
It’s always best to await these calls; however, the SDK does its best to resolve potential race conditions: the last call always “wins”.
Here is how you can access the status:
call.camera.state.status; // enabled, disabled or undefined
call.camera.state.status$.subscribe(console.log); // Reactive value for status, you can subscribe to changes
Status is updated once the camera is actually enabled or disabled. Use call.camera.state.optimisticStatus
(and the corresponding observable) for the “optimistic” status that is updated immediately after toggling the camera.
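For example (the optimisticStatus$ observable name here assumes the same naming convention as the other state observables):
call.camera.state.optimisticStatus; // updated immediately after toggling
call.camera.state.optimisticStatus$.subscribe(console.log); // Reactive value for the optimistic status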
The initial status of the camera is undefined. If you don’t change the initial status, the default backend settings will be applied once the participant joins the call.
If you’re building a lobby screen, this is how you can apply the backend settings:
await call.get();
const defaultCameraStatus = call.state.settings?.video.camera_default_on;
defaultCameraStatus ? call.camera.enable() : call.camera.disable();
List and select devices
// List devices
// The error handler is called if the user denies permission to use the camera
call.camera
.listDevices()
.subscribe({ next: console.log, error: console.error });
// Select device
call.camera.select('device Id');
Here is how you can access the selected device:
call.camera.state.selectedDevice; // currently selected camera
call.camera.state.selectedDevice$.subscribe(console.log); // Reactive value for selected device, you can subscribe to changes
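For example, here is a minimal sketch that selects the first reported camera (the emitted devices are standard MediaDeviceInfo objects, so deviceId comes from the browser):
call.camera.listDevices().subscribe({
  next: ([firstCamera]) => {
    if (firstCamera) {
      call.camera.select(firstCamera.deviceId);
    }
  },
  error: console.error,
});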
Camera permissions
In a browser, you can use the following API to check whether the user has granted permission to access the connected cameras:
call.camera.state.hasBrowserPermission$.subscribe((value) => {
if (value) {
// User has granted permission
} else {
// User has denied or not yet granted permission
}
});
Camera direction
On mobile devices it’s useful if users can switch between the front and back cameras:
await call.camera.selectDirection('front'); // or back
// Flip camera
await call.camera.flip();
This is how you can access the camera direction:
call.camera.state.direction; // front, back or undefined
call.camera.state.direction$.subscribe(console.log); // Reactive value for direction, you can subscribe to changes
The initial direction of the camera is undefined. If you don’t change the initial direction, the default backend settings will be applied once the participant joins the call.
If you’re building a lobby screen, this is how you can apply the backend settings:
await call.get();
// fallback in case no backend setting
let defaultDirection: CameraDirection = 'front';
const backendSetting = call.state.settings?.video.camera_facing;
if (backendSetting) {
defaultDirection = backendSetting === 'front' ? 'front' : 'back';
}
call.camera.selectDirection(defaultDirection);
Render video
In call
Follow our Rendering Video and Audio guide.
Lobby preview
This is how you can show a preview video before joining the call:
const call = client.call('default', '123');
await call.camera.enable();
const videoEl = document.createElement('video');
videoEl.playsInline = true;
videoEl.muted = true;
videoEl.autoplay = true;
if (videoEl.srcObject !== call.camera.state.mediaStream) {
videoEl.srcObject = call.camera.state.mediaStream || null;
if (videoEl.srcObject) {
videoEl.play();
}
}
Access to the Camera’s MediaStream
Our SDK exposes the current mediaStream
instance that you can use for your needs (for example, local recording, etc…):
// current mediaStream instance
call.camera.state.mediaStream;
// Reactive value for mediaStream, you can subscribe to changes
call.camera.state.mediaStream$.subscribe((mediaStream) => {
const [videoTrack] = mediaStream.getVideoTracks();
console.log('Video track', videoTrack);
});
Microphone management
The SDK does its best to make working with the microphone easy. We expose the following objects on the call:
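// the microphone manager and its reactive state
call.microphone;
call.microphone.state;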
Start-stop microphone
await call.microphone.toggle();
// or
await call.microphone.enable();
await call.microphone.disable();
It’s always best to await these calls; however, the SDK does its best to resolve potential race conditions: the last call always “wins”.
This is how you can access the microphone status:
call.microphone.state.status; // enabled, disabled or undefined
call.microphone.state.status$.subscribe(console.log); // Reactive value for status, you can subscribe to changes
Status is updated once the microphone is actually enabled or disabled. Use call.microphone.state.optimisticStatus
(and the corresponding observable) for the “optimistic” status that is updated immediately after toggling the microphone.
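For example, here is a small sketch that drives a mute-button label from the optimistic status so the UI reacts immediately (the optimisticStatus$ name again assumes the usual naming convention, and the mute-button element is hypothetical):
call.microphone.state.optimisticStatus$.subscribe((status) => {
  const muteButton = document.getElementById('mute-button') as HTMLButtonElement;
  muteButton.textContent = status === 'enabled' ? 'Mute' : 'Unmute';
});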
The initial status of the microphone is undefined. If you don’t change the initial status, the default backend settings will be applied once the participant joins the call.
If you’re building a lobby screen, this is how you can apply the backend settings:
await call.get();
const defaultMicStatus = call.state.settings?.audio.mic_default_on;
if (defaultMicStatus) {
call.microphone.enable();
} else {
call.microphone.disable();
}
List and select devices
// List devices
// The error handler is called if the user denies permission to use the microphone
call.microphone
.listDevices()
.subscribe({ next: console.log, error: console.error });
// Select device
call.microphone.select('device Id');
This is how you can access the selected device:
call.microphone.state.selectedDevice; // currently selected microphone
call.microphone.state.selectedDevice$.subscribe(console.log); // Reactive value for selected device, you can subscribe to changes
Microphone permissions
In a browser, you can use the following API to check whether the user has granted permission to access the connected microphones:
call.microphone.state.hasBrowserPermission$.subscribe((value) => {
if (value) {
// User has granted permission
} else {
// User has denied or not yet granted permission
}
});
Play audio
In call
Follow our Playing Video and Audio guide.
Lobby preview
On lobby screens a common UX pattern is to show a visual indicator of the detected audio levels coming from the selected microphone. The client exposes the createSoundDetector
utility method to help implement this functionality. Here is an example of how you can do that:
<progress id="volume" max="100" min="0"></progress>
import { createSoundDetector } from '@stream-io/video-client';
let cleanup: Function | undefined;
call.microphone.state.mediaStream$.subscribe(async (mediaStream) => {
const progressBar = document.getElementById('volume') as HTMLProgressElement;
if (mediaStream) {
cleanup = createSoundDetector(
mediaStream,
(event) => {
progressBar.value = event.audioLevel;
},
{ detectionFrequencyInMs: 100 },
);
} else {
await cleanup?.();
progressBar.value = 0;
}
});
Speaking while muted notification
When the microphone is disabled, the client automatically starts monitoring audio levels to detect whether the user is speaking. This feature is enabled by default unless the user lacks the permission to send audio, or it has been explicitly disabled.
This is how you can subscribe to these notifications:
call.microphone.state.speakingWhileMuted; // current value
call.microphone.state.speakingWhileMuted$.subscribe((isSpeaking) => {
if (isSpeaking) {
console.log(`You're muted, unmute yourself to speak`);
}
}); // Reactive value
// to disable this feature completely:
await call.microphone.disableSpeakingWhileMutedNotification();
// to enable it back:
await call.microphone.enableSpeakingWhileMutedNotification();
Access to the Microphone’s MediaStream
Our SDK exposes the current mediaStream
instance that you can use for your needs (for example, local recording, etc…):
// current mediaStream instance
call.microphone.state.mediaStream;
// Reactive value for mediaStream, you can subscribe to changes
call.microphone.state.mediaStream$.subscribe((mediaStream) => {
const [audioTrack] = mediaStream.getAudioTracks();
console.log('Audio track', audioTrack);
});
Noise Cancellation
Check our Noise Cancellation guide.
Camera and Microphone AI/ML Filters
Both the Camera and the Microphone allow you to apply AI/ML filters to the media stream. This can be useful for various use cases, such as:
- applying video effects, such as background blurring or background replacement
- applying a custom video filter (e.g., color correction or face detection)
- applying a custom audio filter (e.g., noise reduction)
import { Call } from '@stream-io/video-client';
let call: Call;
// apply a custom video filter
const { unregister: unregisterMyVideoFilter } = call.camera.registerFilter(
function myVideoFilter(inputMediaStream: MediaStream) {
// initialize the video filter, do some magic and
// return the modified media stream
return {
output: mediaStreamWithFilterApplied,
stop: () => {
// optional cleanup function
},
};
},
);
// apply a custom audio filter
const { unregister: unregisterMyAudioFilter } = call.microphone.registerFilter(
function myAudioFilter(inputMediaStream: MediaStream) {
// initialize the audio filter, do some magic and
// return the modified media stream
return {
output: mediaStreamWithFilterApplied,
stop: () => {
// optional cleanup function
},
};
},
);
// unregister the filters
unregisterMyVideoFilter();
unregisterMyAudioFilter();
Filters can be registered and unregistered at any time, and the SDK will take care of the rest.
Filters can be chained, and the order of registration matters.
The first registered filter will be the first to modify the raw MediaStream.
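For example, here is a sketch of chaining two video filters (blurBackground and adjustColors are hypothetical helpers that return a processed MediaStream):
const blur = call.camera.registerFilter((inputMediaStream: MediaStream) => ({
  // receives the raw camera stream first
  output: blurBackground(inputMediaStream),
}));
const color = call.camera.registerFilter((inputMediaStream: MediaStream) => ({
  // receives the output of the blur filter
  output: adjustColors(inputMediaStream),
}));
// unregister the filters in any order when they are no longer needed
blur.unregister();
color.unregister();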
A filter may be asynchronous. In this case, the function passed to the registerFilter method should synchronously return an object where output is a promise that resolves to a MediaStream:
call.camera.registerFilter(function myVideoFilter(inputMediaStream: MediaStream) {
// note that output is returned synchronously!
return {
output: new Promise((resolve) => resolve(mediaStreamWithFilterApplied)),
stop: () => {
// optional cleanup function
},
};
});
Registering a filter is not instantaneous. If you need to know when a filter is registered (e.g., to show a spinner in the UI), await the registered promise returned by the registerFilter method:
const { registered } = call.camera.registerFilter(
  (inputMediaStream: MediaStream) => {
    // your video filter; a pass-through is shown here for illustration
    return { output: inputMediaStream };
  },
);
await registered;
// at this point, the filter is registered
Once one or more filters are registered, the SDK will send the last returned MediaStream to the remote participants.
Speaker management
List and select devices
Browser support
Selecting the audio output device isn’t supported by all browsers; this is how you can check availability:
call.speaker.state.isDeviceSelectionSupported;
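For example, you can use this flag to decide whether to render an audio output selector at all (a sketch; the actual UI wiring is up to your app):
if (call.speaker.state.isDeviceSelectionSupported) {
  // render the audio output selector
} else {
  // hide the selector and rely on the system default output
}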
// List devices
// The error handler is called if the user denies permission to use audio devices
call.speaker.listDevices().subscribe({
next: (devices) => console.log(devices),
error: (err) => console.error(err),
});
// Select device
call.speaker.select('device Id');
The device ID can also be an empty string, which means the system’s default audio output will be used.
Here is how you can access the selected device:
call.speaker.state.selectedDevice; // currently selected audio output
// Reactive value for selected device, you can subscribe to changes
call.speaker.state.selectedDevice$.subscribe((selectedDevice) => {
console.log(selectedDevice);
});
Set volume
The volume has to be a number between 0 and 1, where 0 mutes the audio output.
call.speaker.setVolume(0.5);
The default system value is 1.
Here is how you can access the current volume:
call.speaker.state.volume; // current volume
// Reactive value for volume, you can subscribe to changes
call.speaker.state.volume$.subscribe((volume) => {
console.log(volume);
});
Set participant volume
You can control the volume of a specific participant in the call:
import { StreamVideoParticipant } from '@stream-io/video-client';
let p: StreamVideoParticipant;
// will set the volume of the participant to 50%
call.speaker.setParticipantVolume(p.sessionId, 0.5);
// will mute the participant
call.speaker.setParticipantVolume(p.sessionId, 0);
// will reset the volume to the default value
call.speaker.setParticipantVolume(p.sessionId, undefined);