RecordRTC Documentation / RecordRTC Wiki Pages / RecordRTC Demo / WebRTC Experiments
RecordRTC is a JavaScript-based media-recording library for modern web browsers (those that support the WebRTC getUserMedia API). It is optimized for different devices and browsers to bring all client-side (plugin-free) recording solutions into a single place.
Please check the dev directory for development files.
- RecordRTC API Reference
- MRecordRTC API Reference
- MediaStreamRecorder API Reference
- StereoAudioRecorder API Reference
- WhammyRecorder API Reference
- Whammy API Reference
- CanvasRecorder API Reference
- GifRecorder API Reference
- Global API Reference
Browser | Support | Features |
---|---|---|
Firefox | Stable / Aurora / Nightly | Audio+Video (Both local/remote) |
Google Chrome | Stable / Canary / Beta / Dev | Audio+Video (Both local/remote) |
Opera | Stable / NEXT | Audio/Video Separately |
Android | Chrome / Firefox / Opera | Audio/Video Separately |
Microsoft Edge | Normal Build | Only Audio |
Media File | Bitrate | Encoders | Sample Rate | Additional Info |
---|---|---|---|---|
Audio File (WAV) | 1411 kbps | pcm_s16le | 44100 Hz | stereo, s16 |
Video File (WebM) | 60 kb/s | (whammy) vp8 codec yuv420p | -- | SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default) |
- RecordRTC to Node.js
- RecordRTC to PHP
- RecordRTC to ASP.NET MVC
- RecordRTC & HTML-2-Canvas i.e. Canvas/HTML Recording!
- MRecordRTC i.e. Multi-RecordRTC!
- RecordRTC on Ruby!
- RecordRTC over Socket.io
- ffmpeg-asm.js and RecordRTC! Audio/Video Merging & Transcoding!
- RecordRTC / PHP / FFmpeg
- Record Audio and upload to Nodejs server
- ConcatenateBlobs.js - Concatenate multiple recordings in single Blob!
- Remote audio-stream recording or a real p2p demo
- Mp3 or Wav Recording
- Record entire DIV including video, image, textarea, input, drag/move/resize, everything
- Record canvas 2D drawings, lines, shapes, texts, images, drag/resize/enlarge/move via a huge drawing tool!
- Record Canvas2D Animation
- WebGL animation recording
- Plotly - WebGL animation recording
You can also try a Chrome extension for screen recording:
NPM install
npm install recordrtc
# you can use with "require" (browserify/nodejs)
var RecordRTC = require('recordrtc');
var recorder = RecordRTC({}, {
type: 'video',
recorderType: RecordRTC.WhammyRecorder
});
console.log('\n--------\nRecordRTC\n--------\n');
console.log(recorder);
console.log('\n--------\nstartRecording\n--------\n');
recorder.startRecording();
console.log('\n--------\nprocess.exit()\n--------\n');
process.exit()
Here is how to use require:
var RecordRTC = require('recordrtc');
var Whammy = RecordRTC.Whammy;
var WhammyRecorder = RecordRTC.WhammyRecorder;
var StereoAudioRecorder = RecordRTC.StereoAudioRecorder;
// and so on
var video = new Whammy.Video(100);
var recorder = new StereoAudioRecorder(stream, options);
<!-- link npm package scripts -->
<script src="./node_modules/recordrtc/RecordRTC.js"></script>
There are some other NPM packages regarding RecordRTC:
bower install
bower install recordrtc
<!-- link bower package scripts -->
<script src="./bower_components/recordrtc/RecordRTC.js"></script>
<!-- CDN -->
<script src="https://cdn.WebRTC-Experiment.com/RecordRTC.js"></script>
<!-- non-CDN -->
<script src="https://www.WebRTC-Experiment.com/RecordRTC.js"></script>
You can even link specific releases:
<!-- use 5.4.1 or any other version -->
<script src="https://github.com/muaz-khan/RecordRTC/releases/download/5.4.1/RecordRTC.js"></script>
<script src="https://cdn.webrtc-experiment.com/gumadapter.js"></script>
<script>
function successCallback(stream) {
// RecordRTC usage goes here
}
function errorCallback(error) {
// maybe another application is using the device
}
var mediaConstraints = { video: true, audio: true };
navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
</script>
You'll be recording both audio and video in a single WebM or MP4 container.
var recordRTC;
function successCallback(stream) {
// RecordRTC usage goes here
var options = {
mimeType: 'video/webm', // or video/webm;codecs=h264 or video/webm;codecs=vp9
audioBitsPerSecond: 128000,
videoBitsPerSecond: 128000,
bitsPerSecond: 128000 // if this line is provided, skip above two
};
recordRTC = RecordRTC(stream, options);
recordRTC.startRecording();
}
function errorCallback(error) {
// maybe another application is using the device
}
var mediaConstraints = { video: true, audio: true };
navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
btnStopRecording.onclick = function () {
recordRTC.stopRecording(function (audioVideoWebMURL) {
video.src = audioVideoWebMURL;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
};
var recordRTC = RecordRTC(audioStream);
recordRTC.startRecording();
recordRTC.stopRecording(function(audioURL) {
audio.src = audioURL;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
Demos:
You can record 4 parallel videos/streams (WebRTC Conference):
var arrayOfStreams = [localStream, remoteStream1, remoteStream2, remoteStream3];
var recordRTC = RecordRTC(arrayOfStreams, {
type: 'video',
mimeType: 'video/webm', // or video/webm;codecs=h264 or video/webm;codecs=vp9
previewStream: function(stream) {
// it is optional
// it allows you to preview the video while it is being recorded
}
});
recordRTC.startRecording();
recordRTC.stopRecording(function(singleWebM) {
video.src = singleWebM;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
This simply means that you can record a 4-user video conference in a single WebM or MP4 container!
Points:
- Instead of passing a single MediaStream, you are passing an array of MediaStreams
- You are keeping the type: 'video' option
- You will get a single WebM or MP4 according to your mimeType
This section is related to the above section "Record Multiple Videos".
MultiStreamRecorder.js now supports the addStream method, which allows you to add additional streams on the fly (during a recording):
recorder.startRecording();
setTimeout(function() {
recorder.addStream(anotherStream);
}, 5000);
Points:
- You can add any kind of stream: audio and/or video.
- You can add any number of streams; there is no limit (see the sketch below).
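For example, here is a minimal sketch of adding both an extra audio stream and an extra video stream while a recording is running. The micStream and screenStream variables are assumptions, obtained earlier via getUserMedia/getDisplayMedia; they are not part of RecordRTC itself.
// micStream and screenStream are hypothetical streams obtained earlier
recorder.startRecording();

setTimeout(function() {
    recorder.addStream(micStream);    // add an audio-only stream after 3 seconds
}, 3000);

setTimeout(function() {
    recorder.addStream(screenStream); // add another video stream after 6 seconds
}, 6000);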
Simply set volume=0 or muted=true on the <audio> or <video> element:
videoElement.muted = true;
audioElement.muted = true;
Otherwise, you can pass some media constraints:
function successCallback(stream) {
// RecordRTC usage goes here
}
function errorCallback(error) {
// maybe another application is using the device
}
var mediaConstraints = {
audio: {
mandatory: {
echoCancellation: false,
googAutoGainControl: false,
googNoiseSuppression: false,
googHighpassFilter: false
},
optional: [{
googAudioMirroring: false
}]
},
};
navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
Everything is optional except type: 'video':
var options = {
type: 'video',
frameInterval: 20 // minimum time between pushing frames to Whammy (in milliseconds)
};
var recordRTC = RecordRTC(mediaStream, options);
recordRTC.startRecording();
recordRTC.stopRecording(function(videoURL) {
video.src = videoURL;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
Everything is optional except type: 'gif':
// you must "manually" link:
// https://cdn.webrtc-experiment.com/gif-recorder.js
var options = {
type: 'gif',
frameRate: 200,
quality: 10
};
var recordRTC = RecordRTC(mediaStream || canvas || context, options);
recordRTC.startRecording();
recordRTC.stopRecording(function(gifURL) {
mediaElement.src = gifURL;
});
You can call it "HTML/Canvas Recording using RecordRTC"!
<script src="https://cdn.WebRTC-Experiment.com/RecordRTC.js"></script>
<script src="https://cdn.webrtc-experiment.com/screenshot.js"></script>
<div id="elementToShare" style="width:100%;height:100%;background:green;"></div>
<script>
var elementToShare = document.getElementById('elementToShare');
var recordRTC = RecordRTC(elementToShare, {
type: 'canvas'
});
recordRTC.startRecording();
recordRTC.stopRecording(function(videoURL) {
video.src = videoURL;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
</script>
See a demo: /Canvas-Recording/
You can even record Canvas2D drawings:
<script src="https://cdn.webrtc-experiment.com/RecordRTC/Whammy.js"></script>
<script src="https://cdn.webrtc-experiment.com/RecordRTC/CanvasRecorder.js"></script>
<canvas></canvas>
<script>
var canvas = document.querySelector('canvas');
var recorder = new CanvasRecorder(canvas, {
disableLogs: false
});
// start recording <canvas> drawings
recorder.record();
// a few minutes later
recorder.stop(function(blob) {
var url = URL.createObjectURL(blob);
window.open(url);
});
</script>
Live Demo:
Watch a video: https://vimeo.com/152119435
It is a function that can be used to initialize the recorder without capturing any recording output yet. It provides maximum accuracy in the outputs once you call the startRecording method. Here is how to use it:
var audioRecorder = RecordRTC(mediaStream, {
recorderType: StereoAudioRecorder
});
var videoRecorder = RecordRTC(mediaStream, {
recorderType: WhammyRecorder
});
videoRecorder.initRecorder(function() {
audioRecorder.initRecorder(function() {
// Both recorders are ready to record things accurately
videoRecorder.startRecording();
audioRecorder.startRecording();
});
});
After using stopRecording, you'll see that both WAV/WebM blobs have the following characteristics:
- Both have the same recording duration i.e. length
- The video recording has no blank frames
- The audio recording has no empty buffers
This method is really useful for syncing audio/video outputs.
You can ask RecordRTC to auto-stop recording after a specific duration. It accepts one mandatory and one optional argument:
recordRTC.setRecordingDuration(milliseconds, stoppedCallback);
// the easiest one:
recordRTC.setRecordingDuration(milliseconds).onRecordingStopped(stoppedCallback);
Try a simple demo; paste this in the Chrome console:
navigator.mediaDevices.getUserMedia({
video: true
}).then(function(stream) {
var recordRTC = RecordRTC(stream, {
recorderType: WhammyRecorder
});
// auto stop recording after 5 seconds
recordRTC.setRecordingDuration(5 * 1000).onRecordingStopped(function(url) {
console.debug('setRecordingDuration', url);
window.open(url);
})
recordRTC.startRecording();
}).catch(function(error) {
console.error(error);
});
This method can be used to clear old recorded frames/buffers. Snippet:
recorder.clearRecordedData();
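For example, here is a small sketch of clearing the old data once a take is stopped, e.g. before starting a fresh recording with the same recorder instance:
// after stopping, drop the recorded frames/buffers that are no longer needed
recorder.stopRecording(function() {
    recorder.clearRecordedData();
});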
If you're using recorderType, then you don't need to use type; the latter is redundant and will be skipped.
You can force any Recorder by passing this object over RecordRTC constructor:
var audioRecorder = RecordRTC(mediaStream, {
recorderType: StereoAudioRecorder
})
It means that all browsers will use StereoAudioRecorder, i.e. the Web Audio API, for audio recording.
This feature brings remote audio recording support in Firefox, and local audio recording support in Microsoft Edge.
Note: Chrome >= 50 supports remote audio+video recording.
You can even force WhammyRecorder on Firefox; however, the WebP format isn't yet supported in standard Firefox builds. It simply means that you're skipping the MediaRecorder API in Firefox.
If you are NOT using the recorderType parameter, then the type parameter can be used to ask RecordRTC to choose the best recorder type for recording.
// if it is Firefox, then RecordRTC will be using MediaStreamRecorder.js
// if it is Chrome or Opera, then RecordRTC will be using WhammyRecorder.js
var recordVideo = RecordRTC(mediaStream, {
type: 'video'
});
// if it is Firefox, then RecordRTC will be using MediaStreamRecorder.js
// if it is Chrome or Opera or Edge, then RecordRTC will be using StereoAudioRecorder.js
var recordAudio = RecordRTC(mediaStream, {
type: 'audio'
});
Set the minimum interval (in milliseconds) between each frame that is pushed to the Whammy recorder.
var whammyRecorder = RecordRTC(videoStream, {
recorderType: WhammyRecorder,
frameInterval: 1 // setTimeout interval
});
You can disable all the RecordRTC logs by passing this Boolean:
var recorder = RecordRTC(mediaStream, {
disableLogs: true
});
You can force StereoAudioRecorder to record a single audio channel only. It allows you to reduce the WAV file size by half.
var audioRecorder = RecordRTC(audioStream, {
recorderType: StereoAudioRecorder,
numberOfAudioChannels: 1 // or leftChannel:true
});
It will cut the WAV size in half!
This feature is mainly useful in Chrome and Microsoft Edge (the WAV recorders), though it can work in Firefox as well.
var options = {
type: 'video',
video: {
width: 320,
height: 240
},
canvas: {
width: 320,
height: 240
}
};
var recordVideo = RecordRTC(MediaStream, options);
Note: Firefox seems to have a bug: it is unable to pause the recording. It appears to be an internal bug, because mediaRecorder.state correctly changes from recording to paused, but the internal recorders are not actually paused.
RecordRTC pauses recording buffers/frames.
recordRTC.pauseRecording();
If you're using "initRecorder" then it asks RecordRTC that now its time to record buffers/frames. Otherwise, it asks RecordRTC to not only initialize recorder but also record buffers/frames.
recordRTC.resumeRecording();
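A minimal sketch of wiring pause/resume to buttons; btnPause and btnResume are hypothetical button elements, not part of RecordRTC:
// btnPause and btnResume are hypothetical button elements
btnPause.onclick = function() {
    recordRTC.pauseRecording();  // buffers/frames stop being recorded
};

btnResume.onclick = function() {
    recordRTC.resumeRecording(); // recording of buffers/frames continues
};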
Optionally get "DataURL" object instead of "Blob".
recordRTC.getDataURL(function(dataURL) {
mediaElement.src = dataURL;
});
Get "Blob" object. A blob object looks similar to input[type=file]
. Which means that you can append it to FormData
and upload to server using XMLHttpRequest object. Even socket.io nowadays supports blob-transmission.
blob = recordRTC.getBlob();
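For example, here is a minimal upload sketch; the /upload endpoint and the "recording" field name are assumptions for illustration, not part of RecordRTC:
var blob = recordRTC.getBlob();

// append the recorded blob to a FormData object
var formData = new FormData();
formData.append('recording', blob, 'recording.webm');

// upload it using XMLHttpRequest; "/upload" is a hypothetical server endpoint
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload', true);
xhr.onload = function() {
    if (xhr.status === 200) {
        console.log('Upload finished:', xhr.responseText);
    }
};
xhr.send(formData);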
A virtual URL. It can be used only inside the same browser; you can't share it. It simply provides a preview of the recording.
window.open( recordRTC.toURL() );
Invoke the save-as dialog. You can pass a "fileName" as well, though the fileName argument is optional.
recordRTC.save('File Name');
Here is how to customize the buffer size for audio recording:
// From the spec: This value controls how frequently the audioprocess event is
// dispatched and how many sample-frames need to be processed each call.
// Lower values for buffer size will result in a lower (better) latency.
// Higher values will be necessary to avoid audio breakup and glitches
// bug: how to minimize wav size?
// workaround? obviously ffmpeg!
// The size of the buffer (in sample-frames) which needs to
// be processed each time onprocessaudio is called.
// Legal values are (256, 512, 1024, 2048, 4096, 8192, 16384).
var options = {
bufferSize: 16384
};
var recordRTC = RecordRTC(audioStream, options);
The following values are allowed for buffer size:
// Legal values are (256, 512, 1024, 2048, 4096, 8192, 16384)
If you pass an invalid value, you'll get blank audio.
Here is how to customize the sample rate for audio recording:
// The sample rate (in sample-frames per second) at which the
// AudioContext handles audio. It is assumed that all AudioNodes
// in the context run at this rate. In making this assumption,
// sample-rate converters or "varispeed" processors are not supported
// in real-time processing.
// The sampleRate parameter describes the sample-rate of the
// linear PCM audio data in the buffer in sample-frames per second.
// An implementation must support sample-rates in at least
// the range 22050 to 96000.
var options = {
sampleRate: 96000
};
var recordRTC = RecordRTC(audioStream, options);
Values for sample-rate must be greater than or equal to 22050 and less than or equal to 96000.
If you pass an invalid value, you'll get blank audio.
You can pass custom sample-rate values only on Mac (and possibly also on Windows 10).
This option allows you to set the MediaRecorder output format:
var options = {
mimeType: 'video/webm', // or video/webm;codecs=h264 or video/webm;codecs=vp9
bitsPerSecond: 128000
};
var recorder = RecordRTC(mediaStream, options);
Note: For Chrome, it will simply auto-set the type: 'audio' or type: 'video' parameter to keep supporting StereoAudioRecorder.js and WhammyRecorder.js. That is, you can skip passing the type: 'audio' parameter when you're using the mimeType parameter.
The chosen bitrate for the audio and video components of the media. If this is specified along with audioBitsPerSecond or videoBitsPerSecond, it will be used for the component that isn't specified.
var options = {
mimeType: 'video/webm', // or video/mp4 or audio/ogg
bitsPerSecond: 128000
};
var recorder = RecordRTC(mediaStream, options);
The chosen bitrate for the audio component of the media.
var options = {
mimeType: 'audio/ogg',
audioBitsPerSecond: 128000
};
var recorder = RecordRTC(mediaStream, options);
The chosen bitrate for the video component of the media.
var options = {
mimeType: 'video/webm', // or video/mp4
videoBitsPerSecond: 128000
};
var recorder = RecordRTC(mediaStream, options);
Note: "initRecorder" is preferred over this old hack. Both works similarly.
Useful to recover audio/video sync issues inside the browser:
recordAudio = RecordRTC( stream, {
onAudioProcessStarted: function( ) {
recordVideo.startRecording();
}
});
recordVideo = RecordRTC(stream, {
type: 'video'
});
recordAudio.startRecording();
onAudioProcessStarted fixes the shared/exclusive audio gap (a little bit), because shared audio sometimes causes a 100 ms delay, and sometimes a 400-500 ms delay. The delay depends on the number of applications concurrently requesting the same audio devices and on the CPU/memory available. Shared mode is the only mode currently available on 90% of Windows systems, especially on Windows 7.
Using autoWriteToDisk, you can ask RecordRTC to auto-write to IndexedDB as soon as you call the stopRecording method.
var recordRTC = RecordRTC(MediaStream, {
autoWriteToDisk: true
});
autoWriteToDisk is helpful for single-stream recording and writing to disk; however, for MRecordRTC, writeToDisk is the preferred one.
You can write the recorded blob to disk using the writeToDisk method:
recordRTC.stopRecording();
recordRTC.writeToDisk();
You can get the recorded blob from disk using the getFromDisk method:
// get all blobs from disk
RecordRTC.getFromDisk('all', function(dataURL, type) {
type == 'audio'
type == 'video'
type == 'gif'
});
// or get just single blob
RecordRTC.getFromDisk('audio', function(dataURL) {
// only audio blob is returned from disk!
});
For MRecordRTC, you can use the word MRecordRTC instead of RecordRTC!
Another possible situation!
var recordRTC = RecordRTC(mediaStream);
recordRTC.startRecording();
recordRTC.stopRecording(function(audioURL) {
mediaElement.src = audioURL;
});
// "recordRTC" instance object to invoke "getFromDisk" method!
recordRTC.getFromDisk(function(dataURL) {
// audio blob is automatically returned from disk!
});
In the above example, you can see that the recordRTC instance object is used instead of the global RecordRTC object.
<script src="https://cdn.WebRTC-Experiment.com/RecordRTC.js"></script>
<!-- link this file as well -->
<script src="/dev/RecordRTC.promises.js"></script>
<script>
// use "RecordRTCPromisesHandler" instead of "RecordRTC"
var recorder = new RecordRTCPromisesHandler(mediaStream, options);
recorder.startRecording().then(function() {
}).catch(function(error) {
//
});
recorder.stopRecording().then(function(url) {
var blob = recorder.blob;
recorder.getDataURL().then(function(dataURL) {
//
}).catch(function(error) {})
}).catch(function(error) {
//
});
</script>
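Since RecordRTCPromisesHandler returns promises, the same flow can also be written with async/await. The following is a sketch only, assuming a browser that supports async functions and an existing mediaStream obtained via getUserMedia:
// a sketch using async/await; "mediaStream" is assumed to come from getUserMedia
async function recordFiveSeconds(mediaStream) {
    var recorder = new RecordRTCPromisesHandler(mediaStream, { type: 'video' });

    await recorder.startRecording();

    // keep recording for five seconds
    await new Promise(function(resolve) { setTimeout(resolve, 5000); });

    await recorder.stopRecording();

    // the "blob" property is set once stopRecording resolves
    return recorder.blob;
}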
- Recorderjs for audio recording
- whammy for video recording
- jsGif for gif recording
Contribute to the RecordRTC.org domain
The domain www.RecordRTC.org is open-sourced here:
- Stackoverflow: https://stackoverflow.com/questions/tagged/recordrtc
- Github: https://github.com/muaz-khan/RecordRTC/issues
- Disqus: https://www.webrtc-experiment.com/RecordRTC/#ask
- Email: [email protected]
RecordRTC.js is released under the MIT licence. Copyright (c) Muaz Khan.