A quick and easy solution to integrate Symbl.ai's transcriptions and conversational AI services with Dyte's SDK.
- Locate the Dyte integration logic in your codebase; it may look like the following.
// Somewhere in your codebase
const meeting = await DyteClient.init(...)
- At the top of the file where the integration was found, import this package.
import {
  activateTranscriptions,
  deactivateTranscriptions,
  addTranscriptionsListener,
  removeTranscriptionsListener,
} from '@dytesdk/symbl-transcription';
- Now you can activate Symbl transcriptions.
activateTranscriptions({
  meeting: meeting, // From DyteClient.init
  symblAccessToken: 'ACCESS_TOKEN_FROM_SYMBL_AI',
  connectionId: 'SOME_ARBITRARY_CONNECTION_ID', // optional
  speakerUserId: 'SOME_ARBITRARY_USER_ID_FOR_SPEAKER', // optional
  symblStartRequestParams: { // optional. Subset of https://docs.symbl.ai/reference/streaming-api-reference#start_request
    noConnectionTimeout: 0,
    config: {
      sentiment: false,
    },
  },
  symblStreamingMessageCallback: (event) => { // optional. If you need it for a custom use case
    console.log('event from symbl');
  },
});
This method internally connects to Symbl over a WebSocket connection and automatically forwards the audio to Symbl while your mic is on. On receiving transcriptions from Symbl, it broadcasts them to all participants of the meeting, including the speaker, referred to by meeting.self.
The connectionId field is optional. If not passed, the value of meeting.meta.roomName will be used as the connectionId.
The speakerUserId field is optional. If not passed, the value of meeting.self.clientSpecificId will be used as the speakerUserId.
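For instance, a minimal activation that relies on both of these defaults could look like the following sketch (the access token value is a placeholder).
// Minimal activation relying on the defaults described above:
// connectionId falls back to meeting.meta.roomName and
// speakerUserId falls back to meeting.self.clientSpecificId.
activateTranscriptions({
  meeting: meeting, // From DyteClient.init
  symblAccessToken: 'ACCESS_TOKEN_FROM_SYMBL_AI',
});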
The symblStartRequestParams field is optional. If you want to control Symbl's settings further, you can override the defaults by passing only the fields you want to change from https://docs.symbl.ai/reference/streaming-api-reference#start_request.
We perform a deep merge of the passed value with the defaults, so there is no need to construct the complete start_request message. For example, if you only want to add the email field to speaker and change noConnectionTimeout to 300, you can do so using the following code snippet.
activateTranscriptions({
  meeting: meeting, // From DyteClient.init
  symblAccessToken: 'ACCESS_TOKEN_FROM_SYMBL_AI',
  connectionId: 'SOME_ARBITRARY_CONNECTION_ID', // optional
  speakerUserId: 'SOME_ARBITRARY_USER_ID_FOR_SPEAKER', // optional
  symblStartRequestParams: { // optional. Any subset of https://docs.symbl.ai/reference/streaming-api-reference#start_request
    noConnectionTimeout: 300,
    speaker: {
      email: '[email protected]',
    },
  },
  symblStreamingMessageCallback: (event) => { // optional. If you need it for a custom use case
    console.log('event from symbl');
  },
});
Note: If the passed fields are incorrect or misplaced, the conversation might not get created. In such cases, an error is logged in the developer console for you to debug further.
- If you want to show transcriptions to a participant, or to self, you can do so using the following snippet.
addTranscriptionsListener({
  meeting: meeting,
  noOfTranscriptionsToCache: 200,
  transcriptionsCallback: (allFormattedTranscriptions) => {
    console.log(allFormattedTranscriptions);
  },
});
The above code snippet helps you segregate speakers from listeners. For example, if you know that a participant is only meant to be a listener, you can avoid calling activateTranscriptions and call only addTranscriptionsListener, which runs solely over Dyte. This reduces the number of concurrent connections to Symbl, giving you a potential cost benefit.
Using transcriptionsCallback, you can populate the transcriptions anywhere you like in your app or website; see the sketch after this note.
NOTE: transcriptionsCallback will be called for every partial or complete sentence, with all formatted transcriptions.
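As a rough sketch, a listener-only participant could render the cached transcriptions like this. The container id and the text field on each transcription entry are assumptions for illustration, not part of this package's documented API.
// Listener-only participant: activateTranscriptions is never called,
// so no extra connection to Symbl is opened for this user.
addTranscriptionsListener({
  meeting: meeting, // From DyteClient.init
  noOfTranscriptionsToCache: 200,
  transcriptionsCallback: (allFormattedTranscriptions) => {
    // Hypothetical rendering target; adapt to your own UI.
    const container = document.getElementById('transcriptions');
    if (container) {
      // The exact shape of each entry depends on the package version;
      // `text` is assumed here purely for illustration.
      container.innerText = allFormattedTranscriptions
        .map((t) => t.text ?? t)
        .join('\n');
    }
  },
});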
Once the meeting is over, deactivate transcription generation.
deactivateTranscriptions({
meeting: meeting, // From DyteClient.init
});
Similarly, remove the transcriptions listener once the meeting is over.
removeTranscriptionsListener({meeting: meeting});
- Go to https://symbl.ai/ and register.
- Find your appId and appSecret in your Symbl.ai account settings after registration.
- Run this cURL command to generate an access token.
curl -k -X POST "https://api.symbl.ai/oauth2/token:generate" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d $'{
"type" : "application",
"appId": "YOUR_APP_ID",
"appSecret": "YOUR_APP_SECRET"
}'
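If you would rather generate the token from JavaScript, the same request could be made with fetch, roughly as sketched below. The accessToken field name is assumed from Symbl's documented token response; verify it against their docs.
// Rough JavaScript equivalent of the cURL command above.
const response = await fetch('https://api.symbl.ai/oauth2/token:generate', {
  method: 'POST',
  headers: {
    accept: 'application/json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    type: 'application',
    appId: 'YOUR_APP_ID',
    appSecret: 'YOUR_APP_SECRET',
  }),
});
const { accessToken } = await response.json(); // field name assumed from Symbl's token response
// Use accessToken as the symblAccessToken value passed to activateTranscriptions.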
Please pass a unique connectionId for this meeting and a unique speakerUserId for the speaker while activating transcriptions using the activateTranscriptions method.
This lets you use Symbl's Subscribe API, documented at https://docs.symbl.ai/reference/subscribe-api, and gives you better control over the speakers.
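As a sketch, one way to keep these ids unique is to derive them from identifiers your application already has. The id scheme below is purely illustrative.
// Sketch: derive unique ids from identifiers you already have.
activateTranscriptions({
  meeting: meeting, // From DyteClient.init
  symblAccessToken: 'ACCESS_TOKEN_FROM_SYMBL_AI',
  connectionId: `myapp-${meeting.meta.roomName}`, // unique per meeting
  speakerUserId: `myapp-${meeting.self.clientSpecificId}`, // unique per speaker
});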
To see the demo or to test the Symbl integration, clone the repo at https://github.com/dyte-in/symbl-transcription and run the npm script named dev.
git clone https://github.com/dyte-in/symbl-transcription.git
cd symbl-transcription
npm install
npm run dev
This will start a server on localhost:3000 that serves the HTML for the sample integration from index.html.
Please use the following URL to see the Default Dyte Meeting interface.
https://localhost:3000/?authToken=PUT_DYTE_PARTICIPANT_AUTH_TOKEN_HERE&symblAccessToken=PUT_SYMBL_ACCESS_TOKEN_HERE
In case you are still using v1 meetings, please use the following URL.
https://localhost:3000/?authToken=PUT_DYTE_PARTICIPANT_AUTH_TOKEN_HERE&symblAccessToken=PUT_SYMBL_ACCESS_TOKEN_HERE&roomName=PUT_DYTE_ROOM_NAME_HERE
Once the Dyte UI is loaded, turn on the mic and grant permissions if asked. After that, try speaking sentences in English (the default language) to see the transcriptions.