Streaming text to speech? #45
I have not tried it, but there is an overload of `AudioClient.GenerateSpeechFromText` that might work for you:

```csharp
public virtual ClientResult GenerateSpeechFromText(BinaryContent content, RequestOptions options = null);
```

To call this in streaming mode:

```csharp
RequestOptions options = new() { BufferResponse = false };

var json = BinaryData.FromObjectAsJson(new
{
    model = "tts-1",
    input = "Today is a wonderful day to build something people love!",
    voice = "alloy"
});

AudioClient client = ...;
var result = client.GenerateSpeechFromText(BinaryContent.Create(json), options);
PipelineResponse response = result.GetRawResponse();
using Stream stream = response.ContentStream; // very important to dispose the stream
```
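With `BufferResponse = false`, the client hands you the response stream before the whole audio file has downloaded, so you can process bytes as they arrive. A minimal sketch of consuming it chunk by chunk; `ProcessAudioChunk` is a hypothetical handler standing in for whatever playback or file-writing pipeline you use:

```csharp
// Read the unbuffered response stream in chunks as bytes arrive,
// rather than waiting for the entire audio payload.
byte[] buffer = new byte[16 * 1024];
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Hand each chunk to the audio pipeline as soon as it is available,
    // e.g. append it to a playback buffer or write it to a file.
    ProcessAudioChunk(buffer.AsSpan(0, bytesRead)); // hypothetical
}
```

Note that the TTS endpoint returns a complete container format (e.g. MP3), so whatever plays the chunks must be able to decode a partially received file.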
This seems to work nicely. Thanks!
I see in the official docs at https://platform.openai.com/docs/guides/text-to-speech the following:

There is Python code for it, apparently, but it seems to be unsupported in this library so far. If it is supported, can you point me to it? Otherwise, it would be really nice to have: with any vaguely long chunk of text, the latency before the full TTS response arrives is intolerably long for reading chat replies back to the user.