faster-whisper relies on CTranslate2, which only works on NVIDIA cards at the moment.
With regular Whisper (the whisper library from OpenAI, or Whisper imported from the transformers library) you can use GPU-accelerated transcription on any card that reports CUDA. For example, I'm using pytorch-rocm and can use CUDA-accelerated PyTorch with my AMD 6900 XT.
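To show what I mean by "reports CUDA": on ROCm builds of PyTorch, `torch.cuda.is_available()` returns True for AMD cards too, so one check covers both vendors. A minimal sketch (the try/except is only so it runs even where PyTorch isn't installed):

```python
# Sketch of GPU detection. On ROCm builds of PyTorch,
# torch.cuda.is_available() returns True for AMD cards as well,
# so the same check works for NVIDIA and AMD.
try:
    import torch
    has_gpu = torch.cuda.is_available()
except ImportError:
    # PyTorch not installed; fall back to CPU
    has_gpu = False

device = "cuda" if has_gpu else "cpu"
print(device)
```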
It would be great to have an option when instantiating the Transcriptor class to pass a variable indicating whether it should use faster-whisper or regular Whisper. (It would be amazing if it could autodetect, but since these things change week to week I believe a variable would be enough for now.)
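Something along these lines, as a rough sketch (the constructor signature and the `backend` parameter name are just suggestions for illustration, not the project's actual API):

```python
# Hypothetical sketch of the proposed option. The class signature and
# the "backend" parameter are illustrative, not the real Transcriptor API.
class Transcriptor:
    BACKENDS = ("faster-whisper", "whisper")

    def __init__(self, model="base", backend="faster-whisper"):
        if backend not in self.BACKENDS:
            raise ValueError(f"backend must be one of {self.BACKENDS}")
        self.model = model
        # Caller picks the implementation explicitly instead of the
        # library assuming faster-whisper (NVIDIA-only) everywhere.
        self.backend = backend


# e.g. on a ROCm card, fall back to regular Whisper:
t = Transcriptor(backend="whisper")
```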
Thank you!