awerks / whisperx

Fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.

  • Public
  • 14.7K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.39 to run on Replicate, or 2 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 5 minutes. The predict time for this model varies significantly based on the inputs.

Readme

WhisperX provides fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.

Whisper is an ASR model developed by OpenAI, trained on a large dataset of diverse audio. While it produces highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be off by several seconds. OpenAI's Whisper also does not natively support batching, but WhisperX does.

Users can either upload an audio file directly or provide a URL to the audio. For larger files, it is recommended to upload to S3 with transfer acceleration enabled or to use a CloudFront distribution.

This implementation of WhisperX supports transcription in all languages supported by Whisper, and alignment of English, French, German, Spanish, Italian, Japanese, Chinese, Dutch, Ukrainian, Portuguese, Arabic, Czech, Russian, Polish, Hungarian, Finnish, Persian, Greek, Turkish, Danish, Hebrew, Vietnamese, Korean, Urdu, Telugu, Hindi, Romanian, Swedish, and Indonesian audio.

The English alignment model is loaded at setup time.

This model uses the Speaker Diarization 2.1 pipeline.

The return format is {"segments": ..., "language": detected language}. You can use this tool to convert it to subtitles.
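As a rough illustration of the subtitle conversion, here is a minimal sketch in Python. The exact output schema is an assumption: each entry in "segments" is assumed to carry "start" and "end" times in seconds plus a "text" field, which is then rendered as standard SRT.

```python
# Hedged sketch: convert WhisperX-style segments into SRT subtitles.
# Assumption: each segment is a dict with "start"/"end" (seconds) and "text".

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render a list of segments as one SRT-formatted string."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Hypothetical example output from the model:
example = [
    {"start": 0.0, "end": 2.4, "text": "Hello world."},
    {"start": 2.4, "end": 5.1, "text": "This is WhisperX."},
]
print(segments_to_srt(example))
```

The word-level timestamps in the full output would allow finer-grained cues, but segment-level start/end times are enough for typical subtitle files.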