You're looking at a specific version of this model.

thomasmol/whisper-diarization:fb9b6db8

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| file_string | string | | Either provide: a Base64-encoded audio file. |
| file_url | string | | Or provide: a direct audio file URL. |
| file | string | | Or provide: an audio file. |
| group_segments | boolean | True | Group segments of the same speaker when they are less than 2 seconds apart. |
| transcript_output_format | string (enum) | both | Format of the transcript output. Options: `words_only` (individual words with timestamps), `segments_only` (full text of segments), or `both`. |
| num_speakers | integer | | Number of speakers (min: 1, max: 50). Leave empty to autodetect. |
| language | string | | Language of the spoken words as a language code, e.g. 'en'. Leave empty to auto-detect the language. |
| prompt | string | | Vocabulary: provide names, acronyms, and loanwords in a list. Use punctuation for best accuracy. |
| offset_seconds | integer | 0 | Offset in seconds, used for chunked inputs. |
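The fields above can be assembled into an input payload before sending it to the API. Below is a minimal sketch; `build_input` is a hypothetical helper (not part of this model), and only one of `file_string`, `file_url`, or `file` should be supplied — here, a Base64-encoded `file_string`:

```python
import base64


def build_input(audio_bytes, num_speakers=None, language=None):
    """Hypothetical helper: assemble the input payload for this model.

    Uses the Base64 `file_string` variant; `file_url` or `file` would
    be mutually exclusive alternatives.
    """
    payload = {
        "file_string": base64.b64encode(audio_bytes).decode("ascii"),
        # Defaults from the schema, stated explicitly for clarity.
        "group_segments": True,
        "transcript_output_format": "both",
        "offset_seconds": 0,
    }
    # Optional fields: omit them to let the model autodetect.
    if num_speakers is not None:
        payload["num_speakers"] = num_speakers
    if language is not None:
        payload["language"] = language
    return payload


payload = build_input(b"\x00\x01", num_speakers=2, language="en")
```

The resulting dict can be passed as the `input` of a Replicate API call against the version shown above.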

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "title": "Output",
  "type": "object",
  "required": ["segments"],
  "properties": {
    "language": {"title": "Language", "type": "string"},
    "num_speakers": {"title": "Num Speakers", "type": "integer"},
    "segments": {"items": {}, "title": "Segments", "type": "array"}
  }
}
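A response shaped like this schema can be consumed as a plain dict. The sketch below assumes per-segment keys (`speaker`, `start`, `end`, `text`); the schema itself leaves segment items unconstrained, so these keys are an assumption, not a guarantee:

```python
# Sample response shaped per the Output schema. The per-segment keys
# ("speaker", "start", "end", "text") are assumptions: the schema does
# not constrain segment items.
sample_output = {
    "language": "en",
    "num_speakers": 2,
    "segments": [
        {"speaker": "SPEAKER_00", "start": 0.0, "end": 2.1, "text": "Hello there."},
        {"speaker": "SPEAKER_01", "start": 2.3, "end": 4.0, "text": "Hi, how are you?"},
    ],
}


def format_transcript(output):
    """Render each segment as a 'speaker: text' line."""
    return [
        f'{seg["speaker"]}: {seg["text"].strip()}'
        for seg in output["segments"]
    ]


lines = format_transcript(sample_output)
```

Only `segments` is required by the schema, so a robust client should treat `language` and `num_speakers` as optional.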