
shaltielshmid /dictalm2.0:a6643aac

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

prompt (string)
  Input prompt.

max_new_tokens (integer, default 512, min 1, max 2048)
  Maximum number of tokens to generate. A word is generally 2-3 tokens.

min_new_tokens (integer, default 0, max 2048)
  Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.

temperature (number, default 0.75, max 5)
  Adjusts the randomness of outputs: 0 is deterministic, values above 1 are increasingly random; 0.75 is a good starting value.

top_p (number, default 0.9, max 1)
  When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches top_p; lower values ignore less likely tokens.

top_k (integer, default 50, min -1)
  When decoding text, samples from the top_k most likely tokens; lower values ignore less likely tokens.

stop_sequences (string)
  A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' stops generation at the first instance of '<end>' or '<stop>'.

length_penalty (number, default 1, max 5)
  Controls how long the outputs are: values below 1 make the model tend toward shorter outputs, values above 1 toward longer outputs.
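The defaults and ranges above can be applied before sending a request. The helper below is a hypothetical sketch, not part of the Replicate client; the minimum bounds for temperature and length_penalty are not stated on this page and are assumed to be 0 here.

```python
# Hypothetical helper: builds an input payload for dictalm2.0, filling in the
# schema defaults and rejecting out-of-range values before an API call.

DEFAULTS = {
    "max_new_tokens": 512,
    "min_new_tokens": 0,
    "temperature": 0.75,
    "top_p": 0.9,
    "top_k": 50,
    "length_penalty": 1,
}

# (min, max) per field; None means the bound is unstated or unbounded.
RANGES = {
    "max_new_tokens": (1, 2048),
    "min_new_tokens": (-1, 2048),   # -1 disables the minimum
    "temperature": (0, 5),          # lower bound assumed
    "top_p": (0, 1),
    "top_k": (-1, None),
    "length_penalty": (0, 5),       # lower bound assumed
}


def build_input(prompt, **overrides):
    payload = {"prompt": prompt, **DEFAULTS}
    for key, value in overrides.items():
        if key not in DEFAULTS and key != "stop_sequences":
            raise KeyError(f"unknown field: {key}")
        lo, hi = RANGES.get(key, (None, None))  # stop_sequences has no range
        if lo is not None and value < lo:
            raise ValueError(f"{key} below minimum {lo}")
        if hi is not None and value > hi:
            raise ValueError(f"{key} above maximum {hi}")
        payload[key] = value
    return payload
```

For example, `build_input("...", temperature=0.2)` keeps the other defaults, while `build_input("...", top_p=1.5)` raises a ValueError because top_p may not exceed 1.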

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"title": "Output", "type": "string"}
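Because the output schema is simply a string, a completed prediction's output is one plain block of generated text, not a token list or a structured object. A minimal sketch, assuming the standard Replicate prediction envelope with a top-level "output" key (the response values here are illustrative):

```python
# Illustrative prediction result shape; the "output" value is a single string
# of generated text, as the output schema {"type": "string"} indicates.
result = {
    "status": "succeeded",
    "output": "טקסט שנוצר על ידי המודל",  # placeholder generated text
}

generated_text = result["output"]
assert isinstance(generated_text, str)  # no further parsing is needed
```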