
tomasmcm/fin-llama-33b:d57f72b4

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default | Range | Description |
| --- | --- | --- | --- | --- |
| prompt | string | — | — | Text prompt to send to the model. |
| max_length | integer | 128 | min: 1 | Maximum number of tokens to generate. A word is generally 2-3 tokens. |
| temperature | number | 0.8 | 0.01–5 | Adjusts randomness of outputs: values greater than 1 are more random, values near the minimum are close to deterministic; 0.8 is a good starting value. |
| top_p | number | 0.95 | 0.01–1 | When decoding text, samples from the top p fraction of most likely tokens; lower it to ignore less likely tokens. |
| repetition_penalty | number | 1 | 0.01–5 | Penalty for repeated words in generated text: 1 is no penalty, values greater than 1 discourage repetition, values less than 1 encourage it. |
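The defaults and ranges above can be enforced client-side before a request is sent. The sketch below is illustrative: the `build_input` helper and its constants are assumptions mirroring the schema table, not part of any official client, and the commented-out `replicate.run` call assumes the official `replicate` Python package.

```python
# Client-side defaults and range checks mirroring the input schema above.
# This helper is a hypothetical convenience, not part of the Replicate client.
DEFAULTS = {"max_length": 128, "temperature": 0.8, "top_p": 0.95, "repetition_penalty": 1}
RANGES = {
    "max_length": (1, None),          # min 1, no documented max
    "temperature": (0.01, 5),
    "top_p": (0.01, 1),
    "repetition_penalty": (0.01, 5),
}

def build_input(prompt, **overrides):
    """Merge overrides with schema defaults and reject out-of-range values."""
    params = {**DEFAULTS, **overrides}
    for name, (lo, hi) in RANGES.items():
        value = params[name]
        if value < lo or (hi is not None and value > hi):
            raise ValueError(f"{name}={value} is outside the allowed range [{lo}, {hi}]")
    return {"prompt": prompt, **params}

payload = build_input("What does EBITDA stand for?", temperature=0.5)
# The payload can then be sent with the official Python client, e.g.:
#   import replicate
#   output = replicate.run("tomasmcm/fin-llama-33b:d57f72b4", input=payload)
```

Validating locally surfaces a bad `temperature` or `top_p` before the request leaves your machine, instead of as an API error.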

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"title": "Output", "type": "string"}
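Because the output schema is a bare JSON string, responses need no unpacking. A minimal sketch of checking a response against this schema using only the standard library (the `conforms` helper and the sample response text are illustrative assumptions):

```python
import json

# The output schema from above, parsed from its JSON form.
OUTPUT_SCHEMA = json.loads('{"title": "Output", "type": "string"}')

def conforms(value, schema):
    """True if `value` matches the schema's declared type (string-only case)."""
    return schema["type"] == "string" and isinstance(value, str)

# Hypothetical response text, shown only to exercise the check.
response = "EBITDA stands for Earnings Before Interest, Taxes, Depreciation and Amortization."
assert conforms(response, OUTPUT_SCHEMA)
```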