
ritabratamaiti/instructmix-llama-3b:69599f11

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value is used.

| Field | Type | Default | Range | Description |
|---|---|---|---|---|
| instruction | string | | | Describes the task the model should perform. Must be provided if format_prompt is True. |
| input | string | | | Optional context or input for the task. Used to generate a prompt if format_prompt is True. |
| response_prefix | string | | | Optional response prefix. It is added to the beginning of the response and helps guide generation. |
| temperature | number | 0.1 | 0.01–5 | Adjusts the randomness of outputs; values above 1 are more random, values near the minimum are nearly deterministic. |
| top_p | number | 0.75 | 0.01–1 | When decoding, samples from the top p fraction of most likely tokens; lower it to ignore less likely tokens. |
| top_k | integer | 40 | 1–100 | The number of highest-probability vocabulary tokens to keep for top-k filtering. |
| num_beams | integer | 4 | 1–10 | Number of beams for beam search. |
| max_new_tokens | integer | 128 | 1–512 | Maximum number of new tokens to generate. A word is typically 2–3 tokens. |
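A minimal sketch of assembling an input payload for this schema: it merges caller overrides with the documented defaults and checks each sampling parameter against its min/max. The `build_input` helper is hypothetical (not part of the Replicate client), and the commented-out `replicate.run` call at the end is illustrative; the full version hash is abbreviated on this page, so a placeholder is used.

```python
# Schema defaults and ranges, copied from the input schema above.
DEFAULTS = {
    "temperature": 0.1,
    "top_p": 0.75,
    "top_k": 40,
    "num_beams": 4,
    "max_new_tokens": 128,
}

RANGES = {
    "temperature": (0.01, 5),
    "top_p": (0.01, 1),
    "top_k": (1, 100),
    "num_beams": (1, 10),
    "max_new_tokens": (1, 512),
}

def build_input(instruction, input_text=None, response_prefix=None, **overrides):
    """Merge overrides with schema defaults and validate the documented ranges."""
    payload = {"instruction": instruction, **DEFAULTS, **overrides}
    if input_text is not None:
        payload["input"] = input_text
    if response_prefix is not None:
        payload["response_prefix"] = response_prefix
    for field, (lo, hi) in RANGES.items():
        if not lo <= payload[field] <= hi:
            raise ValueError(f"{field}={payload[field]} is outside [{lo}, {hi}]")
    return payload

payload = build_input("Summarize the following text.", input_text="Some passage.",
                      temperature=0.7)

# To actually run the model (requires the `replicate` package and an API token;
# substitute the full version hash shown on the model page):
# import replicate
# output = replicate.run(
#     "ritabratamaiti/instructmix-llama-3b:<full-version-hash>",
#     input=payload,
# )
```

Validating locally before the API call surfaces out-of-range values immediately, rather than waiting for a server-side rejection.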

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"title": "Output", "type": "string"}