lucataco/hunyuan-1.8b-instruct:4d7625dd
Input schema
The fields you can use to run this model with an API. If you don’t give a value for a field its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | | User prompt to generate from. |
system_prompt | string | You are a helpful assistant. | System instruction used in chat template. |
max_tokens | integer | 512 (Min: 1, Max: 8192) | Maximum new tokens to generate. |
temperature | number | 0.7 (Max: 2) | Sampling temperature. |
top_p | number | 0.8 (Max: 1) | Nucleus sampling p. |
top_k | integer | 40 | Top-k sampling (<= 0 to disable). |
repetition_penalty | number | 1.05 (Min: 0.5, Max: 2) | Repetition penalty (> 1 discourages repeats). |
stop | string | | Optional comma-separated list of stop strings. |
stop_token_ids | string | | Optional comma-separated list of integer stop token IDs. |
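To tie the fields together, here is a minimal sketch of calling this version with the Replicate Python client. The parameter values mirror the schema defaults; the prompt text and the commented-out stop values are illustrative placeholders, not anything taken from the model page.

```python
# Minimal sketch: run lucataco/hunyuan-1.8b-instruct:4d7625dd via the Replicate
# Python client. Values below mirror the schema defaults; the prompt and the
# commented-out stop values are illustrative only.
import replicate

output = replicate.run(
    "lucataco/hunyuan-1.8b-instruct:4d7625dd",
    input={
        "prompt": "Explain nucleus sampling in two sentences.",
        "system_prompt": "You are a helpful assistant.",
        "max_tokens": 512,           # 1..8192
        "temperature": 0.7,          # up to 2
        "top_p": 0.8,                # up to 1
        "top_k": 40,                 # <= 0 disables top-k
        "repetition_penalty": 1.05,  # 0.5..2; > 1 discourages repeats
        # "stop": "###",             # optional comma-separated stop strings
        # "stop_token_ids": "1,2",   # optional comma-separated stop token IDs
    },
)

# The output schema is a plain string; some client versions stream it as an
# iterator of chunks, so join handles both cases.
print(output if isinstance(output, str) else "".join(output))
```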
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"title": "Output", "type": "string"}
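As a quick illustration of what this schema means in practice, a completed prediction is plain text, so any string value conforms. The check below uses the jsonschema package, which is an assumption for illustration and not part of the model page.

```python
# Sketch: validate a prediction result against the output schema above.
# The jsonschema dependency and the sample value are illustrative assumptions.
from jsonschema import validate

output_schema = {"title": "Output", "type": "string"}

# Any Python string passes, since the output type is simply "string".
validate(instance="Hello from hunyuan-1.8b-instruct.", schema=output_schema)
print("output conforms to the schema")
```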