fofr / star-trek-llama

llama-7b trained on the Memory Alpha Star Trek Wiki

If you haven’t yet trained a model on Replicate, we recommend reading Replicate’s training guides first.

Pricing

Trainings for this model run on Nvidia A100 (40GB) GPU hardware, which costs $0.00115 per second.
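Because billing is per second, estimating a run’s cost is simple multiplication. A small sketch (the rate is the one quoted above; the one-hour duration is just an illustrative example):

```python
# Nvidia A100 (40GB) rate quoted above.
PRICE_PER_SECOND_USD = 0.00115

def training_cost_usd(duration_seconds: float) -> float:
    """Estimate the cost of a training run of the given duration."""
    return duration_seconds * PRICE_PER_SECOND_USD

# A one-hour training run:
print(f"${training_cost_usd(3600):.2f}")  # → $4.14
```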

Create a training

Install the Python library:

pip install replicate

Then, run this to create a training with fofr/star-trek-llama:f68c7724 as the base model:

import replicate

training = replicate.trainings.create(
  version="fofr/star-trek-llama:f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8",
  input={
    ...
  },
  destination=f"{username}/<destination-model-name>"
)

print(training)

Alternatively, create the training directly with cURL:

curl -s -X POST \
  -d '{"destination": "{username}/<destination-model-name>", "input": {...}}' \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  https://api.replicate.com/v1/models/fofr/star-trek-llama/versions/f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8/trainings
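A newly created training starts in the `starting` status and can be polled until it reaches a terminal state. A minimal polling sketch, assuming the `replicate.trainings.get` call and the status values (`starting`, `processing`, `succeeded`, `failed`, `canceled`) from Replicate’s API; the training ID is the one returned by the create call:

```python
import time

# Statuses after which a training will not change again.
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_finished(status: str) -> bool:
    """A training is done once it reaches a terminal status."""
    return status in TERMINAL_STATUSES

def wait_for_training(training_id: str, poll_seconds: int = 10):
    """Poll the Replicate API until the training finishes (network call)."""
    import replicate
    training = replicate.trainings.get(training_id)
    while not is_finished(training.status):
        time.sleep(poll_seconds)
        training = replicate.trainings.get(training_id)
    return training
```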

The API response will look like this:

{
  "id": "zz4ibbonubfz7carwiefibzgga",
  "version": "f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8",
  "status": "starting",
  "input": {
    "data": "..."
  },
  "output": null,
  "error": null,
  "logs": null,
  "started_at": null,
  "created_at": "2023-03-28T21:47:58.566434Z",
  "completed_at": null
}
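The response is plain JSON, so its fields can be read with any JSON parser. A small sketch using the sample response above:

```python
import json

# The sample API response shown above.
response_body = """
{
  "id": "zz4ibbonubfz7carwiefibzgga",
  "version": "f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8",
  "status": "starting",
  "input": {"data": "..."},
  "output": null,
  "error": null,
  "logs": null,
  "started_at": null,
  "created_at": "2023-03-28T21:47:58.566434Z",
  "completed_at": null
}
"""

training = json.loads(response_body)
print(training["id"])      # the ID to use when polling for status
print(training["status"])  # "starting"
```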

Note that before you can create a training, you’ll need to create a model and use its name as the value for the destination field.
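The destination model can be created from the same Python client. A sketch, assuming the `replicate.models.create` call available in recent client versions; the owner, model name, and hardware values here are hypothetical placeholders:

```python
def destination(owner: str, name: str) -> str:
    """Build the `destination` value expected by the trainings API."""
    return f"{owner}/{name}"

def create_destination_model(owner: str, name: str):
    """Create the model that will receive the trained weights (network call)."""
    import replicate
    return replicate.models.create(
        owner=owner,
        name=name,
        visibility="private",      # or "public"
        hardware="gpu-a40-small",  # hardware the trained model will run on
    )

# destination("fofr", "star-trek-llama-tuned") → "fofr/star-trek-llama-tuned"
```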