zsxkib / hello-world

A "Hello World" model for me to get to grips with `cog` and Replicate

Run time and cost

This model runs on CPU hardware. Predictions typically complete within 19 seconds.

Readme

Replicate Hello World with Cog 🌍

Welcome to the replicate-hello-world project! This is a simple “Hello World” function designed to help me get started with Cog and Replicate.

If you’re also new to Cog and Replicate, you should check out the code!

Project Overview 🔮

This project contains a Python script that defines a greeting predictor: a simple predictor class that takes a name as input and returns a greeting message.

If you’d like details on how to push your own HelloWorld model, read on! ⚙️
Otherwise, just type a name into the input form on the Replicate model page and run it! 🖍️
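
If you’d rather call the hosted model from code than from the web form, the Replicate Python client can do it in a few lines. This is just a sketch, not part of this repo: it assumes you have the replicate package installed and REPLICATE_API_TOKEN set in your environment, and depending on your client version you may need to append a specific version id to the model name.

import replicate

# Run the latest published version of the model on Replicate's servers.
output = replicate.run("zsxkib/hello-world", input={"name": "Sakib"})
print(output)  # -> "Hello, Sakib!"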


Predictor Class Overview 🚀

The Predictor class in our script extends the BasePredictor class from the Cog library. It consists of two main methods:

  1. setup(): This method runs once, when the model is loaded and before any predictions are made. It’s used to set up any attributes or data the predictor needs. In our case, we’re initializing a greetings attribute with the string “Hello”.

  2. predict(name: str): This method is the core of our model and is called for every prediction. It combines the instance’s greetings attribute with the provided name to build the greeting message. A minimal sketch of the full class is shown right after this list.
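
Putting that together, predict.py might look roughly like this. It’s a minimal sketch based on the description above rather than a copy of the repo’s code, so the attribute name, input description, and message format may differ slightly from the real file:

# predict.py — minimal sketch; the actual code in the repo may differ
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the model is loaded; store the greeting word.
        self.greetings = "Hello"

    def predict(self, name: str = Input(description="Who to greet")) -> str:
        # Combine the stored greeting with the provided name.
        return f"{self.greetings}, {name}!"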

Here’s a simple example of how to use the Predictor directly in Python (Cog calls setup() for you at runtime, but when using the class by hand you call it yourself):

>>> p = Predictor()
>>> p.setup()
>>> print(p.predict("Sakib"))
Hello, Sakib!

Running the Model Locally 🏡

To run the model locally, you will first need to build it using Cog. You can do this by running the following command in your terminal:

$ cog build

This command will build a Docker image of the model.
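
cog build reads a cog.yaml file that sits next to predict.py and tells Cog how to build the image. A minimal configuration might look like the sketch below; the Python version shown here is an assumption, so check the repo for the actual file:

# cog.yaml — minimal sketch; the repo's real configuration may differ
build:
  python_version: "3.11"
predict: "predict.py:Predictor"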

After building the model, you can make a prediction by running the following command:

$ cog predict -i name="Sakib"

This command will start a Docker container from the image you just built and make a prediction using the input you provided ("Sakib" in this case). You should see output similar to the following:

Running prediction...
Hello, Sakib!

This indicates that the model successfully made a prediction.

Note that the specifics of how to use the cog command may depend on your setup and environment, so be sure to consult the Cog documentation for more detailed information.