lucataco / llama-2-7b-chat

Meta's Llama 2 7B Chat - GPTQ

  • Public
  • 20.2K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.0040 per run on Replicate, or about 250 runs per $1, though the cost varies with your inputs. It is also open source, and you can run it on your own computer with Docker.
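For reference, a minimal sketch of calling the model through the Replicate Python client; the version hash and the prompt input field are placeholders to be confirmed against the model's API page:

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Requires the REPLICATE_API_TOKEN environment variable to be set.
import replicate

output = replicate.run(
    "lucataco/llama-2-7b-chat:<version-hash>",  # placeholder: copy the current version hash from the API tab
    input={"prompt": "Tell me a fun fact about llamas."},  # assumed input field name
)

# Language models on Replicate typically stream output as chunks of text.
print("".join(output))
```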

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 6 seconds.

Readme

This is an attempt at an implementation of the model TheBloke/Llama-2-7b-Chat-GPTQ, a quantized version of the Llama 2 7B chat model.
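If you prefer to run the weights yourself, here is a minimal sketch of loading TheBloke/Llama-2-7b-Chat-GPTQ locally with Hugging Face transformers. It assumes a CUDA GPU and the optional GPTQ dependencies (optimum and auto-gptq) are installed; the prompt format and generation settings are illustrative, not necessarily the exact ones used by this Replicate model:

```python
# Minimal sketch of loading the GPTQ weights locally with Hugging Face transformers.
# Assumes a CUDA GPU and the GPTQ extras (optimum, auto-gptq) are installed;
# prompt format and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7b-Chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Tell me a fun fact about llamas. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```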

Give me a follow if you like my work! @lucataco93