Run time and cost
This model costs approximately $0.21 to run on Replicate, or around 4 runs per $1, though the exact cost varies depending on your inputs. It is also open source, so you can run it on your own computer with Docker (see the sketch below).
This model runs on Nvidia L40S GPU hardware.
Predictions typically complete within 4 minutes, though predict time varies significantly based on the inputs.
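Because the model is packaged with Cog, a locally running copy exposes Cog's standard HTTP prediction endpoint. The sketch below assumes the container has already been started from the Docker image listed on the model page and that the model accepts a `prompt` input; both are assumptions to verify against the model's own input schema.

```python
# Minimal sketch: query a locally running copy of this model via Cog's HTTP API.
# Assumes the container was started beforehand, e.g.
#   docker run -d -p 5000:5000 --gpus=all r8.im/<owner>/<model-name>
# (the exact image name is shown on the model's Replicate page).
import requests

resp = requests.post(
    "http://localhost:5000/predictions",        # Cog's standard prediction endpoint
    json={"input": {"prompt": "Why is the sky blue?"}},  # "prompt" is an assumed field name
)
resp.raise_for_status()

# For streaming language models, "output" may be a list of generated text chunks.
print(resp.json()["output"])
```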
Readme
About
This is a Cog implementation of Ollama's deepseek-r1 70b model, using the default Q4_K_M quantized weights.
Description:
DeepSeek-R1 is the first generation of DeepSeek's reasoning models, achieving performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
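For the hosted version, a prediction can be created with the Replicate Python client. The model slug below is a placeholder, and the `prompt` input field is assumed; check the model page's API tab for the exact identifier and input schema.

```python
# Minimal sketch: call the hosted model through the Replicate Python client.
# Requires the REPLICATE_API_TOKEN environment variable to be set.
import replicate

output = replicate.run(
    "<owner>/<model-name>",                 # placeholder slug for this Cog build
    input={"prompt": "Solve: what is 17 * 24?"},  # "prompt" is an assumed field name
)

# Language models on Replicate typically return output as a list or iterator
# of text chunks, so join them before printing.
print("".join(output))
```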