nateraw/llama-2-70b-chat-awq

llama-2-70b-chat, quantized with AWQ (Activation-aware Weight Quantization) and served with vLLM.
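For reference, loading an AWQ-quantized checkpoint in vLLM looks roughly like the sketch below. The checkpoint name and prompt are placeholders (this page does not publish the exact serving code), and running it requires a CUDA GPU with enough memory for the 70B weights.

```python
# Sketch: serving an AWQ-quantized Llama 2 chat model with vLLM.
# The checkpoint name is an assumption, not necessarily the one this model uses.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-70B-chat-AWQ",  # hypothetical AWQ checkpoint
    quantization="awq",                     # tell vLLM the weights are AWQ-quantized
)

params = SamplingParams(temperature=0.7, max_tokens=256)

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
outputs = llm.generate(["[INST] Tell me a joke. [/INST]"], params)
print(outputs[0].outputs[0].text)
```
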

Public · 87 runs
