tomasmcm / neuronovo-7b-v0.3

Source: Neuronovo/neuronovo-7B-v0.3 ✦ Quant: TheBloke/neuronovo-7B-v0.3-AWQ ✦ Neuronovo/neuronovo-7B-v0.3 is an advanced, fine-tuned large language model, initially based on CultriX/MistralTrix-v1.

  • Public
  • 37 runs
  • Paper
  • License

Run time and cost

This model costs approximately $0.064 per run on Replicate, or roughly 15 runs per $1, though the cost varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 113 seconds, though prediction time varies significantly with the inputs.
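As a rough illustration of how a run like this is invoked, the sketch below uses the Replicate Python client. The input field names (prompt, max_new_tokens) and the output shape are assumptions, not taken from this model's schema, and community models typically need a version hash appended to the slug.

```python
import replicate

# Sketch only: input names and output format are assumptions; check the
# model's API schema on Replicate before use. Community models usually
# require an "owner/name:version" reference rather than the bare slug.
output = replicate.run(
    "tomasmcm/neuronovo-7b-v0.3",
    input={
        "prompt": "Explain direct preference optimization in one paragraph.",
        "max_new_tokens": 256,
    },
)

# Language models on Replicate typically return a stream or list of text chunks.
print("".join(output))
```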

Readme

More information about the previous Neuronovo/neuronovo-7B-v0.2 version is available here: 🔗Don’t stop DPOptimizing!

Author: Jan Kocoń     🔗LinkedIn     🔗Google Scholar     🔗ResearchGate

Changes relative to Neuronovo/neuronovo-7B-v0.2:

  1. Training Dataset: In addition to the Intel/orca_dpo_pairs dataset, this version incorporates the mlabonne/chatml_dpo_pairs dataset. The combined datasets strengthen the model’s capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation.

  2. Tokenizer and Formatting: The tokenizer now originates directly from the Neuronovo/neuronovo-7B-v0.2 model.

  3. Training Configuration: The training approach has shifted from a fixed number of steps (max_steps=200) to epoch-based training (num_train_epochs=1).

  4. Learning Rate: The learning rate has been reduced to 5e-6. This finer learning rate allows more precise adjustments during training, potentially leading to better model performance. A hedged sketch combining these configuration changes follows this list.
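Taken together, these changes map onto a fairly standard DPO fine-tuning setup with TRL. The sketch below is illustrative only: the dataset column names, the choice of CultriX/MistralTrix-v1 as the starting checkpoint, and the DPOTrainer argument names are assumptions (TRL's API differs between versions), not the Neuronovo training code.

```python
# Illustrative sketch of the configuration described above, assuming TRL's
# DPOTrainer and Hugging Face TrainingArguments. Verify dataset columns and
# TRL version-specific arguments before running.
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# 1. Combine both preference datasets. Assumed layouts: Intel/orca_dpo_pairs
#    exposes system/question/chosen/rejected (the system field is dropped here
#    for brevity); mlabonne/chatml_dpo_pairs is assumed to already provide
#    prompt/chosen/rejected.
orca = load_dataset("Intel/orca_dpo_pairs", split="train")
orca = orca.map(lambda ex: {"prompt": ex["question"]})
orca = orca.select_columns(["prompt", "chosen", "rejected"])

chatml = load_dataset("mlabonne/chatml_dpo_pairs", split="train")
chatml = chatml.select_columns(["prompt", "chosen", "rejected"])

train_dataset = concatenate_datasets([orca, chatml])

# 2. Tokenizer taken directly from the previous v0.2 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Neuronovo/neuronovo-7B-v0.2")

# Starting checkpoint; the description above names CultriX/MistralTrix-v1 as the base.
model = AutoModelForCausalLM.from_pretrained("CultriX/MistralTrix-v1")

# 3./4. Epoch-based training at the reduced learning rate.
training_args = TrainingArguments(
    output_dir="neuronovo-7b-v0.3-dpo",
    num_train_epochs=1,             # replaces the earlier max_steps=200
    learning_rate=5e-6,             # reduced learning rate
    per_device_train_batch_size=1,  # illustrative value, not from the readme
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer TRL releases use DPOConfig and processing_class instead
)
trainer.train()
```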