yorickvp / jina-embeddings-v2-base-en

An English, monolingual embedding model supporting an 8192-token sequence length (137M-parameter version)

Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Model card

Intended Usage & Model Info

jina-embeddings-v2-base-en is an English, monolingual embedding model supporting an 8192-token sequence length. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence lengths. The backbone jina-bert-v2-base-en is pretrained on the C4 dataset. The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives. These pairs were obtained from various domains and carefully selected through a thorough cleaning process.
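For local experimentation, here is a minimal sketch of loading the same weights through Hugging Face transformers. It assumes the upstream jinaai/jina-embeddings-v2-base-en repository, whose custom JinaBERT code (including the encode() convenience method) is loaded via trust_remote_code=True.

```python
# Sketch: loading the upstream weights with Hugging Face transformers.
# trust_remote_code=True pulls in the custom JinaBERT (ALiBi) implementation,
# which also provides the encode() helper used below.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en",
    trust_remote_code=True,
)

embeddings = model.encode(
    ["How is the weather today?", "What is the weather like today?"]
)
print(embeddings.shape)  # (2, 768): one 768-dim vector per input sentence
```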

The embedding model was trained with a sequence length of 512, but extrapolates to an 8k sequence length (or even longer) thanks to ALiBi. This makes the model useful for a range of use cases where long documents must be processed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG, and LLM-based generative search.
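As a concrete example of the long-context use case, the sketch below (reusing the model loaded above) embeds a document well past 512 tokens and scores it against a query with cosine similarity. The max_length keyword is assumed to be forwarded to the tokenizer by the remote encode() helper.

```python
# Sketch: semantic textual similarity over a long input, reusing `model`
# from the previous snippet. max_length=8192 is assumed to cap tokenization
# at the model's extended context window.
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

long_doc = " ".join(["A sentence about solar power generation."] * 1500)  # >> 512 tokens
query = "renewable energy"

doc_emb, query_emb = model.encode([long_doc, query], max_length=8192)
print(cos_sim(doc_emb, query_emb))
```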

With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model.
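To call this Replicate deployment directly, something like the following should work with the official Python client. The input field name and output format here are assumptions, not the documented schema, so check the API tab on this model's Replicate page.

```python
# Hypothetical sketch of querying this deployment through Replicate's
# Python client. The input key "text" and the list-of-floats output are
# assumptions, not the documented schema.
import replicate

output = replicate.run(
    "yorickvp/jina-embeddings-v2-base-en",  # may need a pinned :version suffix
    input={"text": "How is the weather today?"},
)
print(len(output))  # expected: 768, the embedding dimensionality
```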

A smaller model, jina-embeddings-v2-small-en, is also available for faster inference.

Plans

The development of new bilingual models is currently underway, targeting mainly German and Spanish. The upcoming models will be called jina-embeddings-v2-small-de/es.

Contact

Join our Discord community and chat with other community members about ideas.