That didn't take long! Nomic AI has fine-tuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!
Details:
- Based on ModernBERT-base with 149M parameters.
- Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
- Immediate FA2 (FlashAttention 2) and unpadding support for highly efficient inference.
- Trained with Matryoshka support, i.e. two valid output dimensionalities: 768 and 256.
- Maximum sequence length of 8192 tokens!
- Trained in 2 stages: unsupervised contrastive data -> high-quality labeled datasets.
- Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
- Apache 2.0 licensed: fully commercially permissible.
Try it out here: nomic-ai/modernbert-embed-base
Very nice work by Zach Nussbaum and colleagues at Nomic AI.
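For the Matryoshka point above: the idea is that the first 256 components of each 768-dim embedding are themselves a valid (smaller) embedding once re-normalized. A minimal NumPy sketch of that truncate-and-renormalize step, using toy random vectors in place of real model outputs:

```python
import numpy as np

def truncate_matryoshka(embeddings: np.ndarray, dim: int = 256) -> np.ndarray:
    """Keep the first `dim` components of each embedding, then L2-renormalize."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Toy stand-ins for two 768-dim sentence embeddings.
rng = np.random.default_rng(0)
full = rng.normal(size=(2, 768))

small = truncate_matryoshka(full, dim=256)
print(small.shape)                                        # (2, 256)
print(np.allclose(np.linalg.norm(small, axis=1), 1.0))    # True
```

In practice you would not do this by hand: Sentence Transformers can truncate for you (e.g. via its `truncate_dim` option) when loading a Matryoshka-trained model like this one.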