# gte-large-quant
This is the quantized (INT8) ONNX variant of the gte-large embeddings model, created with DeepSparse Optimum for ONNX export/inference and Neural Magic's Sparsify for one-shot quantization.
Current list of sparse and quantized gte ONNX models:
| Model | Sparsification Method |
|---|---|
| zeroshot/gte-large-sparse | Quantization (INT8) & 50% pruning |
| zeroshot/gte-large-quant | Quantization (INT8) |
| zeroshot/gte-base-sparse | Quantization (INT8) & 50% pruning |
| zeroshot/gte-base-quant | Quantization (INT8) |
| zeroshot/gte-small-sparse | Quantization (INT8) & 50% pruning |
| zeroshot/gte-small-quant | Quantization (INT8) |
Install the nightly DeepSparse build with the Sentence Transformers extra:

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/gte-large-quant', export=False)

# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print the shape of each embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
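Once the embeddings are computed, a common next step is to compare them with cosine similarity. The helper below is a minimal sketch using NumPy rather than any DeepSparse-specific API; in practice `emb_a` and `emb_b` would be rows of `model.encode(sentences)`, but placeholder vectors keep the example self-contained.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the vectors divided by the
    # product of their L2 norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for rows of model.encode(sentences).
emb_a = np.array([0.1, 0.3, -0.2])
emb_b = np.array([0.2, 0.1, -0.4])

print(cosine_similarity(emb_a, emb_a))  # identical vectors score ~1.0
print(cosine_similarity(emb_a, emb_b))
```

Scores closer to 1.0 indicate more semantically similar sentences, which is the typical way these embeddings are used for retrieval or deduplication.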
For further details regarding DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
## Evaluation results

Self-reported scores on MTEB test sets:

| Metric | MTEB BIOSSES (test) | MTEB SICK-R (test) |
|---|---|---|
| cos_sim_pearson | 90.273 | 85.142 |
| cos_sim_spearman | 87.978 | 79.134 |
| euclidean_pearson | 88.428 | 83.080 |
| euclidean_spearman | 87.972 | 79.316 |
| manhattan_pearson | 88.138 | |
| manhattan_spearman | 87.434 | |