
KaLM-Embedding-Gemma3-12B-2511

HuggingFace | Homepage | Paper

Short Description

KaLM-Embedding-Gemma3-12B-2511 is a versatile and compact embedding model that achieves state-of-the-art (SOTA) performance on the MMTEB benchmark (as of November 2025).

MMTEB Evaluation Results

| Rank (Borda) | Model | Mean (Task) | Mean (TaskType) | Bitext Mining | Classification | Clustering | Instruction Reranking | Multilabel Classification | Pair Classification | Reranking | Retrieval | STS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | KaLM-Embedding-Gemma3-12B-2511 | 72.32 | 62.51 | 83.76 | 77.88 | 55.77 | 5.49 | 33.03 | 84.73 | 67.27 | 75.66 | 79.02 |
| 2 | llama-embed-nemotron-8b | 69.46 | 61.09 | 81.72 | 73.21 | 54.35 | 10.82 | 29.86 | 83.97 | 67.78 | 68.69 | 79.41 |
| 3 | Qwen3-Embedding-8B | 70.58 | 61.69 | 80.89 | 74.00 | 57.65 | 10.06 | 28.66 | 86.40 | 65.63 | 70.88 | 81.08 |
| 4 | gemini-embedding-001 | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | 29.16 | 83.63 | 65.58 | 67.71 | 79.40 |
| 5 | Qwen3-Embedding-4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | 11.56 | 26.77 | 85.05 | 65.08 | 69.60 | 80.86 |
| 6 | Qwen3-Embedding-0.6B | 64.34 | 56.01 | 72.23 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.65 | 76.17 |
| 7 | gte-Qwen2-7B-instruct | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98 |
| 8 | Linq-Embed-Mistral | 61.47 | 54.14 | 70.34 | 62.24 | 50.60 | 0.94 | 24.77 | 80.43 | 64.37 | 58.69 | 74.86 |
| 9 | multilingual-e5-large-instruct | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81 |
| 10 | embeddinggemma-300m | 61.15 | 54.31 | 64.40 | 60.90 | 51.17 | 5.61 | 24.82 | 81.40 | 63.25 | 62.49 | 74.73 |

Model Details

  • Model Size: 11.76B
  • Embedding Dimension: 3840
  • Max Input Tokens: 32k
  • MRL Dimensions: 3840, 2048, 1024, 512, 256, 128, and 64 (see the truncation sketch after this list)
  • Pooling: last-token pooling
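
Because the model is trained with MRL (Matryoshka Representation Learning), an embedding can be shortened to any of the listed dimensions and re-normalized. A minimal sketch, assuming numpy and a hypothetical truncate_mrl helper (not part of the model's API):

```python
import numpy as np

def truncate_mrl(embeddings: np.ndarray, dim: int = 1024) -> np.ndarray:
    """Keep the first `dim` MRL dimensions and re-normalize to unit length."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Dummy vectors standing in for model output (full dimension is 3840).
full = np.random.randn(2, 3840).astype(np.float32)
small = truncate_mrl(full, dim=1024)  # any of the listed MRL dims works
```

Recent sentence-transformers releases also accept a truncate_dim argument on the SentenceTransformer constructor, which achieves the same effect.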

Usage

sentence-transformers support

Using this model is straightforward once you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```

You can use the model like this:

```python
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer(
    "tencent/KaLM-Embedding-Gemma3-12B-2511",
    trust_remote_code=True,
    model_kwargs={
        "torch_dtype": torch.bfloat16,
        "attn_implementation": "flash_attention_2",  # Optional
    },
)
model.max_seq_length = 512

sentences = ["This is an example sentence", "Each sentence is converted"]
prompt = "Instruct: Classifying the category of french news.\nQuery:"
embeddings = model.encode(
    sentences,
    prompt=prompt,
    normalize_embeddings=True,
    batch_size=256,
    show_progress_bar=True,
)
print(embeddings)
```
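
Since normalize_embeddings=True returns unit-length vectors, the cosine similarity of the two example sentences reduces to a dot product. A small follow-up sketch (numpy usage is an assumption; encode returns numpy arrays by default):

```python
import numpy as np

# Unit-length vectors: cosine similarity is just the dot product.
cosine_sim = float(np.dot(embeddings[0], embeddings[1]))
print(cosine_sim)
```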

Alternatively, you can use encode_query and encode_document to automatically add the default prompts for queries ("Instruct: Given a query, retrieve documents that answer the query \nQuery: ") and documents (the empty string ""), respectively.

```python
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer(
    "tencent/KaLM-Embedding-Gemma3-12B-2511",
    trust_remote_code=True,
    model_kwargs={
        "torch_dtype": torch.bfloat16,
        "attn_implementation": "flash_attention_2",  # Optional
    },
)
model.max_seq_length = 512

queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)

similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
```

vLLM support

Note: Since vLLM only supports the Gemma3ForCausalLM model class and not Gemma3TextModel, the model parameters must be loaded from the CausalLM branch by specifying revision="CausalLM".

```python
from vllm import LLM

sentences = ["This is an example sentence", "Each sentence is converted"]

# Create an LLM.
# You should pass task="embed" for embedding models
model = LLM(
    model="tencent/KaLM-Embedding-Gemma3-12B-2511",
    task="embed",
    enforce_eager=True,
    revision="CausalLM",  # specify the CausalLM branch for Gemma3ForCausalLM config
)

outputs = model.embed(sentences)
embeddings = [output.outputs.embedding for output in outputs]
```
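
As a follow-up, you can score the returned embeddings pairwise with numpy; the explicit re-normalization below is an assumption (drop it if your pooler configuration already normalizes outputs):

```python
import numpy as np

# Stack the returned Python lists into a matrix and L2-normalize each row.
emb = np.asarray(embeddings, dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Pairwise cosine similarities between the example sentences.
print(emb @ emb.T)
```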

Citation

If you find this model useful, please consider giving it a star and citing our work.

```bibtex
@misc{zhao2025kalmembeddingv2,
  title={KaLM-Embedding-V2: Superior Training Techniques and Data Inspire A Versatile Embedding Model},
  author={Xinping Zhao and Xinshuo Hu and Zifei Shan and Shouzheng Huang and Yao Zhou and Xin Zhang and Zetian Sun and Zhenyu Liu and Dongfang Li and Xinyuan Wei and Youcheng Pan and Yang Xiang and Meishan Zhang and Haofen Wang and Jun Yu and Baotian Hu and Min Zhang},
  year={2025},
  eprint={2506.20923},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.20923},
}

@misc{hu2025kalmembedding,
  title={KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model},
  author={Xinshuo Hu and Zifei Shan and Xinping Zhao and Zetian Sun and Zhenyu Liu and Dongfang Li and Shaolin Ye and Xinyuan Wei and Qian Chen and Baotian Hu and Haofen Wang and Jun Yu and Min Zhang},
  year={2025},
  eprint={2501.01028},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.01028},
}
```

Contact

If you encounter any issues, feel free to contact us via email: yanshek.woo@gmail.com or xinpingzhao@slai.edu.cn.
