Releases: huggingface/text-embeddings-inference

v1.9.3

23 Mar 11:57
0667015

What's Changed

  • Use rust-toolchain.toml before rustup on Dockerfile-{cuda,cuda-all} by @alvarobartt in #842
  • fix(backend): replace bare except with Exception in device check by @llukito in #821
  • Set version 1.9.3 by @alvarobartt in #849

Full Changelog: v1.9.2...v1.9.3

v1.9.2

25 Feb 11:17
1d6ceb4

Full Changelog: v1.9.1...v1.9.2

v1.9.1

17 Feb 20:59
b38b8f1

What's Changed

🚨 Fix

When the ghcr.io/huggingface/text-embeddings-inference:cuda-1.9 image was released with CUDA 12.9 and cuda-compat-12-9, running that container on instances with CUDA 13.0+ failed: the cuda-compat-12-9 libraries unconditionally set on LD_LIBRARY_PATH caused CUDA_ERROR_SYSTEM_DRIVER_MISMATCH = 803. This is now solved with a custom entrypoint that only adds the cuda-compat libraries to LD_LIBRARY_PATH when the instance's CUDA version actually requires them.
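
For illustration, the conditional logic looks roughly like the sketch below: compare the CUDA version reported by the driver against the toolkit version shipped in the image, and only expose the compat libraries when the driver is older. The nvidia-smi parsing, the /usr/local/cuda/compat path, and the version comparison are assumptions for the sketch, not the exact entrypoint shipped in the image.

    #!/usr/bin/env bash
    # Hypothetical entrypoint sketch: only prepend the CUDA compat libraries when the
    # host driver reports an older CUDA version than the toolkit baked into the image.
    set -euo pipefail

    IMAGE_CUDA_VERSION="12.9"
    COMPAT_DIR="/usr/local/cuda/compat"   # assumed location of the cuda-compat-12-9 libraries

    # Parse the CUDA version advertised by the driver, e.g. "13.0" (assumes nvidia-smi is available).
    DRIVER_CUDA_VERSION="$(nvidia-smi | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p' | head -n1 || true)"

    # Use the compat libraries only when the driver is older than the image toolkit; on CUDA 13.0+
    # hosts they would otherwise trigger CUDA_ERROR_SYSTEM_DRIVER_MISMATCH (803).
    if [ -n "${DRIVER_CUDA_VERSION}" ] && [ "${DRIVER_CUDA_VERSION}" != "${IMAGE_CUDA_VERSION}" ] && \
       [ "$(printf '%s\n' "${DRIVER_CUDA_VERSION}" "${IMAGE_CUDA_VERSION}" | sort -V | head -n1)" = "${DRIVER_CUDA_VERSION}" ]; then
        export LD_LIBRARY_PATH="${COMPAT_DIR}:${LD_LIBRARY_PATH:-}"
    fi

    exec text-embeddings-router "$@"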

Full Changelog: v1.9.0...v1.9.1

v1.9.0

17 Feb 13:42
5699247

What's Changed

🚨 Breaking changes

  • Default HiddenAct::Gelu to GeLU + tanh in favour of GeLU erf by @vrdn-23 in #753

The default GeLU implementation is now the GeLU + tanh approximation instead of exact GeLU (aka GeLU erf), so that CPU and CUDA embeddings match (cuBLASLt only supports GeLU + tanh). This is a slight misalignment with how Transformers handles it, since hidden_act="gelu" in config.json is supposed to mean GeLU erf, but the numerical differences between GeLU + tanh and GeLU erf should have a negligible impact on inference quality.
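
For reference, the two variants differ only in how the Gaussian CDF term is computed; the tanh form is the standard approximation of the exact erf form:

    \mathrm{GELU}_{\mathrm{erf}}(x)  = \tfrac{x}{2}\left(1 + \operatorname{erf}\left(x / \sqrt{2}\right)\right)
    \mathrm{GELU}_{\mathrm{tanh}}(x) = \tfrac{x}{2}\left(1 + \tanh\left(\sqrt{2/\pi}\,\left(x + 0.044715\,x^{3}\right)\right)\right)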

--auto-truncate now defaults to true, meaning that input sequences are truncated to the lower of --max-batch-tokens and the model's maximum length, so that requests no longer fail when --max-batch-tokens is lower than the actual maximum supported length.
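
As a usage note, truncation can still be controlled per request through the truncate field of the /embed payload; the snippet below is a sketch against a locally running server and assumes the standard TEI HTTP API:

    curl 127.0.0.1:8080/embed \
        -X POST \
        -H 'Content-Type: application/json' \
        -d '{"inputs": "What is Deep Learning?", "truncate": false}'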

🐛 Fixes

  • Fix reading non-standard config for past_key_values in ONNX by @alvarobartt in #751
  • Fix TruncationDirection to deserialize from lowercase and capitalized by @alvarobartt in #755
  • Fix sagemaker-entrypoint* & remove SageMaker and Vertex from Dockerfile* by @alvarobartt in #699
  • Bug: Critical accuracy bugs for model_type=qwen2: no causal attention and wrong tokenizer by @michaelfeil in #762
  • Fix config.json reading w/ aliases for ORT by @alvarobartt in #786
  • Fix HTTP error code for validation by @vrdn-23 in #818
  • Fix to acquire the permit in a blocking way by @kozistr in #726
  • Read Hugging Face Hub token from cache if not provided by @alvarobartt in #814
  • Align the normalize param between the gRPC and HTTP /embed interfaces by @kozistr in #810

Full Changelog: v1.8.3...v1.9.0

v1.8.3

30 Oct 09:08
78502d8

What's Changed

Bug Fixes

  • Fix error code for empty requests by @vrdn-23 in #727
  • Fix the infinite loop when max_input_length is bigger than max-batch-tokens by @kozistr in #725
  • Fix reading modules.json for Dense modules in local models by @alvarobartt in #738

Full Changelog: v1.8.2...v1.8.3

v1.8.2

09 Sep 14:45
d7af1fc

🔧 Fixed Intel MKL Support

Since Text Embeddings Inference (TEI) v1.7.0, Intel MKL support had been broken due to changes in the candle dependency. Neither static-linking nor dynamic-linking worked correctly, which caused models using Intel MKL on CPU to fail with errors such as: "Intel oneMKL ERROR: Parameter 13 was incorrect on entry to SGEMM".

Starting with v1.8.2, this issue has been resolved by fixing how the intel-mkl-src dependency is defined. Both features, static-linking and dynamic-linking (the default), now work correctly, ensuring that Intel MKL libraries are properly linked.

This issue occurred in the following scenarios:

  • Users installing text-embeddings-router via cargo with the --features mkl flag (see the install sketch after this list). Although dynamic-linking should have been used, it was not working as intended.
  • Users relying on the CPU Dockerfile when running models without ONNX weights. In these cases, Safetensors weights were used with candle as the backend (with MKL optimizations), instead of ort.
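
For reference, this is the kind of from-source install that was affected; the clone URL and feature flag follow the project README at the time of writing, so treat this as a sketch rather than the canonical command:

    # Build the router from source with the Intel MKL feature enabled (dynamic linking by default).
    git clone https://github.com/huggingface/text-embeddings-inference
    cd text-embeddings-inference
    cargo install --path router --features mkl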

The following table shows the affected versions and containers:

Version   Image
1.7.0     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.0
1.7.1     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.1
1.7.2     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2
1.7.3     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.3
1.7.4     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.4
1.8.0     ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.0
1.8.1     ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1

More details: PR #715

Full Changelog: v1.8.1...v1.8.2

v1.8.1

04 Sep 15:22
0adb000

Today, Google releases EmbeddingGemma, a state-of-the-art multilingual embedding model perfect for on-device use cases. Designed for speed and efficiency, the model features a compact size of 308M parameters and a 2K context window, unlocking new possibilities for mobile RAG pipelines, agents, and more. EmbeddingGemma is trained to support over 100 languages and is the highest-ranking text-only multilingual embedding model under 500M on the Massive Text Embedding Benchmark (MTEB) at the time of writing.

  • CPU:
docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 \
    --model-id google/embeddinggemma-300m --dtype float32
  • CPU with ONNX Runtime:
docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 \
    --model-id onnx-community/embeddinggemma-300m-ONNX --dtype float32 --pooling mean
  • NVIDIA CUDA:
docker run --gpus all --shm-size 1g -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cuda-1.8.1 \
    --model-id google/embeddinggemma-300m --dtype float32
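
Once one of the containers above is running, embeddings can be requested over the standard TEI HTTP API; the query text below is just an illustration:

    curl 127.0.0.1:8080/embed \
        -X POST \
        -H 'Content-Type: application/json' \
        -d '{"inputs": "Which planet is known as the Red Planet?"}'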

Notable Changes

  • Add support for Gemma3 (text-only) architecture
  • Intel updates to Synapse 1.21.3 and IPEX 2.8
  • Extend ONNX Runtime support in OrtRuntime
    • Support position_ids and past_key_values as inputs
    • Handle padding_side and pad_token_id

Full Changelog: v1.8.0...v1.8.1

v1.8.0

05 Aug 08:31
2bff275

Notable Changes

  • Qwen3 support for the 0.6B, 4B and 8B variants on CPU and MPS, plus FlashQwen3 on CUDA and Intel HPUs
  • NomicBert MoE support
  • JinaAI Re-Rankers V1 support
  • Matryoshka Representation Learning (MRL)
  • Dense layer module support (after pooling)

Note

Some of the aforementioned changes were already released in patch versions on top of v1.7.0, whereas both Matryoshka Representation Learning (MRL) and Dense layer module support were only merged recently and had not been released until now.
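
With the MRL support mentioned above, embeddings can be requested at a reduced dimensionality through the dimensions field of the /embed payload; the field name and value below are a sketch, so check the API reference of the deployed version:

    curl 127.0.0.1:8080/embed \
        -X POST \
        -H 'Content-Type: application/json' \
        -d '{"inputs": "What is Deep Learning?", "dimensions": 256}'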

Full Changelog: v1.7.0...v1.8.0

v1.7.4

07 Jul 12:33
6e900af

Notable Changes

Qwen3 was not working correctly on CPU / MPS when sending batched requests at FP16 precision: the attention mask was filled with the FP32 minimum value, which falls outside the FP16 range when downcast and led to null values, so it is now set to the FP16 minimum value instead; in addition, a missing to_dtype call on the attention_bias when working with batches has been fixed.

Full Changelog: v1.7.3...v1.7.4

v1.7.3

30 Jun 10:54
fb80177

Notable Changes

Qwen3 support has been added for Intel HPU, and fixed for CPU / Metal / CUDA.

Full Changelog: v1.7.2...v1.7.3