Go implementation of @Qdrant/fastembed


πŸ• Features

  • Supports batch embeddings with parallelism using goroutines.
  • Uses @sugarme/tokenizer for fast tokenization.
  • Optimized embedding models.

The default embedding model supports "query" and "passage" prefixes for the input text. The default model is Flag Embedding, which ranks at the top of the MTEB leaderboard.

πŸ” Not looking for Go?

🤖 Models

🚀 Installation

Run the following command in your project directory:

go get -u github.com/anush008/fastembed-go

ℹ️ Notice:

The ONNX Runtime shared library is located automatically in most environments. However, if you encounter the panic

panic: Platform-specific initialization failed: Error loading ONNX shared library

set the ONNX_PATH environment variable to point to your ONNX Runtime shared library. For example, on macOS:

export ONNX_PATH="/path/to/onnx/lib/libonnxruntime.dylib"

On Linux:

export ONNX_PATH="/path/to/onnx/lib/libonnxruntime.so"

You can find ONNX Runtime releases on the microsoft/onnxruntime GitHub releases page.
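
If you prefer to configure this from Go code rather than the shell, a minimal sketch (assuming fastembed-go reads ONNX_PATH during model initialization, so it must be set beforehand):

import "os"

// Placeholder path; point it at your local ONNX Runtime shared library.
// This assumes ONNX_PATH is read when the model is initialized, so set it
// before calling fastembed.NewFlagEmbedding.
os.Setenv("ONNX_PATH", "/path/to/onnx/lib/libonnxruntime.so")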

📖 Usage

import "github.com/anush008/fastembed-go"

// With default options
model, err := fastembed.NewFlagEmbedding(nil)
if err != nil {
 panic(err)
}
defer model.Destroy()

// With custom options
options := fastembed.InitOptions{
 Model:     fastembed.BGEBaseEN,
 CacheDir:  "model_cache",
 MaxLength: 200,
}

model, err = fastembed.NewFlagEmbedding(&options)
if err != nil {
 panic(err)
}
defer model.Destroy()

documents := []string{
 "passage: Hello, World!",
 "query: Hello, World!",
 "passage: This is an example passage.",
 // You can leave out the prefix, but it's recommended
 "fastembed-go is licensed under MIT",
}

// Generate embeddings with a batch size of 25 (defaults to 256)
embeddings, err := model.Embed(documents, 25)  //  -> Embeddings length: 4
if err != nil {
 panic(err)
}
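
Each embedding is a plain vector of float32 values. As a quick sanity check (a sketch assuming the [][]float32 return type used above):

import "fmt"

fmt.Println(len(embeddings))    // one vector per input document: 4 here
fmt.Println(len(embeddings[0])) // the model's embedding dimensionality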

The model supports dedicated passage and query embeddings for more accurate retrieval results:

// Generate embeddings for the passages
// The texts are prefixed with "passage" for better results
// The batch size is set to 1 for demonstration purposes
passages := []string{
 "This is the first passage. It contains provides more context for retrieval.",
 "Here's the second passage, which is longer than the first one. It includes additional information.",
 "And this is the third passage, the longest of all. It contains several sentences and is meant for more extensive testing.",
}

embeddings, err := model.PassageEmbed(passages, 1)  //  -> Embeddings length: 3
if err != nil {
 panic(err)
}

// Generate embeddings for the query
// The text is prefixed with "query" for better retrieval
query := "What is the answer to this generic question?"

embeddings, err := model.QueryEmbed(query)
if err != nil {
 panic(err)
}
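
Query and passage vectors are meant to be compared against each other. Below is an illustrative sketch of ranking passages by cosine similarity; cosineSimilarity is a hypothetical helper (not part of fastembed-go), and queryEmbedding / passageEmbeddings stand in for the results of QueryEmbed and PassageEmbed above:

import (
 "fmt"
 "math"
)

// cosineSimilarity is a hypothetical helper, not part of fastembed-go.
func cosineSimilarity(a, b []float32) float64 {
 var dot, normA, normB float64
 for i := range a {
  dot += float64(a[i]) * float64(b[i])
  normA += float64(a[i]) * float64(a[i])
  normB += float64(b[i]) * float64(b[i])
 }
 return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

// Rank each passage against the query; higher scores are more similar.
for i, passageEmbedding := range passageEmbeddings {
 fmt.Printf("passage %d: %.4f\n", i, cosineSimilarity(queryEmbedding, passageEmbedding))
}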

🚒 Under the hood

Why fast?

It's important that we justify the "fast" in FastEmbed. FastEmbed is fast because:

  1. Quantized model weights
  2. ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes

Why light?

  1. No hidden dependencies: no reliance on Hugging Face Transformers

Why accurate?

  1. Better than OpenAI Ada-002
  2. Top of the embedding leaderboards, e.g. MTEB

📄 LICENSE

MIT © 2023