RAG library for .NET 10.0 - Build semantic search and retrieval systems with vector + keyword hybrid search.
- Hybrid Search - Vector (semantic) + Keyword (BM25) with automatic strategy selection
- High Performance - Embedding cache (100% faster), batch indexing (24ms/1K chunks)
- Local Reranking - Cross-encoder neural reranking with automatic algorithmic fallback
- Graph Traversal - BFS/DFS, Dijkstra shortest path, PageRank-style importance
- Vector Quantization - Scalar (Int8/Int4), Product Quantization, Binary (32x compression)
- Multiple Storage - SQLite, PostgreSQL with pgvector
- AI Provider Agnostic - Core provides abstract base classes, bring your own embedding service
- Document Processing - PDF/DOCX/TXT via FileFlux, web crawling via WebFlux
- MCP Server - Model Context Protocol for AI assistant integration
- Production Ready - Redis caching, clean architecture, .NET 10.0
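The binary quantization mentioned above can be sketched independently of FluxIndex: keeping only the sign of each dimension packs 32 floats (128 bytes) into a single 32-bit word (4 bytes), which is where the 32x compression figure comes from. `BinaryQuantDemo` and its method names below are illustrative only, not FluxIndex APIs:

```csharp
using System;
using System.Numerics;

public static class BinaryQuantDemo
{
    // Keep only the sign bit of each dimension: 32 floats -> one uint.
    public static uint[] Quantize(float[] v)
    {
        var bits = new uint[(v.Length + 31) / 32];
        for (int i = 0; i < v.Length; i++)
            if (v[i] > 0f) bits[i / 32] |= 1u << (i % 32);
        return bits;
    }

    // Hamming distance (count of differing sign bits) as the cheap
    // stand-in for vector distance over the quantized codes.
    public static int Hamming(uint[] a, uint[] b)
    {
        int d = 0;
        for (int i = 0; i < a.Length; i++)
            d += BitOperations.PopCount(a[i] ^ b[i]);
        return d;
    }

    public static void Main()
    {
        var a = Quantize(new float[] { 0.3f, -0.2f, 0.8f, -0.1f });
        var b = Quantize(new float[] { 0.4f, -0.1f, -0.6f, -0.2f });
        // Vectors differ only in the sign of the third dimension.
        Console.WriteLine(Hamming(a, b)); // prints 1
    }
}
```

Real implementations typically use binary codes only for a fast first pass and rescore the top candidates with full-precision vectors.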
```bash
dotnet add package FluxIndex.SDK
dotnet add package FluxIndex.Storage.SQLite
```

```csharp
using FluxIndex.SDK;

// 1. Setup (InMemory embedding for testing)
var context = FluxIndexContext.CreateBuilder()
    .UseSQLite("fluxindex.db")
    .Build();

// 2. Index
await context.Indexer.IndexDocumentAsync(
    "FluxIndex is a RAG library for .NET", "doc-001");

// 3. Search
var results = await context.Retriever.SearchAsync("RAG library", maxResults: 5);
```

FluxIndex is AI provider-agnostic. Extend `EmbeddingServiceBase` for your preferred provider:
```csharp
// Example: LMSupply embedding (local ONNX-based, no API key)
public class LMSupplyEmbedder : EmbeddingServiceBase, IAsyncDisposable
{
    private readonly IEmbeddingModel _model;

    private LMSupplyEmbedder(IEmbeddingModel model) => _model = model;

    public static async Task<LMSupplyEmbedder> CreateAsync(string modelId = "default")
    {
        var model = await LocalEmbedder.LoadAsync(modelId);
        return new LMSupplyEmbedder(model);
    }

    protected override async Task<float[]> EmbedCoreAsync(string text, CancellationToken ct)
        => await _model.EmbedAsync(text, ct);

    public override int GetEmbeddingDimension() => _model.Dimensions;
    public override string GetModelName() => _model.ModelId;
    public ValueTask DisposeAsync() => _model.DisposeAsync();
}

// Register and use
var context = FluxIndexContext.CreateBuilder()
    .UseSQLite("fluxindex.db")
    .ConfigureServices(s => s.AddSingleton<IEmbeddingService>(
        LMSupplyEmbedder.CreateAsync().GetAwaiter().GetResult()))
    .Build();
```

FluxIndex provides a Model Context Protocol (MCP) server for AI assistant integration.
Available tools: `search`, `memorize`, `unmemorize`, `status`
See FluxIndex.MCP for integration details.
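MCP servers are typically registered in the host assistant's configuration file. The entry below is purely illustrative — the actual command, arguments, and package name should be taken from FluxIndex.MCP's documentation:

```json
{
  "mcpServers": {
    "fluxindex": {
      "command": "dotnet",
      "args": ["run", "--project", "path/to/FluxIndex.MCP"]
    }
  }
}
```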
| Operation | Performance | Notes |
|---|---|---|
| Batch Indexing | 24ms/1K chunks | 8-thread parallelism |
| Vector Search | 0.6ms/query | In-memory embeddings |
| Embedding Cache | 100% faster | Eliminates API calls |
| Semantic Cache | <5ms | Redis, 95% similarity |
Full benchmarks: BENCHMARK_RESULTS.md
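The semantic cache row refers to reusing a cached answer when a new query's embedding is close enough to a previously seen one. A minimal sketch of the 95%-similarity check using cosine similarity — `SemanticCacheDemo` is illustrative, not part of FluxIndex:

```csharp
using System;

public static class SemanticCacheDemo
{
    // Cosine similarity between two embedding vectors of equal length.
    public static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    public static void Main()
    {
        var cached = new float[] { 0.90f, 0.10f, 0.00f };
        var query  = new float[] { 0.85f, 0.15f, 0.05f };
        // Reuse the cached answer only when similarity >= 0.95.
        bool hit = Cosine(cached, query) >= 0.95;
        Console.WriteLine(hit ? "cache hit" : "cache miss"); // prints "cache hit"
    }
}
```

On a hit, the stored answer is served without touching the embedding provider or the vector store, which is why cached lookups stay under a few milliseconds.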
| Scenario | Packages |
|---|---|
| Embeddings + vector search only (no native deps, no document parsing) | FluxIndex.Core + storage |
| Full RAG pipeline (PDF, DOCX, HWP, web crawling) | FluxIndex.SDK + storage |
| File system monitoring + auto-indexing | FluxIndex.Extensions.FileVault + storage |
| Local AI embedding (ONNX, no API key required) | FluxIndex.Providers.LMSupply |
Minimal setup — bring your own embedding service, no native binaries:

```bash
dotnet add package FluxIndex.Core
dotnet add package FluxIndex.Storage.SQLite
```

Full SDK — includes document processing (PDF, DOCX, HWP, web crawling):

```bash
dotnet add package FluxIndex.SDK
dotnet add package FluxIndex.Storage.SQLite
```

- Guide - Quick start and configuration
- Reference - Architecture and API reference
- Advanced RAG - HyDE, Contextual Retrieval, Query Expansion
- Philosophy - Core principles and design philosophy
- RealQualityTest - LMSupply + SQLite integration
- WebFluxSample - Web crawling with WebFlux
- ChunkingQualityTest - FileFlux chunking analysis
- FileFluxIndexSample - Document indexing workflow
- .NET 10.0 or later
- SQLite or PostgreSQL
MIT License - see LICENSE file.
Contributions are welcome! Please feel free to submit issues and pull requests.