# Configure LLamaSharp

BotSharp includes a LLamaSharp plugin that allows you to run LLM models locally. To use LLamaSharp, you need to configure the BotSharp project in a few steps.
| 4 | + |
## Install LLamaSharp Backend

Before using the LLamaSharp plugin, you need to install the LLamaSharp backend package that suits your environment.

- [`LLamaSharp.Backend.Cpu`](https://www.nuget.org/packages/LLamaSharp.Backend.Cpu): Pure CPU for Windows & Linux; Metal for Mac.
- [`LLamaSharp.Backend.Cuda11`](https://www.nuget.org/packages/LLamaSharp.Backend.Cuda11): CUDA 11 for Windows and Linux.
- [`LLamaSharp.Backend.Cuda12`](https://www.nuget.org/packages/LLamaSharp.Backend.Cuda12): CUDA 12 for Windows and Linux.

**Please install the same version of the LLamaSharp backend as the LLamaSharp package referenced in BotSharp.Plugin.LLamaSharp.csproj.**
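For example, if the plugin project references LLamaSharp 0.9.1, the backend package you install must use the same version. A sketch of the relevant csproj fragment (the version number and surrounding markup here are illustrative, check your actual project file):

```xml
<!-- Excerpt from BotSharp.Plugin.LLamaSharp.csproj (illustrative) -->
<ItemGroup>
  <!-- The core LLamaSharp library the plugin depends on -->
  <PackageReference Include="LLamaSharp" Version="0.9.1" />
</ItemGroup>
```

Whatever version appears there is the version to pass to `dotnet add package` for the backend.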
| 14 | + |
```shell
# move to the LLamaSharp Plugin Project
$ cd src/Plugins/BotSharp.Plugin.LLamaSharp
# Install the LLamaSharp Backend
$ dotnet add package LLamaSharp.Backend.Cpu --version 0.9.1
```
| 23 | + |
## Download and Configure Local LLM Models

LLamaSharp supports many LLM models, such as LLaMA and Alpaca. Download models in `gguf` format and save them on your machine.

We will use a [Llama 2](https://huggingface.co/TheBloke/llama-2-7B-Guanaco-QLoRA-GGUF) model in this tutorial.

After downloading the model, open the `src/WebStarter/appsettings.json` file to configure the LLamaSharp models. Set the `LlmProviders` and `LlamaSharp` fields to match your machine. For example:
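After downloading, you can quickly verify that a file really is in `gguf` format by checking its magic bytes: GGUF files begin with the ASCII bytes `GGUF`. A minimal helper sketch (hypothetical, not part of BotSharp; the path below is illustrative):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file begins with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example (path is illustrative):
# is_gguf("/Users/wenwei/Desktop/LLM/llama-2-7b.Q2_K.gguf")
```

A truncated or mislabeled download will fail this check before you waste time debugging the server configuration.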
| 31 | + |
```json
{
  ...,
  "LlmProviders": [
    ...,
    {
      "Provider": "llama-sharp",
      "Models": [
        {
          "Name": "llama-2-7b.Q2_K.gguf",
          "Type": "chat"
        }
      ]
    },
    ...
  ],
  ...,
  "LlamaSharp": {
    "Interactive": true,
    "ModelDir": "/Users/wenwei/Desktop/LLM",
    "DefaultModel": "llama-2-7b.Q2_K.gguf",
    "MaxContextLength": 1024,
    "NumberOfGpuLayer": 20
  },
  ...
}
```
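Note that the two sections must agree: `DefaultModel` in `LlamaSharp` should name a model listed under the `llama-sharp` provider, and that file should exist inside `ModelDir`. As a quick sanity check, a short script (hypothetical, not part of BotSharp) can validate the relevant fragment:

```python
import json

# A minimal stand-in for the relevant parts of appsettings.json.
settings = json.loads("""
{
  "LlmProviders": [
    {
      "Provider": "llama-sharp",
      "Models": [
        { "Name": "llama-2-7b.Q2_K.gguf", "Type": "chat" }
      ]
    }
  ],
  "LlamaSharp": {
    "Interactive": true,
    "ModelDir": "/Users/wenwei/Desktop/LLM",
    "DefaultModel": "llama-2-7b.Q2_K.gguf",
    "MaxContextLength": 1024,
    "NumberOfGpuLayer": 20
  }
}
""")

# Find the llama-sharp provider and collect its model names.
provider = next(p for p in settings["LlmProviders"]
                if p["Provider"] == "llama-sharp")
model_names = {m["Name"] for m in provider["Models"]}

# DefaultModel must be one of the registered models.
assert settings["LlamaSharp"]["DefaultModel"] in model_names
```

In practice you would load the real `src/WebStarter/appsettings.json` instead of the inline string, and additionally check that the `.gguf` file exists under `ModelDir`.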
| 59 | + |
For more details about LLamaSharp, visit [LLamaSharp - GitHub](https://github.com/SciSharp/LLamaSharp).