- How do I update Darwin?
- How do I configure Darwin to work with a local LLM?
- How do I configure Darwin to work with OpenAI?
- How do I configure Darwin for paper summarization?
## How do I update Darwin?

Updating Darwin is straightforward. Execute the following command:

```shell
darwin update
```

This command will install the latest version of Darwin, ensuring you have the most up-to-date features and improvements available.
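To confirm the upgrade took effect, you can print the installed version. The `--version` flag below is an assumption based on common CLI conventions; it is not documented above.

```shell
# Assumed flag: most CLIs expose --version, but Darwin's docs above
# do not confirm the exact flag.
darwin --version
```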
## How do I configure Darwin to work with a local LLM?

Darwin currently supports Ollama as a local LLM provider. To configure Darwin to use Ollama:
1. Download and install Ollama.

2. Download a model you want to use. For example, to use the model `llama3:instruct`, run:

   ```shell
   ollama pull llama3:instruct
   ```

   A list of available models can be found in the Ollama model library (https://ollama.com/library).

   Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

3. Configure Darwin with the Ollama server's endpoint URL (typically `http://localhost:11434`, unless Ollama is deployed on a remote server) and the model name from Step 2:

   ```shell
   darwin config set
   ```

   A hypothetical full invocation is sketched after this list.

4. Validate the configuration by running:

   ```shell
   darwin config get
   ```
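The exact keys that `darwin config set` accepts are not shown above, so the sketch below uses hypothetical key names; consult Darwin's help output for the real ones. The `curl` call, by contrast, targets Ollama's actual REST API, which serves `GET /api/tags` on its default port.

```shell
# Check that the Ollama server is reachable; /api/tags lists the
# models installed locally (a real Ollama endpoint).
curl http://localhost:11434/api/tags

# Hypothetical key names, for illustration only -- check Darwin's help
# output for the keys `darwin config set` actually accepts.
darwin config set ollama_endpoint http://localhost:11434
darwin config set ollama_model llama3:instruct
```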
## How do I configure Darwin to work with OpenAI?

To configure Darwin to use OpenAI's API, you first need to get an API key from OpenAI.
1. Navigate to the API key page (https://platform.openai.com/api-keys) and click "Create new secret key", optionally naming the key.

2. Copy the API key and configure Darwin with it:

   ```shell
   darwin config set
   ```

   A hypothetical full invocation is sketched after this list. Models available on OpenAI can be found in the model documentation (https://platform.openai.com/docs/models). Usually you can start with `gpt-3.5-turbo`, which is a good compromise between cost and performance. If you want better results, you can try `gpt-4-turbo`.

3. Validate the configuration by running:

   ```shell
   darwin config get
   ```
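As with the Ollama section, the configuration keys below are hypothetical placeholders, since the exact names are not documented above. The `curl` call uses OpenAI's real `GET /v1/models` endpoint, which is a quick way to confirm the key is valid.

```shell
# Hypothetical key names, for illustration only -- check Darwin's help
# output for the keys `darwin config set` actually accepts.
darwin config set openai_api_key sk-...
darwin config set openai_model gpt-3.5-turbo

# Optional: verify the key directly against OpenAI's API (real endpoint).
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-..."
```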
## How do I configure Darwin for paper summarization?

Darwin uses a large language model (LLM) to summarize papers. You have two options: using OpenAI's API or setting up a local LLM.
Configure OpenAI by following the steps in the previous section. Once you have configured Darwin to use OpenAI, you can summarize papers with OpenAI's cloud-based service by providing the flag `--llm-provider openai`.

Example:

```shell
darwin search papers "flash attention" --log-level DEBUG --output ./darwin-data --count 3 --include-summary --llm-provider openai
```

Note: This method is the most performant but can be costly.
For a more economical approach, you can set up a local LLM like Ollama by following the steps in the local LLM section above. Once you have configured Darwin to use Ollama, you can summarize papers with the local LLM either by running the command as-is (Ollama is the default provider when none is specified) or by explicitly providing the flag `--llm-provider ollama`.

Example:

```shell
darwin search papers "flash attention" --log-level DEBUG --output ./darwin-data --count 3 --include-summary --llm-provider ollama
```
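Since Ollama is the default provider, the same command also works with the flag omitted:

```shell
darwin search papers "flash attention" --log-level DEBUG --output ./darwin-data --count 3 --include-summary
```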