# Frequently Asked Questions

## How do I update Darwin?

Updating Darwin is straightforward. Run:

    darwin update

This installs the latest version of Darwin, giving you the most recent features and improvements.

## How do I configure Darwin to work with a local LLM?

Darwin currently supports Ollama as a local LLM provider. To configure Darwin to use Ollama:

1. Download and install Ollama.

2. Download a model you want to use. For example, to use the model llama3:instruct, run:

        ollama pull llama3:instruct

    A list of available models can be found in Ollama's model library.

    Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

3. Configure Darwin with the Ollama server's endpoint URL (typically http://localhost:11434, unless Ollama is deployed on a remote server) and the model name from step 2:

        darwin config set

4. Validate the configuration by running:

        darwin config get
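Before step 3, it can help to confirm that the Ollama server is actually reachable at the endpoint you plan to give Darwin. The sketch below is illustrative, not part of Darwin: it assumes `curl` is installed, uses Ollama's default endpoint, and `check_ollama` is a hypothetical helper name.

```shell
# check_ollama: succeed if an Ollama server answers at the given URL.
# (Hypothetical helper for illustration -- not part of Darwin.)
check_ollama() {
  url="${1:-http://localhost:11434}"
  # A running Ollama server responds on its root path ("Ollama is running").
  curl -sf "$url" >/dev/null 2>&1
}

if check_ollama "http://localhost:11434"; then
  echo "Ollama server is reachable"
else
  echo "No Ollama server found; start it with 'ollama serve'" >&2
fi
```

If the check fails, start the server (or fix the URL) before running `darwin config set`.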

## How do I configure Darwin to work with OpenAI?

To configure Darwin to use OpenAI's API, you first need an API key from OpenAI.

1. Sign up for an OpenAI account, or sign in if you already have one.

2. Navigate to the API key page and select "Create new secret key", optionally naming the key.

3. Copy the API key and configure Darwin with it:

        darwin config set

    Available models are listed in OpenAI's model documentation. You can usually start with gpt-3.5-turbo, which is a good compromise between cost and performance. If you want better results, try gpt-4-turbo.

4. Validate the configuration by running:

        darwin config get
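A common mistake is running the configuration step without the key present in your shell. The sketch below is a hypothetical pre-flight check, not part of Darwin: `require_openai_key` is an illustrative helper name, and it assumes you export the key as the conventional `OPENAI_API_KEY` environment variable.

```shell
# require_openai_key: fail fast if no API key is set in the environment.
# (Hypothetical helper -- Darwin's own configuration may store the key elsewhere.)
require_openai_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "OPENAI_API_KEY is not set" >&2
    return 1
  fi
}

# Optional live check of the key against OpenAI's model-listing endpoint:
#   curl -sf https://api.openai.com/v1/models \
#     -H "Authorization: Bearer $OPENAI_API_KEY"
```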

## How do I configure Darwin for paper summarization?

Darwin uses a large language model (LLM) to summarize papers. You have two options: using OpenAI's API or setting up a local LLM.

### Using OpenAI's API

Configure OpenAI following the steps in the previous section.

Once you have configured Darwin to use OpenAI, you can summarize papers using OpenAI's cloud-based service by providing the flag --llm-provider openai.

Example:

    darwin search papers "flash attention" --log-level DEBUG --output ./darwin-data --count 3 --include-summary --llm-provider openai

Note: This method is the most performant but can be costly.

### Using a Local LLM

For a more economical approach, you can set up a local LLM like Ollama by following the steps in the previous section.

Once you have configured Darwin to use Ollama, you can summarize papers with the local LLM either by running the command as-is (Ollama is the default provider when none is specified) or by explicitly passing the flag --llm-provider ollama.

Example:

    darwin search papers "flash attention" --log-level DEBUG --output ./darwin-data --count 3 --include-summary --llm-provider ollama
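Since Ollama is the default provider, the two example commands above differ only in the --llm-provider flag. The sketch below makes that defaulting explicit with a hypothetical wrapper (`build_summary_cmd` is an illustrative helper that prints the command rather than running it; it is not part of Darwin).

```shell
# build_summary_cmd: print the darwin command for a query and provider.
# The provider defaults to ollama, mirroring Darwin's own default.
# (Hypothetical helper for illustration -- not part of Darwin.)
build_summary_cmd() {
  query="$1"
  provider="${2:-ollama}"
  echo darwin search papers "$query" --output ./darwin-data \
    --count 3 --include-summary --llm-provider "$provider"
}

build_summary_cmd "flash attention"           # local LLM (default provider)
build_summary_cmd "flash attention" openai    # cloud-based summarization
```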