Commit 7855a74 (1 parent: ec485a6): 8 changed files with 94 additions and 61 deletions.
# Diffbot GraphRAG LLM

## 1. Introduction

Recently, large language models (LLMs) have been trained with more and more data, leading to an increase in the number of parameters and the compute power needed. But what if, instead of feeding the model more data, we purposefully trained it to rely less on its pretraining data and more on its ability to find external knowledge?

To test this idea, we fine-tuned Llama 3.3 70B to be an expert tool user of a real-time Knowledge Graph API, providing the first open-source implementation of a GraphRAG system that outperforms Google Gemini and ChatGPT.

## 2. Features
### Real-time web URL extraction



As a RAG system, Diffbot LLM can summarize a web document in real-time, appropriately crediting the original source.
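Below is a minimal sketch of exercising this feature through the OpenAI-compatible API described in sections 6 and 7. The prompt wording, placeholder URL, and the choice of `diffbot-small` are illustrative assumptions, not a fixed contract.

```python
from openai import OpenAI

# Serverless endpoint from section 7; a self-hosted server (section 6)
# would use http://<YOUR_SERVER>:8001/rag/v1 instead.
client = OpenAI(api_key="<diffbot_token>", base_url="https://llm.diffbot.com/rag/v1")

completion = client.chat.completions.create(
    model="diffbot-small",
    temperature=0,
    messages=[{
        "role": "user",
        # Placeholder URL: the document is fetched and summarized in real time.
        "content": "Summarize https://example.com/article and cite the source.",
    }],
)
print(completion.choices[0].message.content)
```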
### Expert Retriever of Factual citations



Diffbot LLM is explicitly trained to align the cited text with the reference source.
### Knowledge Graph Querying



Diffbot LLM is an expert tool user of the Diffbot (Knowledge Graph) Query Language.
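For context, the sketch below shows what a Diffbot Query Language (DQL) query looks like and how it can be run directly against the Knowledge Graph. The `kg.diffbot.com/kg/v3/dql` endpoint and its parameters are assumptions drawn from Diffbot's public KG API rather than from this README; Diffbot LLM issues queries like this on your behalf when answering questions.

```python
import requests

DIFFBOT_TOKEN = "<YOUR_TOKEN>"  # get a free token at https://app.diffbot.com/get-started/

# Illustrative DQL query: organizations named "Nike".
dql = 'type:Organization name:"Nike"'

# Assumption: Diffbot's public Knowledge Graph search endpoint; check the
# official KG API docs for the current path and parameters.
resp = requests.get(
    "https://kg.diffbot.com/kg/v3/dql",
    params={"type": "query", "token": DIFFBOT_TOKEN, "query": dql, "size": 1},
)
resp.raise_for_status()
print(resp.json())
```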
### Image Entailment



Diffbot LLM can also entail images.
### Code Interpreter Tool Use

Instead of relying on its model weights to perform empirical calculations, Diffbot LLM is an expert tool user of a JavaScript interpreter that it can use to inform its response.


### Fun stuff

Diffbot LLM is an expert maker of ASCII-art weather forecasts, grounded in real sources.


## 3. Model Download

Available on HuggingFace at:
* diffbot-small (8b Llama 3.1 fine tune): https://huggingface.co/diffbot/Llama-3.1-Diffbot-Small-2412
* diffbot-small-xl (70b Llama 3.3 fine tune): https://huggingface.co/diffbot/Llama-3.3-Diffbot-Small-XL-2412
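If you prefer to fetch the weights yourself (for example, to point vLLM at a local directory) instead of letting the inference image download them, the sketch below uses `huggingface_hub`; the local path is a placeholder.

```python
from huggingface_hub import snapshot_download

# Download the 8B checkpoint to a local directory (placeholder path).
local_path = snapshot_download(
    repo_id="diffbot/Llama-3.1-Diffbot-Small-2412",
    local_dir="./models/diffbot-small",
)
print(local_path)
```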
## 4. Accuracy Benchmarks

### FreshQA Dataset



[FreshQA](https://arxiv.org/abs/2310.03214) is a benchmark that measures real-time accuracy for search RAG systems. Diffbot LLM outperforms gpt-4o (no web access), ChatGPT (with web access), Google Gemini, and Perplexity on real-time factual accuracy.

In this evaluation, we focus on 130 FreshQA questions whose answers have changed in 2024, which is after the knowledge cutoff for all evaluated models as of December 2024.
### MMLU-Pro

[MMLU-Pro](https://arxiv.org/abs/2406.01574) is a more difficult version of the [MMLU](https://arxiv.org/abs/2009.03300) benchmark that tests for static knowledge of 57 academic subjects using 10-choice multiple-choice questions. See the [MMLU-Pro Leaderboard](https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro).

The tables below show the MMLU-Pro scores of diffbot-small-xl and diffbot-small alongside the base models they were fine-tuned from.

| Model | Accuracy (CoT 5-shot) |
| ----- | --------------------- |
| diffbot-small-xl | 72.89 |
| Llama-3.3-70B Instruct | 65.92 |

| Model | Accuracy (CoT 5-shot) |
| ----- | --------------------- |
| diffbot-small | 48.64 |
| Llama-3.1-8B Instruct | 44.25 |

Note: This is a measurement of the Diffbot GraphRAG LLM API end-to-end, not a measure of the knowledge contained in the weights. The lift over the base models comes from the model's ability to access external tools.
## 5. Demo

Try Diffbot LLM using the demo app at https://diffy.chat
## 6. Running Locally

Tested minimum hardware configurations:

- Nvidia A100 40G for diffbot-small
- Nvidia 2XH100 80G for diffbot-small-xl @ FP8
To run with the Docker image and models from Hugging Face:

1. Pull the docker image: `docker pull docker.io/diffbot/diffbot-llm-inference:latest`
2. Run the docker image. **Note: The model weights will be automatically downloaded from Hugging Face. This might take a few minutes.**

```bash
docker run --runtime nvidia --gpus all -p 8001:8001 --ipc=host -e VLLM_OPTIONS="--model diffbot/Llama-3.1-Diffbot-Small-2412 --served-model-name diffbot-small --enable-prefix-caching" docker.io/diffbot/diffbot-llm-inference:latest
```
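Once the container is up, it exposes the same OpenAI-compatible interface on port 8001. A minimal sketch of querying it follows; `localhost` is an assumption (substitute your server's address), and the token is the same free Diffbot token used for the serverless API.

```python
from openai import OpenAI

# get your free token at https://app.diffbot.com/get-started/
diffbot_token = "<YOUR_TOKEN>"

# Assumption: the container started above, reachable on localhost.
client = OpenAI(api_key=diffbot_token, base_url="http://localhost:8001/rag/v1")

completion = client.chat.completions.create(
    model="diffbot-small",  # must match --served-model-name above
    temperature=0,
    messages=[{"role": "user", "content": "Who is Nike's CEO?"}],
)
print(completion.choices[0].message.content)
```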
## 7. Using the Serverless API

Get a free Diffbot developer token at https://app.diffbot.com/get-started
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.diffbot.com/rag/v1",
    api_key="<diffbot_token>"
)

completion = client.chat.completions.create(
    model="diffbot-small-xl",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": "What is the Diffbot Knowledge Graph?"
        }
    ]
)
print(completion)
```
Contact [email protected] if you need more credits or higher limits.
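Because the endpoint follows the OpenAI Chat Completions interface, streaming with the standard `stream=True` flag may also work; treat this as an assumption to verify rather than a documented guarantee.

```python
from openai import OpenAI

client = OpenAI(base_url="https://llm.diffbot.com/rag/v1", api_key="<diffbot_token>")

# Assumption: the endpoint supports OpenAI-style streaming.
stream = client.chat.completions.create(
    model="diffbot-small-xl",
    temperature=0,
    stream=True,
    messages=[{"role": "user", "content": "What is the Diffbot Knowledge Graph?"}],
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```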
## 8. Adding Custom Tools

To extend the Diffbot LLM Inference Server with new tools, please refer to [this tutorial](add_tool_to_diffbot_llm_inference.md).