Feature idea: LLM-powered Prompt compression #11

Open
arthurwolf opened this issue May 5, 2024 · 0 comments

Observation 1: Large-context, high-capability LLMs (GPT-4, Gemini, Claude) can be expensive.

Ideally, you want to send them as few tokens as possible when crafting your prompt, to reduce cost (and possibly also improve answer quality).

This means you can't just send entire files whole (which would be faster/more convenient); instead you have to think about which parts of each file are important to your prompt and send only those. That takes time and effort.

Observation 2: Local LLMs (Llama-3-8B via Ollama, etc.) and smaller-but-remote ones (Groq, GPT-3.5) are free or much cheaper.

So, what if we could delegate the task of "filtering" what goes into the final prompt to a small/local LLM?

It would work like this (a rough code sketch follows the list):

  • Pass 1: Extract everything from the prompt that is not a file, i.e. "the question"/"the task" the user wants done, and ask the small LLM to summarize that task.
  • Pass 2: Go over each file and ask the small LLM which parts of it are relevant to the question/task; filter out the irrelevant parts and keep only the relevant ones.
  • Finally, assemble the prompt from the task summary plus the relevant parts, resulting in a (possibly much) more compact prompt without losing any important information.
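
A minimal sketch of the two passes in Python, assuming a hypothetical `ask_small_llm()` helper that stands in for whatever cheap/local model call gets used (Ollama, Groq, GPT-3.5, ...); the function names and prompt wording are illustrative, not a proposed API:

```python
# Hypothetical two-pass prompt compression. ask_small_llm() is a stand-in
# for any cheap/local model call (Ollama, Groq, GPT-3.5, ...).

def ask_small_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your small/local LLM client")

def summarize_task(full_prompt: str, files: dict[str, str]) -> str:
    """Pass 1: drop the file bodies, then summarize what remains (the task)."""
    task_only = full_prompt
    for body in files.values():
        task_only = task_only.replace(body, "")
    return ask_small_llm(
        "In a few sentences, summarize the task the user wants done:\n\n" + task_only
    )

def filter_file(task_summary: str, path: str, body: str) -> str:
    """Pass 2: keep only the parts of one file that are relevant to the task."""
    return ask_small_llm(
        f"Task: {task_summary}\n\n"
        f"File {path}:\n{body}\n\n"
        "Return, verbatim, only the parts of this file relevant to the task. "
        "If nothing is relevant, return an empty string."
    )

def compress_prompt(full_prompt: str, files: dict[str, str]) -> str:
    """Final step: reassemble a compact prompt from the summary + filtered files."""
    task = summarize_task(full_prompt, files)
    parts = [task]
    for path, body in files.items():
        kept = filter_file(task, path, body).strip()
        if kept:
            parts.append(f"--- {path} (relevant excerpts) ---\n{kept}")
    return "\n\n".join(parts)
```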

If this works, it could significantly reduce cost without reducing usefulness/accuracy, at the price of some extra processing time for the initial passes and a bit of one-time setup effort.
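
Rough illustrative arithmetic (all numbers hypothetical): if the small model costs about 1/30 as much per token as the large one, and filtering shrinks a 20k-token prompt down to 5k tokens, the large-model cost drops roughly 4x, while the extra small-model pass over the original 20k tokens adds back only ~3% of the original large-model cost.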

Just an idea. Sorry for all the noise; I'm presuming you'd rather people share ideas even if you don't end up implementing them. Tell me if I need to calm down.

Cheers.
