
Custom assistant prompt (i.e. agent prefix/suffix) #27

Open
frankroeder opened this issue Jul 9, 2024 · 18 comments

Comments

@frankroeder
Owner

Another useful feature would be the ability to customise the assistant prompt (i.e. agent prefix/suffix) in the responses for custom commands (for example, my 'proofreader' command could have a different emoji like 👓 instead of 🦜).

Originally posted by @eterps in #23 (comment)

@frankroeder
Owner Author

@eterps could you share your "proofreader" with me?

@eterps
Contributor

eterps commented Jul 9, 2024

Sure:

      hooks = {
        --PrtProofReader
        ProofReader = function(prt, params)
          local chat_system_prompt = [[
I want you to act as a proofreader. I will provide you with texts and
I would like you to review them for any spelling, grammar, or
punctuation errors. Once you have finished reviewing the text,
provide me with any necessary corrections or suggestions to improve the
text. Highlight the corrected fragments (if any) using markdown backticks.

When you have done that subsequently provide me with a slightly better
version of the text, but keep close to the original text.

Finally provide me with an ideal version of the text.

Whenever I provide you with text, you reply in this format directly:

## Corrected text:

{corrected text, or say "NO_CORRECTIONS_NEEDED" instead if there are no corrections made}

## Slightly better text

{slightly better text}

## Ideal text

{ideal text}
]]
          local agent = prt.get_chat_agent()
          agent.model.system = chat_system_prompt
          prt.cmd.ChatNew(params, agent.model, '')
        end,
      }

@frankroeder
Owner Author

I am not sure if this worked at all. In the current version I changed the arguments of ChatNew, and thus of the new_chat function. Previously, agent.model and system_prompt were used solely for visualization purposes at the beginning of the chat file. Could you confirm that your chat_system_prompt goes through, and let me know which commit you're currently using, or at the very least the commit where this function worked correctly?

@eterps
Contributor

eterps commented Jul 9, 2024

> I am not sure if this worked at all.

It definitely works; the markdown response produced by this prompt is quite elaborate. I use it with Anthropic's Claude 3.5 model.
I'm on 8fa80bb. I don't have the time to do any checks myself for now, maybe later this week.
Thanks for looking into it.

@frankroeder
Owner Author

There is no hurry. It works for Anthropic, but not for other providers. agent.model.system = chat_system_prompt really changes the reference used for that API call. On the other hand, agent.system_prompt = chat_system_prompt does not work for other providers.

@eterps
Contributor

eterps commented Jul 10, 2024

I found some additional information while trying to understand what's happening.
In my code fragment above I have this function call:

prt.cmd.ChatNew(params, agent.model, '')

However, that doesn't make sense, because the prt.cmd.ChatNew function only accepts a single argument, not three.
In other words, the agent.model and '' arguments are ignored.
This checks out: if I change the call to prt.cmd.ChatNew(params), the proofreader continues to work fine (when used with Anthropic).

That means that agent.model.system = chat_system_prompt overrides the system prompt globally, which causes this bug: #28

However, I also see you're working on: #29
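Based on the findings above, a corrected hook might look like the sketch below. This is only an illustration, not the plugin's documented approach: it reuses the API calls already shown in this thread (prt.get_chat_agent, prt.cmd.ChatNew with a single argument), and saving/restoring agent.model.system is one hypothetical way to avoid mutating the shared agent table globally (the bug described in #28). Whether an immediate restore is safe depends on when the plugin actually reads the value, so treat this purely as a sketch.

```lua
hooks = {
  ProofReader = function(prt, params)
    local chat_system_prompt = [[
I want you to act as a proofreader. ...
]] -- full prompt text as in the earlier comment
    local agent = prt.get_chat_agent()
    -- remember the original value so other commands are not affected (#28)
    local original_system = agent.model.system
    agent.model.system = chat_system_prompt
    -- ChatNew only accepts a single argument; the extra ones were ignored
    prt.cmd.ChatNew(params)
    -- hypothetical restore; timing may matter depending on the plugin internals
    agent.model.system = original_system
  end,
}
```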

@frankroeder
Owner Author

Dear @eterps, I have added lightweight chat template support. In the latest release, you can now provide custom prompt text that becomes part of the first message.

      ProofReader = function(prt, params)
        local chat_prompt = [[
I want you to act as a proofreader. I will provide you with texts and
I would like you to review them for any spelling, grammar, or
punctuation errors. Once you have finished reviewing the text,
provide me with any necessary corrections or suggestions to improve the
text. Highlight the corrected fragments (if any) using markdown backticks.

When you have done that subsequently provide me with a slightly better
version of the text, but keep close to the original text.

Finally provide me with an ideal version of the text.

Whenever I provide you with text, you reply in this format directly:

## Corrected text:

{corrected text, or say "NO_CORRECTIONS_NEEDED" instead if there are no corrections made}

## Slightly better text

{slightly better text}

## Ideal text

{ideal text}
]]
        prt.cmd.ChatNew(params, chat_prompt)
      end

@eterps
Contributor

eterps commented Jul 29, 2024

Ah, thanks! It makes sense that way. I'll give it a try 👍

@eterps
Contributor

eterps commented Jul 29, 2024

@frankroeder it works great; however, I can imagine the UX might be confusing to some users.
This is what I see when I invoke ProofReader:

[screenshot: new chat buffer opened by ProofReader, with the chat prompt inserted as the first message]

On line 5 of this screenshot is the chat_user_prefix, and at line 29 is the nvim cursor where I can type (or paste) my text.

Making it part of the first message is a powerful feature, but it feels quite different from having it as a system prompt for that particular chat.

But like I said, it works for me, so thank you 👍

@eterps
Contributor

eterps commented Jul 29, 2024

Now that I think of it, I can probably improve my prompt as well so that it reads less like a system prompt and has a clear separator for the text I'm pasting in.

@frankroeder
Owner Author

@eterps, I also considered this and decided to go with the message because the system prompt is not visible to the user. However, it is quite simple to make it the system prompt. The question is whether it should be appended to the existing general system prompt or replace it entirely.

@eterps
Contributor

eterps commented Jul 29, 2024

I believe replacing the prompt entirely would be preferable, as automatic merging could potentially be frustrating for users. If needed, users can always incorporate the general system prompt, which they would likely customize to suit their specific requirements for the hook in question.

@frankroeder
Owner Author

@eterps, you are right. I will change this very soon.

@frankroeder
Owner Author

frankroeder commented Jul 29, 2024

@eterps Despite everything being cramped into a single row, this is how it looks when provided as a system prompt, using the former structure of gp.nvim.

[screenshot: the proofreader prompt shown as the system prompt at the top of the chat file, wrapped into a single row]

@eterps
Contributor

eterps commented Jul 29, 2024

@frankroeder looks quite clear to me. Another possibility would be to use vim folds around the system prompt so the user can expand it on demand.

@frankroeder
Owner Author

Oh, that is just my setup. You can simply call :set wrap and you will be fine. Vim folds wouldn't change much here, as it is a single line.

@GerardGarcia

Similarly, it would be nice to be able to configure the agent_prefix in the same way as the chat_user_prefix option. Not everyone has a terminal that supports bitmap emojis; in those cases nothing renders, or you only see a square instead of the parrot.

@frankroeder
Owner Author

@GerardGarcia Sure, I added the agent prefix with commit 2b29eb3 on the main branch. It will be part of the next release today.
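For reference, a hypothetical configuration fragment showing how such a setting might be used. The option name agent_prefix comes from the request above; its exact placement, the setup call, and the chat_user_prefix pairing are assumptions — check the README of the release that includes commit 2b29eb3 for the actual keys.

```lua
require("parrot").setup({
  -- plain-ASCII prefixes for terminals without bitmap-emoji support
  -- (option names assumed; agent_prefix mirrors chat_user_prefix)
  chat_user_prefix = "You:",
  agent_prefix = "AI:",
})
```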
