dry_agent ChatOps LLM agent #402
Comments
Wow! Ambitious. Cool. I'm concerned that the nature of LLMs today will make this a bit "fuzzy" (like sometimes it will understand "start drawio" and sometimes it won't). But if this did work as described, then I could have services I rarely use shut down, and when I want to use them I chat them into action and then use them. Your example suggested I could also make them active for a period of time, which is definitely nice, but not as nice as shutting down a service after a period of inactivity. E.g., "start drawio for 45 minutes", but I'm still working on my document in drawio after 45 minutes - does the container shut down?
Good point about the timeout, I hadn't considered that. Maybe it could be like "Start immich and whoami and remind me to turn them off tomorrow". It could be possible to add an actual usage tracker, but the original plan does not cover that. It's my understanding that the structured JSON output will help to prevent mistakes. It would need to match the request with actual configured services, so if it didn't understand that "drawio" means a service that actually exists, it would have to give an error response. The extra confirmation would only happen after it determined that it could fulfill the action.
Yeah, there could be separate functionality (a separate project) to shut down a container after X. Even if that doesn't exist, then at least the containers aren't active until I chat them up, and then I can manually shut them down when I'm done. Your idea will probably work pretty well. It's just been my experience that LLMs are kind of arbitrary. Given the same prompt, sometimes they have significantly different responses. So even though it's supposed to be outputting JSON, maybe it'll cough up bogus JSON sometimes, or maybe other times the JSON will be accurate but the content will be hallucination-affected.
If that turns out to be the case, we could implement a stricter, non-LLM command language that is interpreted literally: "start drawio", "stop drawio", and if "drawio" isn't a configured service, it fails.
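A minimal sketch of such a strict, non-LLM parser, assuming the set of configured services is known up front (the service names and message fields here are illustrative assumptions, not part of the proposal):

```python
# Hypothetical strict command parser: only literal "start <service>" /
# "stop <service>" commands are accepted, and unknown services are rejected.
CONFIGURED_SERVICES = {"whoami", "immich", "drawio"}  # example list, not real config

def parse_command(text: str) -> dict:
    words = text.strip().lower().split()
    if len(words) == 2 and words[0] in ("start", "stop"):
        service = words[1]
        if service not in CONFIGURED_SERVICES:
            return {"type": "error", "message": f"Unknown service: {service}"}
        return {"type": "docker_action", "action": words[0], "services": [service]}
    return {"type": "error", "message": "Unrecognized command"}

print(parse_command("start drawio"))  # accepted
print(parse_command("start drawoi"))  # rejected: unknown service
```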
A hypothetical management interface I dreamed up:
dry_agent
dry_agent is a ChatOps bot and agent for d.rymcg.tech. The bot sits in a Matrix room where you can chat with it to manage your Docker server.
The user is able to ask questions and perform tasks such as checking which services are running, starting and stopping services, and configuring new ones (see the examples below). Stretch goals include handling interactive follow-up questions posed by the bot.
The chat bot may run on any machine where it has access to an LLM, which is used to build a structured JSON message based on information provided by the user. Once the user confirms the action, the bot posts the message to an MQTT server, from which the agent receives it.
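As a rough sketch of the bot's publishing side, assuming paho-mqtt and a made-up topic name and broker address (the JSON fields shown are illustrative, not a final schema):

```python
import json
import paho.mqtt.publish as publish

# Illustrative action message assembled from the LLM's structured output,
# published only after the user has confirmed it in the Matrix room.
action = {
    "type": "docker_action",
    "action": "start",
    "services": ["whoami", "immich"],
}

# The topic name and broker address are assumptions for this sketch.
publish.single(
    topic="dry_agent/actions",
    payload=json.dumps(action),
    hostname="mqtt.example.com",
    port=1883,
)
```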
The agent must run on a secure workstation that has access to an unlocked SSH key that controls your Docker server. It receives its instructions via MQTT, which may include starting and stopping services as well as status requests.
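A correspondingly hedged sketch of the agent side, which subscribes to the same topic and shells out over SSH to the Docker host (the topic, the SSH host alias, and the assumption that container names match service names are all illustrative):

```python
import json
import subprocess
import paho.mqtt.subscribe as subscribe

DOCKER_HOST = "docker-server"  # SSH alias for the Docker server (assumption)
DOCKER_COMMANDS = {"start": "docker start", "stop": "docker stop"}  # assumed mapping

def on_message(client, userdata, message):
    msg = json.loads(message.payload)
    docker_cmd = DOCKER_COMMANDS.get(msg.get("action"))
    if docker_cmd is None:
        return  # ignore anything outside the whitelist
    for service in msg.get("services", []):
        # Uses the unlocked SSH key on this workstation to reach the server.
        # Assumes container names match the service names in the message.
        subprocess.run(["ssh", DOCKER_HOST, f"{docker_cmd} {service}"], check=False)

# Blocks and invokes on_message for every action published to the topic.
subscribe.callback(on_message, "dry_agent/actions", hostname="mqtt.example.com")
```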
Structured Responses
General chat responses (no Docker action)
Docker actions
Confirmations
Examples
"Which services are running?"
"Start whoami and immich and then turn them off after 45 minutes."
"Configure a new postgres database."
System prompt
You are dry_agent, a ChatOps bot that helps users manage Docker services via Matrix chat. Your responses must be formatted as structured JSON that clearly identifies whether you're providing a general chat response, a Docker action, or a confirmation.
Message Structure Rules:
Valid Actions:
JSON Response Format:
For chat responses:
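The concrete JSON schemas were not captured here; as a hedged illustration only, the three response types outlined under Structured Responses might be shaped roughly like this (every field name is an assumption):

```python
# Rough guesses at the three structured response shapes; not the actual schema.
chat_response = {
    "type": "chat",
    "message": "Hi! I can start, stop, and report on your Docker services.",
}

docker_action = {
    "type": "docker_action",
    "action": "status",
    "services": [],  # an empty list meaning "all services" is an assumption
}

confirmation = {
    "type": "confirmation",
    "message": "Start whoami and immich now?",
    "pending_action": {"type": "docker_action", "action": "start",
                       "services": ["whoami", "immich"]},
}
```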