This has been a fun evening so thank you for that! I'm having a total blast bootstrapping an agent with itself, and I can see myself building a bunch of task-specific agents with this going forward.
Feature Request
It'd be really helpful if conversation history was persisted beyond the current program execution so that it's possible to recover context if the model throws because of a rate limit or something similar. It's kind of disheartening to lose the current state after feeding the LLM a bunch of web docs, for example.
Possible Implementation
New inputs to the chat function:
- some sort of `continue` boolean, and
- an (optional?) history filepath that `conversation.responses` gets rewritten to on response end.
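To make the idea concrete, here's a minimal sketch of what I have in mind. Everything here is hypothetical — the `ChatOptions`, `Conversation`, and `Response` shapes are stand-ins for whatever the real agent code uses, and the option names are just suggestions:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Hypothetical shapes -- stand-ins for the agent's real types.
interface Response { role: string; content: string; }
interface Conversation { responses: Response[]; }

interface ChatOptions {
  // When true, seed the conversation from historyPath instead of starting fresh.
  continue?: boolean;
  // Optional file that conversation.responses gets rewritten to on response end.
  historyPath?: string;
}

// Called once when chat() starts: resume from disk if asked to, else start fresh.
function loadHistory(opts: ChatOptions): Conversation {
  if (opts.continue && opts.historyPath && existsSync(opts.historyPath)) {
    return { responses: JSON.parse(readFileSync(opts.historyPath, "utf8")) };
  }
  return { responses: [] };
}

// Called at the end of every response: rewrite the whole history file,
// so a rate-limit throw on the next turn loses nothing.
function persistHistory(conversation: Conversation, opts: ChatOptions): void {
  if (opts.historyPath) {
    writeFileSync(
      opts.historyPath,
      JSON.stringify(conversation.responses, null, 2),
    );
  }
}
```

Rewriting the full file each time keeps the format dead simple (one JSON array), and conversation histories are small enough that the extra I/O shouldn't matter.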
If this makes sense and you're interested, I'd also be happy to take a stab at implementing it. Thanks again!