Similarly, I'd love to be able to start a chat with a corpus of input (text files, images) plus an initial prompt, then continue the conversation. (I'm assuming/hoping this would mean I don't have to re-transmit the image for each 'turn' in the chat.) But cat Screenshot\ 2024-12-12\ at\ 14.06.20.png | llm chat "describe this image and then let's talk about it" -a - -m gemini-2.0-flash-exp gives:
Usage: llm chat [OPTIONS]
Try 'llm chat --help' for help.
Error: No such option: -a
Something like @ instead of !file would be cleaner, I think.
Copilot in VS Code and Cursor both do this: you type @ to attach a file.
GitHub's issue editor uses # to reference issues and @ to mention people.
One way to do this might be an additional hook, prepare-prompt, which allows plugins to modify a prompt before it is executed by the model. The context could include file attachments, the model ID, etc. Something like:
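As a rough illustration of the idea, here is a self-contained sketch of what such a hook could look like. The hook name, the PromptContext shape, and the dispatch function are all assumptions for this issue's proposal; llm's real plugin system (pluggy-based) would define its own signature.

```python
from dataclasses import dataclass, field

@dataclass
class PromptContext:
    # Hypothetical context object a prepare-prompt hook would receive:
    # the prompt text, the model ID, and any file attachments.
    prompt: str
    model_id: str
    attachments: list = field(default_factory=list)

def expand_at_references(ctx: PromptContext) -> PromptContext:
    """Example plugin: rewrite '@path' tokens into attachments."""
    words = []
    for token in ctx.prompt.split():
        if token.startswith("@") and len(token) > 1:
            ctx.attachments.append(token[1:])
            words.append(f"(attached: {token[1:]})")
        else:
            words.append(token)
    ctx.prompt = " ".join(words)
    return ctx

def run_prepare_prompt_hooks(ctx, hooks):
    # Each registered plugin gets a chance to modify the prompt, in order.
    for hook in hooks:
        ctx = hook(ctx)
    return ctx

ctx = run_prepare_prompt_hooks(
    PromptContext("describe @foo.txt please", "gemini-2.0-flash-exp"),
    [expand_at_references],
)
print(ctx.prompt)        # describe (attached: foo.txt) please
print(ctx.attachments)   # ['foo.txt']
```

Running the hooks in registration order would let several plugins (an @-expander, a !file handler, and so on) each rewrite the prompt before it reaches the model.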
Curious, is there a way to inject the contents of a file while in the middle of a chat? If not, I'd love to be able to do something like
!file foo.txt
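To make the suggestion concrete, a minimal sketch of how a chat REPL could intercept such a command before sending the line to the model. The command name and behavior follow the suggestion above; this is not part of llm's actual chat interface.

```python
from pathlib import Path

def preprocess_chat_input(line: str) -> str:
    """If the user typed '!file <path>', replace the line with the
    file's contents; otherwise pass the input through unchanged."""
    if line.startswith("!file "):
        path = Path(line[len("!file "):].strip())
        return path.read_text()
    return line
```

The chat loop would call this on every line of user input, so the file contents become part of that turn's prompt without restarting the conversation.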