Ollama Buddy: Local LLM Integration for Emacs

img/ollama-buddy-youtube-banner_001.jpg

Welcome to OLLAMA BUDDY

Want quick access to a locally running AI through Emacs and Ollama? This package provides a friendly Emacs interface for interacting with Ollama models and offers a convenient way to integrate Ollama’s local LLM capabilities directly into your Emacs workflow, with little or no configuration required!

“The name is just something a little bit fun, and it always reminds me of the ‘bathroom buddy’ from the film Gremlins (although hopefully this will work better than that seemed to!)”

I have a YouTube channel where I regularly post videos showcasing the capabilities of the package. Check it out here:

https://www.youtube.com/@OllamaBuddyforEmacs

See docs/ollama-buddy.org for the manual!

You can install it with literally no config, but I would suggest binding both menus:

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper))

A simple text information screen is presented the first time you open the chat (by selecting “o” from any of the menus). It’s just a bit of fun, but I wanted a quick-start tutorial/assistant feel. Here is what will be presented:

#+TITLE: Ollama Buddy Chat

* Welcome to OLLAMA BUDDY

#+begin_example
 ___ _ _      n _ n      ___       _   _ _ _
|   | | |__._|o(Y)o|__._| . |_ _ _| |_| | | |
| | | | | .  |     | .  | . | | | . | . |__ |
|___|_|_|__/_|_|_|_|__/_|___|___|___|___|___|
#+end_example

(a) *tinyllama:latest* Select Info Pull Copy Delete
(b) *qwen2.5-coder:7b* Select Info Pull Copy Delete
(c) *deepseek-r1:7b* Select Info Pull Copy Delete
(d) *codellama:7b* Select Info Pull Copy Delete
(e) *gemma3:4b* Select Info Pull Copy Delete
(f) *phi3:3.8b* Select Info Pull Copy Delete
(g) *llama3.2:1b* Select Info Pull Copy Delete

starcoder2:3b mistral:7b deepseek-r1:1.5b gemma3:1b qwen2.5-coder:3b llama3.2:3b

- Ask me anything!                    C-c C-c
- Main transient menu                 C-c O
- Cancel request                      C-c C-k
- Change model                        C-c m
- Browse prompt history               M-p/n/r
- Browse ollama-buddy manual          C-c ?
- Advanced interface (show all tips)  C-c A

* *tinyllama:latest >> PROMPT:* <put prompt in here>

You will, of course, need to have ollama serve running, but technically, that is all you need! Emacs and ollama-buddy will handle the rest!

The example above has some models already pulled, but if you don’t have any initially, the Recommended Models section will be fully populated with suggestions. Just click (or move your point over a model and press return) and ollama-buddy, via Ollama, will pull that model for you asynchronously!

Quick Demo Video

011 Real-time tracking and display

Submit some queries, with different models:

PROMPT>> why is the sky blue?
PROMPT>> why is the grass green?
PROMPT>> why is emacs so great?

and show the token usage and rate being displayed in the chat buffer.

Open the Token Usage stats from the menu

Open the Token Usage graph

Toggle on ollama-buddy-toggle-token-display

img/ollama-buddy-screen-recording_011.gif

See Screenshots / Demos section for more videos/demonstration.

What’s New

<2025-12-11 Thu> 1.0.2

Added Mistral Codestral support

  • Added ollama-buddy-codestral.el for Mistral Codestral API support
  • Implemented model handler registration for seamless switching between local Ollama and cloud-based Codestral models
  • Supports async streaming responses with real-time token counting
  • Follows the same integration pattern as other remote AI providers (OpenAI, Claude, Gemini, Grok)
  • Uses “s:” prefix to identify Codestral models in the model list

Enable it with a configuration similar to the other remote providers:

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :custom
  (ollama-buddy-codestral-api-key
   (auth-source-pick-first-password :host "ollama-buddy-codestral" :user "apikey"))
  :config
  (require 'ollama-buddy-codestral nil t))

<2025-07-23> 1.0.0

Simply bumped to version 1

<2025-06-21> 0.13.1

Refactored content register processing to be more efficient and added a new Emacs package brainstorming prompt file.

<2025-06-15> 0.13.0

Added curl communication backend with fallback support

  • Added ollama-buddy-curl.el as separate backend implementation
  • Implemented backend dispatcher system in ollama-buddy-core.el
  • Updated all async functions to use backend dispatcher
  • Added curl backend validation and testing functions
  • Maintained full compatibility with existing network process backend

When building Emacs packages that communicate with external services, network connectivity can sometimes be a pain point. While Emacs’s built-in make-network-process works great in most cases, some users have encountered issues on certain systems or network configurations. That’s why I have now added a curl-based communication backend as an additional option; who knows, maybe it will solve your Ollama communication issues!

The original ollama-buddy implementation relied exclusively on Emacs’s native network process functionality. While this works well for most users, I occasionally heard from users who experienced network process failures/flakiness on certain systems.

Rather than trying to work around these edge cases in the network process code, I took a different approach: I added curl as an alternative communication backend! This gives users a battle-tested, widely-available HTTP client as a fallback option.

Users can enable the curl backend with a simple customization:

(use-package ollama-buddy
  ;; :load-path "~/source/repos/ollama-buddy/ollama-buddy-mini"
  :load-path "~/source/repos/ollama-buddy"
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :config
  ;; Load curl backend first
  (require 'ollama-buddy-curl nil t)
  
  ;; Then set the backend
  (setq ollama-buddy-communication-backend 'curl))

You can then switch backends from the chat buffer with C-c e.

The curl backend supports everything the network backend does:

  • Real-time streaming responses
  • Vision model support with image attachments
  • File attachments and context
  • All Ollama API parameters
  • Multishot model sequences

If curl is selected but not available, the system automatically falls back to the network process with a helpful warning message.

From a user perspective, the backend choice is largely transparent. The main indicators are:

  • Status line shows [C] for curl or [N] for network
  • Process list shows ollama-chat-curl vs ollama-chat-stream processes
  • Curl backend shows “Curl Processing…” in status messages

Everything else - streaming behaviour, response formatting, error handling - works identically regardless of the backend.
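
For the curious, the dispatch idea can be sketched in a few lines. This is a hedged illustration only: the my/ names are hypothetical, while the real dispatcher lives in ollama-buddy-core.el and ollama-buddy-communication-backend is the actual customization variable.

(defun my/ollama-dispatch (payload callback)
  "Sketch: send PAYLOAD via the configured backend, falling back to network."
  (if (and (eq ollama-buddy-communication-backend 'curl)
           (executable-find "curl"))
      (my/ollama-send-via-curl payload callback)
    ;; curl requested but unavailable: warn and use the network process
    (when (eq ollama-buddy-communication-backend 'curl)
      (message "curl backend selected but curl not found; using network process"))
    (my/ollama-send-via-network payload callback)))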

<2025-05-31> 0.12.1

Optimized Unicode escape function to fix blocking with large file attachments

  • Fixed UI blocking when sending large attached files
  • Used a temp buffer with delete-char/insert instead of repeated concat calls (see the sketch below)
  • Reduced processing time from minutes to milliseconds for large payloads
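
As a rough sketch of the technique (the function name is illustrative, not the package’s actual code): repeated concat on a large string copies the whole string each time, which is quadratic, whereas editing a temp buffer in place stays close to linear.

;; Sketch only: escape non-ASCII characters using in-place buffer edits.
(defun my/escape-non-ascii (text)
  "Return TEXT with non-ASCII characters escaped as \\uXXXX sequences."
  (with-temp-buffer
    (insert text)
    (goto-char (point-min))
    (while (not (eobp))
      (let ((ch (char-after)))
        (if (> ch 127)
            (progn (delete-char 1)
                   (insert (format "\\u%04X" ch)))
          (forward-char 1))))
    (buffer-string)))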

<2025-05-22> 0.12.0

Full system prompt in the status bar replaced with a more meaningful simple role title

  • Added system prompt metadata tracking with title, source, and timestamp registry
  • Implemented automatic title extraction and unified completing-read interface
  • Enhanced fabric/awesome prompt integration with proper metadata handling
  • Improved transient menu organization and org-mode formatting with folding
  • Added system prompt history display and better error handling for empty files
  • Transient menu has been simplified and reorganised

Previously, the header status bar would show truncated system prompt text like [You are a helpful assistant wh...], making it difficult to quickly identify which prompt was active. Now, the display shows meaningful role titles with source indicators:

  • [F:Code Reviewer] - Fabric pattern
  • [A:Linux Terminal] - Awesome ChatGPT prompt
  • [U:Writing Assistant] - User-defined prompt

The system now intelligently extracts titles from prompt content by recognizing common patterns like “You are a…”, “Act as…”, or “I want you to act as…”. When these patterns aren’t found, it generates a concise title from the first few words.
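
A hedged sketch of that kind of extraction (the regexps and function name are illustrative, not the package’s actual implementation):

;; Sketch: derive a short role title from a system prompt.
(defun my/prompt-title (prompt)
  "Guess a concise role title from PROMPT."
  (cond
   ((string-match "\\`I want you to act as \\([^.,\n]+\\)" prompt)
    (capitalize (match-string 1 prompt)))
   ((string-match "\\`Act as \\([^.,\n]+\\)" prompt)
    (capitalize (match-string 1 prompt)))
   ((string-match "\\`You are \\(?:an? \\)?\\([^.,\n]+\\)" prompt)
    (capitalize (match-string 1 prompt)))
   ;; Fallback: use the first few words of the prompt.
   (t (mapconcat #'identity (seq-take (split-string prompt) 4) " "))))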

Behind the scenes, Ollama Buddy now maintains a registry of all system prompts with their titles, sources, and timestamps. This enables new features like system prompt history viewing and better organization across Fabric patterns, Awesome ChatGPT prompts, and user-defined prompts.

The result is a cleaner interface that makes it immediately clear which role your AI assistant is currently embodying, without cluttering the status bar with long, truncated text.

<2025-05-21> 0.11.1

Quite a bit of refactoring to make this project more maintainable, plus a new starter kit of user prompts.

  • Color System Reworking
    • Removed all model color-related functions and variables
    • Removed dependency on color.el
    • Replaced with highlight-regexp and hashing to font-lock faces, so model colouring now uses a more native, built-in solution rather than shoe-horning in overlays (see the sketch after this list)
  • UI Improvements
    • Simplified the display system by leveraging Org mode
    • Added org-mode styling for output buffers
    • Added org-hide-emphasis-markers and org-hide-leading-stars settings
    • Changed formatting to use Org markup instead of text properties
    • Converted plain text headers to proper Org headings
    • Replaced color properties with Org emphasis (bold)
  • History Management Updates
    • Streamlined history editing functionality
    • Improved model-specific history editing
    • Refactored history display and navigation
  • System Prompts
    • Added library of system prompts in these categories:
      • analysis (3 prompts)
      • coding (5 prompts)
      • creative (3 prompts)
      • documentation (3 prompts)
      • emacs (10 prompts)
      • general (3 prompts)
      • technical (3 prompts)
      • writing (3 prompts)
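
As a rough illustration of the face-hashing idea mentioned under Color System Reworking (not the package’s actual code):

;; Sketch: map a model name deterministically onto a built-in font-lock face.
(defun my/model-face (model)
  "Return a font-lock face chosen by hashing MODEL's name."
  (let ((faces '(font-lock-keyword-face font-lock-string-face
                 font-lock-type-face font-lock-function-name-face
                 font-lock-constant-face font-lock-builtin-face)))
    (nth (mod (sxhash-equal model) (length faces)) faces)))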

<2025-05-19> 0.11.0

Added user system prompts management

  • You can now save, load and manage system prompts
  • Created new transient menu for user system prompts (C-c s)
  • Organized prompts by categories with org-mode format storage
  • Supported prompt editing, listing, creation and deletion
  • Updated key bindings to integrate with existing functionality
  • Added prompts directory customization with defaults

This feature makes it easier to save, organize, and reuse your favorite system prompts when working with Ollama language models.

System prompts are special instructions that guide the behavior of language models. By setting effective system prompts, you can:

  • Define the AI’s role (e.g., “You are a helpful programming assistant who explains code clearly”)
  • Establish response formats
  • Set the tone and style of responses
  • Provide background knowledge for specific domains

The new ollama-buddy-user-prompts module organizes your system prompts in a clean, category-based system:

  • Save your prompts - Store effective system prompts you’ve crafted for future use
  • Categorize - Prompts are organized by domains like “coding,” “writing,” “technical,” etc.
  • Quick access - Browse and load your prompt library with completion-based selection
  • Edit in org-mode - All prompts are stored as org files with proper metadata
  • Manage with ease - Create, edit, list, and delete prompts through a dedicated transient menu

The new functionality is accessible through the updated key binding C-c s, which opens a dedicated transient menu with these options:

  • Save current (S) - Save your active system prompt
  • Load prompt (L) - Choose a previously saved prompt
  • Create new (N) - Start fresh with a new prompt
  • List all Prompts (l) - View your entire prompt library
  • Edit prompt (e) - Modify an existing prompt
  • Delete prompt (d) - Remove prompts you no longer need

If you work frequently with Ollama models, you’ve likely discovered the power of well-crafted system prompts. They can dramatically improve the quality and consistency of responses. With this new management system, you can:

  • Build a personal library of effective prompts
  • Maintain context continuity across sessions
  • Share prompts with teammates
  • Refine your prompts over time

<2025-05-14> 0.10.0

Added file attachment system for including documents in conversations

  • Added file attachment support with configurable file size limits (10MB default) and supported file types
  • Implemented session persistence for attachments in save/load functionality
  • Added attachment context inclusion in prompts with proper token counting
  • Created comprehensive attachment management commands:
    • Attach files to conversations
    • Show current attachments in dedicated buffer
    • Detach specific files
    • Clear all attachments
  • Added Dired integration for bulk file attachment
  • Included attachment menu in transient interface (C-c 1)
  • Updated help text to document new attachment keybindings
  • Enhanced context calculation to include attachment token usage

You can now seamlessly include text files, code, documentation, and more directly in your conversations with local AI models!

Simply use C-c C-a from the chat buffer to attach any file to your current conversation.

The attached files become part of your conversation context, allowing the AI to reference, analyze, or work with their contents directly.

The transient menu has also been updated with a new Attachment Menu

*File Attachments*
  a Attach file
  w Show attachments
  d Detach file
  0 Clear all attachments

Your attachments aren’t just dumped into the conversation - they’re intelligently integrated:

  • Token counting now includes attachment content, so you always know how much context you’re using
  • Session persistence means your attachments are saved and restored when you save/load conversations
  • File size limits (configurable, 10MB default) prevent accidentally overwhelming your context window

Managing attached files is intuitive with dedicated commands:

  • C-c C-w - View all current attachments in a nicely formatted org-mode buffer, folded per file
  • C-c C-d - Detach specific files when you no longer need them
  • C-c 0 - Clear all attachments at once
  • C-c 1 - Access the full attachment menu via a transient interface

Working in Dired? No problem! You can attach files directly from your file browser:

  • Mark multiple files and attach them all at once
  • Attach the file at point with a single command

To enable this, add a configuration like the following:

(eval-after-load 'dired
  '(progn
     (define-key dired-mode-map (kbd "C-c C-a") #'ollama-buddy-dired-attach-marked-files)))

<2025-05-12> 0.9.50

Added context size management and monitoring

  • Added configurable context sizes for popular models (llama3.2, mistral, qwen, etc.)
  • Implemented real-time context usage display in status bar
  • Context usage can be displayed in text or bar form
  • Added context size thresholds with visual warnings
  • Added interactive commands for context management:
    • ollama-buddy-show-context-info: View all model context sizes
    • ollama-buddy-set-model-context-size: Manually configure model context
    • ollama-buddy-toggle-context-percentage: Toggle context display
  • Implemented context size validation before sending prompts
  • Added token estimation and breakdown (history/system/current prompt)
  • Added keybindings: C-c $ (set context), C-c % (toggle display), C-c C (show info)
  • Updated status bar to show current/max context with fontification

I’ve added context window management and monitoring capabilities to Ollama Buddy!

This update helps you better understand and manage your model’s context usage, preventing errors and optimizing your conversations.

Enable it with the following:

(setq ollama-buddy-show-context-percentage t)

Usage

Once enabled:

  1. Text mode: Shows 1024/4096 style display
  2. Bar mode (default): Shows ███████░░░░ 2048 style display
  3. Use C-c 8 to toggle between modes
  4. The Text mode will change fontification based on your thresholds:
    • Normal: regular fontification
    • (85%+): underlined and bold
    • (100%+): inverse video and bold
  5. The Bar mode will just fill up as normal

The progress bar will visually represent how much of the context window you’re using, making it easier to see at a glance when you’re approaching the limit.

Implementation Details

Context Size Detection

Determining a model’s context size proved more complex than expected. While experimenting with parsing model info JSON, I discovered that context size information can be scattered across different fields. Rather than implementing a complex JSON parser (which may come later), I chose a pragmatic approach:

I created a new defcustom variable ollama-buddy-fallback-context-sizes that includes hard-coded values for popular Ollama models. The fallback mechanism is deliberately simple: substring matching followed by a sensible default of 4096 tokens.

(defcustom ollama-buddy-fallback-context-sizes
  '(("llama3.2:1b" . 2048)
    ("llama3:8b" . 4096)
    ("tinyllama" . 2048)
    ("phi3:3.8b" . 4096)
    ("gemma3:1b" . 4096)
    ("gemma3:4b" . 8192)
    ("llama3.2:3b" . 8192)
    ("llama3.2:8b" . 8192)
    ("llama3.2:70b" . 8192)
    ("starcoder2:3b" . 8192)
    ("starcoder2:7b" . 8192)
    ("starcoder2:15b" . 8192)
    ("mistral:7b" . 8192)
    ("mistral:8x7b" . 32768)
    ("codellama:7b" . 8192)
    ("codellama:13b" . 8192)
    ("codellama:34b" . 8192)
    ("qwen2.5-coder:7b" . 8192)
    ("qwen2.5-coder:3b" . 8192)
    ("qwen3:0.6b" . 4096)
    ("qwen3:1.7b" . 8192)
    ("qwen3:4b" . 8192)
    ("qwen3:8b" . 8192)
    ("deepseek-r1:7b" . 8192)
    ("deepseek-r1:1.5b" . 4096))
  "Mapping of model names to their default context sizes.
Used as a fallback when context size can't be determined from the API."
  :type '(alist :key-type string :value-type integer)
  :group 'ollama-buddy)

This approach may not be perfectly accurate for all models, but it’s sufficient for getting the core functionality working. More importantly, as a defcustom, users can easily customize these values for complete accuracy with their specific models. Users can also set context values within the chat buffer through C-c C (Show Context Information) for each individual model if desired.
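
For example, adding or overriding an entry from your init file might look like this (the model name here is hypothetical):

;; Add a context size for a model not covered by the defaults.
(add-to-list 'ollama-buddy-fallback-context-sizes '("my-finetune:7b" . 16384))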

This design choice allowed me to focus on the essential features without getting stuck on complex context retrieval logic.

One final thing: if the num_ctx parameter (context window size in tokens) is set, then that number will also be taken into consideration. The assumption is that the model honours the requested context size, and it will be incorporated into the context calculations accordingly.

Token Estimation

For token counting, I’ve implemented a simple heuristic: the word count (from string-split) is multiplied by 1.3. This follows commonly recommended approximations and works well enough in practice. While this isn’t currently configurable, I may add it as a customization option in the future.
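
In other words, something equivalent to this one-liner (the function name is illustrative):

;; Sketch of the words-times-1.3 heuristic described above.
(defun my/estimate-tokens (text)
  "Estimate the token count of TEXT as its word count multiplied by 1.3."
  (ceiling (* 1.3 (length (split-string text)))))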

How to Use Context Management in Practice

The C-c C (Show Context Information) command is central to this feature. Rather than continuously monitoring context size while you type (which would be computationally expensive and potentially distracting), I’ve designed the system to calculate context on-demand when you choose.

Typical Workflows

Scenario 1: Paste-and-Send Approach

Let’s say you want to paste a large block of text into the chat buffer. You can simply:

  1. Paste your content
  2. Press the send keybinding
  3. If the context limit is exceeded, you’ll get a warning dialog asking whether to proceed anyway

Scenario 2: Preemptive Checking

For more control, you can check context usage before sending:

  1. Paste your content
  2. Run C-c C to see the current context breakdown
  3. If the context looks too high, you have several options:
    • Trim your current prompt
    • Remove or simplify your system prompt
    • Edit conversation history using Ollama Buddy’s history modification features
    • Switch to a model with a larger context window

Scenario 3: Manage the Max History Length

Want tight control over context size without constantly monitoring the real-time display? Since conversation history is part of the context, you can simply limit ollama-buddy-max-history-length to control the total context size.

For example, when working with small context windows, set ollama-buddy-max-history-length to 1. This keeps only the last exchange (your prompt + model response), ensuring your context remains small and predictable, perfect for maintaining control without manual monitoring.
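
For example:

;; Keep only the last exchange in history to bound context usage.
(setq ollama-buddy-max-history-length 1)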

Scenario 4: Set the num_ctx Parameter (Context Window Size in Tokens)

Simply set this parameter and off you go!

Current Status: Experimental

Given the potentially limiting nature of context management, this feature is disabled by default.

To enable it, set the following:

(setq ollama-buddy-show-context-percentage t)

While disabled, this means:

  • Context checks won’t prevent sending prompts
  • Context usage won’t appear in the status line
  • However, calculations still run in the background, so C-c C (Show Context Information) remains functional

As the feature matures and proves its value, I may enable it by default. For now, consider it an experimental addition that users can opt into.

More Details

The status bar now displays your current context usage in real-time. You’ll see a fraction showing used tokens versus the model’s maximum context size (e.g., “2048/8192”). The display automatically updates as your conversation grows.

Context usage changes fontification to help you stay within limits:

  • Normal font: Normal usage (under 85%)
  • Bold and Underlined: Approaching limit (85-100%)
  • Inversed: At or exceeding limit (100%+)

Before sending prompts that exceed the context limit, Ollama Buddy now warns you and asks for confirmation. This prevents unexpected errors and helps you manage long conversations more effectively.

There are now three new interactive commands:

C-c $ - Set Model Context Size. Manually configure context sizes for custom or fine-tuned models.

C-c % - Toggle Context Display. Show or hide the context percentage in the status bar.

C-c C - Show Context Information. View a detailed breakdown of:

  • All model context sizes
  • Current token usage by category (history, system prompt, current prompt)
  • Percentage usage

The system estimates token counts for:

  • Conversation history: All previous messages
  • System prompts: Your custom instructions
  • Current input: The message you’re about to send

This gives you a complete picture of your context usage before hitting send.

The context monitoring is not enabled by default.

<2025-05-05> 0.9.44

  • Sorted model names alphabetically in intro message
  • Removed multishot writing to register name letters

For some reason, when I moved the .ollama folder to an external disk, the models returned by api/tags came back in an inconsistent order, which broke consistent letter assignment. I’m not sure why this happened, but it is probably sensible to sort the models alphabetically anyway, as this has the benefit of naturally grouping together model families.

I also removed the multishot feature of writing responses to the associated model-letter registers. Now that more than 26 models have to be accommodated, incorporating them into the single-letter Emacs register system is all but impossible. I suspect this feature was not much used, and if you think about it, it wouldn’t have worked well with multiple model shots anyway, as the register letter associated with a model would only hold the most recent response. If someone wants it back, I will probably have to design a bespoke version fully incorporated into the ollama-buddy system, as I can’t think of any other Emacs mechanism that could accommodate this.

<2025-05-05> 0.9.43

Fix model reference error exceeding 26 models #15

Update ollama-buddy to handle more than 26 models by using prefixed combinations for model references beyond ‘z’. This prevents errors in create-intro-message when the local server hosts a large number of models.

<2025-05-03> 0.9.42

Added the following to recommended models:

  • qwen3:0.6b
  • qwen3:1.7b
  • qwen3:4b
  • qwen3:8b

and fixed model pulling.

<2025-05-02> 0.9.41

Refactored model prefixing again so that no prefix is applied when using only Ollama models; a prefix is applied only when online LLMs are selected (for example Claude, ChatGPT, etc.).

I think this makes more sense and is cleaner: I suspect the majority of people using this package are mainly interested in Ollama models, and for them the prefix would probably be a bit confusing.

This could once again be a bit of a breaking change, I’m afraid, for those Ollama users who have switched and are now familiar with the “o:” prefix. Sorry!

<2025-05-02> 0.9.40

Added vision support for those ollama models that can support it!

Image files are now detected within a prompt and then processed if a model can support vision processing. Here’s a quick overview of how it works:

  1. Configuration: Users can configure the application to enable vision support and specify which models and image formats are supported. Vision support is enabled by default.
  2. Image Detection: When a prompt is submitted, the system automatically detects any image files referenced in the prompt.
  3. Vision Processing: If the model supports vision, the detected images are processed in relation to the defined prompt. Note that the detection of a model being vision capable is defined in ollama-buddy-vision-models and can be adjusted as required.
  4. Menu Item: In addition, a menu item has been added to the custom ollama-buddy menu:
    [I] Analyze an Image

When selected, it will allow you to describe a chosen image. At some stage, I may allow integration into dired, which would be pretty neat. :)
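
For example, to treat an additional model as vision-capable, something like this should work (assuming ollama-buddy-vision-models holds a list of model-name strings; the model here is just an example):

;; Mark another model as vision-capable.
(add-to-list 'ollama-buddy-vision-models "llava:7b")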

<2025-04-29> 0.9.38

Added model unloading functionality to free system resources

  • Add unload capability for individual models via the model management UI
  • Create keyboard shortcut (C-c C-u) for quick unloading of all models
  • Display running model count and unload buttons in model management buffer

Large language models consume significant RAM and GPU memory while loaded. Until now, there wasn’t an easy way to reclaim these resources without restarting the Ollama server entirely. This new functionality allows you to:

  • Free up GPU memory when you’re done with your LLM sessions
  • Switch between resource-intensive tasks more fluidly
  • Manage multiple models more efficiently on machines with limited resources
  • Avoid having to restart the Ollama server just to clear memory

There are several ways to unload models with the new functionality:

  1. Unload All Models: Press C-c C-u to unload all running models at once (with confirmation)
  2. Model Management Interface: Access the model management interface with C-c W where you’ll find:
    • A counter showing how many models are currently running
    • An “Unload All” button to free all models at once
    • Individual “Unload” buttons next to each running model
  3. Quick Access in Management Buffer: When in the model management buffer, simply press u to unload all models

The unloading happens asynchronously in the background, with clear status indicators so you can see when the operation completes.

<2025-04-25> 0.9.37

  • Display modified parameters in token stats

Enhanced the token statistics section to include any modified parameters, providing a clearer insight into the active configurations. This update helps in debugging and understanding the runtime environment.

<2025-04-25> 0.9.36

Added Reasoning/Thinking section visibility toggle functionality

  • Introduced the ability to hide reasoning/thinking sections during AI responses, making the chat output cleaner and more focused on final results
  • Added a new customizable variable ollama-buddy-hide-reasoning (default: nil) which controls visibility of reasoning sections
  • Added ollama-buddy-reasoning-markers to configure marker pairs that encapsulate reasoning sections (supports multiple formats like <think></think> or ----)
  • Added ollama-buddy-toggle-reasoning-visibility interactive command to switch visibility on/off
  • Added keybinding C-c V for toggling reasoning visibility in chat buffer
  • Added transient menu option “V” for toggling reasoning visibility
  • When reasoning is hidden, a status message shows which section is being processed (e.g., “Think…” or custom marker names)
  • Reasoning sections are automatically detected during streaming responses
  • Header line now indicates when reasoning is hidden with “REASONING HIDDEN” text
  • All changes preserve streaming response functionality while providing cleaner output

This feature is particularly useful when working with AI models that output their “chain of thought” or reasoning process before providing the final answer, allowing users to focus on the end results while still having the option to see the full reasoning when needed.
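
A minimal configuration sketch (the exact shape of ollama-buddy-reasoning-markers is assumed here to be pairs of begin/end strings):

;; Hide reasoning sections by default and recognise an extra marker pair.
(setq ollama-buddy-hide-reasoning t)
(add-to-list 'ollama-buddy-reasoning-markers '("<reasoning>" . "</reasoning>"))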

<2025-04-21> 0.9.35

Added Grok support

Integration is very similar to other remote AIs:

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :custom
  (ollama-buddy-grok-api-key
   (auth-source-pick-first-password :host "ollama-buddy-grok" :user "apikey"))
  :config
  (require 'ollama-buddy-grok nil t))

<2025-04-20> 0.9.33

Fixed utf-8 encoding stream response issues from remote LLMs.

<2025-04-19> 0.9.32

Finished the remote LLM decoupling process, meaning that the core ollama-buddy logic is now not dependent on any remote LLM, and each remote LLM package is self-contained and functions as a unique extension.

<2025-04-18> 0.9.31

Refactored model prefixing logic and cleaned up

  • Standardized model prefixing by introducing distinct prefixes for Ollama (o:), OpenAI (a:), Claude (c:), and Gemini (g:) models.
  • Centralized functions to get full model names with prefixes across different model types.
  • Removed redundant and unused variables related to model management.

Note that there may be some breaking changes here, especially regarding session recall, as all models will now have a prefix to uniquely identify their type. For Ollama recall, just edit the session files to prepend the Ollama prefix “o:”.

<2025-04-17> 0.9.30

Added Gemini integration!

As with the Claude and ChatGPT integrations, you will need to add something similar to your configuration. I currently have the following set up to enable access to the remote LLMs:

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :custom
  (ollama-buddy-openai-api-key
   (auth-source-pick-first-password :host "ollama-buddy-openai" :user "apikey"))
  (ollama-buddy-claude-api-key
   (auth-source-pick-first-password :host "ollama-buddy-claude" :user "apikey"))
  (ollama-buddy-gemini-api-key
   (auth-source-pick-first-password :host "ollama-buddy-gemini" :user "apikey"))
  :config
  (require 'ollama-buddy-openai nil t)
  (require 'ollama-buddy-claude nil t)
  (require 'ollama-buddy-gemini nil t))

Also, with the previous update, all the latest model names will be pulled, so there should be a comprehensive list for each of the main remote LLMs!

Features

See README-features.org for a deeper delve into each feature as it was added.

  • Minimal Setup

If desired, the following will get you going! (of course, have ollama running with some models loaded).

(use-package ollama-buddy
  :ensure t
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper))
  • Interactive Command Menu
    • Quick-access menu with single-key commands (M-x ollama-buddy-menu)
    • Quickly define your own menu using defcustom with dynamic adaptable menu
    • Menu Presets easily definable in text files
    • Quickly switch between LLM models with no configuration required
    • Send text from any Emacs buffer
    • Each model is uniquely colored to enhance visual feedback
    • Switch between basic and advanced interface levels (C-c A)
  • Role and Session Management
    • Create, switch, save and manage role-specific command menu configurations
    • Create, switch, save and manage sessions to provide chat/model based context
    • Prompt history/context with the ability to clear, turn on and off and save as part of a session file
    • Edit conversation history with intuitive keybindings
  • Smart Model Management
    • Models can be assigned to individual menu commands
    • Intelligent model fallback
    • Real-time model availability monitoring
    • Easy model switching during sessions
  • Parameter Control
    • Comprehensive parameter management for all Ollama API options
    • Temperature control for adjusting AI creativity
    • Command-specific parameter customization for optimized interactions
    • Visual parameter interface showing active and modified values
  • System Prompt Support
    • Set persistent system prompts for guiding AI responses
    • Command-specific system prompts for tailored interactions
    • Visual indicators for active system prompts
  • AI Operations
    • Code refactoring with context awareness
    • Automatic git commit message generation
    • Code explanation and documentation
    • Text operations (proofreading, conciseness, dictionary lookups)
    • Custom prompt support for flexibility
  • Conversation Tools
    • Conversation history tracking and navigation
    • Real-time token usage tracking and statistics
    • Multi-model comparison (send prompts to multiple models)
    • Markdown to Org conversion for better formatting
    • Export conversations in various formats
  • Lightweight
    • Single package file
    • A minified version ollama-buddy-mini (200 lines) is available
    • No external dependencies (curl is optional and only used if you enable the curl backend)

Transient Menu

Ollama Buddy now includes a transient-based menu system to improve usability and streamline interactions. Yes, I originally stated that I would never do it, but I think it complements my crafted simple textual menu, especially now that the main chat interface defaults to a simple menu.

This gives the user more options for configuration: they can use the chat in advanced mode, where the keybindings are presented in situ, or in a more minimal basic setup, where the transient menu can be activated. For my use-package definition I currently have the following set up, with the two styles of menu sitting alongside each other:

:bind
("C-c o" . ollama-buddy-menu)
("C-c O" . ollama-buddy-transient-menu)

The new menu provides an organized interface for accessing the assistant’s core functions, including chat, model management, roles, and Fabric patterns. This section provides an overview of the features available in the Ollama Buddy transient menus.

Yes, that’s right, also Fabric patterns! I have decided to add auto-syncing of the patterns directory from https://github.com/danielmiessler/fabric

I simply pull the patterns directory, which contains prompt guidance for a range of different topics, and push the patterns through a completing-read to set the ollama-buddy system prompt, so a special set of curated prompts can now be applied right in the ollama-buddy chat! A sketch of the idea follows below.
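
Conceptually, the flow is something like this sketch. The directory layout follows the fabric repository’s patterns/<name>/system.md convention, but the directory location and the setter function are assumptions, not the package’s actual API:

;; Sketch: choose a fabric pattern by name and use it as the system prompt.
(let* ((dir (expand-file-name "~/.emacs.d/fabric-patterns/"))
       (patterns (directory-files dir nil "\\`[^.]"))
       (choice (completing-read "Fabric pattern: " patterns)))
  (with-temp-buffer
    (insert-file-contents (expand-file-name (concat choice "/system.md") dir))
    ;; `my/set-system-prompt' stands in for the package's real setter.
    (my/set-system-prompt (buffer-string))))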

Anyways, here is a description of the transient menu system.

What is the Transient Menu?

The transient menu in Ollama Buddy leverages Emacs’ transient.el package (the same technology behind Magit’s popular interface) to create a hierarchical, discoverable menu system. This approach transforms the user experience from memorizing numerous keybindings to navigating through logical groups of commands with clear descriptions.

Accessing the Menu

The main transient menu can be accessed with the keybinding C-c O when in an Ollama Buddy chat buffer. You can also call it via M-x ollama-buddy-transient-menu from anywhere in Emacs.

What the Menu Looks Like

When called, the main transient menu appears at the bottom of your Emacs frame, organized into logical sections with descriptive prefixes. Here’s what you’ll see:

|o(Y)o| Ollama Buddy
[Chat]             [Prompts]            [Model]               [Roles & Patterns]
o  Open Chat       l  Send Region       W  Manage Models      R  Switch Roles
O  Commands        s  Set System Prompt m  Switch Model       E  Create New Role
RET Send Prompt    C-s Show System      v  View Model Status  D  Open Roles Directory
h  Help/Menu       r  Reset System      i  Show Model Info    f  Fabric Patterns
k  Kill/Cancel     b  Ollama Buddy Menu M  Multishot          t  OpenAI Integration
x  Toggle Streaming

[Display Options]          [History]              [Sessions]             [Parameters]
A  Toggle Interface Level  H  Toggle History      N  New Session         P  Edit Parameter
B  Toggle Debug Mode       X  Clear History       L  Load Session        G  Display Parameters
T  Toggle Token Display    J  Edit History        S  Save Session        I  Parameter Help
u  Token Stats                                    Q  List Sessions       K  Reset Parameters
U  Token Usage Graph                              Z  Delete Session      F  Toggle Params in Header
C-o Toggle Markdown->Org                                                 p  Parameter Profiles
c  Toggle Model Colors
V  Toggle reasoning visibility                                           

This visual layout makes it easy to discover and access the full range of Ollama Buddy’s functionality. Let’s explore each section in detail.

Menu Sections Explained

Chat Section

This section contains the core interaction commands:

  • Open Chat (o): Opens the Ollama Buddy chat buffer
  • Commands (O): Opens a submenu with specialized commands
  • Send Prompt (RET): Sends the current prompt to the model
  • Help/Menu (h): Displays the help assistant with usage tips
  • Kill/Cancel Request (k): Cancels the current ongoing request
  • Toggle Streaming (x): Toggles streaming on and off (Ollama models only)

Prompts Section

These commands help you manage and send prompts:

  • Send Region (l): Sends the selected region as a prompt
  • Set System Prompt (s): Sets the current prompt as a system prompt
  • Show System Prompt (C-s): Displays the current system prompt
  • Reset System Prompt (r): Resets the system prompt to default
  • Ollama Buddy Menu (b): Opens the classic custom menu interface

Model Section

Commands for model management:

  • Manage Models (W): Info, Pull, Copy, Delete for each model, plus Import GGUF Model and Pull any model from the ollama library
  • Switch Model (m): Changes the active LLM
  • View Model Status (v): Shows status of all available models
  • Show Model Info (i): Displays detailed information about the current model
  • Multishot (M): Sends the same prompt to multiple models

Roles & Patterns Section

These commands help manage roles and use fabric patterns:

  • Switch Roles (R): Switch to a different predefined role
  • Create New Role (E): Create a new role interactively
  • Open Roles Directory (D): Open the directory containing role definitions
  • Fabric Patterns (f): Opens the submenu for Fabric patterns
  • OpenAI Integration (t): Opens menu specifically for OpenAI/ChatGPT

When you select the Fabric Patterns option, you’ll see a submenu like this:

Fabric Patterns (42 available, last synced: 2025-03-18 14:30)
[Actions]             [Sync]              [Categories]          [Navigation]
s  Send with Pattern  S  Sync Latest      u  Universal Patterns q  Back to Main Menu
p  Set as System      P  Populate Cache   c  Code Patterns
l  List All Patterns  I  Initial Setup    w  Writing Patterns
v  View Pattern Details                   a  Analysis Patterns

Display Options Section

Commands to customize the display:

  • Toggle Interface Level (A): Switch between basic and advanced interfaces
  • Toggle Debug Mode (B): Enable/disable JSON debug information
  • Toggle Token Display (T): Show/hide token usage statistics
  • Display Token Stats (u): Show detailed token usage information
  • Toggle Markdown->Org (C-o): Enable/disable conversion to Org format
  • Toggle Model Colors (c): Enable/disable model-specific colors
  • Toggle reasoning visibility (V): Hide/show reasoning/thinking sections
  • Token Usage Graph (U): Display a visual graph of token usage

History Section

Commands for managing conversation history:

  • Toggle History (H): Enable/disable conversation history
  • Clear History (X): Clear the current history
  • Edit History (J): Edit the history in a buffer

Sessions Section

Commands for session management:

  • New Session (N): Start a new session
  • Load Session (L): Load a saved session
  • Save Session (S): Save the current session
  • List Sessions (Q): List all available sessions
  • Delete Session (Z): Delete a saved session

Parameters Section

Commands for managing model parameters:

  • Edit Parameter (P): Opens a submenu to edit specific parameters
  • Display Parameters (G): Show current parameter settings
  • Parameter Help (I): Display help information about parameters
  • Reset Parameters (K): Reset parameters to defaults
  • Toggle Params in Header (F): Show/hide parameters in header
  • Parameter Profiles (p): Opens the parameter profiles submenu

When you select the Edit Parameter option, you’ll see a comprehensive submenu of all available parameters:

Parameters
[Generation]                [More Generation]          [Mirostat]
t  Temperature              f  Frequency Penalty       M  Mirostat Mode
k  Top K                    s  Presence Penalty        T  Mirostat Tau
p  Top P                    n  Repeat Last N           E  Mirostat Eta
m  Min P                    x  Stop Sequences
y  Typical P                l  Penalize Newline
r  Repeat Penalty

[Resource]                  [More Resource]            [Memory]
c  Num Ctx                  P  Num Predict             m  Use MMAP
b  Num Batch                S  Seed                    L  Use MLOCK
g  Num GPU                  N  NUMA                    C  Num Thread
G  Main GPU                 V  Low VRAM
K  Num Keep                 o  Vocab Only

[Profiles]                  [Actions]
d  Default Profile          D  Display All
a  Creative Profile         R  Reset All
e  Precise Profile          H  Help
A  All Profiles             F  Toggle Display in Header
                            q  Back to Main Menu

Parameter Profiles

Ollama Buddy includes predefined parameter profiles that can be applied with a single command. When you select “Parameter Profiles” from the main menu, you’ll see:

Parameter Profiles
Current modified parameters: temperature, top_k, top_p
[Available Profiles]
d  Default
c  Creative
p  Precise

[Actions]
q  Back to Main Menu

Commands Submenu

The Commands submenu provides quick access to specialized operations:

Ollama Buddy Commands
[Code Operations]       [Language Operations]    [Pattern-based]         [Custom]
r  Refactor Code        l  Dictionary Lookup     f  Fabric Patterns      C  Custom Prompt
d  Describe Code        s  Synonym Lookup        u  Universal Patterns   m  Minibuffer Prompt
g  Git Commit Message   p  Proofread Text        c  Code Patterns

[Actions]
q  Back to Main Menu

Direct Keybindings

For experienced users who prefer direct keybindings, all transient menu functions can also be accessed through keybindings with the prefix of your choice (or C-c O when in the chat buffer) followed by the key shown in the menu. For example:

  • C-c O s - Set system prompt
  • C-c O m - Switch model
  • C-c O P - Open parameter menu

Customization

The transient menu can be customized by modifying the transient-define-prefix definitions in the package. You can add, remove, or rearrange commands to suit your workflow.
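
For instance, a small personal prefix built on the same machinery might look like this (the prefix name and grouping are illustrative; the two commands are real ollama-buddy functions referenced elsewhere in this README):

;; Sketch: a personal transient prefix reusing existing ollama-buddy commands.
(require 'transient)
(transient-define-prefix my/ollama-buddy-shortcuts ()
  "A couple of personal ollama-buddy shortcuts."
  [["Shortcuts"
    ("o" "Open chat" ollama-buddy--open-chat)
    ("m" "Swap model" ollama-buddy--swap-model)]])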

Screenshots / Demos

Note that all the demos are in real time.

Ollama Buddy - youtube #emacs #ollama

These videos are also all uploaded to a YouTube channel:

https://www.youtube.com/@OllamaBuddyforEmacs

Description:

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

Display setup

(setq font-general "Source Code Pro 15")

theme : doom-oceanic-next

001 Ollama Buddy - First Steps #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

  • Starting with model : llama3.2:1b
  • Show menu activation C-c o ollama-buddy-menu
  • [o] Open chat buffer
  • PROMPT:: why is the sky blue?
  • C-c C-c to send from chat buffer
  • From this demo README file, select the following text and send to chat buffer: What is the capital of France?

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_001.gif

002 Ollama Buddy - Swap Models #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

  • C-c m to swap to differing models from chat buffer
  • Select models from the intro message
  • Swap models from the transient menu
  • PROMPT:: how many fingers do I have?

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_002.gif

003 Ollama Buddy - From Other Buffers #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

the quick brown fox jumps over the lazy dog

  • Select individual words for dictionary menu items
  • Select whole sentence and convert to uppercase

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_003.gif

004 Coding - Ollama Buddy - Writing a Hello World! program with increasingly advanced models #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

  • PROMPT:: can you write a hello world program in Ada?
  • switch models to the following and check differing output:
    • tinyllama:latest
    • qwen2.5-coder:3b
    • qwen2.5-coder:7b

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_004.gif

005 006 Ollama Buddy - Roald Dahl to Buffy! using a custom menu - just for fun! #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

In fairy-tales, witches always wear silly black hats and black coats, and they ride on broomsticks. But this is not a fairy-tale. This is about REAL WITCHES.

Let’s change roles to a Buffy preset and have some fun!

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

  • Cordelia burn

img/ollama-buddy-screen-recording_005.gif

  • Giles… yawn!

img/ollama-buddy-screen-recording_006.gif

007 Ollama Buddy - Multishot #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

  • PROMPT:: how many fingers do I have?
  • send to multiple models, any difference in output?
  • Also, the responses are now available in the equivalent letter-named registers, so let’s pull those registers!

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_007.gif

008 Ollama Buddy - Role Switching #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

Show the changing of roles and how they affect the menu and hence the commands available.

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_008.gif

009 Ollama Buddy - Prompt History #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

Enter a few queries to test the system, then navigate back through your previous inputs, switch models, and resubmit to compare results.

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_009.gif

012 Ollama Buddy - Sessions/History and recall #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

  • Ask:

What is the capital of France? and of Italy?

  • Turn off history

and of Germany?

  • Turn on history

and of Germany?

  • Save Session
  • Restart Emacs
  • Load Session

and of Sweden?

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_012.gif

015 Ollama Buddy - System prompt support #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

Set the system message to: “You must always respond in a single sentence.” Now ask the following: “Tell me why Emacs is so great!” and “Tell me about black holes.” Then clear the system message and ask again; the responses should now be more verbose!

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_015.gif

016 Ollama Buddy - Same prompt to 10 models (multishot) #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

PROMPT: What is the capital of France?

Multishot to abcdefghij

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_016.gif

017 Ollama Buddy - Lets look at some usage statistics #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

Display Token Stats

Display Token Usage Graph

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_017.gif

018 Ollama Buddy - Awesome ChatGPT Prompting Pt1 #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

Select a passage from 20,000 and push it through the following prompts:

as a poet, as a gaslighter, as a drunk person

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_018.gif

019 Ollama Buddy - First Steps into User System Prompt Management #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

switch to use qwen2.5-coder:3b

Select User System Prompts

List, and move down through the buffer

Set as system prompt/load

Select Elisp Debugging Guide

Show the system prompt in a buffer

Run checking over my/rsync in Emacs-DIYer

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_019.gif

020 Ollama Buddy - Attaching files to the chat buffer #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

switch to use qwen2.5-coder:3b

Attach the ollama-buddy Makefile

can you tell me about this file?

Attach another file, curl-tests.sh

Show attachments

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_020.gif

021 Ollama Buddy - Using a vision model to extract some text #emacs #ollama

Demonstrating the Emacs package ollama-buddy, providing a convenient way to integrate Ollama’s local LLM capabilities.

switch to use gemma3:4b

Open the image with the text

Copy the link, paste it into the chat buffer, and then let’s see if the text is correctly extracted!

https://melpa.org/#/ollama-buddy https://github.com/captainflasmr/ollama-buddy

#emacs #ollama

img/ollama-buddy-screen-recording_021.gif

Installation

Prerequisites

  • Ollama installed and running locally
  • Emacs 26.1 or later

MELPA

(use-package ollama-buddy
  :ensure t
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper))

Manual Installation

Clone this repository:

git clone https://github.com/captainflasmr/ollama-buddy.git

init.el

Then add the following to your init.el, with the option to add your own keybindings for the ollama-buddy menus:

(add-to-list 'load-path "path/to/ollama-buddy")
(require 'ollama-buddy)
(global-set-key (kbd "C-c o") #'ollama-buddy-menu)
(global-set-key (kbd "C-c O") #'ollama-buddy-transient-menu-wrapper)

OR

(use-package ollama-buddy
  :load-path "path/to/ollama-buddy"
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper))
Usage

  1. Start your Ollama server locally with ollama serve
  2. In Emacs, run M-x ollama-buddy-menu (or a user-defined keybinding such as C-c o) to open the ollama-buddy menu
  3. Select [o] to open and jump to the chat buffer
  4. Read the AI assistant’s greeting and off you go!

Key Bindings in Chat Buffer

Key               Action
C-c C-c           Send prompt
C-c k             Cancel request
C-c m             Change model
C-c h             Show help
C-c i             Show model info
C-c u             Show token statistics
C-c U             Show token graph
C-c V             Toggle reasoning visibility
C-c C-s           Show system prompt
C-u C-c C-c       Set system prompt
C-u C-u C-c C-c   Clear system prompt
M-p / M-n         Browse prompt history
C-c n / C-c p     Navigate between prompts
C-c N             New session
C-c L             Load session
C-c S             Save session
C-c Y             List sessions
C-c W             Delete session
C-c H             Toggle history
C-c X             Clear history
C-c E             Edit history
C-c l             Prompt to multiple models
C-c P             Edit parameters
C-c G             Show parameters
C-c I             Show parameter help
C-c K             Reset parameters
C-c D             Toggle JSON debug mode
C-c T             Toggle token display
C-c Z             Toggle parameters
C-c C-o           Toggle Markdown/Org format
C-c A             Toggle interface level (basic/advanced)

Default Menu Items

The default menu offers the following menu items:

Key   Action
o     Open chat buffer
m     Swap model
v     View model status
l     Send region
R     Switch roles
N     Create new role
D     Open roles directory
r     Refactor code
g     Git commit message
c     Describe code
d     Dictionary Lookup
n     Word synonym
p     Proofread text
e     Custom prompt
i     Minibuffer Prompt
K     Delete session
q     Quit

Tutorial: Quickly Setting Up a Specialized Menu

Let’s get started quickly by adding that little command you frequently use on text to the ollama-buddy menu!

First, let’s clarify some concepts.

Understanding Commands, Roles, and Prompts in ollama-buddy

Commands

In ollama-buddy, a “command” is essentially a predefined action that appears as an option in the menu. Each command has:

  • A key (the button you press to invoke it)
  • A description (what shows up in the menu)
  • An action (what happens when you select it)
  • Optional settings like prompts, model selection, etc.

Roles

A “role” is a collection of commands. Think of it as a preset configuration or profile that defines which commands are available in your menu and how they behave.

The Different Types of Prompts

There are two types of prompts that might be causing confusion:

  1. Prompt (:prompt property): This is the instruction or question that gets prepended to your selected text. For example, if your prompt is “Fill in the missing Turkish letters in:” and your selected text is “Merhaba nasilsin”, the LLM will receive “Fill in the missing Turkish letters in: Merhaba nasilsin”.
  2. System Prompt (:system property): This is a behind-the-scenes instruction that guides the LLM’s overall behavior but isn’t directly part of your query. It’s like telling the AI “Here’s how I want you to approach this task” before you give it the actual task.

Creating a Simple Turkish Letter Correction Command

Let’s set up a simple, consistent command for filling in missing Turkish letters. There are two main ways.

The first adds a menu item to the existing menu; the second defines a new role, which is a collection of menu items. In the example, I have added only a single menu item, but you can add as many as you like. If you want, you can set up an entire Turkish-specific menu system in the role file.

Option 1: Adding to existing commands:

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :config
  (setq ollama-buddy-default-model "o:tinyllama:latest")
  (add-to-list 'ollama-buddy-command-definitions
             '(turkish-letters
               :key ?t ; Press 't' in the menu to select this
               :description "Fix Turkish Letters" ; What shows in the menu
               :model "o:deepseek-r1:7b" ; Choose an appropriate model
               :prompt "Fill in the missing Turkish letters in the following text:"
               :system "You are an expert in Turkish language. Your task is to correctly add any missing Turkish-specific letters (ç, ğ, ı, ö, ş, ü) to text that may be missing them. Only correct the spelling by adding proper Turkish letters - do not change any words or add any commentary. Return only the corrected text."
               :action (lambda () (ollama-buddy--send-with-command 'turkish-letters)))))
  1. Add the code block above to your Emacs configuration (tweaking the use-package command as desired for your local configuration and setting the :model you wish to use)
  2. Restart Emacs or evaluate the code
  3. Select some Turkish text in any buffer
  4. Press C-c o to open the ollama-buddy menu
  5. Press t to run your “Fix Turkish Letters” command

Option 2: Creating a dedicated role:

  1. Create the file ~/.emacs.d/ollama-buddy-presets/ollama-buddy--preset__turkish.el with the code block below:
(require 'ollama-buddy)

(setq ollama-buddy-command-definitions
  '(
    ;; Standard commands (always include these)
    (open-chat
     :key ?o
     :description "Open chat buffer"
     :action ollama-buddy--open-chat)
    
    (show-models
     :key ?v
     :description "View model status"
     :action ollama-buddy-show-model-status)
    
    (switch-role
     :key ?R
     :description "Switch roles"
     :action ollama-buddy-roles-switch-role)
    
    (create-role
     :key ?E
     :description "Create new role"
     :action ollama-buddy-role-creator-create-new-role)
    
    (open-roles-directory
     :key ?D
     :description "Open roles directory"
     :action ollama-buddy-roles-open-directory)
    
    (swap-model
     :key ?m
     :description "Swap model"
     :action ollama-buddy--swap-model)
    
    (help
     :key ?h
     :description "Help"
     :action ollama-buddy--menu-help-assistant)
    
    ;; Your custom Turkish command
    (turkish-letters
     :key ?t
     :description "Fix Turkish Letters"
     :model "o:deepseek-r1:7b"
     :prompt "Fill in the missing Turkish letters in the following text:"
     :system "You are an expert in Turkish language. Your task is to correctly add any missing Turkish-specific letters (ç, ğ, ı, ö, ş, ü) to text that may be missing them. Only correct the spelling by adding proper Turkish letters - do not change any words or add any commentary. Return only the corrected text."
     :action (lambda () (ollama-buddy--send-with-command 'turkish-letters)))
    
    (send-region
     :key ?l
     :description "Send region"
     :action (lambda ()
               (let* ((selected-text (when (use-region-p)
                                       (buffer-substring-no-properties
                                        (region-beginning) (region-end)))))
                 (when (not selected-text)
                   (user-error "This command requires selected text"))
                 
                 (ollama-buddy--open-chat)
                 (insert selected-text))))
    ))
  2. In Emacs, press C-c o to open the ollama-buddy menu
  3. Press R to switch roles
  4. Select “turkish” from the list
  5. Now your menu will be simplified and focused on Turkish helpers
  6. Select some Turkish text, open the menu again (C-c o), and press t

Let’s break down what’s happening in the command definition:

(turkish-letters
 :key ?t ; The key to press in the menu
 :description "Fix Turkish Letters" ; What appears in the menu
 :model "o:deepseek-r1:7b"
 :prompt "Fill in the missing Turkish letters in the following text:" ; Instruction sent to the LLM
 :system "You are an expert in Turkish language..."  ; Background instruction for the LLM
 :action (lambda () (ollama-buddy--send-with-command 'turkish-letters)))  ; Function to execute
  • :prompt is what gets added before your selected text. It tells the AI what to do with your text.
  • :system gives the AI context about its role and specific behavioral guidelines. This helps ensure it gives exactly the type of response you want.

The prompt is for the specific task, while the system message shapes the AI’s overall approach. Together, they ensure consistent, specialized behavior.

Tutorial: Setting Up Roles and Custom Menus

Roles in Ollama Buddy allow you to create different configurations of commands and models for specific use cases. This tutorial will guide you through setting up roles, creating custom menus, and effectively using them in your workflow.

Understanding Roles

A role is essentially a preset configuration that defines:

  • Which commands are available in your menu
  • What models to use for specific commands
  • What prompts and system messages to use
  • Any special parameters for optimization

For example, you might have a “programmer” role focused on coding tasks and a “writer” role with writing-focused commands.

Understanding Role File Naming Convention

The file naming convention is critical to understand how roles, preset files, and menu configurations work together:

  • Required filename format: ollama-buddy--preset__ROLE-NAME.el
    • The double underscore __ separates the prefix from your role name
    • The role name portion becomes the identifier shown when switching roles
    • Example: ollama-buddy--preset__programmer.el creates a role named “programmer”

This naming convention is how Ollama Buddy discovers and identifies role files in your roles directory. When you run ollama-buddy-roles-switch-role, the system:

  1. Scans the ollama-buddy-roles-directory for files matching the pattern
  2. Extracts the role name from each filename (the part after __)
  3. Presents these names in the role selection interface
  4. When selected, loads the corresponding file which redefines ollama-buddy-command-definitions
  5. This redefinition immediately changes the available commands in your Ollama Buddy menu

The relationship chain works like this:

ollama-buddy--preset__ROLE-NAME.el → Defines ollama-buddy-command-definitions → Controls menu content

When creating roles using the interactive role creator (C-c E), this naming convention is automatically handled for you. When creating roles manually, you must follow this pattern for Ollama Buddy to recognize your role files correctly.
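
To make the discovery step concrete, it can be approximated in a few lines of Elisp. This is a sketch of the filename pattern matching described above, not the package's actual implementation:

;; Sketch: collect role names by extracting the part after "__" from
;; each matching preset file in the roles directory
(defun my/ollama-buddy-list-role-names ()
  (delq nil
        (mapcar (lambda (file)
                  (when (string-match
                         "\\`ollama-buddy--preset__\\(.+\\)\\.el\\'" file)
                    (match-string 1 file)))
                (directory-files ollama-buddy-roles-directory))))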

You can locate your roles directory with:

;; Check where your roles are stored
(message "%s" ollama-buddy-roles-directory)

;; Or open the directory directly
M-x ollama-buddy-roles-open-directory

By default, this is set to ~/.emacs.d/ollama-buddy-presets/, but you can customize it:

(setq ollama-buddy-roles-directory "/your/custom/path/to/presets")

Creating Custom Roles

There are two ways to create custom roles:

1. Using the Interactive Role Creator

The most user-friendly approach:

  1. Press C-c E or run M-x ollama-buddy-role-creator-create-new-role
  2. Enter a name for your role (e.g., “programmer”)
  3. For each command you want to add:
    • Specify a command name (e.g., “refactor-code”)
    • Choose a key shortcut for the menu
    • Add a description
    • Optionally specify a model
    • Optionally add prompt prefixes and system messages

2. Creating Role Files Manually

For more advanced customization, create role files manually:

  1. Create a file named ollama-buddy--preset__your-role-name.el in your ollama-buddy-roles-directory
  2. Structure your file like this:
;; ollama-buddy preset for role: programmer
(require 'ollama-buddy)

(setq ollama-buddy-command-definitions
  '(
    ;; Standard commands - always include these
    (open-chat
     :key ?o
     :description "Open chat buffer"
     :action ollama-buddy--open-chat)
    
    (show-models
     :key ?v
     :description "View model status"
     :action ollama-buddy-show-model-status)
    
    (switch-role
     :key ?R
     :description "Switch roles"
     :action ollama-buddy-roles-switch-role)
    
    (create-role
     :key ?E
     :description "Create new role"
     :action ollama-buddy-role-creator-create-new-role)
    
    (open-roles-directory
     :key ?D
     :description "Open roles directory"
     :action ollama-buddy-roles-open-directory)
    
    ;; Custom commands for this role
    (refactor-code
     :key ?r
     :description "Refactor code"
     :model "codellama:7b"
     :prompt "Refactor this code to improve readability and efficiency:"
     :system "You are an expert software engineer who improves code quality while maintaining functionality."
     :action (lambda () (ollama-buddy--send-with-command 'refactor-code)))
    
    (explain-code
     :key ?e
     :description "Explain code"
     :model "deepseek-r1:7b"
     :prompt "Explain what this code does in detail:"
     :system "You are a programming teacher who explains code clearly and thoroughly."
     :action (lambda () (ollama-buddy--send-with-command 'explain-code)))
    
    (git-commit
     :key ?g
     :description "Git commit message"
     :prompt "Write a concise git commit message for these changes:"
     :system "You are a version control expert who creates clear, concise commit messages."
     :action (lambda () (ollama-buddy--send-with-command 'git-commit)))
    ))

Switching Between Roles

To switch between roles:

  1. Press C-c R or run M-x ollama-buddy-roles-switch-role
  2. Select a role from the completion list
  3. The menu will update with the commands defined in that role

You can also switch roles from within the Ollama Buddy menu by pressing ‘R’.

Advanced Customization Techniques

Command-Specific Models

Assign specific models to commands for optimal performance:

(ollama-buddy-add-model-to-menu-entry 'refactor-code "codellama:7b")

Command-Specific Parameters

Optimize parameters for specific commands:

(ollama-buddy-add-parameters-to-command 'refactor-code
  'temperature 0.2
  'top_p 0.7
  'repeat_penalty 1.3)

Creating New Commands

Add entirely new commands to your menu:

(ollama-buddy-update-menu-entry 'my-new-command
  :key ?z
  :description "My new awesome command"
  :prompt "Here is what I want you to do:"
  :system "You are an expert system specialized in this task."
  :action (lambda () (ollama-buddy--send-with-command 'my-new-command)))

Example: Creating a “Writer” Role

Here’s a complete example of setting up a writing-focused role:

;; ollama-buddy preset for role: writer
(require 'ollama-buddy)

(setq ollama-buddy-command-definitions
  '(
    ;; Standard commands
    (open-chat
     :key ?o
     :description "Open chat buffer"
     :action ollama-buddy--open-chat)
    
    (show-models
     :key ?v
     :description "View model status"
     :action ollama-buddy-show-model-status)
    
    (switch-role
     :key ?R
     :description "Switch roles"
     :action ollama-buddy-roles-switch-role)
    
    (create-role
     :key ?E
     :description "Create new role"
     :action ollama-buddy-role-creator-create-new-role)
    
    (open-roles-directory
     :key ?D
     :description "Open roles directory"
     :action ollama-buddy-roles-open-directory)
    
    ;; Writing-focused commands
    (summarize
     :key ?s
     :description "Summarize text"
     :prompt "Summarize the following text in a concise manner:"
     :system "You are an expert at extracting the key points from any text."
     :action (lambda () (ollama-buddy--send-with-command 'summarize)))
    
    (proofread
     :key ?p
     :description "Proofread text"
     :model "deepseek-r1:7b"
     :prompt "Proofread the following text and correct any errors:"
     :system "You are a professional editor who identifies and corrects grammar, spelling, and style errors."
     :action (lambda () (ollama-buddy--send-with-command 'proofread)))
    
    (rewrite
     :key ?r
     :description "Rewrite text"
     :prompt "Rewrite the following text to improve clarity and flow:"
     :system "You are a skilled writer who can improve any text while preserving its meaning."
     :action (lambda () (ollama-buddy--send-with-command 'rewrite)))
    
    (brainstorm
     :key ?b
     :description "Brainstorm ideas"
     :model "llama3.2:3b"
     :prompt "Generate creative ideas related to the following topic:"
     :parameters ((temperature . 1.0) (top_p . 0.95))
     :action (lambda () (ollama-buddy--send-with-command 'brainstorm)))
    ))

Save this as ollama-buddy--preset__writer.el in your ollama-buddy-roles-directory.

Tips for Effective Role Usage

  1. Group related commands: Create roles around specific workflows or tasks
  2. Match models to tasks: Use lightweight models for simple tasks and more powerful models for complex ones
  3. Customize system prompts: Craft specific system prompts to guide the model for each command
  4. Use the roles directory: Press C-c D to quickly access and manage your role files
  5. Create specialized roles: Consider roles for programming, writing, translation, or domain-specific knowledge

Core Features in Detail

Chat Interface

The ollama-buddy chat buffer is powered by org-mode, providing enhanced readability and structure. Conversations automatically format user prompts and AI responses with org-mode headings, making them easier to navigate and utilize.

Benefits include:

  • Outlining and heading navigation
  • Org export capabilities
  • Source code fontification

Toggle Markdown to Org conversion with C-c C-o.
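
If you would rather set the conversion once in your init file than toggle it per session, the documented custom variable can be set directly:

;; Always convert markdown responses to org format
(setq ollama-buddy-convert-markdown-to-org t)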

Interface Levels

You can choose between two interface levels depending on your preference:

  • Basic Interface: Shows minimal commands for new users
  • Advanced Interface: Shows all available commands and features

Set your preferred interface level:

(setq ollama-buddy-interface-level 'basic)  ; or 'advanced

By default, the menu is set to Basic. Switch between levels during your session with C-c A.

Conversation History

Ollama Buddy maintains context between your interactions by:

  • Tracking conversation history between prompts and responses
  • Sending previous messages to Ollama for improved contextual responses
  • Displaying a history counter in the status line showing conversation length
  • Providing configurable history length limits to control memory usage

Control this feature with:

;; Enable/disable conversation history (default: t)
(setq ollama-buddy-history-enabled t)

;; Set maximum conversation pairs to remember (default: 10)
(setq ollama-buddy-max-history-length 10)

;; Show/hide the history counter in the header line (default: t)
(setq ollama-buddy-show-history-indicator t)

History-related commands:

  • C-c H: Toggle history tracking on/off
  • C-c X: Clear the current conversation history
  • C-c E: Edit conversation history (universal argument to edit specific model)

Parameter Management

Comprehensive parameter management gives you complete control over your Ollama model’s behavior through API options:

  • All Parameters - Set any custom option for the Ollama LLM at runtime
  • Smart Parameter Management: Only modified parameters are sent to Ollama, preserving defaults
  • Visual Parameter Interface: Clear display showing which parameters are active with highlighting

Access parameter management through keyboard shortcuts:

  • C-c P - Edit a parameter
  • C-c G - Display current parameters
  • C-c I - Show parameter help
  • C-c K - Reset parameters to defaults
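
Parameters can also be set non-interactively from your init file. The sketch below leans on the documented variables ollama-buddy-params-active and ollama-buddy-params-modified; their exact internal structure (an alist of values plus a list of modified symbols) is an assumption on my part, so prefer C-c P for day-to-day use:

;; Sketch, assuming params-active is an alist of (symbol . value) and
;; params-modified is a list of parameter symbols
(with-eval-after-load 'ollama-buddy
  (setf (alist-get 'temperature ollama-buddy-params-active) 0.4)
  (add-to-list 'ollama-buddy-params-modified 'temperature))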

Command-Specific Parameters

You can define specific parameter sets for each command in the menu, enabling optimization for particular use cases:

;; Update properties and parameters at once
(ollama-buddy-update-command-with-params 'describe-code
 :model "codellama:latest"
 :parameters '((temperature . 0.3) (top_p . 0.8)))

This feature is particularly useful for:

  1. Code-related tasks: Lower temperature for more deterministic code generation
  2. Creative writing: Higher temperature for more varied and creative outputs
  3. Technical explanations: Balanced settings for clear, accurate explanations
  4. Summarization tasks: Custom parameters to control verbosity and focus

System Prompt Support

Ollama Buddy supports system prompts, allowing you to set and manage system-level instructions for your AI interactions:

  • Set a system prompt: Use C-u C-c C-c to designate any user prompt as a system prompt
  • Clear system prompt: Use C-u C-u C-c C-c to clear the system prompt
  • View system prompt: Use C-c C-s to display the current system prompt

System prompts remain active across user queries, providing better control over conversation context. The status bar displays an “S” indicator when a system prompt is active.

You can also define system prompts per command:

(ollama-buddy-update-menu-entry
 'refactor-code
 :model "qwen2.5-coder:7b"
 :system "You are an expert software engineer who improves code and only mainly using the principles exhibited by Ada")

Token Usage Tracking

Real-time token tracking helps you monitor usage and performance:

  • Track token counts, rates, and usage history
  • Display token usage statistics (C-c t or menu option)
  • Toggle token stats display after responses
  • Real-time updates via timer
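
The display can also be enabled permanently through the documented custom variable:

;; Show token usage statistics after each response
(setq ollama-buddy-display-token-stats t)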

Multi-model Comparison

With the multishot mode, you can send a prompt to multiple models in sequence and compare their responses:

  • Models are assigned letters for quick selection (e.g., (a) mistral, (b) gemini)
  • Use C-c M to initiate a multishot sequence
  • Status updates track progress during multishot execution

To use multishot mode:

  1. C-c M to start a multishot session
  2. Type a sequence of model letters (e.g., abc to use models a, b, and c)
  3. The selected models process the prompt one by one

Role-Based Presets

Roles in Ollama Buddy are essentially profiles tailored to specific tasks:

  • Store custom roles in ollama-buddy-roles-directory (default: ~/.emacs.d/ollama-buddy-presets/)
  • Switch between roles with M-x ollama-buddy-roles-switch-role or menu option R
  • Create custom roles with M-x ollama-buddy-role-creator-create-new-role or menu option N
  • Open role directory with M-x ollama-buddy-roles-open-directory or menu option D

Preconfigured presets available:

  • ollama-buddy--preset__buffy.el
  • ollama-buddy--preset__default.el
  • ollama-buddy--preset__emacs.el
  • ollama-buddy--preset__developer.el
  • ollama-buddy--preset__janeway.el
  • ollama-buddy--preset__translator.el
  • ollama-buddy--preset__writer.el

Session Management

Session management allows you to save conversations and restore them with relevant context:

  • Save session with ollama-buddy-sessions-save or C-c S
  • Load session with ollama-buddy-sessions-load or C-c L
  • List sessions with ollama-buddy-sessions-list or C-c Y
  • Delete session with ollama-buddy-sessions-delete or C-c W
  • New session with ollama-buddy-sessions-new or C-c N

Sessions preserve model-specific chat history to prevent context contamination across different models.
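
Sessions are stored as files, so you can relocate them by pointing the documented ollama-buddy-sessions-directory variable elsewhere (the path here is just an example):

;; Keep session files under your Emacs directory
(setq ollama-buddy-sessions-directory
      (expand-file-name "ollama-buddy-sessions/" user-emacs-directory))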

Prompt History Support

Prompts are integrated into the Emacs history mechanism and persist across sessions:

  • Use M-p to navigate prompt history in the chat buffer
  • Use M-p / M-n within the minibuffer to insert previous prompts

Advanced: Customizing the Ollama Buddy Menu System

Ollama Buddy provides a flexible menu system that can be easily customized to match your workflow. The menu is built from ollama-buddy-command-definitions, which you can modify or extend in your Emacs configuration.

Basic Structure

Each menu item is defined using a property list with these key attributes:

(command-name
 :key ?k              ; Character for menu selection
 :description "desc"  ; Menu item description
 :model "model-name"  ; Specific Ollama model (optional)
 :prompt "prompt"     ; System prompt (optional)
 :system "system"     ; System prompt (optional)
 :parameters ((param . value)) ; Command-specific parameters (optional)
 :action function)    ; Command implementation

Examples

Adding New Commands

You can add new commands to ollama-buddy-command-definitions in your config:

;; Add a single new command
(add-to-list 'ollama-buddy-command-definitions
               '(pirate
                 :key ?i
                 :description "R Matey!"
                 :model "mistral:latest"
                 :prompt "Translate the following as if I was a pirate:"
                 :action (lambda () (ollama-buddy--send-with-command 'pirate))))

;; Incorporate into a use-package declaration
(use-package ollama-buddy
  :load-path "path/to/ollama-buddy"
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :config
  (add-to-list 'ollama-buddy-command-definitions
               '(pirate
                 :key ?i
                 :description "R Matey!"
                 :model "mistral:latest"
                 :prompt "Translate the following as if I was a pirate:"
                 :action (lambda () (ollama-buddy--send-with-command 'pirate))))
  :custom
  (ollama-buddy-default-model "llama:latest"))

;; Add multiple commands at once
(setq ollama-buddy-command-definitions
      (append ollama-buddy-command-definitions
              '((summarize
                 :key ?u
                 :description "Summarize text"
                 :model "tinyllama:latest"
                 :prompt "Provide a brief summary:"
                 :action (lambda () 
                          (ollama-buddy--send-with-command 'summarize)))
                (translate-spanish
                 :key ?t
                 :description "Translate to Spanish"
                 :model "mistral:latest"
                 :prompt "Translate this text to Spanish:"
                 :action (lambda () 
                          (ollama-buddy--send-with-command 'translate-spanish))))))

Creating a Minimal Setup

You can create a minimal configuration by defining only the commands you need:

;; Minimal setup with just essential commands
(setq ollama-buddy-command-definitions
      '((send-basic
         :key ?l
         :description "Send Basic Region"
         :action (lambda () (ollama-buddy--send-with-command 'send-basic)))

        (quick-define
         :key ?d
         :description "Define word"
         :model "tinyllama:latest"
         :prompt "Define this word:"
         :action (lambda () 
                  (ollama-buddy--send-with-command 'quick-define)))
        (quit
         :key ?q
         :description "Quit"
         :model nil
         :action (lambda () 
                  (message "Quit Ollama Shell menu.")))))

Tips for Custom Commands

  1. Choose unique keys for menu items
  2. Match models to task complexity (small models for quick tasks)
  3. Use clear, descriptive names

Command Properties Reference

| Property     | Description                                 | Required |
|--------------+---------------------------------------------+----------|
| :key         | Single character for menu selection         | Yes      |
| :description | Menu item description                       | Yes      |
| :model       | Specific Ollama model to use                | No       |
| :prompt      | Prompt text prepended to the selected text  | No       |
| :system      | System-level instruction                    | No       |
| :parameters  | Command-specific parameters                 | No       |
| :action      | Function implementing the command           | Yes      |

Model Selection and Fallback Logic

Overview

You can associate specific commands defined in the menu with an Ollama LLM to optimize performance for different tasks. For example, if speed is a priority over accuracy, such as when retrieving synonyms, you might use a lightweight model like TinyLlama or a 1B–3B model. On the other hand, for tasks that require higher precision, like code refactoring, a more capable model such as Qwen-Coder 7B can be assigned to the “refactor” command on the buddy menu system.

Since this package enables seamless model switching through Ollama, the buddy menu can present a list of commands, each linked to an appropriate model. All Ollama interactions share the same chat buffer, ensuring that menu selections remain consistent. Additionally, the status bar on the header line and the prompt itself indicate the currently active model.

Ollama Buddy also includes a model selection mechanism with a fallback system to ensure commands execute smoothly, even if the preferred model is unavailable.

Command-Specific Models

Commands in ollama-buddy-command-definitions can specify preferred models using the :model property. This allows optimizing different commands for specific models:

(defcustom ollama-buddy-command-definitions
  '((refactor-code
     :key ?r
     :description "Refactor code"
     :model "qwen-coder:latest"
     :prompt "refactor the following code:")
    (git-commit
     :key ?g
     :description "Git commit message"
     :model "tinyllama:latest"
     :prompt "write a concise git commit message for the following:")
    (send-region
     :key ?l
     :description "Send region"
     :model "llama:latest"))
  ...)

When :model is nil, the command will use whatever model is currently set as ollama-buddy-default-model.

Fallback Chain

When executing a command, the model selection follows this fallback chain:

  1. Command-specific model (:model property)
  2. Current model (ollama-buddy-default-model)
  3. User selection from available models
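
In rough Elisp terms, the chain behaves like the sketch below. This is a hypothetical helper written to illustrate the documented behaviour, not the package's actual function:

;; Sketch of the documented fallback chain
(defun my/ollama-buddy-resolve-model (command-model available-models)
  "Return COMMAND-MODEL if available, else the default, else ask the user."
  (cond ((and command-model (member command-model available-models))
         command-model)
        ((member ollama-buddy-default-model available-models)
         ollama-buddy-default-model)
        (t (completing-read "Ollama model: " available-models))))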

Configuration Options

Setting the Fallback Model

(setq ollama-buddy-default-model "llama:latest")

User Interface Feedback

When a fallback occurs, Ollama Buddy provides clear feedback:

  • The header line shows which model is being used
  • If using a fallback model, an orange warning appears showing both the requested and actual model
  • The model status can be viewed using the “View model status” command (v key)

Customization

Basic Customization

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :custom
  ;; Set default model
  (ollama-buddy-default-model "llama3:latest")
  ;; Set interface level (basic or advanced)
  (ollama-buddy-interface-level 'advanced)
  ;; Enable model colors
  (ollama-buddy-enable-model-colors t)
  ;; Set menu columns
  (ollama-buddy-menu-columns 3))

Remote Ollama Server

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :custom
  (ollama-buddy-host "http://<remote-server>")
  (ollama-buddy-port 11400))

Command-Specific Models

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :config
  (ollama-buddy-add-model-to-menu-entry 'dictionary-lookup "tinyllama:latest")
  (ollama-buddy-add-model-to-menu-entry 'synonym "tinyllama:latest"))

Command-Specific System Prompts

(use-package ollama-buddy
  :bind
  ("C-c o" . ollama-buddy-menu)
  ("C-c O" . ollama-buddy-transient-menu-wrapper)
  :config
  (ollama-buddy-update-menu-entry
   'refactor-code
   :model "qwen2.5-coder:7b"
   :system "You are an expert software engineer who improves code using Ada principles"))

Variables

| Custom variable | Description |
|-----------------+-------------|
| ollama-buddy-awesome-categorize-prompts | Whether to categorize prompts based on common keywords. |
| ollama-buddy-awesome-prompts-file | Filename containing the prompts within the repository. |
| ollama-buddy-grok-api-endpoint | Endpoint for Grok chat completions API. |
| ollama-buddy-remote-models | List of available remote models. |
| ollama-buddy-command-definitions | Comprehensive command definitions for Ollama Buddy. |
| ollama-buddy-enable-model-colors | Whether to show model colors. |
| ollama-buddy-interface-level | Level of interface complexity to display. |
| ollama-buddy-show-params-in-header | Whether to show modified parameters in the header line. |
| ollama-buddy-debug-mode | When non-nil, show raw JSON messages in a debug buffer. |
| ollama-buddy-mode | Non-nil if Ollama-Buddy mode is enabled. |
| ollama-buddy-grok-marker-prefix | Prefix used to identify Grok models in the ollama-buddy interface. |
| ollama-buddy-reasoning-markers | List of marker pairs that encapsulate reasoning/thinking sections. |
| ollama-buddy-awesome-update-on-startup | Whether to automatically update prompts when Emacs starts. |
| ollama-buddy-context-bar-width | Width of the context progress bar in characters. |
| ollama-buddy-sessions-directory | Directory containing ollama-buddy session files. |
| ollama-buddy-history-model-view-mode-map | Keymap for model-specific history viewing mode. |
| ollama-buddy-menu-columns | Number of columns to display in the Ollama Buddy menu. |
| ollama-buddy-gemini-api-endpoint | Endpoint format for Google Gemini API. |
| ollama-buddy-fabric-pattern-categories | List of pattern categories to focus on when listing patterns. |
| ollama-buddy-status-update-interval | Interval in seconds to update the status line with background operations. |
| ollama-buddy-fabric-update-on-startup | Whether to automatically update patterns when Emacs starts. |
| ollama-buddy-available-models | List of available models to pull from Ollama Hub. |
| ollama-buddy-grok-max-tokens | Maximum number of tokens to generate in the response. |
| ollama-buddy-claude-max-tokens | Maximum number of tokens to generate in the response. |
| ollama-buddy-openai-marker-prefix | Prefix to indicate that a model is from OpenAI rather than Ollama. |
| ollama-buddy-history-enabled | Whether to use conversation history in Ollama requests. |
| ollama-buddy-grok-api-key | API key for accessing Grok services. |
| ollama-buddy-grok-default-model | Default Grok model to use. |
| ollama-buddy-claude-marker-prefix | Prefix used to identify Claude models in the model list. |
| ollama-buddy-roles-directory | Directory containing ollama-buddy role preset files. |
| ollama-buddy-history-view-mode-map | Keymap for history viewing mode. |
| ollama-buddy-fabric-local-dir | Local directory where Fabric patterns will be stored. |
| ollama-buddy-show-history-indicator | Whether to show the history indicator in the header line. |
| ollama-buddy-context-size-thresholds | Thresholds for context usage warnings. |
| ollama-buddy-mode-map | Keymap for ollama-buddy mode. |
| ollama-buddy-marker-prefix | Prefix used to identify Ollama models in the ollama-buddy interface. |
| ollama-buddy-fabric-patterns-subdir | Subdirectory within the Fabric repo containing the patterns. |
| ollama-buddy-context-display-type | How to display context usage in the status bar. |
| ollama-buddy-awesome-repo-url | URL of the Awesome ChatGPT Prompts GitHub repository. |
| ollama-buddy-openai-api-endpoint | Endpoint for OpenAI chat completions API. |
| ollama-buddy-vision-models | List of models known to support vision capabilities. |
| ollama-buddy-params-active | Currently active values for Ollama API parameters. |
| ollama-buddy-openai-default-model | Default OpenAI model to use. |
| ollama-buddy-claude-default-model | Default Claude model to use. |
| ollama-buddy-params-modified | Set of parameters that have been explicitly modified by the user. |
| ollama-buddy-gemini-temperature | Temperature setting for Gemini requests (0.0-1.0). |
| ollama-buddy-current-session-name | The name of the currently loaded session. |
| ollama-buddy-host | Host where Ollama server is running. |
| ollama-buddy-image-formats | List of regular expressions matching supported image file formats. |
| ollama-buddy-streaming-enabled | Whether to use streaming mode for responses. |
| ollama-buddy-max-history-length | Maximum number of message pairs to keep in conversation history. |
| ollama-buddy-gemini-default-model | Default Gemini model to use. |
| ollama-buddy-default-model | Default Ollama model to use. |
| ollama-buddy-claude-temperature | Temperature setting for Claude requests (0.0-1.0). |
| ollama-buddy-vision-enabled | Whether to enable vision support for models that support it. |
| ollama-buddy-awesome-prompt-variable | |
| ollama-buddy-display-token-stats | Whether to display token usage statistics in responses. |
| ollama-buddy-show-context-percentage | Whether to show context percentage in the status bar. |
| ollama-buddy-claude-api-endpoint | Endpoint for Anthropic Claude API. |
| ollama-buddy-gemini-api-key | API key for accessing Google Gemini services. |
| ollama-buddy-openai-max-tokens | Maximum number of tokens to generate in the response. |
| ollama-buddy-claude-api-key | API key for accessing Anthropic Claude services. |
| ollama-buddy-params-profiles | Predefined parameter profiles for different usage scenarios. |
| ollama-buddy-awesome-local-dir | Local directory where Awesome ChatGPT Prompts will be stored. |
| ollama-buddy-params-defaults | Default values for Ollama API parameters. |
| ollama-buddy-mode-line-segment | Mode line segment for Ollama Buddy. |
| ollama-buddy-grok-temperature | Temperature setting for Grok requests (0.0-1.0). |
| ollama-buddy-port | Port where Ollama server is running. |
| ollama-buddy-default-register | Default register to store the current response when not in multishot mode. |
| ollama-buddy-fabric-repo-url | URL of the Fabric GitHub repository. |
| ollama-buddy-gemini-max-tokens | Maximum number of tokens to generate in the response. |
| ollama-buddy-modelfile-directory | Directory for storing temporary Modelfiles. |
| ollama-buddy-fallback-context-sizes | Mapping of model names to their default context sizes. |
| ollama-buddy-context-bar-chars | Characters used to draw the context progress bar. |
| ollama-buddy-openai-temperature | Temperature setting for OpenAI requests (0.0-2.0). |
| ollama-buddy-convert-markdown-to-org | Whether to automatically convert markdown to org-mode format in responses. |
| ollama-buddy-mode-hook | Hook run after entering or leaving ollama-buddy-mode. |
| ollama-buddy-gemini-marker-prefix | Prefix used to identify Gemini models in the ollama-buddy interface. |
| ollama-buddy-hide-reasoning | When non-nil, hide reasoning/thinking blocks from the stream output. |
| ollama-buddy-openai-api-key | API key for accessing OpenAI services. |

Interactive functions

| Command | Description |
|---------+-------------|
| ollama-buddy-open-info | Open the Info manual for the ollama-buddy package. |
| ollama-buddy-dired-import-gguf | Import the GGUF file at point in Dired into Ollama. |
| ollama-buddy-toggle-markdown-conversion | Toggle automatic conversion of markdown to org-mode format. |
| ollama-buddy-fabric-send | Apply a Fabric pattern to the selected text and send to Ollama. |
| ollama-buddy-reset-system-prompt | Reset the system prompt to default (none). |
| ollama-buddy-beginning-of-prompt | Move point to the beginning of the current prompt. |
| ollama-buddy-toggle-streaming | Toggle streaming mode for Ollama responses. |
| ollama-buddy-toggle-reasoning-visibility | Toggle visibility of reasoning/thinking sections in responses. |
| ollama-buddy-history-toggle-edit-mode | Toggle between viewing and editing modes for history. |
| ollama-buddy-mode | Minor mode for ollama-buddy keybindings. |
| ollama-buddy-roles-switch-role | Switch to a different ollama-buddy role. |
| ollama-buddy-pull-model | Pull or update MODEL from Ollama Hub asynchronously. |
| ollama-buddy-history-edit | View and edit the conversation history in a buffer. |
| ollama-buddy-previous-history | Navigate to previous item in prompt history. |
| ollama-buddy-toggle-params-in-header | Toggle display of modified parameters in the header line. |
| ollama-buddy-display-token-graph | Display a visual graph of token usage statistics. |
| ollama-buddy-toggle-token-display | Toggle display of token statistics after each response. |
| ollama-buddy-show-raw-model-info | Retrieve and display raw JSON information about the current default MODEL. |
| ollama-buddy-history-cancel | Cancel editing the history. |
| ollama-buddy-sessions-load | Load an Ollama Buddy session with improved org file handling. |
| ollama-buddy-transient-menu-wrapper | Wrapper function for safely loading the Ollama Buddy transient menu. |
| ollama-buddy-sessions-list | Display a list of saved sessions. |
| ollama-buddy-toggle-context-percentage | Toggle display of context percentage in the status bar. |
| ollama-buddy-transient-awesome-menu | Awesome ChatGPT Prompts menu for ollama-buddy. |
| ollama-buddy-show-context-info | Show detailed information about context sizes for all models. |
| ollama-buddy-fabric-set-system-prompt | Set the system prompt to a Fabric pattern without sending a request. |
| ollama-buddy-params-help | Display help for Ollama parameters. |
| ollama-buddy-awesome-send | Apply an Awesome ChatGPT Prompt to the selected text and send to Ollama. |
| ollama-buddy-sessions-new | Start a new session by clearing history and buffer. |
| ollama-buddy-show-system-prompt | Display the current system prompt in a buffer. |
| ollama-buddy-awesome-show-prompts-menu | Show a transient menu of Awesome ChatGPT Prompts organized by category. |
| ollama-buddy-awesome-populate-prompts | Populate the list of available prompts from the local repository. |
| ollama-buddy-awesome-set-system-prompt | Set the system prompt to an Awesome ChatGPT Prompt without sending a request. |
| ollama-buddy-params-edit | Edit a specific parameter PARAM interactively. |
| ollama-buddy-awesome-list-prompts | Display a list of available Awesome ChatGPT Prompts. |
| ollama-buddy-transient-parameter-menu | Parameter menu for Ollama Buddy. |
| ollama-buddy-toggle-debug-mode | Toggle display of raw JSON messages in a debug buffer. |
| ollama-buddy-awesome-show-prompt | Display the full content of a prompt with FORMATTED-NAME. |
| ollama-buddy-toggle-model-colors | Toggle the use of model-specific colors in ollama-buddy. |
| ollama-buddy-awesome-setup | Set up the ollama-buddy-awesome package. |
| ollama-buddy-fabric-show-pattern | Display the full content of a PATTERN. |
| ollama-buddy-fabric-sync-patterns | Sync the latest patterns from the Fabric GitHub repository. |
| ollama-buddy-reset-all-prompts | Reset both system prompt and suffix to default (none). |
| ollama-buddy-fabric-list-patterns | Display a list of available Fabric patterns with descriptions. |
| ollama-buddy-history-search | Search through the prompt history using a completing-read interface. |
| ollama-buddy-toggle-history | Toggle conversation history on/off. |
| ollama-buddy-display-token-stats | Display token usage statistics. |
| ollama-buddy-role-creator-create-new-role | Create a new role interactively. |
| ollama-buddy-roles-open-directory | Open the ollama-buddy roles directory in Dired. |
| ollama-buddy-params-reset | Reset all parameters to default values and clear modification tracking. |
| ollama-buddy-menu | Display the Ollama Buddy menu with support for prefixed model references. |
| ollama-buddy-unload-all-models | Unload all currently running Ollama models to free up resources. |
| ollama-buddy-history-toggle-edit-model | Toggle between viewing and editing modes for MODEL history. |
| ollama-buddy-transient-menu | Ollama Buddy main menu. |
| ollama-buddy-fabric-setup | Set up the ollama-buddy-fabric package. |
| ollama-buddy-manage-models | Update the model management interface to include unload capabilities. |
| ollama-buddy-history-edit-model | Edit the conversation history for a specific MODEL. |
| ollama-buddy-roles-create-directory | Create the ollama-buddy roles directory if it doesn't exist. |
| ollama-buddy-history-save-model | Save the edited history for MODEL back to its variable. |
| ollama-buddy-toggle-context-display-type | Toggle between text and bar display for context usage. |
| ollama-buddy-set-max-history-length | Set the maximum number of message pairs to keep in conversation history. |
| ollama-buddy-reset-suffix | Reset the suffix to default (none). |
| ollama-buddy-show-model-status | Display status of models referenced in command definitions with color coding. |
| ollama-buddy-next-history | Navigate to next item in prompt history. |
| ollama-buddy-clear-history | Clear the conversation history. |
| ollama-buddy-awesome-sync-prompts | Sync the latest prompts from the Awesome ChatGPT Prompts GitHub repository. |
| ollama-buddy-set-model-context-size | Manually set the context size for MODEL to SIZE. |
| ollama-buddy-sessions-delete | Delete an Ollama Buddy session. |
| ollama-buddy-transient-profile-menu | Parameter profiles menu for Ollama Buddy. |
| ollama-buddy-set-suffix | Set the current prompt as a suffix. |
| ollama-buddy-set-system-prompt | Set the current prompt as a system prompt. |
| ollama-buddy-history-save | Save the edited history back to ollama-buddy--conversation-history-by-model. |
| ollama-buddy-import-gguf-file | Import a GGUF file at FILE-PATH into Ollama. |
| ollama-buddy-transient-commands-menu | Commands menu for Ollama Buddy. |
| ollama-buddy-toggle-interface-level | Toggle between basic and advanced interface levels. |
| ollama-buddy-transient-fabric-menu | Fabric patterns menu for Ollama Buddy. |
| ollama-buddy-params-display | Display the current Ollama parameter settings. |
| ollama-buddy-fabric-populate-patterns | Populate the list of available patterns from the local repository. |
| ollama-buddy-sessions-save | Save the current Ollama Buddy session. |


ollama API Support Tables

Core API

| API Endpoint    | Method | Purpose                               | Supported? | Will ollama-buddy support?          |
|-----------------+--------+---------------------------------------+------------+-------------------------------------|
| /api/chat       | POST   | Send chat messages to a model         | Yes        |                                     |
| /api/generate   | POST   | Generate text without chat context    | No         | Probably not, as chat covers it     |
| /api/delete     | DELETE | Delete available models               | Yes        |                                     |
| /api/tags       | GET    | List available models                 | Yes        |                                     |
| /api/pull       | POST   | Pull a model from the Ollama library  | Yes        |                                     |
| /api/push       | POST   | Push a model to the Ollama library    | No         | No                                  |
| /api/create     | POST   | Create a model from a Modelfile       | Yes        |                                     |
| /api/show       | POST   | Show model information                | Yes        |                                     |
| /api/ps         | GET    | List running models                   | Yes        |                                     |
| /api/copy       | POST   | Copy a model                          | Yes        |                                     |
| /api/embed      | POST   | Generate embeddings from text         | No         | Probably, when I can figure it out  |
| /api/version    | GET    | Get Ollama version                    | Yes        |                                     |
| Cancel requests | Custom | Terminate ongoing request             | Yes        |                                     |
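
For reference, this is what a raw /api/chat call looks like from Emacs using only the built-in url and json libraries, independent of ollama-buddy. The model name and host are examples; the endpoint and payload shape follow the public Ollama API:

(require 'json)
(require 'url)

;; Minimal non-streaming chat request against a local Ollama server
(let ((url-request-method "POST")
      (url-request-extra-headers '(("Content-Type" . "application/json")))
      (url-request-data
       (json-encode '((model . "tinyllama:latest")
                      (stream . :json-false)
                      (messages . [((role . "user")
                                    (content . "Why is the sky blue?"))])))))
  (url-retrieve "http://localhost:11434/api/chat"
                (lambda (_status)
                  ;; Skip the HTTP headers, then print the JSON body
                  (goto-char (point-min))
                  (re-search-forward "\n\n" nil t)
                  (message "%s" (buffer-substring (point) (point-max))))))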

Core Params

| Parameter  | Supported | Notes                                            | Will ollama-buddy support? |
|------------+-----------+--------------------------------------------------+----------------------------|
| model      | Yes       | Can be specified per command or globally         |                            |
| prompt     | Yes       | Core feature with extensive prompt handling      |                            |
| suffix     | Yes/No    | Can be set with ollama-buddy-set-suffix          |                            |
| images     | No        |                                                  | Unlikely, this is Emacs!   |
| format     | No        |                                                  | Maybe                      |
| options    | Yes       | Full implementation of all model parameters      |                            |
| system     | Yes       | Can be set with ollama-buddy-set-system-prompt   |                            |
| template   | No        |                                                  | Maybe                      |
| stream     | Yes       | Toggleable with ollama-buddy-toggle-streaming    |                            |
| raw        | No        |                                                  | Might as well              |
| keep_alive | No        | Uses default Ollama behavior                     | Yes                        |

Model Options Parameters

| Parameter Group       | Parameters                                                                    | Supported |
|-----------------------+-------------------------------------------------------------------------------+-----------|
| Temperature Controls  | temperature, top_k, top_p, min_p, typical_p                                   | Yes       |
| Repetition Controls   | repeat_last_n, repeat_penalty, presence_penalty, frequency_penalty            | Yes       |
| Advanced Sampling     | mirostat, mirostat_tau, mirostat_eta, penalize_newline, stop                  | Yes       |
| Resource Management   | num_keep, seed, num_predict, num_ctx, num_batch                               | Yes       |
| Hardware Optimization | numa, num_gpu, main_gpu, low_vram, vocab_only, use_mmap, use_mlock, num_thread | Yes       |

Bugs

TODO

  • GGUF to be asynchronous
  • Better attachment signalling
  • Modifying a parameter internal defaults would flag as not modified
  • No streaming with hiding reasoning corrupts output
  • Pushing hundreds or even thousands of lines to the chat buffer takes a while
  • Check tags caching
  • keybindings in ollama chat buffer are reserved

DOING

  • Kill online AI process with cancel request

DONE

  • Claude request and connection can sometimes take time and it blocks!
  • allow switch to remote LLMs even when ollama not running
  • Vision image with spaces not processed as image
  • Save system prompts as part of session
  • chat ollama names to include prefix
  • Still some encoding issues on returns from online AI
  • Conversation history to display with Claude/ChatGPT
  • I think some hard coded Claude models currently don’t work
  • register contents to be converted to org, currently it is in markdown
  • C-a beginning-of-line to jump to prompt start like comint
  • As shell generally C-n to end of list to clear prompt history
  • Add Texinfo to MELPA recipe
  • System prompt not cleared on new session
  • Remove fabric from Commands transient menu
  • External AI to copy to register
  • First request with no chat buffer fails


Roadmap

TODO

  • Autoload intelligently?
  • Core API - Embeddings
  • Core Params - format
  • Core Params - template
  • Core Params - raw
  • Test on Windows

DOING

  • Add option to hide reasoning in chat buffer
  • Core Params - suffix (I think this is only for generate, but I have quietly added it in)

DONE

  • Unload all models
  • check context size of current model when querying
  • Show token parameters as part of token stats
  • Autogenerate session save name
  • Add gemini?
  • Pull ChatGPT and Claude model names
  • Merge history buffer functionality
  • Show incoming ollama messages
  • Add chatGPT awesome prompt support
  • Add fabric pattern/prompt support
  • ChatGPT support
  • Claude support
  • Add new ChatGPT 4.1
  • Method for securely storing API keys
  • Core API - Delete Model
  • Core API - Copy a model
  • Core API - stream
  • Core API - Pull/Info/Pull/Delete a model
  • Core API - Show running models
  • Core API - Create a model from a GGUF file
  • Add transient menu
  • History editing
  • Core API - options - parameter editing
  • Core API - show model information
  • Core API - system prompt
  • Chat buffer to use org-mode
  • Chat export options
  • Core Params - options:temperature
  • Managing Sessions
  • Sparse minified version
  • Role-Based Menu Preset System
  • Add to MELPA
  • Enhancing Menu Clarity
  • Real time token tracking
  • Managing chat history and context


Alternative LLM based packages

To the best of my knowledge, there are currently a few Emacs packages related to Ollama, though the ecosystem is still relatively young:

  1. llm.el (by Andrew Hyatt)
    • A more general LLM interface package that supports Ollama as one of its backends
    • GitHub: https://github.com/ahyatt/llm
    • Provides a more abstracted approach to interacting with language models
    • Supports multiple backends including Ollama, OpenAI, and others
  2. gptel (by Karthik Chikmagalur)
    • While primarily designed for ChatGPT and other online services, it has experimental Ollama support
    • GitHub: https://github.com/karthink/gptel
    • Offers a more integrated chat buffer experience
    • Has some basic Ollama integration, though it’s not the primary focus
  3. chatgpt-shell (by xenodium)
    • A mature, shell-based (comint) interface for interacting with LLMs from Emacs
    • GitHub: https://github.com/xenodium/chatgpt-shell
  4. ellama (by s-kostyaev)
    • A comprehensive Emacs package for interacting with local LLMs through Ollama
    • GitHub: https://github.com/s-kostyaev/ellama
    • Features deep org-mode integration and extensive prompt templates
    • Offers streaming responses and structured interaction patterns
    • More complex but feature-rich approach to local LLM integration

Alternative package comparison

Let’s compare ollama-buddy to the existing solutions:

  1. llm.el
    • Pros:
      • Provides a generic LLM interface
      • Supports multiple backends
      • More abstracted and potentially more extensible

    ollama-buddy, by contrast:

    • Is directly focused on Ollama
    • Is lightweight and Ollama-native
    • Provides a more interactive, menu-driven approach
    • Is simpler to set up for Ollama specifically
  2. gptel
    • Pros:
      • Sophisticated chat buffer interface
      • Active development
      • Good overall UX

    ollama-buddy differentiates itself by:

    • Being purpose-built for Ollama
    • Offering a more flexible, function-oriented approach
    • Providing a quick, lightweight interaction model
    • Having a minimal, focused design
  3. chatgpt-shell
    • Pros:
      • Mature shell-based interaction model
      • Rich interaction capabilities

    ollama-buddy stands out by:

    • Being specifically designed for Ollama
    • Offering a simpler, more direct interaction model
    • Providing a quick menu-based interface
    • Having minimal dependencies
  4. ellama
    • Pros:
      • Tight integration with Emacs org-mode
      • Extensive built-in prompt templates
      • Support for streaming responses
      • Good documentation and examples

    ollama-buddy differs by:

    • Having a simpler, more streamlined setup process
    • Providing a more lightweight, menu-driven interface
    • Focusing on quick, direct interactions from any buffer
    • Having minimal dependencies and configuration requirements

Issues

Report issues on the GitHub Issues page

Error when model numbers is more than alphabet letters #15

If the local Ollama server has more than 26 models, the create-intro-message function produces an error, since the alphabet has only 26 letters.

Not possible to pass path to image? #14

In the ollama CLI, when using models with vision support (e.g. ollama run gemma3:4b), you can say something like “extract the text from this image: /home/user/Downloads/screenshot.png”; the CLI detects the valid path, passes the file to the gemma model as a file parameter, and the model can then analyze the actual file. This is not possible when using ollama-buddy.

Role stuff is difficult to understand #13

I am simply trying to add a new role which should, by default, add the missing Turkish letters to any paragraph of text I send to it. However, ollama-buddy’s role system is quite confusing to me. How do I set it up? I see no explanation of what I am doing when I try to add a role like that.

Getting constant lisp nesting exceeds errors in latest versions #12

I just updated ollama-buddy’s source to its latest version, here’s my use-package config:

(use-package ollama-buddy
  :if (file-directory-p "/home/user/.local/gits/emacs/ollama-buddy")
  :vc (:url "file:///home/user/.local/gits/emacs/ollama-buddy"
			:rev "870233e5d83da133b42fa7e9f57fcd6909f74502")
  :config
  (setq ollama-buddy-current-model "gemma3:12b-it-qat")
  (setq ollama-buddy-menu-columns 4)
  (setq ollama-buddy-host "localhost")
  (setq ollama-buddy-port 11434)
  (setq ollama-buddy-connection-check-interval 1))

After byte-compiling, I cannot use ollama-buddy. When I send a prompt to my ollama backend, I constantly get Debugger entered--Lisp error: (excessive-lisp-nesting 1601). Older versions of ollama-buddy worked without this error.

stopping the thinking text output #11 #2

As for context size, all I want is for the model to report the maximum size it can handle, the context size it received, what it considered as input context, what it dropped, and the output context size for each call. Visually, all I want is red/amber/green colouring to know where it cut off (red), was OK (green), or slightly exceeded the context size (amber), based on whatever metrics the model reports (if it tracks that per API call at all).

stopping the thinking text output #11

Thanks for the package, I’m slowly getting into the groove of using it regularly.

I have a laptop without a graphics card for LLM work, so I get about 2-3 tokens/sec. The newer models produce reasoning/thinking text output that is huge and takes up time. Is there a way to toggle it off during the session, or through config on a per-model basis, either as a binary switch or a config parameter?

While at it, how do I set the context length to a larger size than the default, and how do I check that it is respected? On a per-model basis, when you report the token stats, would you consider including info about the temperature, top-k, and context length that were active during the session?

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Open a pull request

License

MIT License

Acknowledgments

  • Ollama for making local LLM inference accessible
  • Emacs community for continuous inspiration
