
Enhanced Model Management and SystemPrompt Integration in OllamaAgent #351


Merged: 15 commits into PowerShell:main on Apr 25, 2025

Conversation

@cnupy (Contributor) commented Mar 5, 2025

PR Summary

Feature 1: Support for Multiple Models

Description

OllamaAgent can now handle and switch between multiple models seamlessly, listing the available models dynamically via the API.

Implementation Details

  • Modified the agent to include a model selection mechanism that allows it to iterate through the available models.
  • Introduced a `model` command to switch between the available models (see the sketch below).
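A minimal sketch of the dynamic listing, assuming the OllamaSharp client used by the agent (the endpoint address and command wiring here are illustrative, not the PR's exact code):

```csharp
using OllamaSharp;

// Connect to the local Ollama endpoint (address is illustrative).
var client = new OllamaApiClient(new Uri("http://localhost:11434"));

// List the locally available models so the `model` command can offer them.
var models = await client.ListLocalModelsAsync();
foreach (var model in models)
{
    Console.WriteLine(model.Name);
}

// Switching is then a matter of pointing the client at the chosen model.
client.SelectedModel = "llama3.1:8b-instruct-q5_K_M";
```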

Feature 2: Integration of SystemPrompt

Description

A SystemPrompt can now be configured alongside the selected model in OllamaAgent, giving users a toolkit for task-oriented conversations.

Implementation Details

  • Modified the agent to accommodate Ollama's SystemPrompt functionality.
  • Introduced a `system-prompt` command to manage the setting (see the sketch below).
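A sketch of carrying a system prompt with a generate request in OllamaSharp (the model name and prompt text are illustrative; streaming details may vary across client versions):

```csharp
using OllamaSharp;
using OllamaSharp.Models;

var client = new OllamaApiClient(new Uri("http://localhost:11434"));

// The `system-prompt` command would persist this string in the agent's settings.
var request = new GenerateRequest
{
    Model = "phi4-mini:3.8b-q4_K_M",
    System = "You are a PowerShell expert. Answer with runnable commands.",
    Prompt = "How do I list stopped services?",
};

// Stream the response tokens as they arrive.
await foreach (var token in client.GenerateAsync(request))
{
    Console.Write(token?.Response);
}
```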

Feature 3: Support for Predefined ModelConfig Sets

Description

OllamaAgent now supports the use of predefined model configurations, allowing users to easily switch between different models and system prompts based on specific requirements.

Implementation Details

  • Introduced a `ModelConfig` record to encapsulate the data for each configuration (sketched below).
  • Introduced a `config` command to switch between the available predefined sets.
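As a shape sketch, such a record might bundle the pieces like this (field names are assumptions, not the PR's exact schema):

```csharp
// One named preset pairing a model with its system prompt.
// Field names are illustrative, not the PR's exact schema.
internal record ModelConfig(
    string Name,          // preset name used by the `config` command
    string ModelName,     // e.g. "llama3.1:8b-instruct-q5_K_M"
    string SystemPrompt,  // task-oriented instructions for the model
    string Description);  // shown when listing the available presets
```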

PR Context

This pull request enhances OllamaAgent with improved model management. Seamless switching between multiple models and support for predefined configurations give users greater flexibility, while the SystemPrompt functionality provides a more task-oriented conversational toolkit. Together, these updates improve the user experience and the agent's adaptability to diverse requirements, making OllamaAgent a more versatile and powerful tool overall.

@daxian-dbw (Member) commented

@kborowinski @cnupy After upgrading to v0.5.13 of Ollama, the OllamaAgent stops working even from the main branch ... After sending a query, it just spins forever, and after canceling the request, the server side displays the following record:

[image]

I moved to OllamaSharp v5.1.4, but got the same result. Any idea what could be the problem?

@kborowinski (Contributor) commented Mar 11, 2025

@daxian-dbw I'm on the latest AIShell build from main and Ollama v0.5.13 with llama3.1:8b-instruct-q5_K_M, and it works. What model are you running? Has anything changed on the computer where you run AIShell, like drivers? What does your config look like?

[animation]

And this is a different computer, graphics card, and model (phi4-mini:3.8b-q4_K_M), and it works as well:

[animation]

Try stopping all Ollama processes and then starting Ollama again:

```powershell
Get-Process ollama* | Stop-Process -Force
```

@cnupy (Contributor, Author) commented Mar 11, 2025

Everything's working fine for me as well. I'm using Ollama 0.5.4:

```
/llm/ollama# ./ollama -v
ggml_sycl_init: found 1 SYCL devices:
ollama version is 0.5.4-ipexllm-20250310
```

@daxian-dbw (Member) left a comment

Sorry for the delay! The ollama agent from the main branch works for me now (I have no idea why it didn't work weeks before; I guess it was something specific to my system). I left some comments, but haven't finished reviewing the Command.cs file. Will finish it tomorrow.

@daxian-dbw (Member) left a comment

Done with my 2nd pass of review :)

@daxian-dbw (Member) commented Apr 2, 2025

@cnupy Thanks for the quick updates. Now only those 4 remaining comments are relevant (I left follow-up comments). All the rest looks good.

@cnupy (Contributor, Author) commented Apr 5, 2025

> @cnupy Thanks for the quick updates. Now only those 4 remaining comments are relevant (I left follow-up comments). All the rest looks good.

I've resolved the four remaining comments. Additionally, I renamed the Configs property to Presets for better clarity. I've also carried out some additional refactoring to properly implement the running-model check. Furthermore, I moved PerformSelfcheck from OllamaAgent to Settings and optimized it so that it can be utilized in both locations. Let me know if there's anything else you'd like adjusted.

@daxian-dbw (Member) commented

@cnupy To be more efficient, I went ahead and made some refactorings directly. The main changes are:

  1. Updated PerformSelfcheck to account for the RunningConfig check as well, while still making it possible to do the endpoint check only (see the sketch below). With this change, we only need one call to PerformSelfcheck, and then we can directly use _settings.RunningConfig.ModelName.
  2. Updated EnsureModelsInitialized to skip the endpoint check when the passed-in host is null.
  3. Used async actions for commands where applicable.

Please review my changes and see if you have any concerns. Thanks!
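As a rough sketch of the consolidated check described in item 1 (the signature and helper names here are hypothetical; the real implementation lives in Settings):

```csharp
// Hypothetical shape of the consolidated self-check.
internal async Task<bool> PerformSelfcheck(IHost host, bool checkEndpointOnly = false)
{
    // Always verify that the Ollama endpoint is reachable.
    if (!await IsEndpointReachable(host))
    {
        return false;
    }

    // Optionally also verify the RunningConfig, so callers can rely on
    // _settings.RunningConfig.ModelName right after a successful check.
    return checkEndpointOnly || await IsRunningConfigValid(host);
}
```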

@daxian-dbw (Member) commented Apr 23, 2025

BTW, the Ollama server writes out a warning about using the `context` field:

```
[GIN] 2025/04/23 - 13:05:47 | 200 | 9.6578815s | 127.0.0.1 | POST "/api/generate"
time=2025-04-23T13:06:59.988-07:00 level=WARN source=routes.go:274 msg="the context field is deprecated and will be removed in a future version of Ollama"
```

So, the ollama agent code will need to be updated accordingly to avoid breaking (tracked by #375).

@cnupy (Contributor, Author) left a comment

Thanks for the effort. Everything looks good to me! I just have a small suggestion to make the code logic easier to follow.

@cnupy (Contributor, Author) commented Apr 24, 2025

> the context field is deprecated and will be removed in a future version of Ollama

Yes, the correct way is to use the chat endpoint instead of the generate endpoint (the ChatAsync method in the OllamaSharp client), as mentioned previously.
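A sketch of that switch using OllamaSharp's chat API (the model name and prompts are illustrative; message shapes come from OllamaSharp.Models.Chat and may differ slightly across versions):

```csharp
using OllamaSharp;
using OllamaSharp.Models.Chat;

var client = new OllamaApiClient(new Uri("http://localhost:11434"));

// The chat endpoint carries history as messages, so the deprecated
// `context` field of /api/generate is no longer needed.
var request = new ChatRequest
{
    Model = "llama3.1:8b-instruct-q5_K_M",
    Messages = new List<Message>
    {
        new(ChatRole.System, "You are a PowerShell expert."),
        new(ChatRole.User, "How do I list stopped services?"),
    },
};

// Stream the assistant's reply token by token.
await foreach (var stream in client.ChatAsync(request))
{
    Console.Write(stream?.Message?.Content);
}
```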

@daxian-dbw merged commit ad02261 into PowerShell:main on Apr 25, 2025
4 checks passed
@daxian-dbw (Member) commented

@cnupy Thank you for the contribution!
