
Add new clients and update README with usage instructions #56


Merged
merged 3 commits into xlab-uiuc:main on Apr 23, 2025

Conversation

Flemington8

Introduce DeepSeek and Qwen clients for AIOpsLab, refactor existing clients to utilize GPTClient, and enhance the README with setup instructions for the vLLM client. A launch script for the vLLM server has also been added.

- Introduced DeepSeek and Qwen clients for AIOpsLab.
- Updated README to include descriptions and setup instructions for vLLM client.
- Refactored existing clients to use GPTClient instead of GPT4Turbo.
- Added launch script for vLLM server.
Copilot AI review requested due to automatic review settings, April 22, 2025 07:37
Copilot AI left a comment

Pull Request Overview

This PR introduces new DeepSeek and Qwen clients, refactors existing clients to use the new GPTClient abstraction, and enhances the README with detailed vLLM setup instructions.

  • New dependencies and configuration added in pyproject.toml
  • Refactored various client files to replace GPT4Turbo with GPTClient and added new clients for vLLM, DeepSeek, and Qwen
  • Enhanced README with instructions and a launch script for the vLLM server

Reviewed Changes

Copilot reviewed 11 out of 12 changed files in this pull request and generated 1 comment.

Summary per file:

| File | Description |
| --- | --- |
| pyproject.toml | New dependencies added (wandb, python-dotenv, vllm, transformers) |
| clients/vllm.py | Added vLLM client with integration for local LLM usage (see the sketch below) |
| clients/utils/llm.py | Refactored GPT4Turbo to GPTClient and added DeepSeekClient, QwenClient, and vLLMClient |
| clients/react.py | Replaced GPT4Turbo with GPTClient |
| clients/qwen.py | New Qwen client implementation |
| clients/gpt_managed_identity.py | Replaced GPT4Turbo with GPTClient for managed identity support |
| clients/gpt.py | Replaced GPT4Turbo with GPTClient |
| clients/flash.py | Replaced GPT4Turbo with GPTClient |
| clients/deepseek.py | New DeepSeek client implementation |
| clients/README.md | Updated clients list and added detailed vLLM setup instructions |
Files not reviewed (1)
  • clients/launch_vllm.sh: Language not supported
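For context on how the new vLLM client and launch script fit together, here is a minimal sketch of a client talking to a locally launched vLLM server. It assumes the launch script starts vLLM's OpenAI-compatible server on http://localhost:8000/v1 and that the served model is Qwen/Qwen2.5-7B-Instruct; the class name, endpoint, and model are illustrative assumptions, not necessarily what clients/vllm.py or clients/launch_vllm.sh actually contain.

```python
# Hypothetical sketch, not the PR's actual clients/vllm.py.
# Assumes a vLLM server is already running in OpenAI-compatible mode
# (e.g. started by clients/launch_vllm.sh) and listening on localhost:8000.
from openai import OpenAI


class LocalVLLMClient:
    def __init__(self, base_url: str = "http://localhost:8000/v1",
                 model: str = "Qwen/Qwen2.5-7B-Instruct"):  # model name is an assumption
        # vLLM's OpenAI-compatible server ignores the API key, but the SDK requires one.
        self.client = OpenAI(base_url=base_url, api_key="EMPTY")
        self.model = model

    def run(self, messages: list[dict]) -> str:
        # Forward the chat history to the local server and return the reply text.
        response = self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=0.0,
        )
        return response.choices[0].message.content


if __name__ == "__main__":
    client = LocalVLLMClient()
    print(client.run([{"role": "user", "content": "List the pods in the default namespace."}]))
```

Because the server speaks the OpenAI API, the same client code would work unchanged when pointed at a different locally hosted model.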

```diff
 from clients.utils.templates import DOCS_SHELL_ONLY


 class Agent:
     def __init__(self, azure_config_file: str):
         self.history = []
-        self.llm = GPT4Turbo(auth_type="managed", azure_config_file=azure_config_file)
+        self.llm = GPTClient(auth_type="managed", azure_config_file=azure_config_file)
```
Copilot AI Apr 22, 2025

The new GPTClient implementation in clients/utils/llm.py does not appear to support the additional parameters (auth_type and azure_config_file). Consider updating the GPTClient constructor to handle these parameters or refactoring its usage in this file.
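To make the suggestion concrete, below is a hedged sketch of a GPTClient constructor that does accept auth_type and azure_config_file and branches between Azure managed-identity auth and a plain API key. The parameter names, config keys, and default model are assumptions drawn from this diff, not the actual code in clients/utils/llm.py.

```python
# Illustrative only; the real GPTClient in clients/utils/llm.py may differ.
import os

import yaml
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI, OpenAI


class GPTClient:
    def __init__(self, model: str = "gpt-4o", auth_type: str = "key",
                 azure_config_file: str | None = None):
        self.model = model
        if auth_type == "managed":
            # Assumed config layout: a YAML file with "endpoint" and "api_version" keys.
            with open(azure_config_file) as f:
                cfg = yaml.safe_load(f)
            # Authenticate via managed identity instead of an API key.
            token_provider = get_bearer_token_provider(
                DefaultAzureCredential(),
                "https://cognitiveservices.azure.com/.default",
            )
            self.client = AzureOpenAI(
                azure_endpoint=cfg["endpoint"],
                api_version=cfg["api_version"],
                azure_ad_token_provider=token_provider,
            )
        else:
            self.client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def run(self, messages: list[dict]) -> str:
        response = self.client.chat.completions.create(model=self.model, messages=messages)
        return response.choices[0].message.content
```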


```diff
@@ -76,7 +76,7 @@ async def diagnose_with_hindsight(self, input: str, history: dict):
 class HindsightBuilder:
     """Agent hindsight generator."""

-    llm = GPT4Turbo()
+    llm = GPTClient()
```
Collaborator

I'm curious why this change was made. I don't think there's anything wrong with this, I'm just curious what the difference between GPT4Turbo and GPTClient is.

@Flemington8 (Author) Apr 22, 2025

OpenAI has said they will retire GPT-4 in ChatGPT after April 30, although developers will still be able to reach the GPT-4 API for some time (I'm not sure when it will be deprecated, but in the future we may switch to a stronger model released by OpenAI). So I don't think the name 'GPT4Turbo' is appropriate anymore.
I think naming the class in a more model-agnostic way is a good implementation for us, since OpenAI has released many models recently, and with this class we can easily change the underlying model.
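If the class is model-agnostic in this way, switching to a newer OpenAI model becomes a one-line change at the call site. A hypothetical usage example (the model names are only illustrative):

```python
# Hypothetical usage: the same GPTClient class, different underlying model.
llm = GPTClient(model="gpt-4o")        # e.g. today's default
llm = GPTClient(model="gpt-4.1-mini")  # swapping to a newer model is a one-line change
```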

Collaborator

Gotcha! I remember getting some emails about that. I'll try out this update later today and try and get it merged.

@HacksonClark (Collaborator)

This looks great! Overall, I'm very happy with this PR. I just had one question which is above. Once I understand that I'll approve and merge.

@HacksonClark (Collaborator) left a comment

Was able to test it out, looks great! I resolved the merge conflicts so I will be merging.

@HacksonClark merged commit d21f015 into xlab-uiuc:main on Apr 23, 2025