[Synopsys] Feature: Microsoft AutoGen Integration #859
base: develop
Conversation
Walkthrough

Adds an AutoGen integration to the toolkit: a new AutoGen plugin package with LLM clients, a tool wrapper, and a profiler handler; an example demo workflow with config and tool; root and package pyproject updates; a new LLMFrameworkEnum member; a profiler instrumentation hook; and comprehensive tests and docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor U as User
    participant NAT as NAT Workflow (autogen_team)
    participant AG as AutoGen GroupChat
    participant WTA as WeatherAndTimeAgent
    participant FRA as FinalResponseAgent
    participant WT as weather_update tool
    participant MCP as MCP Time Server
    U->>NAT: Input(city)
    NAT->>AG: Initialize agents/tools
    AG->>WTA: Round-robin message
    WTA->>WT: Get weather(city)
    WT-->>WTA: Weather data
    WTA->>MCP: Get time(city)
    MCP-->>WTA: Time data
    WTA-->>AG: Weather+Time
    AG->>FRA: Handoff context
    FRA-->>AG: Final answer + "APPROVE"
    AG-->>NAT: Final content
    NAT-->>U: Response
    note over AG,NAT: Terminates on "APPROVE"
```
```mermaid
sequenceDiagram
    autonumber
    participant FW as Profiler Wrapper
    participant H as AutoGenProfilerHandler
    participant AC as AutoGen Client/Tool
    participant ISM as IntermediateStepManager
    FW->>H: instrument()
    H->>AC: Monkey-patch LLM/tool calls
    Note over H,AC: Wrap run_json/create methods
    rect rgba(200,230,255,0.3)
        participant Caller as Agent/Tool
        Caller->>AC: Invoke (LLM/Tool)
        AC->>H: START hook
        H->>ISM: Enqueue LLM_START/TOOL_START
        AC-->>Caller: Result
        H->>ISM: Enqueue LLM_END/TOOL_END with usage
    end
    FW->>H: uninstrument()
    H->>AC: Restore originals
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Force-pushed from 74a2ca8 to 4231323
Signed-off-by: Onkar Kulkarni <[email protected]>
Force-pushed from 4231323 to 831e019
Actionable comments posted: 10
🧹 Nitpick comments (1)
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py (1)
Lines 118-120: Re-raise with `raise` to keep the original traceback. Per the repo's exception-handling guideline, use a bare `raise` after logging; `raise e` resets the stack and makes diagnostics harder to read. As per coding guidelines.

```diff
 except Exception as e:
     logger.exception("Failed to initialize AutoGen workflow")
-    raise e
+    raise
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (23)
- `examples/frameworks/nat_autogen_demo/README.md` (1 hunk)
- `examples/frameworks/nat_autogen_demo/config.yml` (1 hunk)
- `examples/frameworks/nat_autogen_demo/configs/config.yml` (1 hunk)
- `examples/frameworks/nat_autogen_demo/pyproject.toml` (1 hunk)
- `examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/__init__.py` (1 hunk)
- `examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py` (1 hunk)
- `examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py` (1 hunk)
- `packages/nvidia_nat_autogen/pyproject.toml` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/meta/pypi.md` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py` (1 hunk)
- `packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/__init__.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/conftest.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/test_llm.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/test_register.py` (1 hunk)
- `packages/nvidia_nat_autogen/tests/test_tool_wrapper.py` (1 hunk)
- `pyproject.toml` (5 hunks)
- `src/nat/builder/framework_enum.py` (1 hunk)
- `src/nat/profiler/decorators/framework_wrapper.py` (2 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{py,yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.{py,yaml,yml}
: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.
Files:
packages/nvidia_nat_autogen/tests/__init__.py
src/nat/builder/framework_enum.py
packages/nvidia_nat_autogen/tests/test_register.py
packages/nvidia_nat_autogen/tests/conftest.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/__init__.py
packages/nvidia_nat_autogen/tests/test_tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_llm.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
examples/frameworks/nat_autogen_demo/config.yml
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
src/nat/profiler/decorators/framework_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
examples/frameworks/nat_autogen_demo/configs/config.yml
packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.py
: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).
**/*.py
: In code comments/identifiers use NAT abbreviations as specified: nat for API namespace/CLI, nvidia-nat for package name, NAT for env var prefixes; do not use these abbreviations in documentation
Follow PEP 20 and PEP 8; run yapf with column_limit=120; use 4-space indentation; end files with a single trailing newline
Run ruff check --fix as linter (not formatter) using pyproject.toml config; fix warnings unless explicitly ignored
Respect naming: snake_case for functions/variables, PascalCase for classes, UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: use bare raise to re-raise; log with logger.error() when re-raising to avoid duplicate stack traces; use logger.exception() when catching without re-raising
Provide Google-style docstrings for every public module, class, function, and CLI command; first line concise and ending with a period; surround code entities with backticks
Validate and sanitize all user input, especially in web or CLI interfaces
Prefer httpx with SSL verification enabled by default and follow OWASP Top-10 recommendations
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile or mprof before optimizing; cache expensive computations with functools.lru_cache or external cache; leverage NumPy vectorized operations when beneficial
Files:
packages/nvidia_nat_autogen/tests/__init__.py
src/nat/builder/framework_enum.py
packages/nvidia_nat_autogen/tests/test_register.py
packages/nvidia_nat_autogen/tests/conftest.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/__init__.py
packages/nvidia_nat_autogen/tests/test_tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_llm.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
src/nat/profiler/decorators/framework_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py
packages/*/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
If a package contains Python code, include tests in a tests/ directory at the same level as pyproject.toml
Files:
packages/nvidia_nat_autogen/tests/__init__.py
packages/nvidia_nat_autogen/tests/test_register.py
packages/nvidia_nat_autogen/tests/conftest.py
packages/nvidia_nat_autogen/tests/test_tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_llm.py
packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
**/*
⚙️ CodeRabbit configuration file
**/*
: # Code Review Instructions
- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: `def my_function(param1: int, param2: str) -> bool: pass`
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare `raise` statements to maintain the original stack trace, and use `logger.error()` (not `logger.exception()`) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use `logger.exception()` to capture the full stack trace information.

# Documentation Review Instructions
- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the `ci/vale/styles/config/vocabularies/nat/reject.txt` file. Words that might appear to be spelling mistakes but are listed in the `ci/vale/styles/config/vocabularies/nat/accept.txt` file are OK.

# Misc.
- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up-to-date whenever a file is changed.
Files:
packages/nvidia_nat_autogen/tests/__init__.py
src/nat/builder/framework_enum.py
packages/nvidia_nat_autogen/tests/test_register.py
packages/nvidia_nat_autogen/tests/conftest.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/__init__.py
packages/nvidia_nat_autogen/tests/test_tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_llm.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
examples/frameworks/nat_autogen_demo/config.yml
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
examples/frameworks/nat_autogen_demo/README.md
packages/nvidia_nat_autogen/pyproject.toml
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
src/nat/profiler/decorators/framework_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
examples/frameworks/nat_autogen_demo/configs/config.yml
pyproject.toml
examples/frameworks/nat_autogen_demo/pyproject.toml
packages/nvidia_nat_autogen/src/nat/meta/pypi.md
packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py
packages/**/*
⚙️ CodeRabbit configuration file
packages/**/*
: - This directory contains optional plugin packages for the toolkit; each should contain a `pyproject.toml` file.
- The `pyproject.toml` file should declare a dependency on `nvidia-nat` or another package with a name starting with `nvidia-nat-`. This dependency should be declared using `~=<version>`, and the version should be a two-digit version (ex: `~=1.0`).
- Not all packages contain Python code; if they do, they should also contain their own set of tests, in a `tests/` directory at the same level as the `pyproject.toml` file.
Files:
packages/nvidia_nat_autogen/tests/__init__.py
packages/nvidia_nat_autogen/tests/test_register.py
packages/nvidia_nat_autogen/tests/conftest.py
packages/nvidia_nat_autogen/tests/test_tool_wrapper.py
packages/nvidia_nat_autogen/tests/test_llm.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
packages/nvidia_nat_autogen/pyproject.toml
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
packages/nvidia_nat_autogen/src/nat/meta/pypi.md
packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
All importable Python code must live under src/ (or packages//src/)
Files:
src/nat/builder/framework_enum.py
src/nat/profiler/decorators/framework_wrapper.py
src/nat/**/*
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Changes in src/nat should prioritize backward compatibility
Files:
src/nat/builder/framework_enum.py
src/nat/profiler/decorators/framework_wrapper.py
⚙️ CodeRabbit configuration file
This directory contains the core functionality of the toolkit. Changes should prioritize backward compatibility.
Files:
src/nat/builder/framework_enum.py
src/nat/profiler/decorators/framework_wrapper.py
{src/**/*.py,packages/*/src/**/*.py}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
All public APIs must have Python 3.11+ type hints on parameters and return values; prefer typing/collections.abc abstractions; use typing.Annotated when useful
Files:
src/nat/builder/framework_enum.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
src/nat/profiler/decorators/framework_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*
: - This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
- If an example contains Python code, it should be placed in a subdirectory named `src/` and should contain a `pyproject.toml` file. Optionally, it might also contain scripts in a `scripts/` directory.
- If an example contains YAML files, they should be placed in a subdirectory named `configs/`.
- If an example contains sample data files, they should be placed in a subdirectory named `data/`, and should be checked into git-lfs.
Files:
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/__init__.py
examples/frameworks/nat_autogen_demo/config.yml
examples/frameworks/nat_autogen_demo/README.md
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py
examples/frameworks/nat_autogen_demo/configs/config.yml
examples/frameworks/nat_autogen_demo/pyproject.toml
examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py
packages/*/src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Importable Python code inside packages must live under packages//src/
Files:
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/__init__.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
**/*.{yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
In workflow/config YAML, set llms.._type: nat_test_llm to stub responses.
Files:
examples/frameworks/nat_autogen_demo/config.yml
examples/frameworks/nat_autogen_demo/configs/config.yml
**/README.@(md|ipynb)
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Ensure READMEs follow the naming convention; avoid deprecated names; use “NeMo Agent Toolkit” (capital T) in headings
Files:
examples/frameworks/nat_autogen_demo/README.md
packages/*/pyproject.toml
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
packages/*/pyproject.toml
: Each package must contain a pyproject.toml
In packages, declare a dependency on nvidia-nat or packages starting with nvidia-nat-
Use ~= version constraints (e.g., ~=1.0) for dependencies
Files:
packages/nvidia_nat_autogen/pyproject.toml
{packages/*/pyproject.toml,uv.lock}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Add new dependencies to both pyproject.toml (alphabetically) and uv.lock via uv pip install --sync
Files:
packages/nvidia_nat_autogen/pyproject.toml
**/configs/**
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Configuration files consumed by code must be stored next to that code in a configs/ folder
Files:
examples/frameworks/nat_autogen_demo/configs/config.yml
🧠 Learnings (1)
📚 Learning: 2025-09-23T18:39:15.023Z
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-09-23T18:39:15.023Z
Learning: Applies to packages/*/pyproject.toml : In packages, declare a dependency on nvidia-nat or packages starting with nvidia-nat-
Applied to files:
packages/nvidia_nat_autogen/pyproject.toml
pyproject.toml
examples/frameworks/nat_autogen_demo/pyproject.toml
🧬 Code graph analysis (12)
packages/nvidia_nat_autogen/tests/test_register.py (1)
- tests/nat/builder/test_builder.py (1): `tool_wrapper` (554-564)

packages/nvidia_nat_autogen/tests/conftest.py (1)
- packages/nvidia_nat_data_flywheel/src/nat/plugins/data_flywheel/observability/schema/sink/elasticsearch/dfw_es_record.py (1): `SystemMessage` (74-78)

packages/nvidia_nat_autogen/tests/test_tool_wrapper.py (1)
- packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py (2): `autogen_tool_wrapper` (59-169), `resolve_type` (39-55)

packages/nvidia_nat_autogen/tests/test_llm.py (6)
- src/nat/builder/builder.py (1): `Builder` (68-290)
- src/nat/data_models/thinking_mixin.py (2): `ThinkingMixin` (29-86), `thinking_system_prompt` (49-86)
- src/nat/llm/azure_openai_llm.py (2): `azure_openai_llm` (55-57), `AzureOpenAIModelConfig` (30-51)
- src/nat/llm/nim_llm.py (1): `NIMModelConfig` (34-52)
- src/nat/llm/openai_llm.py (2): `openai_llm` (52-54), `OpenAIModelConfig` (31-48)
- packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py (4): `_patch_autogen_client_based_on_config` (39-100), `openai_autogen` (104-142), `azure_openai_autogen` (146-189), `nim_autogen` (193-231)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py (3)
- src/nat/builder/context.py (3): `Context` (115-299), `intermediate_step_manager` (169-180), `output` (58-59)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)
- src/nat/profiler/decorators/function_tracking.py (1): `push_intermediate_step` (60-79)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/register.py (1)
- tests/nat/builder/test_builder.py (1): `tool_wrapper` (554-564)

examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/weather_update_tool.py (4)
- src/nat/builder/builder.py (1): `Builder` (68-290)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)
- src/nat/builder/function_info.py (2): `FunctionInfo` (290-625), `from_fn` (552-625)
- src/nat/data_models/function.py (1): `FunctionBaseConfig` (26-27)

src/nat/profiler/decorators/framework_wrapper.py (2)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)
- packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py (2): `AutoGenProfilerHandler` (36-326), `instrument` (63-92)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py (2)
- src/nat/builder/builder.py (1): `Builder` (68-290)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)

packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py (2)
- packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py (5): `AutoGenProfilerHandler` (36-326), `instrument` (63-92), `uninstrument` (94-108), `_llm_call_monkey_patch` (110-236), `_tool_call_monkey_patch` (238-326)
- src/nat/builder/context.py (1): `intermediate_step_manager` (169-180)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py (8)
- src/nat/builder/builder.py (1): `Builder` (68-290)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)
- src/nat/llm/azure_openai_llm.py (2): `azure_openai_llm` (55-57), `AzureOpenAIModelConfig` (30-51)
- src/nat/llm/nim_llm.py (1): `NIMModelConfig` (34-52)
- src/nat/llm/openai_llm.py (2): `openai_llm` (52-54), `OpenAIModelConfig` (31-48)
- src/nat/llm/utils/thinking.py (3): `BaseThinkingInjector` (57-81), `FunctionArgumentWrapper` (34-53), `patch_with_thinking` (121-215)
- src/nat/utils/exception_handlers/automatic_retries.py (1): `patch_with_retry` (269-342)
- src/nat/utils/type_utils.py (1): `override` (56-57)

examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py (5)
- src/nat/builder/builder.py (1): `Builder` (68-290)
- src/nat/builder/framework_enum.py (1): `LLMFrameworkEnum` (19-26)
- src/nat/builder/function_info.py (2): `FunctionInfo` (290-625), `create` (351-549)
- src/nat/data_models/component_ref.py (1): `LLMRef` (116-124)
- src/nat/data_models/function.py (1): `FunctionBaseConfig` (26-27)
🪛 Ruff (0.13.2)
packages/nvidia_nat_autogen/tests/test_llm.py
- 36-36: Mutable class attributes should be annotated with `typing.ClassVar` (RUF012)
- 37-37: Mutable class attributes should be annotated with `typing.ClassVar` (RUF012)
- 50-50: Mutable class attributes should be annotated with `typing.ClassVar` (RUF012)
- 51-51: Mutable class attributes should be annotated with `typing.ClassVar` (RUF012)
- 100-100: Unpacked variable `kwargs` is never used; prefix it with an underscore or any other dummy variable pattern (RUF059)
- 198-198: Unpacked variable `kwargs` is never used; prefix it with an underscore or any other dummy variable pattern (RUF059)
- 258-258: Unused function arguments: `args`, `kwargs` (ARG001)
- 292-292: Unused function arguments: `args`, `kwargs` (ARG001)
- 330-330: Unpacked variable `kwargs` is never used; prefix it with an underscore or any other dummy variable pattern (RUF059)
- 356-356: Unused function arguments: `args`, `kwargs` (ARG001)
- 404-404: Unused function arguments: `args`, `kwargs` (ARG001)
- 447-447: Unused function arguments: `args`, `kwargs` (ARG001)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
- 182-182: Use explicit conversion flag; replace with conversion flag (RUF010)
- 300-300: Use explicit conversion flag; replace with conversion flag (RUF010)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/tool_wrapper.py
- 120-120: Avoid specifying long messages outside the exception class (TRY003)
- 148-148: Do not call `setattr` with a constant attribute value; it is not any safer than normal property access. Replace `setattr` with assignment (B010)
- 149-149: Do not call `setattr` with a constant attribute value; it is not any safer than normal property access. Replace `setattr` with assignment (B010)

packages/nvidia_nat_autogen/tests/test_autogen_callback_handler.py
- 160-160: Unused function arguments: `args`, `kwargs` (ARG001)
- 206-206: Unused function arguments: `args`, `kwargs` (ARG001)

packages/nvidia_nat_autogen/src/nat/plugins/autogen/llm.py
- 78-78: Consider `[system_message, *messages]` instead of concatenation (RUF005)

examples/frameworks/nat_autogen_demo/src/nat_autogen_demo/register.py
- 28-28: Unused `noqa` directive (non-enabled: F401); remove unused `noqa` directive (RUF100)
- 29-29: Unused `noqa` directive (non-enabled: F401); remove unused `noqa` directive (RUF100)
- 107-107: Consider moving this statement to an `else` block (TRY300)
- 111-111: Use explicit conversion flag; replace with conversion flag (RUF010)
- 120-120: Use `raise` without specifying exception name; remove exception name (TRY201)
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
```python
try:
    output = await original_func(*args, **kwargs)
except Exception as _e:
    output = f"LLM call failed with error: {str(_e)}"
    logger.exception("Error during LLM call")

model_output = ""
try:
    for content in output.content:
        msg = str(content)
        model_output += msg or ""
except Exception as _e:
    logger.exception("Error getting model output")

now = time.time()
# Record the end event
# Prepare safe metadata and usage
chat_resp: dict[str, Any] = {}
try:
    if getattr(output, "choices", []):
        first_choice = output.choices[0]
        chat_resp = first_choice.model_dump() if hasattr(
            first_choice, "model_dump") else getattr(first_choice, "__dict__", {}) or {}
except Exception as _e:
    logger.exception("Error preparing chat_responses")

usage_payload: dict[str, Any] = {}
try:
    usage_obj = getattr(output, "usage", None) or (getattr(output, "model_extra", {}) or {}).get("usage")
    if usage_obj:
        if hasattr(usage_obj, "model_dump"):
            usage_payload = usage_obj.model_dump()
        elif isinstance(usage_obj, dict):
            usage_payload = usage_obj
except Exception as _e:
    logger.exception("Error preparing token usage")

output_stats = IntermediateStepPayload(event_type=IntermediateStepType.LLM_END,
                                       span_event_timestamp=now,
                                       framework=LLMFrameworkEnum.AUTOGEN,
                                       name=model_name,
                                       data=StreamEventData(input=model_input, output=model_output),
                                       metadata=TraceMetadata(chat_responses=chat_resp),
                                       usage_info=UsageInfo(
                                           token_usage=TokenUsageBaseModel(**usage_payload),
                                           num_llm_calls=1,
                                           seconds_between_calls=seconds_between_calls,
                                       ),
                                       UUID=llm_start_uuid)

self.step_manager.push_intermediate_step(output_stats)

with self._lock:
    self.last_call_ts = now

return output
```
Do not swallow LLM client failures.

Catching `Exception`, turning it into a string, and returning it breaks AutoGen's contract: the caller expects an `OpenAIChatCompletionResponse` (or an exception), not a plain string. This silently masks production failures and immediately causes attribute lookups like `output.content` to explode. Log the error with `logger.error(...)`, emit whatever telemetry you need, and re-raise with a bare `raise` to preserve the stack trace (per coding guidelines). For example:

```diff
- try:
-     output = await original_func(*args, **kwargs)
- except Exception as _e:
-     output = f"LLM call failed with error: {_e!s}"
-     logger.exception("Error during LLM call")
+ try:
+     output = await original_func(*args, **kwargs)
+ except Exception as exc:
+     logger.error("Error during LLM call: %s", exc)
+     self.step_manager.push_intermediate_step(
+         IntermediateStepPayload(
+             event_type=IntermediateStepType.LLM_END,
+             span_event_timestamp=time.time(),
+             framework=LLMFrameworkEnum.AUTOGEN,
+             name=model_name,
+             data=StreamEventData(input=model_input, output=str(exc)),
+             metadata=TraceMetadata(error=str(exc)),
+             usage_info=UsageInfo(token_usage=TokenUsageBaseModel()),
+             UUID=llm_start_uuid,
+         ))
+     with self._lock:
+         self.last_call_ts = time.time()
+     raise
```
After re-raising, callers get the original failure semantics and we still capture telemetry.
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.13.2)
182-182: Use explicit conversion flag; replace with conversion flag (RUF010)
🤖 Prompt for AI Agents
packages/nvidia_nat_autogen/src/nat/plugins/autogen/autogen_callback_handler.py
lines 179-235: the current except block for the LLM call converts exceptions
into a string and returns it, which violates the expected return type and masks
failures; instead, log the error (use logger.error or logger.exception with the
exception details and any telemetry), then re-raise the original exception with
a bare raise so the caller receives the original failure and stack trace; remove
the code path that assigns a string to output so downstream code does not assume
a successful response, and ensure subsequent code only runs when a real response
object is present (or is guarded) after the re-raise.
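The recommended shape can be sketched framework-agnostically. `wrap_llm_call` and the `on_end` callback below are hypothetical stand-ins for the monkey-patch wrapper and `push_intermediate_step`; the point is that telemetry is captured on both paths while the exception still propagates:

```python
import asyncio
import functools
import logging
import time

logger = logging.getLogger(__name__)


def wrap_llm_call(original_func, on_end):
    """Record an end event whether the call succeeds or fails,
    but always re-raise so the caller keeps the original contract."""

    @functools.wraps(original_func)
    async def wrapper(*args, **kwargs):
        start = time.time()
        try:
            output = await original_func(*args, **kwargs)
        except Exception as exc:
            logger.error("Error during LLM call: %s", exc)
            on_end({"ok": False, "error": str(exc), "elapsed": time.time() - start})
            raise  # caller sees the original exception; telemetry is still captured
        on_end({"ok": True, "elapsed": time.time() - start})
        return output

    return wrapper


events: list[dict] = []


async def good(prompt: str) -> str:
    return f"echo: {prompt}"


async def bad(prompt: str) -> str:
    raise RuntimeError("provider down")


assert asyncio.run(wrap_llm_call(good, events.append)("hi")) == "echo: hi"
try:
    asyncio.run(wrap_llm_call(bad, events.append)("hi"))
except RuntimeError:
    pass  # the failure reaches the caller intact
assert [e["ok"] for e in events] == [True, False]
```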
```python
try:
    # Call the original BaseTool.run_json(...)
    output = await original_func(*args, **kwargs)
except Exception as _e:
    output = f"Tool execution failed with error: {str(_e)}"
    logger.exception("Error during tool execution")

tool_output = output

now = time.time()
# Record the end event
kwargs_args = (kwargs.get("args", {}) if isinstance(kwargs.get("args"), dict) else {})
usage_stat = IntermediateStepPayload(event_type=IntermediateStepType.TOOL_END,
                                     span_event_timestamp=now,
                                     framework=LLMFrameworkEnum.AUTOGEN,
                                     name=tool_name,
                                     data=StreamEventData(
                                         input={
                                             "args": args, "kwargs": dict(kwargs_args)
                                         },
                                         output=str(tool_output),
                                     ),
                                     metadata=TraceMetadata(tool_outputs={"result": str(tool_output)}),
                                     usage_info=UsageInfo(token_usage=TokenUsageBaseModel()),
                                     UUID=tool_start_uuid)

self.step_manager.push_intermediate_step(usage_stat)

return tool_output
```
Propagate tool execution errors instead of returning strings.

Same issue on the tool path: converting the exception to a string and returning it violates the AutoGen tool contract and can lead to hard-to-debug downstream errors. Log with `logger.error(...)`, push the failure telemetry if needed, and re-raise with `raise` so callers can handle (or fail fast on) the original exception.
🧰 Tools
🪛 Ruff (0.13.2)
300-300: Use explicit conversion flag; replace with conversion flag (RUF010)
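A small sketch of why the string return is worse for downstream code than the original exception. The tool and runner names here are hypothetical; `run_swallowing` mirrors the anti-pattern, `run_propagating` the recommended fix:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


async def flaky_tool(city: str) -> dict:
    # Hypothetical tool that fails; stands in for BaseTool.run_json.
    raise TimeoutError("weather service timed out")


async def run_swallowing(city: str):
    # Anti-pattern from the review: the exception becomes a plain string "result".
    try:
        return await flaky_tool(city)
    except Exception as exc:
        return f"Tool execution failed with error: {exc}"


async def run_propagating(city: str):
    # Recommended: log, then bare raise so the caller sees the real failure.
    try:
        return await flaky_tool(city)
    except Exception as exc:
        logger.error("Error during tool execution: %s", exc)
        raise


# Swallowing: the caller gets a str and fails later, far from the root cause.
result = asyncio.run(run_swallowing("Paris"))
try:
    result["temp_c"]  # downstream code assumed a dict
except TypeError:
    pass

# Propagating: the caller sees the original failure immediately.
try:
    asyncio.run(run_propagating("Paris"))
except TimeoutError:
    pass
```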
```python
    mock_fn.description = "Test function description"
    mock_fn.input_schema = TestInputSchema
    mock_fn.has_streaming_output = False
    mock_fn.has_single_output = True
    mock_fn.acall_invoke = AsyncMock(return_value="test_result")
    mock_fn.acall_stream = AsyncMock()
    return mock_fn

@pytest.fixture
def mock_builder(self):
    """Create a mock builder."""
    return Mock(spec=Builder)

def test_autogen_tool_wrapper_basic(self, mock_function, mock_builder):
    """Test basic tool wrapper functionality."""
    with patch('nat.plugins.autogen.tool_wrapper.FunctionTool') as mock_function_tool:
        mock_tool = Mock()
        mock_function_tool.return_value = mock_tool

        result = autogen_tool_wrapper("test_tool", mock_function, mock_builder)

        mock_function_tool.assert_called_once()
        call_args = mock_function_tool.call_args
        assert call_args[1]['name'] == "test_tool"
        assert call_args[1]['description'] == "Test function description"
        assert callable(call_args[1]['func'])
        assert result == mock_tool

def test_autogen_tool_wrapper_streaming(self, mock_function, mock_builder):
    """Test tool wrapper with streaming output."""
    mock_function.has_streaming_output = True
    mock_function.has_single_output = False

    with patch('nat.plugins.autogen.tool_wrapper.FunctionTool') as mock_function_tool:
        mock_tool = Mock()
        mock_function_tool.return_value = mock_tool
```
Use real mixin configs so the patch branches execute.

These tests hand `_patch_autogen_client_based_on_config` a plain `Mock`. Because the implementation uses `isinstance(..., RetryMixin/ThinkingMixin)`, the retry/thinking paths never run and the assertions will fail. Instantiate the actual `TestRetryConfig`/`TestThinkingConfig` (or set `spec_set` via `create_autospec` and adjust `__class__`) so the object really subclasses the mixins under test.

```diff
- thinking_config = Mock(spec=TestThinkingConfig)
- thinking_config.thinking_system_prompt = "Think step by step"
+ thinking_config = TestThinkingConfig(thinking=True)
@@
- config = Mock(spec=CombinedConfig)
- config.thinking_system_prompt = "Think carefully"
- config.num_retries = 3
- config.retry_on_status_codes = [500, 502]
- config.retry_on_errors = ["timeout"]
+ config = CombinedConfig(thinking=True,
+                         num_retries=3,
+                         retry_on_status_codes=[500, 502],
+                         retry_on_errors=["timeout"])
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/nvidia_nat_autogen/tests/test_tool_wrapper.py around lines 93 to 129
the fixtures pass plain Mock objects into _patch_autogen_client_based_on_config
so isinstance checks against RetryMixin/ThinkingMixin never succeed; replace
those plain Mocks with real mixin-config instances (e.g., instantiate
TestRetryConfig and TestThinkingConfig) or create autospecs that set __class__
to the appropriate mixin classes (create_autospec(..., spec_set=...) and assign
__class__ to the mixin) so the code paths for retry and thinking are executed
and the assertions exercise the intended branches.
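The underlying mechanics are easy to verify in isolation: a plain `Mock()` never satisfies an `isinstance` check against a mixin, while a real config instance does. The class names below are illustrative stand-ins for the toolkit's mixins, not the actual implementation:

```python
from unittest.mock import Mock


class ThinkingMixin:
    """Stand-in for the toolkit's thinking mixin."""
    thinking_system_prompt = "Think step by step"


class ThinkingConfig(ThinkingMixin):
    """Stand-in for a real config class that subclasses the mixin."""


def patch_based_on_config(config) -> list[str]:
    # Mirrors the isinstance dispatch described for
    # _patch_autogen_client_based_on_config (simplified sketch).
    applied = []
    if isinstance(config, ThinkingMixin):
        applied.append("thinking")
    return applied


assert patch_based_on_config(Mock()) == []  # plain Mock: branch silently skipped
assert patch_based_on_config(ThinkingConfig()) == ["thinking"]  # real config: branch runs
```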
Hello @AnuradhaKaruppiah, @willkill07, the PR is ready and the CodeRabbit comments have been addressed. Please take a look.
This is slightly deprioritized due to work for the upcoming release. I will try to review by EOD Monday.
Description
Microsoft AutoGen Integration
Closes: #419, #122
By submitting this PR I confirm:
Summary by CodeRabbit
New Features
Documentation
Tests
Chores