Conversation

@arnonuem
Contributor

puppies now have access to endless skills

mpfaffenberger and others added 30 commits October 25, 2025 10:57
- Add new auto_save_session config option with default value "true"
- Display auto_save_session status in command handler output
- Ensure auto_save_session is set when creating/updating config files
- Show current value as enabled/disabled in status information
* adding planning agent

* format
Integrate Claude Code OAuth authentication into Code Puppy, enabling users to authenticate through the browser and automatically import available models.

- Implement complete OAuth 2.0 flow with PKCE security for Claude Code
- Add three custom commands: auth, status, and logout for token management
- Automatically fetch and register Claude Code models with 'claude-code-' prefix
- Securely store OAuth tokens in ~/.code_puppy with proper file permissions
- Include comprehensive test suite and detailed setup documentation
- Improve version handling fallback for development environments
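The PKCE step of the OAuth 2.0 flow mentioned above can be sketched as follows. This is a minimal illustration of RFC 7636 verifier/challenge generation, not Code Puppy's actual implementation; the function name is hypothetical:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # Code verifier: URL-safe random string (43-128 chars per RFC 7636)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Code challenge: BASE64URL(SHA-256(verifier)) with padding stripped
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The verifier stays client-side; only the challenge travels with the authorization request, so an intercepted code cannot be redeemed without the original verifier.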
…entication

- Add comprehensive ChatGPT OAuth plugin with browser-based authentication flow
- Implement PKCE OAuth flow matching OpenAI's official Codex CLI implementation
- Automatically fetch and register available ChatGPT models with 'chatgpt-' prefix
- Provide custom commands: /chatgpt-auth, /chatgpt-status, /chatgpt-logout
- Store tokens securely in ~/.code_puppy/chatgpt_oauth.json with 0600 permissions
- Exchange OAuth tokens for OpenAI API keys when organization/project is configured
- Refactor existing Claude Code OAuth plugin to use dedicated model files for better separation
- Update model factory to load from multiple model configuration sources
- Include comprehensive documentation, setup guides, and test coverage
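The secure token storage described above (a JSON file under `~/.code_puppy` with 0600 permissions) can be sketched like this; `save_tokens` is a hypothetical helper name, and the permission check is POSIX-specific:

```python
import json
import os
import stat
from pathlib import Path

def save_tokens(tokens: dict, path: Path) -> None:
    # Create the config dir if needed, write the tokens as JSON, then
    # restrict the file to owner read/write only (0o600)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(tokens, indent=2))
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```

Chmodding after the write leaves a brief window where the default umask applies; a stricter variant would open the file with `os.open(..., mode=0o600)` up front.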
- Add task tracking to enable graceful shutdown of running agent operations
- Modify run_prompt_with_attachments to return both result and asyncio task
- Update interactive mode to cancel running tasks on /exit and /quit commands
- Refactor ChatGPT OAuth plugin to create fresh OAuth contexts per instance
- Remove global OAuth context state machine for better isolation
- Update OAuth client ID configuration for Code Puppy application
- Delete unused math_utils.py file
- Removed complex API key exchange flow in favor of direct OAuth token usage like ChatMock
- Fixed JWT parsing to handle nested organization structure from user's payload
- Consolidated OAuth success/failure HTML templates into shared oauth_puppy_html module
- Replaced urllib.request/ssl with requests library for consistent HTTP handling
- Simplified token storage and model fetching logic
- Removed extensive documentation files (SETUP.md, README.md, ENABLE.md)
- Updated ChatGPT and Claude OAuth plugins to use shared HTML templates
- Fixed codex model to use OpenAIResponsesModel instead of OpenAIChatModel
- Filtered empty thinking parts from message history processing
- Added comprehensive test cases for JWT parsing with nested organization structure
…ents

- Assign underscore to second return value in test_run_prompt_with_attachments_passes_binary
- Assign underscore to second return value in test_run_prompt_with_attachments_warns_on_blank_prompt
- Maintain test compatibility with the updated function signature, which now returns a tuple
- Introduce new claude_code model type in ModelFactory with custom OAuth authentication
- Add specialized system prompt handling for Claude Code models in BaseAgent
- Update Claude Code OAuth plugin to use claude_code model type with proper headers
- Temporarily disable ChatGPT OAuth callbacks due to current implementation issues
- Include OAuth-specific headers (anthropic-beta) for Claude Code API compatibility
- Update model retrieval to use agent-specific configuration instead of global model
- Add special handling for Claude Code models with custom instructions
- Prepend Claude Code system prompt when applicable to ensure proper behavior
- Maintain separation between agent-specific and general prompt instructions
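The prompt-prepending behavior described above reduces to a small branch on model type. A sketch under the assumption that the model type string is `"claude_code"` (the function name is hypothetical):

```python
def build_instructions(model_type: str, agent_prompt: str, claude_code_prompt: str) -> str:
    # For claude_code models, prepend the Claude Code system prompt while
    # keeping the agent-specific prompt intact and clearly separated
    if model_type == "claude_code":
        return f"{claude_code_prompt}\n\n{agent_prompt}"
    # All other model types get the agent prompt unchanged
    return agent_prompt
```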
Fresh models on every auth! 🎾

- Changed add_models_to_extra_config() to start with empty dict
- Removes stale/accumulated models from previous auth sessions
- Ensures ~/.code_puppy/claude_models.json always reflects current API state
- Cleaner approach: overwrite instead of load-merge-save pattern

Now every /claude-code-auth gives you a clean slate with only the
models currently available from Claude Code's API. No more cruft!
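The overwrite-instead-of-merge pattern described above is simple but worth seeing: instead of loading the existing file and merging into it, start from an empty dict so nothing stale survives. Function and file names here are illustrative:

```python
import json
from pathlib import Path

def write_model_config(current_models: dict, path: Path) -> None:
    # Start with an empty dict rather than loading the old file, so models
    # from previous auth sessions cannot accumulate across runs
    config: dict = {}
    config.update(current_models)  # only what the API returned just now
    path.write_text(json.dumps(config, indent=2))
```

With a load-merge-save pattern, a model removed from the API would linger in the file forever; a full overwrite makes the file a pure cache of the latest response.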
* adding planning agent

* format

* improve prompt

* improve prompt for safeguarding

* add plugin for callback

* yolo

* planning agent

* format
- Add mock for get_yolo_mode to ensure consistent test behavior
- Update assertion to match current prompt text format
- Prevent test failures due to environment-dependent yolo mode settings
- Added new custom OpenAI-compatible model configuration for MiniMax-M2
- Configured to use synthetic API endpoint with environment variable authentication
- Set context length to 205,000 tokens for extended conversation support
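The MiniMax-M2 entry might look roughly like this in a models.json-style config. The field names and endpoint URL are assumptions (code_puppy's actual schema may differ); only the 205,000-token context length and environment-variable auth are from the commit:

```json
{
  "minimax-m2": {
    "type": "custom_openai",
    "name": "MiniMax-M2",
    "custom_endpoint": {
      "url": "https://api.example-synthetic-provider.com/v1",
      "api_key": "$MINIMAX_API_KEY"
    },
    "context_length": 205000
  }
}
```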
- Update model configuration from Cerebras-Qwen3-Coder-480b to Cerebras-GLM-4.6 in models.json
- Add custom ZaiCerebrasProvider to handle zai-prefixed models with Qwen model profile
- Update agent documentation to reflect new model recommendation for code-heavy tasks
- Update all integration tests to use the new model name consistently
- Maintain backward compatibility by supporting model profile updates for zai models
- Update test to check for correct model indicators instead of literal 'round_robin' text
- The test was looking for 'glm-4.6' (lowercase), but the actual model name is 'Cerebras-GLM-4.6'
- The round-robin functionality was working correctly; only the test assertion was wrong
- All round-robin integration tests now pass
mpfaffenberger and others added 23 commits December 13, 2025 12:10
- Updated test assertion to match improved error message text
- Changed expectation from generic "not found or is empty" to more specific "not found (check config or environment)"
- This aligns the test with enhanced error messaging that provides clearer guidance to users when environment variables are missing
- Remove group_id generation and direct emit_error/emit_warning calls from grep function
- Consolidate error handling to collect error messages in a single variable
- Move UI message emission to the end of the function to ensure single emission point
- Add error field to GrepOutput model to return error information to caller
- Remove unused imports for emit_error, emit_warning, and generate_group_id
- Restructure exception handling to set error_message instead of immediate emission
- Clean up the order of operations to prepare data first, then emit messages consistently
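The single-emission-point refactor above follows a common shape: gather results and any error first, then emit UI messages exactly once at the end. A minimal sketch with hypothetical names (the real grep tool's signature differs):

```python
def grep_files(paths: list[str], pattern: str, emit_info, emit_error) -> dict:
    # Collect matches and a single error_message instead of emitting
    # warnings/errors mid-loop
    matches: list[str] = []
    error_message = None
    try:
        for path in paths:
            with open(path) as fh:
                matches.extend(line for line in fh if pattern in line)
    except OSError as exc:
        error_message = str(exc)
    # Single emission point: all UI output happens here, after the work
    if error_message:
        emit_error(error_message)
    else:
        emit_info(f"{len(matches)} matching lines")
    # Return the error to the caller too, mirroring the new GrepOutput field
    return {"matches": matches, "error": error_message}
```

Centralizing emission keeps output from interleaving and makes the function easy to test: callers inspect the returned dict instead of intercepting UI calls.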
…ilities

- Simplify message content rendering by trusting Rich's built-in markdown styling
- Remove manual styling logic for headers, code blocks, and list items
- Preserve yellow coloring for tool messages while using Rich's ANSI styling for user/assistant messages
- Simplify preview panel rendering by displaying Rich-rendered content with consistent dimming
- Reduce the rendering code from 72 lines to 24 while maintaining visual output quality
- Add _reload_current_agent() function to refresh the active agent after successful authentication
- Ensures new authentication tokens are picked up without requiring manual restart or reload
- Handles JSON agents with refresh_config capability gracefully
- Provides user feedback on successful agent reload or warnings if reload fails
- Improves user experience by making authentication changes take effect immediately
- Add description field to AgentInfo model to include agent details
- Import get_agent_descriptions to fetch agent descriptions alongside names
- Accumulate output into single string and emit once for better performance
- Replace multiple emit_system_message calls with single emit_info call
- Use Rich Text.from_markup() to prevent markup escaping in output
- Improve display format to show agent names, display names, and descriptions
- Replace raw markup strings with Text.from_markup() across all command handlers
- Ensure consistent Rich Text rendering for user-facing messages
- Update emit_info() calls to properly handle formatted content
- Maintain backward compatibility while improving message display consistency
- Apply changes to agent core commands, config commands, and MCP server management
- Standardize error, warning, and success message formatting throughout the codebase
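The `Text.from_markup()` change above matters because passing a raw markup string through some emit paths can escape the tags instead of rendering them. A small sketch of the pattern (the helper name is hypothetical; `rich` must be installed):

```python
from rich.text import Text

def format_status(name: str, ok: bool) -> Text:
    # Text.from_markup parses [bold]/[green] tags into styled spans, so
    # downstream emit_info-style handlers receive a ready-made renderable
    # rather than a raw string that might get markup-escaped
    colour = "green" if ok else "red"
    state = "ok" if ok else "failed"
    return Text.from_markup(f"[bold]{name}[/bold]: [{colour}]{state}[/{colour}]")
```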
…berger#138)

This enables GUI clients to identify which agent emitted each message,
solving the issue of interleaved messages when multiple sub-agents run
in parallel.

Changes:
- Add optional session_id field to BaseMessage (backward compatible)
- Add set_session_context/get_session_context to MessageBus
- Auto-tag messages with current session_id in emit()
- Set/restore session context in invoke_agent tool

When an agent invokes a sub-agent, the session_id is set as the current
context. All messages emitted during that sub-agent's execution are
automatically tagged. The previous context is restored when the sub-agent
completes.
…hell commands

- Introduce ShellLineMessage to preserve ANSI escape codes in terminal output
- Add emit_shell_line() function with stream differentiation (stdout/stderr)
- Enhance RichConsoleRenderer to properly parse and display ANSI-formatted text
- Implement background process execution mode with immediate response
- Add process tracking with log files and PIDs for background commands
- Improve process termination with proper pipe cleanup and reader thread signaling
- Extend ShellCommandOutput model with background, log_file, and pid fields
- Add comprehensive test coverage for new messaging and background functionality
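The background-execution mode above (immediate response, with a log file and PID for later inspection) can be sketched like this; the function name and return shape are illustrative, loosely mirroring the `background`/`log_file`/`pid` fields added to `ShellCommandOutput`:

```python
import subprocess
from pathlib import Path

def run_in_background(command: list[str], log_file: Path) -> dict:
    # Redirect stdout and stderr into the log file and return immediately,
    # so the caller gets a response without waiting for the process
    log = open(log_file, "ab")
    proc = subprocess.Popen(command, stdout=log, stderr=subprocess.STDOUT)
    return {"background": True, "pid": proc.pid, "log_file": str(log_file)}
```

The PID lets a later command check on or terminate the process, and tailing the log file recovers its output after the fact.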
- Replace manual environment variable manipulation with mocking of get_api_key function
- Simplify test setup by removing complex environment restoration logic
- Improve test reliability by using consistent mocking approach across test files
- Reduce code duplication and maintainability concerns in API key testing scenarios
- Fix slow command termination by implementing non-blocking reads with select() on POSIX systems
- Prevent indefinite blocking when stopping processes on Windows
- Add platform-specific handling to work around Windows limitations with select() on pipes
- Reduce CPU usage by adding 100ms timeout to select() calls
- Improve stop event responsiveness by checking it before each read operation
- Handle pipe errors more gracefully to prevent crashes during process termination
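The select-with-timeout reader loop described above looks roughly like this. A sketch only: on Windows, `select()` works on sockets but not pipes, so this falls back to a blocking read there, matching the platform-specific handling the commit mentions:

```python
import select
import sys
import threading

def read_stream(pipe, stop_event: threading.Event, sink) -> None:
    # Check the stop event before every read; on POSIX, wait at most
    # 100 ms in select() so stopping is responsive without spinning the CPU
    while not stop_event.is_set():
        if sys.platform == "win32":
            # select() does not support pipes on Windows; rely on pipe
            # closure to unblock this read instead
            chunk = pipe.readline()
        else:
            ready, _, _ = select.select([pipe], [], [], 0.1)
            if not ready:
                continue  # timeout: loop back and re-check stop_event
            chunk = pipe.readline()
        if not chunk:
            break  # EOF: the child closed its end of the pipe
        sink(chunk)
```

The 100 ms timeout is the trade-off named in the commit: short enough that a stop request lands within a tenth of a second, long enough that the idle loop costs almost nothing.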
- Implement ChatGPTCodexAsyncClient to handle Codex API requirements including mandatory store=false and stream=true injection
- Add token refresh functionality with get_valid_access_token() and refresh_access_token() utilities
- Update model factory to support new "chatgpt_oauth" model type with proper authentication headers
- Add ChatGPT Codex model handling to model_utils with special prompt preparation for codex models
- Add comprehensive Codex system prompt file with agent behavior guidelines
- Update OAuth flow to register default Codex models instead of fetching from API
- Extend reasoning effort configuration to support minimal, low, medium, high, xhigh levels
- Re-enable ChatGPT OAuth plugin custom commands and callbacks
- Add extensive test coverage for OAuth token refresh and model fetching with fallback handling
- Extend reasoning command usage to include minimal and xhigh levels
- Refactor ChatGPT OAuth model fetching tests to use new Codex API endpoint
- Update authentication from API key to access token with account ID
- Change response structure handling from OpenAI to ChatGPT format
- Modify error handling to return default models instead of None
- Remove model filtering logic as it's no longer needed
- Update model configuration type from openai to chatgpt_oauth
- Simplify OAuth flow tests by removing deprecated model fetching tests
- Update integration tests to work with new default Codex models approach
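The mandatory `store=false`/`stream=true` injection mentioned above reduces to rewriting the outgoing request body regardless of what the caller supplied. A sketch of just that payload step (the helper name is hypothetical; the real `ChatGPTCodexAsyncClient` does this inside an HTTP client wrapper):

```python
import json

def prepare_codex_payload(raw) -> dict:
    # Accept a dict, a JSON string, or raw bytes, then force the two
    # fields the Codex backend mandates on every request
    body = raw if isinstance(raw, dict) else json.loads(raw or "{}")
    body["store"] = False   # stored responses are rejected
    body["stream"] = True   # only streaming responses are served
    return body
```

Doing the injection in one client-level choke point means no call site can forget it, which is presumably why it lives in the client rather than in each model wrapper.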
@arnonuem arnonuem force-pushed the feat__add_skills branch 3 times, most recently from 961c861 to 51ad0db Compare December 23, 2025 18:45