
feat(rss-twitter): migrate Playwright agent to v0.6 template layout#6839

Open
VasuBansal7576 wants to merge 4 commits into aden-hive:main from VasuBansal7576:codex/rss-twitter-v06-clean

Conversation


@VasuBansal7576 VasuBansal7576 commented Mar 28, 2026

Description

Migrates the RSS-to-Twitter Playwright workflow into a Hive v0.6-compatible template layout while preserving the original behavior: fetch RSS news, summarize, ask for per-thread y/n/q approval, and post approved threads via Playwright.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Documentation update
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactoring (no functional changes)

Related Issues

Fixes #4406

Changes Made

  • Added the Hive v0.6-compatible RSS-to-Twitter template package at examples/templates/rss_twitter_agent/.
  • Preserved the original interactive Playwright workflow through the current runner/CLI path: fetch RSS, summarize, review each thread with y/n/q, and post approved threads.
  • Added Hive credential namespace support for the Twitter session path via {name}/{alias} references such as twitter/default.
  • Updated the README for the current uv run python -m examples.templates.rss_twitter_agent ... workflow.
  • Added focused tests that cover approved-post, quit-on-review, and approval-before-quit behavior.
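
The {name}/{alias} credential-ref convention above can be sketched as a small parser. This is a minimal illustration only; the helper name and the "default" alias fallback are assumptions, not the actual Hive API:

```python
def split_credential_ref(ref: str) -> tuple[str, str]:
    """Split a 'provider/alias' ref, defaulting the alias when omitted."""
    if "/" in ref:
        provider, alias = ref.split("/", 1)  # split once: the alias may contain '/'
    else:
        provider, alias = ref, "default"  # assumed fallback alias
    return provider, alias
```

For example, "twitter/default" resolves to provider "twitter" with alias "default".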

Testing

Describe the tests you ran to verify your changes:

  • Unit tests pass
  • Lint passes
  • Manual testing performed

Additional verification run for this PR:

uv run pytest examples/templates/rss_twitter_agent -q
uv run python -m examples.templates.rss_twitter_agent validate
uv run python -m examples.templates.rss_twitter_agent info --json
uv run ruff check examples/templates/rss_twitter_agent
uv run ruff format --check examples/templates/rss_twitter_agent

Checklist

  • My code follows the project's style guidelines
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Demo Video

Demo recording: https://www.youtube.com/watch?v=1cLnFnPfmKA

Summary by CodeRabbit

  • New Features

    • RSS-to-Twitter template: fetch feeds, summarize with local LLM, prompt per-thread approval, and post approved threads to Twitter/X via Playwright; CLI commands for run/validate/info and agent metadata/version.
  • Improvements

    • Safer RSS fetching (redirect and IP checks), Ollama/model validation, credential-ref format support, and session-dir override via HIVE_TWITTER_SESSION_DIR.
  • Documentation

    • README with usage examples, commands, and behavior notes.
  • Tests

    • Tests covering interactive approval flow, posting, timeouts, and utility behaviors.

@coderabbitai

coderabbitai bot commented Mar 28, 2026

📝 Walkthrough

Adds a new RSS-to-Twitter agent template: runtime config and metadata, Ollama integration, SSRF-resistant RSS fetching, LLM-based summarization and thread generation, interactive per-thread approval, Playwright-based posting with persistent sessions, CLI entrypoint, node specs, credential/session resolution, and tests.

Changes

Cohort / File(s) / Summary

  • Documentation (examples/templates/rss_twitter_agent/README.md)
    New README describing the end-to-end flow, CLI examples (run, validate, info), Ollama setup, the credential ref format (provider/alias), and session behavior notes.
  • Package Init & CLI (examples/templates/rss_twitter_agent/__init__.py, examples/templates/rss_twitter_agent/__main__.py)
    Adds package exports (RSSTwitterAgent, default_agent) and a click CLI (run, validate, info) wired to agent operations and JSON/human output.
  • Agent Core & Graph (examples/templates/rss_twitter_agent/agent.py, examples/templates/rss_twitter_agent/nodes/__init__.py)
    Defines RSSTwitterAgent, the workflow graph (fetch→process→generate→approve→post), lifecycle and execution (trigger_and_wait, run, start, stop), validation, metadata, and NodeSpec exports.
  • Configuration & Metadata (examples/templates/rss_twitter_agent/config.py)
    Env-driven Ollama/LLM config, model discovery via /api/tags, RuntimeConfig/AgentMetadata, get_ollama_* helpers, and a validate_ollama() function.
  • Credential / Session Resolution (examples/templates/rss_twitter_agent/credentials.py)
    Resolves the Twitter session dir from HIVE_TWITTER_SESSION_DIR, the Hive CredentialStore (supports provider/alias), or the fallback ~/.hive/twitter_session; includes recursive extraction helpers.
  • Fetch, LLM, Generation (examples/templates/rss_twitter_agent/fetch.py)
    SSRF-resistant RSS fetcher with redirect checks, XML parsing, Ollama /api/generate integration, summarization and thread generation with robust JSON-extraction fallbacks, an interactive approval helper, and an async post wrapper.
  • Orchestration Runner (examples/templates/rss_twitter_agent/run.py)
    Async run_workflow / run_interactive orchestration, thread preview rendering, the y/n/q approval loop, posting of approved threads, and structured result payloads.
  • Playwright Twitter Integration (examples/templates/rss_twitter_agent/twitter.py)
    Playwright-based posting with a persistent Chromium context, a first-run interactive login flow and .logged_in marker, per-thread/per-tweet posting and permalink resolution, error handling, post_threads_impl, and tool registration.
  • Tests (examples/templates/rss_twitter_agent/test_flow.py)
    Pytest coverage for interactive flows, quit behavior, approval retention, post-failure propagation, explicit zero max_articles passthrough, timeout handling, session extraction, SSRF rejection, and Ollama fallback behavior.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant CLI as "CLI / __main__.py"
    participant Runner as "run.py"
    participant RSS as "fetch.py (RSS)"
    participant Ollama as "Ollama API / config.py"
    participant Approver as "approve_threads"
    participant TwitterTool as "twitter.post_threads_impl"
    participant Browser as "Playwright (Browser)"

    User->>CLI: run(feed_url, max_articles)
    CLI->>Runner: run_interactive(...)
    Runner->>RSS: fetch_rss(feed_url, max_articles)
    RSS-->>Runner: articles_json
    Runner->>Ollama: summarize_articles(articles_json)
    Ollama-->>Runner: processed_json
    Runner->>Ollama: generate_tweets(processed_json)
    Ollama-->>Runner: threads_json
    Runner->>Approver: approve_threads(threads_json) (interactive)
    Approver->>User: render preview + prompt (y/n/q)
    User-->>Approver: choice
    Approver-->>Runner: approved_json
    Runner->>TwitterTool: post_threads_impl(approved_json, credential_ref)
    TwitterTool->>Browser: open persistent context / login (if needed)
    Browser-->>TwitterTool: session ready / per-tweet results
    TwitterTool-->>Runner: results_json
    Runner-->>CLI: final JSON output
    CLI->>User: prints summary

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇✨ A Rabbit's Note on Threads
I nibble feeds at break of day,
Ollama molds the words I say,
A human "y"—Playwright taps to post,
Threads unfurl from coast to coast,
I twitch my whiskers—many tweets, hooray!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 35.85%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and specifically describes the main change: migrating an RSS-to-Twitter agent to the v0.6 template layout, which aligns with the primary objective.
  • Linked Issues Check: ✅ Passed. The PR fulfills core objectives from #4406: RSS feed polling [fetch.py], article fetching [fetch.py], thread generation [fetch.py, nodes/], human-in-the-loop approval [fetch.py, run.py], and auto-post capability [twitter.py], implemented within the v0.6 template layout.
  • Out of Scope Changes Check: ✅ Passed. All changes are within scope: RSS-to-Twitter agent template migration to the v0.6 layout, interactive CLI, credential support, Ollama integration, and comprehensive tests; no unrelated changes detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (1)
examples/templates/rss_twitter_agent/__main__.py (1)

16-16: Avoid version drift by sourcing CLI version from metadata.

Keeping 1.1.0 hardcoded here duplicates the version already defined in config metadata.

Proposed fix
-from .agent import default_agent
+from .agent import default_agent
+from .config import metadata
@@
-@click.version_option(version="1.1.0")
+@click.version_option(version=metadata.version)
 def cli() -> None:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/__main__.py` at line 16, Replace the
hardcoded click.version_option(version="1.1.0") with a dynamic retrieval of the
package version from metadata (e.g.,
importlib.metadata.version("rss_twitter_agent") or
pkg_resources.get_distribution("rss_twitter_agent").version) and pass that value
into click.version_option so the CLI always reflects the package metadata;
update the top of __main__.py to import the chosen metadata function and use its
result when calling click.version_option.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/templates/rss_twitter_agent/agent.py`:
- Around line 136-147: The return always sets ExecutionResult(success=True, ...)
which hides failures from run_workflow; change it to use the workflow's actual
success flag (e.g., workflow.get("success") or workflow["success"]) and
propagate any error message/state into the output or an error field so consumers
see failures; update the ExecutionResult construction in agent.py to pass the
workflow success value (and optionally workflow["error"] or workflow["status"])
instead of True, and ensure steps_executed is derived from the real workflow
state if applicable.
- Around line 125-127: The current expression int(input_data.get("max_articles")
or 3) will treat an explicit 0 as missing and replace it with 3; change the
logic for max_articles to explicitly detect absence (e.g., use
input_data.get("max_articles") with a default or a None check) and only
substitute 3 when the value is actually missing, then convert the present value
to int—locate the max_articles assignment in agent.py and replace the
coercing-or pattern with an explicit presence check so that values like 0 are
preserved.
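
The presence check described in the max_articles comment above can be sketched like this (the input_data shape is assumed from the comment, not taken from the template's actual code):

```python
def resolve_max_articles(input_data: dict, default: int = 3) -> int:
    """Substitute the default only when the key is absent, so an explicit 0 survives."""
    raw = input_data.get("max_articles")
    if raw is None:  # absent: fall back to the default
        return default
    return int(raw)  # present (even 0): preserve the caller's value
```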

In `@examples/templates/rss_twitter_agent/config.py`:
- Around line 14-21: Update _check_ollama_running to validate the configured
model as well as liveness: after confirming Ollama is reachable via OLLAMA_URL,
call the Ollama models endpoint (e.g., GET {OLLAMA_URL}/api/models) and ensure
DEFAULT_MODEL is present in the returned list (or, if models list is
unavailable, perform a minimal health/generate call supplying
model=DEFAULT_MODEL and verify a successful response). Apply the same
model-presence check where similar validation occurs (the other validation block
around lines 53-59) so the validate command fails early if DEFAULT_MODEL is not
available.

In `@examples/templates/rss_twitter_agent/credentials.py`:
- Around line 45-57: The code calls CredentialStore.get(ref) with a namespaced
ref of the form "{name}/{alias}", but get() only accepts legacy IDs; split ref
into provider_name and alias (e.g., provider_name, alias = ref.split("/", 1))
and call CredentialStore.get_credential_by_alias(provider_name, alias) instead,
then pass the returned value into _extract_session_dir and return the expanded
path as before; replace the broad except Exception: pass with targeted error
handling or re-raise/log the lookup error so failures aren't silently swallowed.

In `@examples/templates/rss_twitter_agent/fetch.py`:
- Around line 131-148: The prompt string in the variable prompt embeds title
directly into the JSON exemplar which will break if title contains quotes or
newlines; update the JSON exemplar inside prompt to use json.dumps(title[:60])
(e.g., change the inner fragment {"title": "{title[:60]}", ...} to {"title":
{json.dumps(title[:60])}, ...}) so the title is properly escaped, keeping the
surrounding double-brace pattern used to emit literal braces in the f-string and
leaving the rest of the prompt (tweets placeholders, rules) unchanged.
- Around line 48-61: _call_ollama currently hardcodes the Ollama URL and
model env var, ignoring the template's RuntimeConfig and OLLAMA_URL/LLM_MODEL
settings; update _call_ollama to import and use the RuntimeConfig (or the
OLLAMA_URL and LLM_MODEL values) from
examples/templates/rss_twitter_agent/config.py instead of
"http://localhost:11434" and the local OLLAMA_MODEL env var, passing
RuntimeConfig.OLLAMA_URL as the request base URL and RuntimeConfig.LLM_MODEL (or
configured model) to the JSON payload while preserving existing options/timeout
so template-level configuration controls the Ollama calls.
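
The json.dumps escaping fix for the prompt exemplar (the fetch.py comment above) can be illustrated as follows; the prompt text here is a hypothetical stand-in for the real one:

```python
import json

def build_exemplar(title: str) -> str:
    # json.dumps yields a quoted, escaped JSON string, so quotes and
    # newlines in the title cannot break the exemplar.
    safe_title = json.dumps(title[:60])
    # Double braces emit literal braces inside the f-string.
    return f'{{"title": {safe_title}, "tweets": ["...", "..."]}}'
```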

In `@examples/templates/rss_twitter_agent/README.md`:
- Around line 14-21: The README contains a machine-specific cd path; replace the
hardcoded "/Users/vasu/Desktop/hive" line with a portable instruction such as
"cd <repo-root>" or remove the cd line entirely so the example uses the generic
command "uv run python -m examples.templates.rss_twitter_agent run ..." with the
same flags (--feed-url, --max-articles) to make the run instructions portable
and non-machine-specific.

In `@examples/templates/rss_twitter_agent/test_flow.py`:
- Around line 118-120: Remove the executable __main__ block that runs
run_module.run_interactive and prints results from the test module to prevent
accidental external actions; locate the if __name__ == "__main__": block that
calls run_module.run_interactive(max_articles=1) and either delete the block
entirely or move the demonstration call into a separate example script (or wrap
it in a deliberate CLI/example file) so the test module contains only importable
test code and no side-effectful execution.

In `@examples/templates/rss_twitter_agent/twitter.py`:
- Line 90: The code currently sets first_tweet_url once and always uses it as
the reply target (first_tweet_url) for i>0, causing later tweets not to reply to
the most recent tweet; update the posting loop in twitter.py so that after every
successful post you capture and assign the newly created tweet's permalink
(e.g., update first_tweet_url or a new variable like last_tweet_url) and use
that as the reply target for the next iteration (check the block handling i and
the fallback branch around the second half of the file); also add a resolution
check after posting that aborts the thread if you cannot resolve the newly
posted tweet URL, instead of falling back to standalone posts.
- Around line 217-221: The wrapper currently returns success=True even if some
per-thread calls to _post_thread_with_playwright() failed or returned {"posted":
0, ...}; change the success calculation to reflect actual per-thread outcomes by
inspecting results (e.g., compute success = all(r.get("posted", 0) > 0 and
"error" not in r for r in results)) and set "success" accordingly, adjust
total_posted to sum(r.get("posted",0) for r in results), and update the
"message" to report number of failed threads and include error snippets so
callers (and run.py's threads_posted logic) can reliably detect failures.
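
The per-thread success aggregation described above can be sketched like this (the result-dict keys follow the review comment, not necessarily the template's exact schema):

```python
def summarize_results(results: list[dict]) -> dict:
    """Derive overall success from per-thread outcomes instead of hardcoding True."""
    failed = [r for r in results if r.get("posted", 0) == 0 or "error" in r]
    total_posted = sum(r.get("posted", 0) for r in results)
    return {
        "success": not failed,
        "total_posted": total_posted,
        "message": f"{total_posted} tweet(s) posted; {len(failed)} thread(s) failed",
    }
```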

---

Nitpick comments:
In `@examples/templates/rss_twitter_agent/__main__.py`:
- Line 16: Replace the hardcoded click.version_option(version="1.1.0") with a
dynamic retrieval of the package version from metadata (e.g.,
importlib.metadata.version("rss_twitter_agent") or
pkg_resources.get_distribution("rss_twitter_agent").version) and pass that value
into click.version_option so the CLI always reflects the package metadata;
update the top of __main__.py to import the chosen metadata function and use its
result when calling click.version_option.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 033cd91d-dd80-4328-8f4f-2d91eb05831b

📥 Commits

Reviewing files that changed from the base of the PR and between c3c3075 and 1e3eb7d.

📒 Files selected for processing (11)
  • examples/templates/rss_twitter_agent/README.md
  • examples/templates/rss_twitter_agent/__init__.py
  • examples/templates/rss_twitter_agent/__main__.py
  • examples/templates/rss_twitter_agent/agent.py
  • examples/templates/rss_twitter_agent/config.py
  • examples/templates/rss_twitter_agent/credentials.py
  • examples/templates/rss_twitter_agent/fetch.py
  • examples/templates/rss_twitter_agent/nodes/__init__.py
  • examples/templates/rss_twitter_agent/run.py
  • examples/templates/rss_twitter_agent/test_flow.py
  • examples/templates/rss_twitter_agent/twitter.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (2)
examples/templates/rss_twitter_agent/credentials.py (1)

63-77: ⚠️ Potential issue | 🟠 Major

Don't silently fall back to the shared session when an explicit credential ref fails.

With credential_ref/TWITTER_CREDENTIAL_REF set, Lines 64-77 swallow resolution failures and then reuse ~/.hive/twitter_session. If that default profile is already logged in, the agent can post through the wrong X account instead of surfacing the bad ref. Only use the shared fallback when no ref was requested.

Suggested fix
     if ref:
         try:
             from framework.credentials.store import CredentialStore

             store = CredentialStore.with_encrypted_storage()
             if "/" in ref:
                 provider, alias = ref.split("/", 1)
                 value = store.get_credential_by_alias(provider, alias)
             else:
                 value = store.get(ref)
             resolved = _extract_session_dir(value)
             if resolved:
                 return str(Path(resolved).expanduser())
-        except Exception:
-            pass
+        except Exception as exc:
+            raise RuntimeError(
+                f"Failed to resolve Twitter credential '{ref}'"
+            ) from exc
+        raise RuntimeError(
+            f"Twitter credential '{ref}' did not provide a usable session directory"
+        )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/credentials.py` around lines 63 - 77,
The code currently swallows all exceptions when resolving a provided credential
ref and then silently falls back to the shared session; change the behavior so
that when ref is set (the branch using CredentialStore.with_encrypted_storage(),
get_credential_by_alias/get, and _extract_session_dir) any failure to resolve
the referenced credential does NOT fall back to the shared session but instead
surfaces the error: catch specific exceptions if needed (or re-raise the caught
Exception) and propagate an informative error (or return a distinct failure) so
callers know the explicit ref failed; only allow the shared
~/.hive/twitter_session fallback when ref is not provided.
examples/templates/rss_twitter_agent/config.py (1)

43-52: ⚠️ Potential issue | 🟠 Major

Require exact tag matches when the configured Ollama model is tagged.

Line 45 drops the tag before matching, so a config like llama3.2:1b is treated as satisfied by any installed llama3.2:*. validate_ollama() then passes even though later generation still asks for the missing exact tag. Keep the short-name fallback only for untagged configs.

Suggested fix
 def _model_available(models: list[dict[str, Any]], configured_model: str) -> bool:
     """Match configured model names regardless of optional Ollama tag/provider prefixes."""
     configured_short = configured_model.split(":", 1)[0]
+    require_exact = ":" in configured_model
     for model in models:
         name = model.get("name")
         if not isinstance(name, str) or not name:
             continue
-        if name == configured_model or name.split(":", 1)[0] == configured_short:
+        if name == configured_model:
+            return True
+        if not require_exact and name.split(":", 1)[0] == configured_short:
             return True
     return False
In Ollama's `/api/tags` response, is `models[].name` the exact installed model identifier including tags like `llama3.2:1b`, and should `/api/generate` be called with that exact tagged name when one is configured?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/config.py` around lines 43 - 52, The
_model_available function currently strips tags from the configured_model and
therefore treats a tagged config like "llama3.2:1b" as satisfied by any
installed "llama3.2:*"; change the logic in _model_available so that if
configured_model contains a ":" (i.e., is tagged) you only accept an exact match
against model.get("name"), and only use the short-name fallback (comparing
name.split(":",1)[0]) when the configured_model is untagged; update the checks
in _model_available to first detect a tagged configured_model and branch
accordingly, keeping all other validations for name being a non-empty string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/templates/rss_twitter_agent/agent.py`:
- Around line 122-135: The trigger_and_wait method accepts a timeout but
currently ignores it; update trigger_and_wait to honor timeout by wrapping the
await run_workflow(...) call with asyncio.wait_for using the provided timeout
(or, alternatively, immediately raise a ValueError if you prefer failing fast
when timeout is non-None). Locate the trigger_and_wait function and the
run_workflow invocation, and either wrap that await in asyncio.wait_for(...,
timeout=timeout) and let the TimeoutError propagate or catch it and return/raise
a suitable ExecutionResult/error; ensure you import asyncio if not already
present and handle the None case by leaving behavior unchanged.

In `@examples/templates/rss_twitter_agent/credentials.py`:
- Around line 12-23: The _extract_session_dir function currently returns any
non-empty string or dict value (keys like
"session_dir","user_data_dir","twitter_session_dir","playwright_user_data_dir","path","value")
verbatim, which breaks when the credential "value" contains a nested JSON/dict
with the real path; change the extraction logic in _extract_session_dir to,
after retrieving a candidate value, if that candidate is a dict or a JSON
string, feed it back into _extract_session_dir (i.e., recursively call
_extract_session_dir on the candidate) so nested dicts/JSON are resolved to the
innermost string path; apply the same recursive handling to the other occurrence
of this logic (the second block handling keys around lines 35-52) to keep
behavior consistent.
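
The recursive resolution described above can be sketched like this (the key list mirrors the review comment; the template's real helper may differ):

```python
import json

_KEYS = ("session_dir", "user_data_dir", "twitter_session_dir", "path", "value")

def extract_session_dir(value):
    """Resolve nested dicts and JSON strings down to the innermost path string."""
    if isinstance(value, str):
        try:
            nested = json.loads(value)
        except ValueError:
            return value or None  # plain string: treat it as the path itself
        if isinstance(nested, dict):
            return extract_session_dir(nested)  # JSON string: recurse into it
        return value or None
    if isinstance(value, dict):
        for key in _KEYS:
            if key in value:
                found = extract_session_dir(value[key])
                if found:
                    return found
    return None
```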

In `@examples/templates/rss_twitter_agent/fetch.py`:
- Around line 99-105: The code currently returns any non-empty JSON array;
instead validate the parsed model output shape before returning: in the block
that sets parsed = json.loads(...) (and the similar block around the tweets
handling), ensure parsed is a list of dicts when the downstream expects
summaries (e.g., elements are instances of dict and contain expected keys used
by generate_tweets()/summary.get(...)), and ensure parsed is a list of strings
when the downstream expects tweets (all(isinstance(x, str) for x in parsed) and
len(parsed) >= 3); only return json.dumps(parsed) when these shape checks pass,
otherwise log/raise a clear error or fall back to a safe default.
- Around line 18-24: Validate and reject untrusted or non-public feed URLs
before making the request: in the code that consumes feed_url (the block using
httpx.Client() and client.get), parse feed_url (e.g., via urlparse) and ensure
the scheme is http or https, then resolve the hostname (feed_url host) to IP(s)
(e.g., getaddrinfo) and check each address is not loopback, localhost,
link-local, or private RFC1918/RFC4193 ranges; reject the request (return
json.dumps([])) if any check fails. Also disable automatic redirects (set
follow_redirects=False on client.get) or validate each redirect target with the
same public-host checks before following redirects to prevent SSRF via redirect
chains. Ensure these checks are applied before calling client.get and when
handling redirect responses.
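
The public-host checks described above can be sketched with the standard library; this is a simplified illustration, and the template's actual fetcher may implement it differently:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url: str) -> bool:
    """Reject non-HTTP schemes and hosts resolving to private or loopback IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host: refuse to fetch
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])  # drop IPv6 scope id
        if not addr.is_global:
            return False  # loopback, private, link-local, reserved, etc.
    return True
```

With follow_redirects disabled on the HTTP client, each redirect Location would be re-checked with the same function before being fetched.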

In `@examples/templates/rss_twitter_agent/twitter.py`:
- Around line 221-224: In the exception handlers for PlaywrightTimeout and
generic Exception (the block catching exceptions for each tweet, referencing
variables i and current_tweet_url), stop continuing the loop and abort the whole
thread by returning a thread-level failure (e.g., return False) or re-raising
the exception instead of only printing and continuing; update the handlers for
PlaywrightTimeout and Exception to either raise the caught exception or return
an explicit failure value so subsequent tweets are not posted as replies to the
previous successful tweet.

---

Duplicate comments:
In `@examples/templates/rss_twitter_agent/config.py`:
- Around line 43-52: The _model_available function currently strips tags from
the configured_model and therefore treats a tagged config like "llama3.2:1b" as
satisfied by any installed "llama3.2:*"; change the logic in _model_available so
that if configured_model contains a ":" (i.e., is tagged) you only accept an
exact match against model.get("name"), and only use the short-name fallback
(comparing name.split(":",1)[0]) when the configured_model is untagged; update
the checks in _model_available to first detect a tagged configured_model and
branch accordingly, keeping all other validations for name being a non-empty
string.

In `@examples/templates/rss_twitter_agent/credentials.py`:
- Around line 63-77: The code currently swallows all exceptions when resolving a
provided credential ref and then silently falls back to the shared session;
change the behavior so that when ref is set (the branch using
CredentialStore.with_encrypted_storage(), get_credential_by_alias/get, and
_extract_session_dir) any failure to resolve the referenced credential does NOT
fall back to the shared session but instead surfaces the error: catch specific
exceptions if needed (or re-raise the caught Exception) and propagate an
informative error (or return a distinct failure) so callers know the explicit
ref failed; only allow the shared ~/.hive/twitter_session fallback when ref is
not provided.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4f4387e0-e777-487f-97a1-b8c607c990e0

📥 Commits

Reviewing files that changed from the base of the PR and between 1e3eb7d and 15680c3.

📒 Files selected for processing (8)
  • examples/templates/rss_twitter_agent/README.md
  • examples/templates/rss_twitter_agent/agent.py
  • examples/templates/rss_twitter_agent/config.py
  • examples/templates/rss_twitter_agent/credentials.py
  • examples/templates/rss_twitter_agent/fetch.py
  • examples/templates/rss_twitter_agent/run.py
  • examples/templates/rss_twitter_agent/test_flow.py
  • examples/templates/rss_twitter_agent/twitter.py
✅ Files skipped from review due to trivial changes (1)
  • examples/templates/rss_twitter_agent/README.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • examples/templates/rss_twitter_agent/run.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
examples/templates/rss_twitter_agent/twitter.py (1)

92-98: Use asyncio.get_running_loop() instead of deprecated get_event_loop().

asyncio.get_event_loop() is deprecated in Python 3.10+ when called from a coroutine without a running loop. Since this code runs inside an async context, asyncio.get_running_loop() is the correct call.

♻️ Proposed fix
-            loop = asyncio.get_event_loop()
-            await loop.run_in_executor(
+            await asyncio.get_running_loop().run_in_executor(
                 None,
                 lambda: input(
                     "   -> Log in to X in the browser, then press Enter here: "
                 ),
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/twitter.py` around lines 92 - 98,
Replace the deprecated asyncio.get_event_loop() call with
asyncio.get_running_loop() where the variable loop is created and used in
loop.run_in_executor to wait for the browser login prompt; specifically update
the code that sets loop and calls loop.run_in_executor (the lambda: input(...)
block) to call asyncio.get_running_loop() so it uses the currently running event
loop from within the async context.
examples/templates/rss_twitter_agent/fetch.py (1)

120-127: Hardcoded "Hacker News" source may be misleading for other feeds.

Line 126 sets "source": "Hacker News" for all RSS feeds, which is incorrect when fetching from other sources.

♻️ Proposed fix to derive source from feed URL
+from urllib.parse import urlparse
+
+def _extract_source_from_url(url: str) -> str:
+    parsed = urlparse(url)
+    return parsed.hostname or "RSS Feed"
+
 def fetch_rss(
     feed_url: str = "https://news.ycombinator.com/rss", max_articles: int = 3
 ) -> str:
     ...
             article = {
                 ...
-                "source": "Hacker News",
+                "source": _extract_source_from_url(feed_url),
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/fetch.py` around lines 120 - 127, The
article dict currently hardcodes "Hacker News" as the source; change it to
compute a source dynamically (e.g., prefer feed.feed.get("title") if using
feedparser, else derive from the feed_url hostname via urllib.parse.urlparse)
and assign that to the "source" key when building article in fetch.py (the block
that defines article with title_elem, link_elem, desc_elem). Ensure you fall
back to an empty string if neither feed title nor a parsed hostname is available
so other feeds are labeled correctly.
examples/templates/rss_twitter_agent/credentials.py (1)

77-78: Broad exception handler silently masks credential lookup failures.

The bare except Exception: pass makes debugging credential resolution issues difficult. If CredentialStore.with_encrypted_storage() fails, or if the credential lookup encounters an unexpected error, users see no indication of why the fallback path was taken.

♻️ Proposed fix to log or narrow the exception handling
         except Exception:
-            pass
+            import logging
+            logging.getLogger(__name__).debug(
+                "Credential lookup for ref=%r failed; using default session dir", ref
+            )

Alternatively, narrow the exception to expected failures only (e.g., ImportError, FileNotFoundError).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/credentials.py` around lines 77-78:
The bare "except Exception: pass" in the credential lookup silently swallows
failures; update the except to either catch specific expected exceptions (e.g.,
ImportError, FileNotFoundError) or capture the exception as "except Exception as
e" and log it (use the module's logger or print) so failures inside
CredentialStore.with_encrypted_storage() and the subsequent credential lookup
are visible; if you choose to narrow exceptions, let unexpected exceptions
propagate after logging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/templates/rss_twitter_agent/agent.py`:
- Around line 128-129: Guard parsing of raw_max_articles to avoid ValueError:
when reading input_data.get("max_articles") (raw_max_articles) wrap the
conversion to int used for max_articles in a safe check/try-except (or use
str.isdigit() plus fallback) so non-numeric strings (e.g., "abc" or whitespace)
do not raise and instead set max_articles to the default 3; include an optional
warning/log via the existing logger if available and keep the existing behavior
for None/"" values.

---

Nitpick comments:
In `@examples/templates/rss_twitter_agent/credentials.py`:
- Around line 77-78: The bare "except Exception: pass" in the credential lookup
silently swallows failures; update the except to either catch specific expected
exceptions (e.g., ImportError, FileNotFoundError) or capture the exception as
"except Exception as e" and log it (use the module's logger or print) so
failures inside CredentialStore.with_encrypted_storage() and the subsequent
credential lookup are visible; if you choose to narrow exceptions, let
unexpected exceptions propagate after logging.

In `@examples/templates/rss_twitter_agent/fetch.py`:
- Around line 120-127: The article dict currently hardcodes "Hacker News" as the
source; change it to compute a source dynamically (e.g., prefer
feed.feed.get("title") if using feedparser, else derive from the feed_url
hostname via urllib.parse.urlparse) and assign that to the "source" key when
building article in fetch.py (the block that defines article with title_elem,
link_elem, desc_elem). Ensure you fall back to an empty string if neither feed
title nor a parsed hostname is available so other feeds are labeled correctly.

In `@examples/templates/rss_twitter_agent/twitter.py`:
- Around line 92-98: Replace the deprecated asyncio.get_event_loop() call with
asyncio.get_running_loop() where the variable loop is created and used in
loop.run_in_executor to wait for the browser login prompt; specifically update
the code that sets loop and calls loop.run_in_executor (the lambda: input(...)
block) to call asyncio.get_running_loop() so it uses the currently running event
loop from within the async context.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f3a3177c-37fa-438f-91cb-a0749b6162b7

📥 Commits

Reviewing files that changed from the base of the PR and between 15680c3 and 19ae3e5.

📒 Files selected for processing (5)
  • examples/templates/rss_twitter_agent/agent.py
  • examples/templates/rss_twitter_agent/credentials.py
  • examples/templates/rss_twitter_agent/fetch.py
  • examples/templates/rss_twitter_agent/test_flow.py
  • examples/templates/rss_twitter_agent/twitter.py

Comment on lines +128 to +129
        raw_max_articles = input_data.get("max_articles")
        max_articles = 3 if raw_max_articles in (None, "") else int(raw_max_articles)

⚠️ Potential issue | 🟡 Minor

Handle non-numeric max_articles values gracefully.

Line 129 uses int(raw_max_articles) which raises ValueError if the input is a non-numeric string (e.g., "abc"). While unlikely in normal use, invalid input would crash the workflow instead of failing gracefully.

🛡️ Proposed fix to handle conversion errors
         raw_max_articles = input_data.get("max_articles")
-        max_articles = 3 if raw_max_articles in (None, "") else int(raw_max_articles)
+        if raw_max_articles in (None, ""):
+            max_articles = 3
+        else:
+            try:
+                max_articles = int(raw_max_articles)
+            except (ValueError, TypeError):
+                max_articles = 3
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        raw_max_articles = input_data.get("max_articles")
-        max_articles = 3 if raw_max_articles in (None, "") else int(raw_max_articles)
+        raw_max_articles = input_data.get("max_articles")
+        if raw_max_articles in (None, ""):
+            max_articles = 3
+        else:
+            try:
+                max_articles = int(raw_max_articles)
+            except (ValueError, TypeError):
+                max_articles = 3
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/agent.py` around lines 128-129: guard
parsing of raw_max_articles to avoid ValueError: when reading
input_data.get("max_articles") (raw_max_articles) wrap the conversion to int
used for max_articles in a safe check/try-except (or use str.isdigit() plus
fallback) so non-numeric strings (e.g., "abc" or whitespace) do not raise and
instead set max_articles to the default 3; include an optional warning/log via
the existing logger if available and keep the existing behavior for None/""
values.

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (3)
examples/templates/rss_twitter_agent/agent.py (2)

124-126: Unused entry_point parameter.

The entry_point parameter is accepted but not used in the method body. The workflow always executes the same way regardless of which entry point is specified. Consider either removing the parameter or documenting that it's intentionally ignored for this simple agent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/agent.py` around lines 124-126: the
trigger_and_wait method declares an unused parameter entry_point which is
misleading; either remove entry_point from the method signature of
trigger_and_wait or make it meaningful by using it to select/dispatch the
workflow before executing the existing logic in trigger_and_wait (reference the
trigger_and_wait method in agent.py). Update any callers/tests to match the new
signature if you remove the parameter, or implement a simple conditional
dispatch on entry_point inside trigger_and_wait and validate unknown entry
points with an error.

160-161: Consider defaulting success to False for safer defensive coding.

workflow.get("success", True) defaults to True when the key is missing. While run_workflow always sets this key (per context snippet 2), defaulting to False would be safer if an unexpected code path returns without the key.

♻️ Suggested change
         return ExecutionResult(
-            success=bool(workflow.get("success", True)),
+            success=bool(workflow.get("success", False)),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/agent.py` around lines 160-161: the
return currently creates an ExecutionResult using workflow.get("success", True)
which defaults missing success to True; change the default to False to be
defensive by using workflow.get("success", False) when constructing
ExecutionResult(success=...), updating the call in the function that returns
ExecutionResult (the place referencing ExecutionResult and the workflow dict) so
missing keys yield False.
examples/templates/rss_twitter_agent/test_flow.py (1)

232-244: Consider slightly larger timeout values to reduce flakiness risk.

The 1ms timeout is very aggressive. While the 50ms sleep provides a 50x margin, coroutine scheduling overhead on slow CI systems could occasionally exceed 1ms before asyncio.wait_for is even reached. Slightly larger values (e.g., timeout=0.01, sleep=0.1) would be more robust while still ensuring the timeout triggers reliably.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/templates/rss_twitter_agent/test_flow.py` around lines 232-244:
The test test_trigger_and_wait_honors_timeout uses an overly aggressive timeout
(0.001s) with slow_workflow sleeping only 0.05s; increase the values to reduce
flakiness by making slow_workflow sleep longer (e.g., 0.1s) and set
default_agent.trigger_and_wait timeout to a slightly larger value (e.g., 0.01s
or 0.1s) so the timeout still triggers reliably; update the monkeypatch target
run_workflow/slow_workflow and the asyncio.run call to use the new timeout/sleep
values accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6a714396-1ed6-479c-8a8d-df77147d5bbc

📥 Commits

Reviewing files that changed from the base of the PR and between 19ae3e5 and 4931ff2.

📒 Files selected for processing (2)
  • examples/templates/rss_twitter_agent/agent.py
  • examples/templates/rss_twitter_agent/test_flow.py

Development

Successfully merging this pull request may close these issues.

[Agent Idea]: RSS-to-Twitter Thread Automation — Content Repurposing Agent
