ethulia/twitter-monitor

Twitter Monitor Agent

An automated agent that monitors Twitter for mentions of @replicate and identifies frustrated or angry customers.

Setup

After cloning this repo:

npm install

That's it! The agent will automatically set up Chrome with remote debugging when you run it.

Usage

Running the Agent

In Claude Code, run:

/scan-tweets

The agent will:

  1. Automatically start Chrome with remote debugging (if not already running)
  2. Check for previously collected tweets to determine the time window
  3. Scrape new tweets mentioning @replicate since the last scan
  4. Analyze tweets for angry/frustrated customers
  5. Generate a summary report with:
    • Number of angry tweets found
    • Details of each angry tweet (username, description, URL)
    • Results saved to YYYY-MM-DD-angry-tweets.json

First run: You'll need to log into X/Twitter in the Chrome window that opens. Your login will be saved for future runs.

What the Agent Detects

The agent identifies tweets showing:

  • Anger or frustration with the product/service
  • Complaints about performance (timeouts, queue times, slowness)
  • Negative comparisons to competitors (especially @fal)
  • Strong dissatisfaction with UX/website design
  • Users threatening to switch to competitors
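The actual sentiment analysis is performed by the Claude agent, not by pattern matching. Purely for illustration, a crude keyword pre-filter covering the categories above might look like this (the pattern list is invented for the example):

```javascript
// Illustrative heuristic only -- the agent itself uses Claude's judgment,
// not keyword matching. These regexes are example stand-ins.
const ANGRY_PATTERNS = [
  /\bfrustrat/i,            // frustration with the product/service
  /\bangry\b/i,
  /\btimeout/i,             // performance complaints
  /\bqueue\b/i,
  /\bso slow\b/i,
  /@fal\b/i,                // negative comparisons to competitors
  /\bswitch(ing)? to\b/i,   // threats to switch
  /\bterrible\b/i,
  /\bbroken\b/i,
];

function looksAngry(tweetText) {
  return ANGRY_PATTERNS.some((re) => re.test(tweetText));
}
```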

Manual Scraper Usage (Advanced)

You can also run the scraper manually:

# Default: last 24 hours of @replicate mentions
node scrape-tweets-connect.js

# Custom time range
node scrape-tweets-connect.js --hours 48

# Custom search query
node scrape-tweets-connect.js --query "@username"
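The two flags above could be parsed with a few lines of plain Node; this is a sketch of the idea, and `scrape-tweets-connect.js` may implement it differently:

```javascript
// Minimal flag parsing matching the CLI shown above:
// --hours <n> (default 24) and --query <string> (default "@replicate").
function parseArgs(argv) {
  const opts = { hours: 24, query: '@replicate' };
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--hours') opts.hours = Number(argv[++i]);
    else if (argv[i] === '--query') opts.query = argv[++i];
  }
  return opts;
}

// Usage: const opts = parseArgs(process.argv.slice(2));
```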

How It Works

  1. The script connects to Chrome via Chrome DevTools Protocol (CDP) on port 9222
  2. Navigates to X/Twitter search with the "Latest" filter
  3. Scrolls through tweets, extracting:
    • Username and display name
    • Tweet text
    • Timestamp
    • Tweet URL
    • Engagement metrics (replies, retweets, likes)
  4. Stops when it finds tweets older than the specified time range
  5. Saves results to a JSON file named tweets-{params}-{timestamp}.json
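Steps 1–2 can be sketched with Playwright's `connectOverCDP` API, which attaches to an already-running browser rather than launching a new one. The function and URL construction here are illustrative, not copied from the actual script:

```javascript
// Building the "Latest" search URL is a pure step we can show directly
// (f=live is X's query parameter for the "Latest" tab).
function latestSearchUrl(query) {
  return `https://x.com/search?q=${encodeURIComponent(query)}&f=live`;
}

// Sketch of attaching to the Chrome instance started with
// --remote-debugging-port=9222 and opening the search page.
async function connectAndOpenSearch(query) {
  const { chromium } = require('playwright');
  const browser = await chromium.connectOverCDP('http://localhost:9222');
  const context = browser.contexts()[0]; // reuse the logged-in profile
  const page = await context.newPage();
  await page.goto(latestSearchUrl(query));
  return page;
}
```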

Output

The scraper creates JSON files with the following structure:

[
  {
    "username": "example_user",
    "displayName": "Example User",
    "text": "Tweet content here...",
    "timestamp": "2025-10-03T12:00:00.000Z",
    "url": "https://x.com/example_user/status/123456789",
    "metrics": {
      "replies": 10,
      "retweets": 5,
      "likes": 25
    }
  }
]

Troubleshooting

"Connection refused" error

  • Make sure Chrome is running with --remote-debugging-port=9222
  • Run ./setup-debug-profile.sh to restart Chrome properly

No tweets found

  • Ensure you're logged into X/Twitter in the Chrome window
  • Check that the search query returns results on X/Twitter
  • Try increasing the time range with --hours

Chrome keeps closing

  • Don't close the Chrome window while the scraper is running
  • The browser needs to stay open for Playwright to control it

Notes

  • The scraper writes results incrementally after each scroll
  • Maximum of 50 scrolls to prevent infinite loops
  • Duplicate tweets are automatically filtered out
  • The debug profile preserves your login session across runs
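The duplicate filtering mentioned above can be done by keying on the tweet URL, which is unique per tweet. A sketch of the idea (not necessarily the script's exact code):

```javascript
// Drop repeat tweets that appear across overlapping scrolls,
// keeping the first occurrence of each URL.
function dedupeByUrl(tweets) {
  const seen = new Set();
  return tweets.filter((t) => {
    if (seen.has(t.url)) return false;
    seen.add(t.url);
    return true;
  });
}
```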
