ContentEngine Deployment Checklist

Pre-Deployment Verification

Server Requirements Check

Before running deploy.sh, verify these on the target server (192.168.0.5):

  • SSH Access Working

  • Python 3.11+ Installed

    ssh [email protected] "python3 --version"
    # Should show 3.11 or higher
  • uv Package Manager Installed

    ssh [email protected] "command -v uv"
    # If not installed, run: scripts/setup_server.sh
  • Ollama Installed & Running (Required for context-capture)

    ssh [email protected] "systemctl status ollama"
    # If not running: systemctl start ollama
  • llama3:8b Model Pulled

    ssh [email protected] "ollama list | grep llama3:8b"
    # If not present: ollama pull llama3:8b
  • Sufficient Disk Space (at least 10GB for Ollama models + data)

    ssh [email protected] "df -h /home"

Deployment Issues & Fixes

Issue 1: .env File Not Synced

Problem: The deploy.sh script excludes .env from sync (line 36)

Solution:

# After deployment, manually copy .env
scp .env [email protected]:~/ContentEngine/.env

# OR create it manually on server
ssh [email protected]
cd ~/ContentEngine
cp .env.example .env
nano .env  # Add your credentials

Required .env variables:

  • LINKEDIN_CLIENT_ID - From LinkedIn Developer app
  • LINKEDIN_CLIENT_SECRET - From LinkedIn Developer app
  • LINKEDIN_ACCESS_TOKEN - From OAuth flow (see below)
  • LINKEDIN_USER_SUB - From OAuth flow
  • OLLAMA_HOST - Should be http://192.168.0.5:11434 or http://localhost:11434
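
A filled-in .env might look like the following (the values are placeholders, not real credentials; the variable names match the list above):

LINKEDIN_CLIENT_ID=your-client-id
LINKEDIN_CLIENT_SECRET=your-client-secret
LINKEDIN_ACCESS_TOKEN=your-oauth-access-token
LINKEDIN_USER_SUB=your-user-sub
OLLAMA_HOST=http://localhost:11434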

Issue 2: OAuth Tokens Need Server Setup

Problem: OAuth tokens stored in the local database won't transfer automatically (deploy.sh does not sync the database)

Solutions:

Option A: Migrate Database

# Copy database from local to server
scp content.db [email protected]:~/ContentEngine/content.db

Option B: Re-run OAuth Flow on Server

# SSH to server
ssh [email protected]
cd ~/ContentEngine

# Start OAuth server (requires GUI/browser access)
uv run python -m agents.linkedin.oauth_server

# Then migrate tokens to database
uv run python scripts/migrate_oauth.py

Option C: Manual Token Entry

# If you have tokens already, just add to .env
# Then migrate to database:
ssh [email protected] "cd ~/ContentEngine && uv run python scripts/migrate_oauth.py"

Issue 3: Database Migrations

Problem: The database schema on the server might differ from your local schema

Solution:

# On server, run migrations
ssh [email protected] "cd ~/ContentEngine && uv run python scripts/migrate_database_schema.py"

# OR start fresh (if no important data)
ssh [email protected] "cd ~/ContentEngine && rm -f content.db && uv run content-engine list"
# This will auto-create a new database

Issue 4: Ollama Not Running

Problem: Context capture fails with "Connection refused to Ollama"

Solution:

# Check if Ollama is running
ssh [email protected] "systemctl status ollama"

# If not running, start it
ssh [email protected] "systemctl start ollama"

# Enable on boot
ssh [email protected] "systemctl enable ollama"

# Verify it's accessible
ssh [email protected] "curl http://localhost:11434/api/tags"

Issue 5: Background Worker Not Set Up

Problem: Scheduled posts won't publish automatically

Solution:

# Set up cron job on server
ssh [email protected]

# Edit crontab
crontab -e

# Add this line (runs every 15 minutes)
*/15 * * * * cd /home/ajohn/ContentEngine && /home/ajohn/.cargo/bin/uv run content-worker >> /tmp/content-worker.log 2>&1
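
To confirm the worker is actually firing, check the log file named in the cron line above:

# Show the most recent worker output
ssh [email protected] "tail -n 50 /tmp/content-worker.log"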

Issue 6: File Permissions

Problem: Scripts might not be executable

Solution:

ssh [email protected] "cd ~/ContentEngine && chmod +x scripts/*.sh scripts/*.py"

Issue 7: Port Conflicts

Problem: OAuth server needs port 3000 but it's already in use

Check:

ssh [email protected] "netstat -tuln | grep 3000"

Solution: Update .env to use a different port:

PORT=3001
REDIRECT_URI=http://localhost:3001/callback
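
If you would rather free up port 3000, first identify the process holding it (a sketch using ss from iproute2; showing the owning PID may require sudo):

# List the listener on port 3000, including its PID where visible
ssh [email protected] "ss -ltnp | grep ':3000'"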

Deployment Process (Step-by-Step)

Step 1: Initial Server Setup (First Time Only)

# Run from your local machine
cd ~/Work/ContentEngine
./scripts/setup_server.sh

This will:

  • Set up SSH keys
  • Install uv
  • Clone ContentEngine repo
  • Install dependencies
  • Check for Ollama
  • Pull llama3:8b model

⚠️ Security Note: The setup_server.sh script contains your password in plaintext. Consider removing it after the first run.


Step 2: Deploy Code Updates

# Run this every time you want to push code changes
./scripts/deploy.sh

This will:

  • Sync code to server (excluding .git, .env, venv)
  • Install dependencies with uv sync

⚠️ This does NOT sync .env or database!
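
For orientation, the sync step is roughly equivalent to an rsync call like the one below. This is a sketch based on the exclusions listed above; the exact flags live in scripts/deploy.sh and may differ:

# Approximate equivalent of what deploy.sh does (use the script, not this)
rsync -az --exclude '.git' --exclude '.env' --exclude 'venv' \
  ~/Work/ContentEngine/ [email protected]:~/ContentEngine/
ssh [email protected] "cd ~/ContentEngine && uv sync"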


Step 3: Configure Environment on Server

# SSH to server
ssh [email protected]
cd ~/ContentEngine

# Set up .env file
cp .env.example .env
nano .env

# Add your credentials:
# - LINKEDIN_CLIENT_ID
# - LINKEDIN_CLIENT_SECRET
# - OLLAMA_HOST=http://localhost:11434

Step 4: Set Up OAuth Tokens

Option A: Copy from local

# From local machine
scp ~/Work/ContentEngine/content.db [email protected]:~/ContentEngine/content.db

Option B: Run OAuth flow on server

ssh [email protected]
cd ~/ContentEngine

# This requires browser access to the server
uv run python -m agents.linkedin.oauth_server

# After completing OAuth flow:
uv run python scripts/migrate_oauth.py

Step 5: Test Deployment

ssh [email protected]
cd ~/ContentEngine

# Test 1: Context capture
uv run content-engine capture-context
# Should complete without errors

# Test 2: List posts
uv run content-engine list
# Should show database connection works

# Test 3: LinkedIn connection (dry run)
uv run content-engine draft "Test post"
uv run content-engine approve 1 --dry-run
# Should validate LinkedIn API connection

Step 6: Set Up Background Worker

ssh [email protected]

# Test worker manually first
cd ~/ContentEngine
uv run content-worker

# If successful, add to cron
crontab -e

# Add this line:
*/15 * * * * cd /home/ajohn/ContentEngine && /home/ajohn/.cargo/bin/uv run content-worker >> /tmp/content-worker.log 2>&1

# Verify cron job added
crontab -l

Step 7: Set Up Systemd Timers (Recommended)

ContentEngine includes systemd service and timer files for automated tasks. Systemd timers are more reliable than cron jobs, and their runs can be inspected with journalctl.
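
The unit files ship in the repo's systemd/ directory. As a rough illustration of their shape (a sketch based on this checklist, not a copy of the shipped files; the uv path and user are assumptions), the capture pair looks something like this:

# content-engine-capture.service (sketch)
[Unit]
Description=ContentEngine daily context capture

[Service]
Type=oneshot
User=ajohn
WorkingDirectory=/home/ajohn/ContentEngine
ExecStart=/home/ajohn/.cargo/bin/uv run content-engine capture-context

# content-engine-capture.timer (sketch)
[Unit]
Description=Run ContentEngine context capture daily at 23:59

[Timer]
OnCalendar=*-*-* 23:59:00
Persistent=true

[Install]
WantedBy=timers.target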

Context Capture Timer (Already Set Up)

Runs daily at 11:59 PM to capture context from session history.

ssh [email protected]

# Copy service and timer files to systemd directory
sudo cp ~/ContentEngine/systemd/content-engine-capture.service /etc/systemd/system/
sudo cp ~/ContentEngine/systemd/content-engine-capture.timer /etc/systemd/system/

# Reload systemd
sudo systemctl daemon-reload

# Enable and start the timer
sudo systemctl enable content-engine-capture.timer
sudo systemctl start content-engine-capture.timer

# Check timer status
sudo systemctl status content-engine-capture.timer
sudo systemctl list-timers --all | grep content-engine

LinkedIn Analytics Timer (New)

Runs daily at 10:00 AM to collect post analytics.

Prerequisites:

  • LINKEDIN_ACCESS_TOKEN must be set (either in environment or database)
  • data/posts.jsonl file must exist
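
A quick way to confirm both prerequisites before enabling the timer (this check assumes the token is set in the server's .env; skip the grep if you keep it only in the database):

ssh [email protected] "cd ~/ContentEngine && grep -q LINKEDIN_ACCESS_TOKEN .env && test -f data/posts.jsonl && echo 'prerequisites OK'"

If that prints prerequisites OK, continue with the installation steps below.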
ssh [email protected]

# Copy service and timer files to systemd directory
sudo cp ~/ContentEngine/systemd/linkedin-analytics.service /etc/systemd/system/
sudo cp ~/ContentEngine/systemd/linkedin-analytics.timer /etc/systemd/system/

# Edit service file to set LINKEDIN_ACCESS_TOKEN
sudo nano /etc/systemd/system/linkedin-analytics.service
# Update: Environment="LINKEDIN_ACCESS_TOKEN=your_actual_token"

# Reload systemd
sudo systemctl daemon-reload

# Enable and start the timer
sudo systemctl enable linkedin-analytics.timer
sudo systemctl start linkedin-analytics.timer

# Check timer status
sudo systemctl status linkedin-analytics.timer
sudo systemctl list-timers --all | grep linkedin-analytics

# Test service manually (optional)
sudo systemctl start linkedin-analytics.service

# View logs
journalctl -u linkedin-analytics.service
# OR
cat ~/ContentEngine/analytics.log

Alternative: Use Database Token (Recommended)

If your LINKEDIN_ACCESS_TOKEN is stored in the database, you can remove the environment variable from the service file:

# Edit service file
sudo nano /etc/systemd/system/linkedin-analytics.service

# Remove or comment out the Environment line:
# Environment="LINKEDIN_ACCESS_TOKEN=your_token_here"

# The collect-analytics command will automatically load the token from database

Verify Timer Schedule:

# Show when the timer will run next
systemctl list-timers linkedin-analytics.timer

# Show timer logs
journalctl -u linkedin-analytics.timer

Testing Checklist

After deployment, verify these work:

  • Context Capture

    ssh [email protected] "cd ~/ContentEngine && uv run content-engine capture-context"
  • Database Access

    ssh [email protected] "cd ~/ContentEngine && uv run content-engine list"
  • LinkedIn Connection (dry-run)

    ssh [email protected] "cd ~/ContentEngine && uv run content-engine draft 'Test' && uv run content-engine approve 1 --dry-run"
  • Ollama Access

    ssh [email protected] "curl http://localhost:11434/api/tags"
  • Background Worker

    ssh [email protected] "cd ~/ContentEngine && uv run content-worker"
  • LinkedIn Analytics Collection

    ssh [email protected] "cd ~/ContentEngine && uv run content-engine collect-analytics --test-post urn:li:share:7412668096475369472"
  • Systemd Timers

    # Check analytics timer
    ssh [email protected] "systemctl list-timers --all | grep linkedin-analytics"
    
    # Check context capture timer
    ssh [email protected] "systemctl list-timers --all | grep content-engine-capture"

Common Errors & Solutions

Error: "Could not connect to Ollama"

Cause: Ollama not running or wrong host in .env

Fix:

ssh [email protected]
systemctl start ollama
systemctl status ollama

# Update .env
cd ~/ContentEngine
nano .env
# Set: OLLAMA_HOST=http://localhost:11434

Error: "No OAuth token found for linkedin"

Cause: OAuth tokens not in database

Fix:

ssh [email protected]
cd ~/ContentEngine

# Check if .env has tokens
grep LINKEDIN .env

# Migrate tokens to database
uv run python scripts/migrate_oauth.py

# Verify
uv run content-engine list  # Should not error

Error: "Database is locked"

Cause: Multiple processes accessing SQLite simultaneously

Fix:

ssh [email protected]
cd ~/ContentEngine

# Check for running workers
ps aux | grep content-worker

# Kill if needed
pkill -f content-worker

# Try again
uv run content-engine list
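
If locking keeps recurring even with only one worker, switching the database to SQLite's write-ahead logging mode usually reduces contention. This is an optional extra step, not part of the documented fix, and it assumes the sqlite3 CLI is installed on the server:

# Enable WAL mode; prints "wal" on success and the setting persists in the file
ssh [email protected] "sqlite3 ~/ContentEngine/content.db 'PRAGMA journal_mode=WAL;'"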

Error: "Module not found"

Cause: Dependencies not installed

Fix:

ssh [email protected]
cd ~/ContentEngine

# Reinstall dependencies
uv sync

# Verify
uv run python -c "from lib import context_capture; print('OK')"

Rollback Plan

If deployment breaks things:

# Restore from local
scp -r ~/Work/ContentEngine/* [email protected]:~/ContentEngine/

# OR restore from git
ssh [email protected]
cd ~/ContentEngine
git reset --hard HEAD
git pull origin master
uv sync

Security Considerations

⚠️ Important:

  1. Remove password from setup_server.sh after first setup

    nano scripts/setup_server.sh
    # Remove or comment out: PASSWORD="..."
  2. Secure .env file

    ssh [email protected]
    chmod 600 ~/ContentEngine/.env
  3. Don't commit .env to git

    # Already in .gitignore, but verify
grep .env .gitignore
  4. Use SSH keys instead of passwords

    # Already done by setup_server.sh

Quick Deploy (After Initial Setup)

Once everything is set up, future deployments are simple:

# From local machine
cd ~/Work/ContentEngine
./scripts/deploy.sh

# That's it! Code is synced and dependencies updated

No need to reconfigure .env, OAuth, or Ollama - those persist on the server.


Verification Script

Create this script to verify everything works:

#!/bin/bash
# verify_deployment.sh

echo "🔍 Verifying ContentEngine deployment on 192.168.0.5..."

# Test SSH
echo -n "SSH connection: "
ssh [email protected] "echo '✅'" || echo ""

# Test Ollama
echo -n "Ollama service: "
ssh [email protected] "systemctl is-active ollama" | grep -q active && echo "" || echo ""

# Test uv
echo -n "uv installed: "
ssh [email protected] "command -v uv" > /dev/null && echo "" || echo ""

# Test ContentEngine imports
echo -n "Python imports: "
ssh [email protected] "cd ~/ContentEngine && uv run python -c 'from lib import context_capture'" && echo "" || echo ""

# Test database
echo -n "Database access: "
ssh [email protected] "cd ~/ContentEngine && uv run content-engine list" > /dev/null && echo "" || echo ""

echo ""
echo "Deployment verification complete!"

Last Updated: 2026-01-14

Server: 192.168.0.5 (ajohn)

Next Steps: Review checklist, run deployment, test thoroughly