Before running deploy.sh, verify these on the target server (192.168.0.5):
- SSH Access Working
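ssh [email protected] "echo ok"  # Should print "ok"; any echoed output confirms SSH access works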
- Python 3.11+ Installed
ssh [email protected] "python3 --version" # Should show 3.11 or higher
- uv Package Manager Installed
ssh [email protected] "command -v uv" # If not installed, run: scripts/setup_server.sh
- Ollama Installed & Running (required for context capture)
ssh [email protected] "systemctl status ollama" # If not running: systemctl start ollama
- llama3:8b Model Pulled
ssh [email protected] "ollama list | grep llama3:8b" # If not present: ollama pull llama3:8b
- Sufficient Disk Space (at least 10GB for Ollama models + data)
ssh [email protected] "df -h /home"
Problem: The deploy.sh script excludes .env from sync (line 36)
Solution:
# After deployment, manually copy .env
scp .env [email protected]:~/ContentEngine/.env
# OR create it manually on server
ssh [email protected]
cd ~/ContentEngine
cp .env.example .env
nano .env  # Add your credentials

Required .env variables:
- LINKEDIN_CLIENT_ID - From LinkedIn Developer app
- LINKEDIN_CLIENT_SECRET - From LinkedIn Developer app
- LINKEDIN_ACCESS_TOKEN - From OAuth flow (see below)
- LINKEDIN_USER_SUB - From OAuth flow
- OLLAMA_HOST - Should be http://192.168.0.5:11434 or http://localhost:11434
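For reference, a completed .env might look like this (placeholder values; substitute your own credentials):

LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
LINKEDIN_USER_SUB=your_user_sub
OLLAMA_HOST=http://localhost:11434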
Problem: OAuth tokens stored in local database won't transfer
Solutions:
Option A: Migrate Database
# Copy database from local to server
scp content.db [email protected]:~/ContentEngine/content.db

Option B: Re-run OAuth Flow on Server
# SSH to server
ssh [email protected]
cd ~/ContentEngine
# Start OAuth server (requires GUI/browser access)
uv run python -m agents.linkedin.oauth_server
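# Tip: if the server has no browser, one workaround is to forward the OAuth
# port from your local machine and complete the flow in a local browser
# (assumes the OAuth server listens on port 3000; see the port conflict section below):
#   ssh -L 3000:localhost:3000 [email protected]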
# Then migrate tokens to database
uv run python scripts/migrate_oauth.py

Option C: Manual Token Entry
# If you have tokens already, just add to .env
# Then migrate to database:
ssh [email protected] "cd ~/ContentEngine && uv run python scripts/migrate_oauth.py"Problem: Database schema might be different between local and server
Solution:
# On server, run migrations
ssh [email protected] "cd ~/ContentEngine && uv run python scripts/migrate_database_schema.py"
# OR start fresh (if no important data)
ssh [email protected] "cd ~/ContentEngine && rm -f content.db && uv run content-engine list"
# This will auto-create a new database

Problem: Context capture fails with "Connection refused to Ollama"
Solution:
# Check if Ollama is running
ssh [email protected] "systemctl status ollama"
# If not running, start it
ssh [email protected] "systemctl start ollama"
# Enable on boot
ssh [email protected] "systemctl enable ollama"
# Verify it's accessible
ssh [email protected] "curl http://localhost:11434/api/tags"Problem: Scheduled posts won't publish automatically
Solution:
# Set up cron job on server
ssh [email protected]
# Edit crontab
crontab -e
# Add this line (runs every 15 minutes)
*/15 * * * * cd /home/ajohn/ContentEngine && /home/ajohn/.cargo/bin/uv run content-worker >> /tmp/content-worker.log 2>&1

Problem: Scripts might not be executable
Solution:
ssh [email protected] "cd ~/ContentEngine && chmod +x scripts/*.sh scripts/*.py"Problem: OAuth server needs port 3000 but it's already in use
Check:
ssh [email protected] "netstat -tuln | grep 3000"Solution:
Update .env to use a different port:
PORT=3001
REDIRECT_URI=http://localhost:3001/callback
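To confirm the new port is free, re-run the check (ss shown as an alternative if netstat is unavailable):

ssh [email protected] "ss -tuln | grep 3001"  # No output means port 3001 is free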
# Run from your local machine
cd ~/Work/ContentEngine
./scripts/setup_server.sh

This will:
- Set up SSH keys
- Install uv
- Clone ContentEngine repo
- Install dependencies
- Check for Ollama
- Pull llama3:8b model
Warning: setup_server.sh has your password in plaintext. Remove it after the first run.
# Run this every time you want to push code changes
./scripts/deploy.sh

This will:
- Sync code to server (excluding .git, .env, venv)
- Install dependencies with uv sync
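For reference, the sync step likely boils down to something like this (a sketch based on the exclusions listed above, not deploy.sh's exact contents):

# Sync the project, skipping version control, secrets, and the virtualenv
rsync -av --exclude='.git' --exclude='.env' --exclude='venv' \
  ~/Work/ContentEngine/ [email protected]:~/ContentEngine/
# Then refresh dependencies on the server
ssh [email protected] "cd ~/ContentEngine && uv sync"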
# SSH to server
ssh [email protected]
cd ~/ContentEngine
# Set up .env file
cp .env.example .env
nano .env
# Add your credentials:
# - LINKEDIN_CLIENT_ID
# - LINKEDIN_CLIENT_SECRET
# - OLLAMA_HOST=http://localhost:11434

Option A: Copy from local
# From local machine
scp ~/Work/ContentEngine/content.db [email protected]:~/ContentEngine/content.db

Option B: Run OAuth flow on server
ssh [email protected]
cd ~/ContentEngine
# This requires browser access to server
uv run python -m agents.linkedin.oauth_server
# After completing OAuth flow:
uv run python scripts/migrate_oauth.py

Then verify everything works on the server:

ssh [email protected]
cd ~/ContentEngine
# Test 1: Context capture
uv run content-engine capture-context
# Should complete without errors
# Test 2: List posts
uv run content-engine list
# Should show database connection works
# Test 3: LinkedIn connection (dry run)
uv run content-engine draft "Test post"
uv run content-engine approve 1 --dry-run
# Should validate LinkedIn API connection

Next, set up the background worker:

ssh [email protected]
# Test worker manually first
cd ~/ContentEngine
uv run content-worker
# If successful, add to cron
crontab -e
# Add this line:
*/15 * * * * cd /home/ajohn/ContentEngine && /home/ajohn/.cargo/bin/uv run content-worker >> /tmp/content-worker.log 2>&1
# Verify cron job added
crontab -l

ContentEngine includes systemd service and timer files for automated tasks. Systemd timers are more reliable than cron jobs and provide better logging.
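As background, each task pairs a .service file (what to run) with a .timer file (when to run it). A daily timer looks roughly like this (a sketch; the actual unit files in the repo's systemd/ directory may differ):

[Unit]
Description=Daily ContentEngine context capture

[Timer]
# Fire at 23:59 every day; catch up on missed runs after downtime
OnCalendar=*-*-* 23:59:00
Persistent=true

[Install]
WantedBy=timers.target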
The context-capture timer runs daily at 11:59 PM to capture context from session history.
ssh [email protected]
# Copy service and timer files to systemd directory
sudo cp ~/ContentEngine/systemd/content-engine-capture.service /etc/systemd/system/
sudo cp ~/ContentEngine/systemd/content-engine-capture.timer /etc/systemd/system/
# Reload systemd
sudo systemctl daemon-reload
# Enable and start the timer
sudo systemctl enable content-engine-capture.timer
sudo systemctl start content-engine-capture.timer
# Check timer status
sudo systemctl status content-engine-capture.timer
sudo systemctl list-timers --all | grep content-engine

The analytics timer runs daily at 10:00 AM to collect post analytics.
Prerequisites:
- LINKEDIN_ACCESS_TOKEN must be set (either in environment or database)
- data/posts.jsonl file must exist
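A quick way to confirm the second prerequisite:

ssh [email protected] "test -f ~/ContentEngine/data/posts.jsonl && echo present || echo missing"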
ssh [email protected]
# Copy service and timer files to systemd directory
sudo cp ~/ContentEngine/systemd/linkedin-analytics.service /etc/systemd/system/
sudo cp ~/ContentEngine/systemd/linkedin-analytics.timer /etc/systemd/system/
# Edit service file to set LINKEDIN_ACCESS_TOKEN
sudo nano /etc/systemd/system/linkedin-analytics.service
# Update: Environment="LINKEDIN_ACCESS_TOKEN=your_actual_token"
# Reload systemd
sudo systemctl daemon-reload
# Enable and start the timer
sudo systemctl enable linkedin-analytics.timer
sudo systemctl start linkedin-analytics.timer
# Check timer status
sudo systemctl status linkedin-analytics.timer
sudo systemctl list-timers --all | grep linkedin-analytics
# Test service manually (optional)
sudo systemctl start linkedin-analytics.service
# View logs
journalctl -u linkedin-analytics.service
# OR
cat ~/ContentEngine/analytics.log

Alternative: Use Database Token (Recommended)
If your LINKEDIN_ACCESS_TOKEN is stored in the database, you can remove the environment variable from the service file:
# Edit service file
sudo nano /etc/systemd/system/linkedin-analytics.service
# Remove or comment out the Environment line:
# Environment="LINKEDIN_ACCESS_TOKEN=your_token_here"
# The collect-analytics command will automatically load the token from the database

Verify Timer Schedule:
# Show when the timer will run next
systemctl list-timers linkedin-analytics.timer
# Show timer logs
journalctl -u linkedin-analytics.timer

After deployment, verify these work:
- Context Capture
ssh [email protected] "cd ~/ContentEngine && uv run content-engine capture-context"
- Database Access
ssh [email protected] "cd ~/ContentEngine && uv run content-engine list"
- LinkedIn Connection (dry-run)
ssh [email protected] "cd ~/ContentEngine && uv run content-engine draft 'Test' && uv run content-engine approve 1 --dry-run"
- Ollama Access
ssh [email protected] "curl http://localhost:11434/api/tags"
- Background Worker
ssh [email protected] "cd ~/ContentEngine && uv run content-worker"
- LinkedIn Analytics Collection
ssh [email protected] "cd ~/ContentEngine && uv run content-engine collect-analytics --test-post urn:li:share:7412668096475369472"
- Systemd Timers
# Check analytics timer
ssh [email protected] "systemctl list-timers --all | grep linkedin-analytics"
# Check context capture timer
ssh [email protected] "systemctl list-timers --all | grep content-engine-capture"
Cause: Ollama not running or wrong host in .env
Fix:
ssh [email protected]
systemctl start ollama
systemctl status ollama
# Update .env
cd ~/ContentEngine
nano .env
# Set: OLLAMA_HOST=http://localhost:11434

Cause: OAuth tokens not in database
Fix:
ssh [email protected]
cd ~/ContentEngine
# Check if .env has tokens
cat .env | grep LINKEDIN
# Migrate tokens to database
uv run python scripts/migrate_oauth.py
# Verify
uv run content-engine list  # Should not error

Cause: Multiple processes accessing SQLite simultaneously
Fix:
ssh [email protected]
cd ~/ContentEngine
# Check for running workers
ps aux | grep content-worker
# Kill if needed
pkill -f content-worker
# Try again
uv run content-engine list

Cause: Dependencies not installed
Fix:
ssh [email protected]
cd ~/ContentEngine
# Reinstall dependencies
uv sync
# Verify
uv run python -c "from lib import context_capture; print('OK')"

If deployment breaks things:
# Restore from local
scp -r ~/Work/ContentEngine/* [email protected]:~/ContentEngine/
# OR restore from git
ssh [email protected]
cd ~/ContentEngine
git reset --hard HEAD
git pull origin master
uv sync

- Remove password from setup_server.sh after first setup
nano scripts/setup_server.sh  # Remove or comment out: PASSWORD="..."

- Secure .env file
ssh [email protected] chmod 600 ~/ContentEngine/.env
- Don't commit .env to git
# Already in .gitignore, but verify:
cat .gitignore | grep .env
- Use SSH keys instead of passwords
# Already done by setup_server.sh
Once everything is set up, future deployments are simple:
# From local machine
cd ~/Work/ContentEngine
./scripts/deploy.sh
# That's it! Code is synced and dependencies updated

No need to reconfigure .env, OAuth, or Ollama - those persist on the server.
Create this script to verify everything works:
#!/bin/bash
# verify_deployment.sh
echo "🔍 Verifying ContentEngine deployment on 192.168.0.5..."
# Test SSH
echo -n "SSH connection: "
ssh [email protected] "echo '✅'" || echo "❌"
# Test Ollama
echo -n "Ollama service: "
ssh [email protected] "systemctl is-active ollama" | grep -q active && echo "✅" || echo "❌"
# Test uv
echo -n "uv installed: "
ssh [email protected] "command -v uv" > /dev/null && echo "✅" || echo "❌"
# Test ContentEngine imports
echo -n "Python imports: "
ssh [email protected] "cd ~/ContentEngine && uv run python -c 'from lib import context_capture'" && echo "✅" || echo "❌"
# Test database
echo -n "Database access: "
ssh [email protected] "cd ~/ContentEngine && uv run content-engine list" > /dev/null && echo "✅" || echo "❌"
echo ""
echo "Deployment verification complete!"Last Updated: 2026-01-14
Server: 192.168.0.5 (ajohn)
Next Steps: Review checklist, run deployment, test thoroughly