Reading is an AI-powered RSS aggregator that collects, filters, and organizes tech articles for efficient daily reading.
🌐 Demo: reading.qijun.io · 📋 RSS Sources
- RSS integration & web scraping
- AI-powered classification & tagging
- Automatic summarization
- Content filtering by interest
- Smart categorization
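The LLM-backed classification and tagging step can be sketched as below. This is a minimal illustration, not the project's actual code: `complete` is a hypothetical callable wrapping whichever LLM API is configured, and the JSON reply shape is an assumption.

```python
import json

def classify_article(title, summary, complete):
    """Ask an LLM for a category and tags; fall back gracefully on bad output.

    `complete` is a hypothetical text-in/text-out wrapper around the
    configured LLM API -- any such callable would work here.
    """
    prompt = (
        "Classify this tech article. Reply with JSON only: "
        '{"category": "...", "tags": ["..."]}\n'
        f"Title: {title}\nSummary: {summary}"
    )
    raw = complete(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # LLMs occasionally return non-JSON text; degrade instead of crashing.
        data = {}
    return data.get("category", "uncategorized"), data.get("tags", [])
```

A scraper pipeline would call this per article and store the results for the filtering and categorization features above.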
- Frontend: Next.js 15, TypeScript, Tailwind, Shadcn/ui
- Backend: Python 3.8+, SQLite, RSS parser, LLM APIs
```bash
git clone https://github.com/yourusername/reading.git
cd reading
pnpm install
cd packages/tasks && pip install -r requirements.txt
```
- Configure `.env` (API keys, DB, tokens)
- Initialize the database:

  ```bash
  yoyo apply
  ```

- Start the services:

  ```bash
  cd packages/web && pnpm dev              # frontend (one terminal)
  cd packages/tasks && python scraper.py   # scraper (another terminal)
  ```

Visit http://localhost:3000 to explore!
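The `.env` step above might look like the following. Variable names and values here are illustrative assumptions, not the project's actual keys:

```ini
# Illustrative only -- check the repository for the real variable names
OPENAI_API_KEY=sk-your-key-here
DATABASE_PATH=./data/reading.db
ACCESS_TOKEN=change-me
JWT_SECRET=change-me-too
```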
- Public (read-only) or authenticated (full control) access
- Access tokens, JWT, and password protection
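The JWT check on the authenticated path could work roughly like this stdlib-only sketch (HS256 signature plus expiry). It is an illustration of the mechanism, not the project's implementation; a real deployment would use a vetted library such as PyJWT.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_decode(part):
    # JWTs strip base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_jwt_hs256(token, secret):
    """Return the payload dict if signature and expiry check out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # malformed token
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    # compare_digest avoids timing side channels on signature comparison.
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None
    payload = json.loads(_b64url_decode(payload_b64))
    if "exp" in payload and payload["exp"] < time.time():
        return None  # token expired
    return payload
```

Requests without a valid token would then be served the public, read-only view.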
```bash
./scripts/deploy.sh
```
- Web: Next.js frontend
- Scraper: scheduled article collection
- SQLite with volume persistence
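Wired together, the two services sharing one persistent SQLite volume might look like this Compose sketch. Service names, build paths, and the volume name are assumptions for illustration:

```yaml
# Illustrative compose file -- names and paths are assumptions,
# not the project's actual configuration.
services:
  web:
    build: ./packages/web
    ports:
      - "3000:3000"
    volumes:
      - reading-data:/app/data   # shared SQLite database
  scraper:
    build: ./packages/tasks
    volumes:
      - reading-data:/app/data

volumes:
  reading-data:                  # survives container rebuilds
```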
```bash
./scripts/data-manager.sh backup    # Backup
./scripts/data-manager.sh restore   # Restore
./scripts/data-manager.sh export    # Export SQL dump
```
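A safe SQLite backup of the kind the script performs can be sketched with the stdlib's online backup API, which copies a live database consistently even while other connections write to it. This is a sketch of the technique, not the script's actual contents:

```python
import sqlite3

def backup_sqlite(src_path, dest_path):
    """Copy a (possibly live) SQLite database to dest_path."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        # sqlite3's online backup API snapshots the source consistently.
        src.backup(dest)
    src.close()
    dest.close()
```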
```bash
# Article Quality Management
# Note: run database migrations first if using the cleaner for the first time
yoyo apply                                          # run once to create the processing-state table
./scripts/clean-database.sh --dry-run               # preview articles to be removed
./scripts/clean-database.sh --source "Hacker News"  # clean a specific source
./scripts/clean-database.sh --limit 50 --dry-run    # test on a limited number of articles
./scripts/clean-database.sh --status                # check processing status
./scripts/clean-database.sh --reset                 # clear state and restart
./scripts/clean-database.sh --confirm               # execute cleanup
```
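The resumable dry-run/confirm flow above can be sketched as follows. This is an illustration under assumptions, not the script's code: table and column names are hypothetical, and `should_remove` stands in for whatever quality check the cleaner applies.

```python
import sqlite3

def clean_articles(conn, should_remove, dry_run=True, limit=None):
    """Resumable cleanup sketch: each decision is recorded in a
    processing-state table, so an interrupted run resumes where it
    left off (and --reset would simply clear that table)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cleaner_state "
        "(article_id INTEGER PRIMARY KEY, remove INTEGER)"
    )
    # Only visit articles not yet recorded in the state table.
    query = (
        "SELECT id, title FROM articles WHERE id NOT IN "
        "(SELECT article_id FROM cleaner_state)"
    )
    if limit is not None:
        query += f" LIMIT {int(limit)}"
    flagged = []
    for article_id, title in conn.execute(query).fetchall():
        flag = 1 if should_remove(title) else 0
        conn.execute("INSERT INTO cleaner_state VALUES (?, ?)", (article_id, flag))
        if flag:
            flagged.append(article_id)
            if not dry_run:
                conn.execute("DELETE FROM articles WHERE id = ?", (article_id,))
    conn.commit()
    return flagged
```

A dry run reports what would be removed without touching the `articles` table; only a confirmed run deletes.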
- Modular Python backend (scraper, DB, LLM integration)
- Next.js + TypeScript frontend
- Linting & formatting for both stacks
📄 Licensed under MIT.
✨ Issues & Feature Requests