Scrape-IT is a powerful SaaS platform for workflow automation with integrated web scraping capabilities. Built on Next.js, it allows users to automate complex data extraction workflows, securely store credentials, manage billing, and monitor performance—all in one intuitive interface.
- Workflow Automation: Build and execute multi-step workflows with ease. Tasks run in distinct phases with assigned credits, giving fine-grained control over scraping executions.
- Advanced Web Scraping Tools: Design customized workflows with automated actions, scheduled executions, and flexible selector configuration.
- Credential Storage: Securely store API keys, tokens, and other sensitive information using encrypted storage.
- Intuitive UI and Analytics: Built with shadcn/ui for a modern interface, featuring real-time charts and reports for monitoring performance and credit usage.
- Secure Server-Side Handling: Backend processing powered by Next.js server actions ensures security and reliability.
- AI-Powered Web Scraping (Beta): Use Gemini-powered AI to intelligently navigate and scrape data from complex websites (optional feature with API key).
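Scrape-IT's encryption scheme is not documented here, but credential storage along these lines can be sketched with Node's built-in `crypto` module. The cipher choice (AES-256-GCM) and function names below are illustrative assumptions, not the app's confirmed implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Hypothetical sketch: encrypt a credential with AES-256-GCM.
// The key is derived from a server-side secret; the salt, IV, and
// auth tag are packed with the ciphertext so decryption is self-contained.
function encryptCredential(plaintext: string, secret: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(secret, salt, 32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([salt, iv, tag, ciphertext]).toString("base64");
}

function decryptCredential(payload: string, secret: string): string {
  const raw = Buffer.from(payload, "base64");
  const salt = raw.subarray(0, 16);
  const iv = raw.subarray(16, 28);
  const tag = raw.subarray(28, 44);
  const ciphertext = raw.subarray(44);
  const key = scryptSync(secret, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM rejects tampered ciphertext at final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Authenticated encryption (GCM) is a natural fit here because a tampered credential fails decryption instead of silently yielding garbage.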
- Sign Up: Create an account on Scrape-IT. First-time users get 1000 FREE credits.
- Claim Your Free Credits: Instantly activate your free credits to begin executing workflows.
- Add Credentials: Securely store your API keys, tokens, or website login info.
- Build Your Workflow: Use Scrape-IT’s visual tools to create multi-step scraping workflows.
- Monitor and Analyze: View real-time analytics, manage your billing, and optimize your scraping operations from the dashboard.
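The phase-and-credit model described above can be sketched as a plain data structure. The shapes and field names below are assumptions for illustration, not Scrape-IT's actual schema:

```typescript
// Hypothetical shapes for a multi-phase workflow; field names are assumptions.
interface Phase {
  name: string;
  credits: number; // credits consumed when this phase runs
  action: "navigate" | "extract" | "export";
}

interface Workflow {
  id: string;
  phases: Phase[];
}

// Total credits one execution of the workflow will consume.
function totalCredits(workflow: Workflow): number {
  return workflow.phases.reduce((sum, p) => sum + p.credits, 0);
}

// Refuse to run when the user's balance cannot cover the whole workflow.
function canExecute(workflow: Workflow, balance: number): boolean {
  return balance >= totalCredits(workflow);
}
```

Checking the full cost up front (rather than per phase) avoids executions that fail halfway through after credits have already been spent.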
- Frontend: Next.js, Tailwind CSS, shadcn/ui
- Backend: Secure server-side processing using Next.js Server Actions
- Billing (Beta): Stripe integration
- Security: Encrypted storage for sensitive data
- Analytics: Real-time visualization and reporting tools
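Server-side handling with Next.js Server Actions generally follows the pattern sketched below: a `"use server"` module exports async functions that validate and process input on the server only. The action body here is a simplified assumption, not the app's actual code:

```typescript
"use server"; // Next.js directive: this module's exports execute only on the server

// Hypothetical server action: validate input on the server before persisting,
// so validation rules and secrets never ship to the client bundle.
export async function saveCredential(
  input: { name?: string; value?: string }
): Promise<{ ok: boolean; error?: string }> {
  const { name, value } = input;
  if (!name || !value) {
    return { ok: false, error: "name and value are required" };
  }
  // In the real app this step would encrypt `value` and write it to the database.
  return { ok: true };
}
```

Returning a result object instead of throwing lets the client render validation errors without leaking server stack traces.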
- Clone the repository: `git clone https://github.com/SuyashJain17/Scrape-It.git`, then `cd Scrape-It`
- Install dependencies: `npm install`
- Set up environment variables: create a `.env` file in the root directory and add the following:
  - `GEMINI_API_KEY=your_gemini_api_key` (optional; required only for AI-powered scraping)
  - `DATABASE_URL=postgresql://username:password@localhost:5432/yourdb`
  - `NEXTAUTH_SECRET=your_nextauth_secret`
  - `NEXTAUTH_URL=http://localhost:3000`
- Run database migrations: `npx prisma migrate dev`
- Start the development server: `npm run dev`

The app will be available at `http://localhost:3000`.
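Missing variables from the setup step above are easiest to catch at startup. A minimal fail-fast check might look like the following (the variable names come from the `.env` example; the helper itself is hypothetical):

```typescript
// Hypothetical startup check: fail fast if a required variable is unset.
// GEMINI_API_KEY is deliberately excluded because it is optional.
const REQUIRED_VARS = ["DATABASE_URL", "NEXTAUTH_SECRET", "NEXTAUTH_URL"] as const;

function assertEnv(env: Record<string, string | undefined> = process.env): void {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```

Failing at boot with a named list of missing variables is friendlier than a cryptic database or auth error deep inside a request.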
- Use NextAuth.js to sign up or log in to your account.
- First-time users receive 1000 free credits to test workflow executions.
- Drag and drop nodes to define scraping tasks.
- Use AI suggestions for selector optimization.
- Securely store website login credentials if required.
- Use the scheduling feature to automate scraping tasks.
- Download scraped data in the desired format.
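As a sketch of the export step, rows of scraped data can be serialized to CSV in a few lines. This helper is illustrative only; Scrape-IT's actual export formats and code are not documented here:

```typescript
// Hypothetical CSV serializer for scraped rows (array of flat objects).
// Quotes any field containing a comma, double quote, or newline.
function toCsv(rows: Array<Record<string, string>>): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (field: string): string =>
    /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
  const lines = [
    headers.map(escape).join(","),
    ...rows.map((row) => headers.map((h) => escape(row[h] ?? "")).join(",")),
  ];
  return lines.join("\n");
}
```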
- Start the development server: `npm run dev`
- Build for production: `npm run build`
- Run the production server: `npm start`
- Lint the code: `npm run lint`
- Add support for multi-step scraping workflows.
- Integrate more export formats (e.g., Google Sheets, Excel).
- Enhance AI capabilities for broader use cases.

