This is a real-time chat application that uses AI to detect and rephrase toxic messages. The application is ready for deployment on several platforms, including Vercel.
Click the button above to deploy directly to Vercel, or follow the manual steps below.
- Sign up for Vercel at vercel.com
- Clone or fork this repository
- Deploy via Vercel CLI:

```bash
# Install Vercel CLI
npm i -g vercel

# Navigate to project directory
cd Chat-Toxicity-Moderator

# Deploy
vercel --prod
```
- Or deploy via Vercel Dashboard:
- Go to vercel.com
- Click "New Project"
- Import your GitHub repository
- Vercel will automatically detect the configuration
- Click "Deploy"
For enhanced functionality, add these environment variables in your Vercel project settings:
- `GROQ_API_KEY`: your GROQ API key, used to improve rephrasing quality
After deployment, your application will be available at:
- Main Interface: https://your-project-name.vercel.app/
- Sender View: https://your-project-name.vercel.app/sender.html
- Receiver View: https://your-project-name.vercel.app/receiver.html
- Moderation Dashboard: https://your-project-name.vercel.app/moderator.html
```
├── api/                      # Vercel API routes
│   └── moderate.py           # Moderation API endpoint
├── public/                   # Static frontend files
│   ├── index.html            # Main interface
│   ├── sender.html           # Sender view
│   ├── receiver.html         # Receiver view
│   └── moderator.html        # Moderation dashboard
├── requirements.txt          # Python dependencies
├── vercel.json               # Vercel configuration
└── VERCEL_INSTRUCTIONS.md    # Detailed Vercel deployment guide
```
- Real-time toxicity detection using keyword-based analysis (optimized for Vercel)
- Automatic message rephrasing using GROQ API (when API key provided)
- Clean, modern UI with separate sender/receiver views
- Works without API keys using fallback logic
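To illustrate how the keyword-based fallback can work when no API key is configured, here is a minimal sketch. The word list, matching rule, and masking behavior below are assumptions for illustration, not the project's actual configuration (the real logic lives in `api/moderate.py`):

```javascript
// Hypothetical keyword list; the project's actual list may differ.
const TOXIC_KEYWORDS = ["idiot", "stupid", "hate"];

// Flag a message if it contains any keyword (case-insensitive).
function isToxic(text) {
  const lower = text.toLowerCase();
  return TOXIC_KEYWORDS.some((word) => lower.includes(word));
}

// Without an LLM, a simple fallback masks the flagged words.
function fallbackRephrase(text) {
  let result = text;
  for (const word of TOXIC_KEYWORDS) {
    result = result.replace(new RegExp(word, "gi"), "*".repeat(word.length));
  }
  return result;
}

console.log(isToxic("You are an idiot"));        // true
console.log(fallbackRephrase("You are an idiot")); // "You are an *****"
```

This kind of pure string matching is fast and dependency-free, which is why it suits Vercel's serverless limits better than loading a local ML model.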
- `POST /api/moderate` - Moderate a message
- `GET /` - Health check
Example API call:
```javascript
const response = await fetch('/api/moderate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: "Your message here" })
});
const result = await response.json();
```

For the complete ML functionality with local models, consider using Render, Railway, or Docker deployment options instead of Vercel, as Vercel has limitations on large model loading and longer execution times.
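The response schema is not shown above; assuming the endpoint returns JSON with fields such as `toxic` and `rephrased` (these names are assumptions, so check `api/moderate.py` for the actual schema), a receiver view might handle the result like this:

```javascript
// Hypothetical response handler; field names `toxic` and `rephrased`
// are assumptions about the API's JSON schema, for illustration only.
function displayMessage(result, originalText) {
  if (result.toxic) {
    // Prefer the AI-rephrased text; fall back to a placeholder.
    return result.rephrased ?? "[message removed]";
  }
  return originalText;
}

console.log(displayMessage({ toxic: false }, "hello"));
// "hello"
console.log(displayMessage({ toxic: true, rephrased: "Please calm down." }, "x"));
// "Please calm down."
```

Keeping this display logic separate from the `fetch` call makes it easy to unit-test without a running deployment.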
For more information about deployment options and alternatives, see VERCEL_INSTRUCTIONS.md.