HyperMind is a futuristic, adaptive AI research assistant and learning companion designed to democratize high-quality education. It leverages the Google Gemini API to provide interactive tutoring, instant quiz generation, debate simulation, and real-time multimodal voice/video interaction.
- Adaptive Learning Engine:
- Concept Tutor: Deep explanations with analogies and step-by-step reasoning.
- Debate Mode: A contrarian persona that challenges assumptions to strengthen critical thinking.
- Practice & Quiz: Instantly generates 10-question MCQ quizzes from topics or uploaded documents, with detailed AI analysis of results (see the quiz-generation sketch after this feature list).
- Multimodal Interaction:
- Chat: Rich text support with Markdown and Code Syntax Highlighting.
- Vision: Upload images for analysis.
- Document Analysis: Parse and summarize PDF documents (`pdfjs-dist`); see the parsing sketch after this feature list.
- Voice Integration: Text-to-Speech playback for AI responses.
- Interactive Visualizations:
- Generative UI (GenUI): Dynamic rendering of Line, Bar, and Area charts based on data generated by the AI.
- Diagramming: Automatic generation of flowcharts and mind maps using ReactFlow.
- 3D Elements: Immersive 3D scenes powered by Spline.
- Gemini Live (Real-time):
- Low-latency voice and video calls with the AI agent.
- Real-time audio visualization and camera streaming (see the audio-level sketch after this feature list).
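
A minimal sketch of how the quiz flow can request structured output from Gemini, assuming the `@google/genai` SDK listed in the tech stack below; the model name, schema fields, and `generateQuiz` helper are illustrative rather than the app's exact code:

```typescript
// Minimal sketch (not the app's exact implementation): ask Gemini for a
// 10-question MCQ quiz as JSON that matches a response schema.
import { GoogleGenAI, Type } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });

export async function generateQuiz(topic: string) {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents: `Create a 10-question multiple-choice quiz about: ${topic}`,
    config: {
      responseMimeType: "application/json",
      responseSchema: {
        type: Type.ARRAY,
        items: {
          type: Type.OBJECT,
          properties: {
            question: { type: Type.STRING },
            options: { type: Type.ARRAY, items: { type: Type.STRING } },
            answerIndex: { type: Type.INTEGER },
            explanation: { type: Type.STRING },
          },
          required: ["question", "options", "answerIndex"],
        },
      },
    },
  });
  // response.text holds the JSON string produced under the schema.
  return JSON.parse(response.text ?? "[]");
}
```

The parsed array can then be rendered as the quiz UI and sent back to the model for the results analysis step.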
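For document analysis and voice playback, here is a hedged sketch using `pdfjs-dist` and the browser's standard Web Speech API; the helper names are assumptions and may differ from the real components:

```typescript
// Illustrative helpers only; the app's actual components may differ.
import * as pdfjsLib from "pdfjs-dist";

// Note: pdf.js typically also needs GlobalWorkerOptions.workerSrc configured.
export async function extractPdfText(data: ArrayBuffer): Promise<string> {
  const pdf = await pdfjsLib.getDocument({ data }).promise;
  const pages: string[] = [];
  for (let i = 1; i <= pdf.numPages; i++) {
    const page = await pdf.getPage(i);
    const content = await page.getTextContent();
    // Each text item carries a `str` fragment; join them into one page string.
    pages.push(content.items.map((item) => ("str" in item ? item.str : "")).join(" "));
  }
  return pages.join("\n\n");
}

// Text-to-Speech playback via the standard Web Speech API.
export function speak(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.speak(utterance);
}
```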
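The real-time audio visualization can be fed by the standard Web Audio API; the following sketch is an assumed approach, not a copy of `components/LiveSession.tsx`:

```typescript
// Illustrative sketch: sample microphone levels with an AnalyserNode so a
// visualizer component can render them each animation frame.
export async function startLevelMeter(onLevel: (level: number) => void) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const tick = () => {
    analyser.getByteFrequencyData(bins);
    // Average bin energy, normalized to 0..1, as a simple loudness value.
    const level = bins.reduce((sum, v) => sum + v, 0) / (bins.length * 255);
    onLevel(level);
    requestAnimationFrame(tick);
  };
  tick();
}
```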
- Frontend Library: React 19
- Language: TypeScript
- AI Model: Google Gemini 2.5 Flash / Pro (`@google/genai` SDK)
- Styling: Tailwind CSS + Custom Animations
- 3D Rendering: Spline (`@splinetool/react-spline`)
- Visualization: Recharts (Data), ReactFlow (Diagrams); see the sketch after this list
- Utilities: Lucide React (Icons), Marked (Markdown), PrismJS (Syntax Highlighting)
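
As an illustration of the GenUI chart and diagramming features above, here is a hedged sketch of handing AI-generated data to Recharts and ReactFlow; the component structure and data shapes are assumptions, not the app's exact code:

```tsx
// Illustrative only: render AI-generated data points and graph structure.
import React from "react";
import { LineChart, Line, XAxis, YAxis, Tooltip } from "recharts";
import ReactFlow, { Node, Edge } from "reactflow";
import "reactflow/dist/style.css";

// Example shapes the model might be asked to emit.
const chartData = [
  { label: "2015", value: 10 },
  { label: "2020", value: 45 },
  { label: "2025", value: 80 },
];

export const GenUIChart = () => (
  <LineChart width={480} height={240} data={chartData}>
    <XAxis dataKey="label" />
    <YAxis />
    <Tooltip />
    <Line type="monotone" dataKey="value" />
  </LineChart>
);

const nodes: Node[] = [
  { id: "a", position: { x: 0, y: 0 }, data: { label: "Concept" } },
  { id: "b", position: { x: 200, y: 100 }, data: { label: "Example" } },
];
const edges: Edge[] = [{ id: "a-b", source: "a", target: "b" }];

export const GenUIDiagram = () => (
  <div style={{ width: 480, height: 320 }}>
    <ReactFlow nodes={nodes} edges={edges} fitView />
  </div>
);
```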
- Clone the repository: `git clone https://github.com/yourusername/hypermind.git`, then `cd hypermind`.
- Install dependencies. Note: this project uses a no-build ESM approach via an `importmap` in `index.html` for rapid prototyping, but it can be adapted to a standard Vite build. If running locally with a simple server, use `npx serve .`.
- Environment Configuration: You must provide a Google Gemini API key.
  Security Note: In a production environment, never expose your API key on the client side; use a proxy server instead (a minimal sketch follows these steps).
  For local testing, ensure `process.env.API_KEY` is accessible, or replace the placeholder in `components/ChatInterface.tsx` and `components/LiveSession.tsx` with your key (not recommended for public commits).
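
As a hedged illustration of the proxy advice above (not part of this repository), a small server can hold the key and forward prompts; Express, the port, and the route name are assumptions:

```typescript
// Minimal proxy sketch: only the server ever sees the Gemini API key.
// Express, the port, and the /api/generate route are illustrative choices.
import express from "express";
import { GoogleGenAI } from "@google/genai";

const app = express();
app.use(express.json());

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });

app.post("/api/generate", async (req, res) => {
  try {
    const response = await ai.models.generateContent({
      model: "gemini-2.5-flash",
      contents: req.body.prompt,
    });
    res.json({ text: response.text });
  } catch {
    res.status(500).json({ error: "Generation failed" });
  }
});

app.listen(3001, () => console.log("Gemini proxy listening on :3001"));
```

The client would then call `/api/generate` instead of embedding the key in the browser bundle.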
- Landing Page: Select a mode ("Concept Tutor" or "Practice & Quiz") to initialize the Neural Link.
- Chat Interface:
- Type a topic to start learning.
- Click the Paperclip to upload PDFs or Images.
- Switch modes using the pill selector in the top right.
- Command Examples:
- "Explain Quantum Entanglement with a diagram" (Triggers ReactFlow diagram)
- "Plot the growth of AI adoption over the last decade" (Triggers GenUI Chart)
- "Quiz me on the PDF I just uploaded" (Triggers Quiz Mode)
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the project
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request