Commit 05149ce

Author: msalsouri
Implement WSL GPU support for Ollama integration and reorganize documentation
This commit includes:

- Fix for NVIDIA libraries in WSL (symlinks and paths)
- Enhanced Docker NVIDIA runtime integration
- New scripts for managing Ollama processes and diagnosing issues
- Jupyter notebook (nvidia_wsl_fix_guide.ipynb) for WSL GPU troubleshooting
- Documentation of GPU status and fixes
- Reorganization of the docs directory into logical categories (setup, guides, architecture, troubleshooting, status)
1 parent 872317e commit 05149ce

29 files changed: +823 −0 lines

docs/AI_ASSISTANT_GUIDE.md

Whitespace-only changes.
7 files renamed without changes.

docs/guides/AI_Assistant_Guide.md

Lines changed: 113 additions & 0 deletions
# AI Assistant Guide for CodexContinue Project

## Project Overview

CodexContinue is a cross-platform development project that integrates machine learning capabilities with a web application. The project is designed to run on both macOS and Windows (via WSL, the Windows Subsystem for Linux), with specific optimizations for each platform.

## Repository Structure
- `app/`: Core application code
- `backend/`: Backend API server built with FastAPI
- `frontend/`: Frontend web interface
- `ml/`: Machine learning components and model integration
- `docker/`: Docker configuration for containerized development
- `scripts/`: Utility scripts for setup, maintenance, and diagnostics
- `docs/`: Documentation for various aspects of the project
- `notebooks/`: Jupyter notebooks for data analysis and demonstrations
## Current Development Context

You are assisting with the cross-platform setup of CodexContinue, specifically focusing on:

1. **Windows WSL Development Environment**: Setting up the development environment in Windows Subsystem for Linux with GPU support for the machine learning components
2. **DevContainer Configuration**: Ensuring proper DevContainer configuration across platforms
3. **Ollama Integration**: Configuring Ollama (a self-hosted large language model server) to run properly with GPU acceleration in WSL
## Key Files and Directories

### Configuration Files

- `.devcontainer/`: Contains the DevContainer configuration
- `docker-compose.yml`: Main Docker Compose configuration
- `docker-compose.macos.yml`: macOS-specific Docker Compose overrides (CPU-only for Ollama)
- `docker-compose.dev.yml`: Development environment Docker Compose configuration
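The CPU/GPU split between these override files can be sketched as a Compose fragment. The `deploy` GPU reservation below is standard Compose syntax, but the service name, image, and exact contents of the repository's files are assumptions:

```yaml
# Hypothetical excerpt of a GPU-enabled override (as used under WSL).
# docker-compose.macos.yml would simply omit the deploy block for CPU-only mode.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```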
### Documentation

- `docs/WINDOWS_WSL_GUIDE.md`: Guide for setting up the project in WSL
- `docs/DEVCONTAINER_TROUBLESHOOTING.md`: Troubleshooting for DevContainer issues
- `docs/DEVCONTAINER_VOLUME_MOUNT_FIX.md`: Solutions for volume-mounting issues in WSL
### Scripts

- `scripts/fix-devcontainer.sh`: Fixes DevContainer configuration issues
- `scripts/wsl-quick-setup.sh`: Quick setup script for the WSL environment
- `scripts/start-ollama-wsl.sh`: Starts Ollama in WSL with GPU support
- `scripts/check-platform.sh`: Verifies the environment configuration
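Platform detection in the spirit of `scripts/check-platform.sh` can be sketched as follows. The override file names come from the list above; the selection logic itself is an assumption, not the script's actual contents:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: choose the Docker Compose override for the current
# platform (macOS = CPU-only Ollama, WSL = GPU-accelerated Ollama).
compose_override() {
    case "$(uname -s)" in
        Darwin)
            # macOS: CPU-only Ollama override
            echo "docker-compose.macos.yml"
            ;;
        Linux)
            if grep -qi microsoft /proc/version 2>/dev/null; then
                # WSL kernels report "microsoft": use the GPU-enabled dev override
                echo "docker-compose.dev.yml"
            else
                # plain Linux: the dev override also applies
                echo "docker-compose.dev.yml"
            fi
            ;;
        *)
            echo "docker-compose.yml"
            ;;
    esac
}

compose_override
```

A caller would combine it with the base file, e.g. `docker compose -f docker-compose.yml -f "$(compose_override)" up -d`.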
### Machine Learning

- `ml/models/ollama/`: Ollama model configuration
- `ml/scripts/build_codexcontinue_model.sh`: Script to build the custom model
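The build script presumably feeds an Ollama Modelfile to `ollama create`; a hypothetical sketch follows, where the base model, parameter, and system prompt are all assumptions rather than the repository's actual configuration:

```
# ml/models/ollama/Modelfile (hypothetical contents)
FROM llama3
PARAMETER temperature 0.2
SYSTEM """You are CodexContinue, an AI assistant for coding and documentation."""
```

Such a file would be registered with something like `ollama create codexcontinue -f Modelfile`.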
## Development Workflow

1. **Repository Setup**: The project is maintained in a Git repository with coordination between macOS and Windows environments
2. **Container-Based Development**: Development is primarily done in Docker containers via VS Code DevContainers
3. **Platform-Specific Configurations**:
   - macOS uses CPU-only mode for Ollama
   - Windows (WSL) uses GPU acceleration for Ollama
## Recent Changes

Recent work has focused on:

1. Adding WSL support with GPU acceleration
2. Fixing DevContainer configurations for cross-platform compatibility
3. Creating documentation and scripts for smoother onboarding
## Current Status

The project has been successfully:

1. Set up on macOS
2. Configured for Windows WSL development
3. Updated to resolve DevContainer issues between platforms

The user is now at the stage of verifying the GPU integration with Ollama in the WSL environment.
## Common Issues and Solutions

1. **Volume Mount Issues in WSL**: Fixed by using relative paths instead of `${localWorkspaceFolder}`
2. **GPU Integration in WSL**: Requires proper NVIDIA driver installation and container toolkit setup
3. **Cross-Platform Development**: Docker volumes are platform-specific, so models need to be built separately on each platform
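The volume-mount fix in item 1 can be illustrated with a minimal Compose fragment; the service name and container path here are hypothetical:

```yaml
services:
  app:
    volumes:
      # Relative host path: resolves correctly from inside WSL.
      - .:/workspace
      # ${localWorkspaceFolder} can expand to a Windows-style path under WSL
      # and fail to mount, hence the fix above.
      # - ${localWorkspaceFolder}:/workspace
```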
## Next Steps

1. Verify GPU support in WSL for Ollama
2. Test the full application stack
3. Continue development across both platforms
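For step 1, the checks below form a plausible smoke-test sequence; the helper emits them rather than running them, since success depends on the local setup. The CUDA image tag is an assumption, and 11434 is Ollama's default API port:

```shell
#!/usr/bin/env sh
# Hypothetical checklist for verifying GPU support for Ollama in WSL.
# Pipe the output to `sh -x` to actually execute the checks.
gpu_checks() {
    # GPU visible inside WSL (needs the Windows NVIDIA driver)
    echo "nvidia-smi -L"
    # GPU visible inside a container (needs the NVIDIA container toolkit)
    echo "docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi -L"
    # Ollama answering on its default API port
    echo "curl -s http://localhost:11434/api/tags"
}

gpu_checks
```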
## Terminology

- **WSL**: Windows Subsystem for Linux
- **DevContainer**: Development containers for VS Code
- **Ollama**: Self-hosted large language model server
- **CodexContinue**: The project name, referring to an AI-assisted coding and documentation tool
- **Docker**: Containerization platform used for development and deployment
- **GPU**: Graphics Processing Unit, used for accelerating machine learning tasks
- **NVIDIA**: Manufacturer of GPUs, relevant for WSL GPU support
- **FastAPI**: Web framework for building APIs with Python
- **Docker Compose**: Tool for defining and running multi-container Docker applications
- **Jupyter Notebooks**: Interactive notebooks for data analysis and visualization
- **Volume Mounting**: The process of linking directories between the host and container
- **Cross-Platform Development**: Developing software that runs on multiple operating systems
- **Containerization**: The practice of packaging software into containers for consistent deployment
- **Machine Learning**: A subset of AI focused on building systems that learn from data
- **Model Integration**: The process of incorporating machine learning models into applications
- **API**: Application Programming Interface, a set of rules for building software applications
- **Frontend**: The user interface of the application
- **Backend**: The server-side logic and database interactions of the application
- **Dockerfile**: A script containing instructions to build a Docker image
- **Image**: A lightweight, standalone, executable package that includes everything needed to run a piece of software
- **Container**: A standard unit of software that packages up code and all its dependencies