Commit b334bd5

Add comprehensive Windows WSL support with GPU integration
- Created start-ollama-wsl.sh for starting Ollama with GPU in WSL
- Added troubleshoot-wsl-gpu.sh for diagnosing WSL GPU issues
- Created WINDOWS_WSL_GUIDE.md with detailed instructions
- Created WINDOWS_WSL_IMPLEMENTATION.md with implementation details
- Updated README, NEXT_STEPS and cross-platform documentation
- Enhanced platform detection and GPU verification
- Made all scripts executable
1 parent: d03cb96

21 files changed: +2232 −4 lines

.devcontainer/devcontainer.json

Lines changed: 9 additions & 2 deletions

@@ -44,8 +44,15 @@
     }
   },
   "remoteUser": "appuser",
-  "forwardPorts": [8000, 8501, 5000, 6379, 11434, 8888],
+  "forwardPorts": [
+    8000,
+    8501,
+    5000,
+    6379,
+    11434,
+    8888
+  ],
   "shutdownAction": "stopCompose",
   "postCreateCommand": "echo 'Starting container initialization...' && ls -la /app && echo 'Installing Python packages...' && find /app -name 'requirements*.txt' -exec pip install -r {} \\; && echo 'Environment ready!'",
   "postStartCommand": "echo 'Container started successfully - CodexContinue development environment is ready!'"
-}
+}
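Of the forwarded ports, 11434 is Ollama's default API port and 8501 is Streamlit's default, so a quick smoke test from the host might look like this (a sketch, not part of the commit; adjust the ports to match your services):

```bash
# Check that the forwarded services answer from the host.
curl -s http://localhost:11434/api/version   # Ollama reports its version as JSON
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8501   # expect 200 from the Streamlit UI
```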

NEXT_STEPS.md

Lines changed: 7 additions & 1 deletion

@@ -40,8 +40,14 @@ git push -u origin main
 
 2. Choose your Windows setup option:
 
+   - **Windows Subsystem for Linux (WSL):** Follow [docs/WINDOWS_WSL_GUIDE.md](docs/WINDOWS_WSL_GUIDE.md) (Recommended for best GPU integration)
+
+     ```bash
+     # Quick setup after cloning
+     ./scripts/wsl-quick-setup.sh
+     ```
+
   - **Native Windows with Docker Desktop:** Follow [docs/WINDOWS_QUICKSTART.md](docs/WINDOWS_QUICKSTART.md)
-  - **Windows Subsystem for Linux (WSL):** Follow [docs/WSL_SETUP.md](docs/WSL_SETUP.md) (Recommended for better GPU integration)
 
 ## 5. Development Workflow
 
README.md

Lines changed: 40 additions & 0 deletions

@@ -17,6 +17,7 @@ CodexContinue features built-in learning capabilities through:
 4. **Knowledge Integration**: Easy integration of new knowledge and capabilities
 
 The system uses a custom CodexContinue model built on Llama3, specifically designed for software development tasks with:
+
 - Expanded code generation capabilities
 - Technical problem-solving expertise
 - Advanced reasoning for development workflows
@@ -51,7 +52,44 @@ git clone https://github.com/yourusername/CodexContinue.git
 cd CodexContinue
 
 # Start the development environment
+./scripts/start-dev-environment.sh
+```
+
+### Platform-Specific Instructions
+
+#### macOS
+
+For macOS, use the CPU-only configuration for Ollama:
+
+```bash
+./scripts/start-ollama-macos.sh
+```
+
+#### Windows (with WSL)
+
+For Windows with WSL (recommended):
+
+```bash
+# Quick setup
+./scripts/wsl-quick-setup.sh
+
+# Or start Ollama with GPU support
+./scripts/start-ollama-wsl.sh
+```
+
+See [Windows WSL Guide](docs/WINDOWS_WSL_GUIDE.md) for detailed instructions.
+
+#### Windows (native)
+
+See [Windows Quick Start](docs/WINDOWS_QUICKSTART.md) for setup instructions.
+
+```bash
+cd CodexContinue
+
+# Start the development environment
+
 docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
+
 ```
 
 ### Production Environment
@@ -107,21 +143,25 @@ We provide a convenience script to set up a new service with the recommended str
 CodexContinue can be customized for different domains:
 
 ### 🏥 Health Domain
+
 - Medical data processing
 - Healthcare-focused UI
 - Medical terminology integration
 
 ### ⚖️ Legal Domain
+
 - Legal document processing
 - Case management
 - Legal research capabilities
 
 ### 💰 Finance Domain
+
 - Financial data analysis
 - Market trend visualization
 - Investment planning tools
 
 ### 👩‍💻 Developer Domain
+
 - Code generation and analysis
 - Project scaffolding
 - Documentation assistance
docs/CROSS_PLATFORM_DEVELOPMENT.md (new file)

Lines changed: 127 additions & 0 deletions

# Cross-Platform Development for CodexContinue

This document explains how to work with CodexContinue across different platforms, specifically moving between macOS and Windows environments.

## Platform-Specific Configurations

### Windows with GPU Support

Windows machines with NVIDIA GPUs can use full GPU acceleration for the Ollama service in either of two ways:

1. **Native Windows with Docker Desktop**:

   ```bash
   # Start the full environment with GPU support
   docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
   ```

2. **Windows Subsystem for Linux (WSL)** (Recommended):

   ```bash
   # Quick setup script for WSL
   ./scripts/wsl-quick-setup.sh

   # OR start just the Ollama service with GPU support
   ./scripts/start-ollama-wsl.sh

   # Then start other services as needed
   docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
   ```

Using WSL generally provides better GPU integration and performance with Docker. See [WINDOWS_WSL_GUIDE.md](WINDOWS_WSL_GUIDE.md) for detailed instructions.

The standard `docker-compose.yml` includes this GPU configuration for Ollama:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```
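Independent of this project's scripts, a quick way to confirm that Docker can reach the GPU at all is a throwaway CUDA container (a sketch; the image tag is only an example):

```bash
# Should print the same GPU table as running nvidia-smi on the host.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```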
### macOS (CPU-only)

Since macOS doesn't support the same GPU integration, we've created a modified configuration:

```bash
# Start the Ollama service in CPU-only mode
./scripts/start-ollama-macos.sh

# Start the rest of the environment
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```

This uses `docker-compose.macos.yml`, which removes the GPU requirements for Ollama.

## Git Repository Setup for Cross-Platform Work

To move between platforms effectively:

1. Create a remote Git repository on GitHub or another hosting service

2. Use our helper script to set up the remote connection:

   ```bash
   ./scripts/setup-git-remote.sh
   ```

3. Before switching platforms, commit and push your changes:

   ```bash
   git add .
   git commit -m "Your commit message"
   git push
   ```

4. On your other platform (e.g., Windows), clone the repository:

   ```bash
   git clone https://github.com/your-username/CodexContinue.git
   cd CodexContinue
   ```

5. Start the environment with the appropriate configuration for that platform

## Ollama Model Configuration

The Ollama model works on both platforms but runs faster with GPU acceleration on Windows:

* Both platforms use the same Modelfile at `ml/models/ollama/Modelfile`
* The model is built using the same script on both platforms: `ml/scripts/build_codexcontinue_model.sh`
* Model weights are stored in a Docker volume, so they're isolated to each platform instance
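To see where those weights live on a given machine, standard volume commands help (a sketch; the actual volume name depends on the Compose project):

```bash
# List candidate volumes; the "ollama" name filter is an assumption.
docker volume ls --filter name=ollama

# Inspect one to find the mountpoint that holds the model weights.
docker volume inspect <volume-name>
```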
## Development Workflow

A typical cross-platform workflow might look like this:

1. Develop and test initial features on macOS
2. Push changes to GitHub
3. Clone on Windows for performance-intensive tasks that use the GPU
4. Make additional changes on Windows
5. Push back to GitHub
6. Pull the latest changes on macOS

## Troubleshooting

### Windows WSL Issues

If you're having issues with GPU access in WSL:

```bash
# Run the GPU troubleshooting script
./scripts/troubleshoot-wsl-gpu.sh
```

This script diagnoses common GPU issues in WSL and recommends fixes.
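The checks it performs are roughly of this shape (a sketch of typical WSL GPU diagnostics, not the script itself):

```bash
# Confirm we are actually inside WSL.
grep -qi microsoft /proc/version && echo "WSL detected"

# Confirm the Windows NVIDIA driver is exposed to WSL.
nvidia-smi || echo "nvidia-smi failed: install a WSL-enabled NVIDIA driver on Windows"

# Confirm Docker is aware of the NVIDIA runtime.
docker info 2>/dev/null | grep -qi nvidia || echo "NVIDIA runtime not visible to Docker"
```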
### General Issues

If you encounter issues when moving between platforms:

* Check the Docker configuration for each platform
* Verify the Ollama model was built correctly (run `./scripts/check_ollama_model.sh`)
* Ensure the latest code is pulled from the remote repository
* Docker volumes may need to be recreated when switching platforms
Lines changed: 70 additions & 0 deletions

# Cross-Platform Setup Summary

## Overview of Completed Setup

We've successfully prepared the CodexContinue project for cross-platform development between macOS and Windows. The setup is designed to leverage the GPU capabilities of Windows while maintaining compatibility with macOS.

## Key Components Added

1. **Platform-Specific Docker Configurations**
   - Standard `docker-compose.yml` with GPU support for Windows
   - Added `docker-compose.macos.yml` for CPU-only operation on macOS
   - Created `start-ollama-macos.sh` script for macOS
   - Created `start-ollama-wsl.sh` script for Windows WSL
   - Created `wsl-quick-setup.sh` for easy setup in WSL
   - Created `troubleshoot-wsl-gpu.sh` for diagnosing GPU issues in WSL
   - Platform-specific startup scripts

2. **Git Integration**
   - Initialized git repository
   - Added comprehensive `.gitignore` file
   - Created `setup-git-remote.sh` script to connect to remote repositories
   - Documented workflow for cross-platform development

3. **Documentation**
   - Added `CROSS_PLATFORM_DEVELOPMENT.md` with detailed workflow
   - Created `WINDOWS_WSL_GUIDE.md` with comprehensive WSL setup and usage instructions
   - Created `WSL_SETUP.md` with detailed WSL configuration steps
   - Added `WINDOWS_QUICKSTART.md` for fast setup on Windows
   - Created `OLLAMA_MODEL_TESTING.md` for testing the model across platforms
   - Created `WINDOWS_WSL_IMPLEMENTATION.md` with implementation details
   - Updated README and README-DEV with cross-platform information

4. **Ollama Model Integration**
   - Verified the Modelfile configuration
   - Enhanced `check_ollama_model.sh` to be platform-agnostic
   - Created `check-platform.sh` to detect WSL and verify GPU access
   - Created `check-gpu-support.sh` to verify GPU support for Ollama
   - Documented Ollama model usage in `ml/models/ollama/README.md`
   - Ensured model build scripts work across platforms
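The platform detection is presumably along these lines (a sketch of what a script like `check-platform.sh` might do; the real script may differ):

```bash
# Distinguish WSL, macOS, and native Linux.
if grep -qi microsoft /proc/version 2>/dev/null; then
  echo "WSL detected: GPU passthrough available via the Windows NVIDIA driver"
elif [ "$(uname -s)" = "Darwin" ]; then
  echo "macOS detected: use the CPU-only Ollama configuration"
else
  echo "Assuming native Linux"
fi
```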
## Next Steps

All necessary changes have been completed. The next steps are outlined in `NEXT_STEPS.md`:

1. Create a remote Git repository
2. Connect your local repository to the remote
3. Push your code to the remote repository
4. Clone and set up on your Windows system

## Benefits of This Setup

1. **Development Flexibility**
   - Develop on macOS for convenience
   - Use Windows with GPU for performance-intensive tasks
   - Seamlessly move between platforms

2. **Optimized Performance**
   - GPU acceleration on Windows for faster model inference
   - Compatible configuration for macOS development

3. **Consistent Environment**
   - Same core Docker configuration across platforms
   - Only platform-specific differences are isolated

4. **Documentation and Guides**
   - Clear instructions for both platforms
   - Troubleshooting guides for common issues
   - Testing procedures for verifying functionality

The project is now ready for you to create a remote repository and continue development on both macOS and Windows.
