
Commit 591a637

Merge pull request #53 from drivecore/feature/provider-docs

Add documentation for LLM providers (Anthropic, OpenAI, Ollama)

2 parents 0fea5bd + 243798f commit 591a637

File tree

5 files changed: +316 −0 lines changed

docs/providers/_category_.json

+8
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,8 @@
1+
{
2+
"label": "Providers",
3+
"position": 4,
4+
"link": {
5+
"type": "doc",
6+
"id": "providers/index"
7+
}
8+
}

docs/providers/anthropic.md

---
sidebar_position: 2
---

# Anthropic (Claude)

[Anthropic](https://www.anthropic.com/) is the company behind the Claude family of large language models, known for their strong reasoning capabilities, long context windows, and robust tool-calling support.

## Setup

To use Claude models with MyCoder, you need an Anthropic API key:

1. Create an account at [Anthropic Console](https://console.anthropic.com/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the Anthropic API key as an environment variable:

```bash
export ANTHROPIC_API_KEY=your_api_key_here
```

### Configuration

Configure MyCoder to use Anthropic's Claude in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'anthropic',
  model: 'claude-3-7-sonnet-20250219',

  // Optional: Set API key directly (environment variable is preferred)
  // anthropicApiKey: 'your_api_key_here',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```
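
As a sketch of how the config-vs-environment choice above might be resolved, here is a hypothetical helper (not MyCoder's actual implementation), assuming an explicit `anthropicApiKey` in the config overrides the `ANTHROPIC_API_KEY` environment variable when both are set:

```javascript
// Hypothetical sketch, not MyCoder's actual code: resolve the Anthropic API
// key, preferring an explicit config entry over the environment variable.
function resolveAnthropicKey(config, env = process.env) {
  return config.anthropicApiKey ?? env.ANTHROPIC_API_KEY;
}
```

If neither source provides a key, the helper returns `undefined`, which a caller would treat as a configuration error.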

## Supported Models

Anthropic offers several Claude models at different capability and price points:

- `claude-3-7-sonnet-20250219` (recommended) - Strong reasoning and tool-calling capabilities with 200K context
- `claude-3-5-sonnet-20240620` - Balanced performance and cost with 200K context
- `claude-3-opus-20240229` - Most capable model with 200K context
- `claude-3-haiku-20240307` - Fastest and most cost-effective with 200K context

## Best Practices

- Claude models excel at complex reasoning tasks and multi-step planning
- They have strong tool-calling capabilities, making them ideal for MyCoder workflows
- Claude models have a 200K-token context window, so large codebases can be processed in context
- For cost-sensitive applications, consider using Claude Haiku for simpler tasks

## Troubleshooting

If you encounter issues with Anthropic's Claude:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For tool-calling issues, ensure your functions are properly formatted
- Monitor your token usage to avoid unexpected costs

For more information, visit the [Anthropic Documentation](https://docs.anthropic.com/).

docs/providers/index.mdx

---
sidebar_position: 1
---

# LLM Providers

MyCoder supports multiple Language Model (LLM) providers, giving you the flexibility to choose the best solution for your needs. This section documents how to configure and use each supported provider.

## Supported Providers

MyCoder currently supports the following LLM providers:

- [**Anthropic**](./anthropic.md) - Claude models from Anthropic
- [**OpenAI**](./openai.md) - GPT models from OpenAI
- [**Ollama**](./ollama.md) - Self-hosted open-source models via Ollama

## Configuring Providers

Each provider has its own configuration requirements, typically involving:

1. Setting API keys or connection details
2. Selecting a specific model
3. Configuring provider-specific parameters

You can configure the provider in your `mycoder.config.js` file. Here's a basic example:

```javascript
export default {
  // Provider selection
  provider: 'anthropic',
  model: 'claude-3-7-sonnet-20250219',

  // Other MyCoder settings
  // ...
};
```

## Provider Selection Considerations

When choosing a provider, consider:

- **Performance**: Different providers have different capabilities and performance characteristics
- **Cost**: Pricing varies significantly between providers
- **Features**: Some models have better support for specific features like tool calling
- **Availability**: Self-hosted options like Ollama provide more control but require setup
- **Privacy**: Self-hosted options may offer better privacy for sensitive work

## Provider-Specific Documentation

For detailed setup instructions for each provider, see:

- [Anthropic Configuration](./anthropic.md)
- [OpenAI Configuration](./openai.md)
- [Ollama Configuration](./ollama.md)

docs/providers/ollama.md

---
sidebar_position: 4
---

# Ollama

[Ollama](https://ollama.ai/) is a platform for running open-source large language models locally. It lets you run various models on your own hardware, giving you privacy and control over your AI interactions.

## Setup

To use Ollama with MyCoder:

1. Install Ollama from [ollama.ai](https://ollama.ai/)
2. Start the Ollama service
3. Pull a model that supports tool calling
4. Configure MyCoder to use Ollama

### Installing Ollama

Follow the installation instructions on the [Ollama website](https://ollama.ai/) for your operating system.

For macOS and Linux:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

For Windows, download the installer from the Ollama website.

### Pulling a Model

After installing Ollama, pull a model that supports tool calling. **Important: most Ollama models do not support tool calling**, which MyCoder requires.

A recommended model with tool-calling support is:

```bash
ollama pull medragondot/Sky-T1-32B-Preview:latest
```

### Environment Variables

You can set the Ollama base URL as an environment variable (it defaults to `http://localhost:11434` if not set):

```bash
export OLLAMA_BASE_URL=http://localhost:11434
```

### Configuration

Configure MyCoder to use Ollama in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'ollama',
  model: 'medragondot/Sky-T1-32B-Preview:latest',

  // Optional: Custom base URL (defaults to http://localhost:11434)
  // ollamaBaseUrl: 'http://localhost:11434',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Tool Calling Support

**Important**: For MyCoder to function properly, the Ollama model must support tool calling (function calling). Most open-source models available through Ollama **do not** support this feature yet.

Confirmed models with tool-calling support:

- `medragondot/Sky-T1-32B-Preview:latest` - Recommended for MyCoder

If you use other models, verify their tool-calling capabilities before attempting to use them with MyCoder.

## Hardware Requirements

Running large language models locally requires significant hardware resources:

- Minimum 16GB RAM (32GB+ recommended)
- GPU with at least 8GB VRAM for optimal performance
- SSD storage for model files (models can be 5-20GB each)

## Best Practices

- Start with smaller models if you have limited hardware
- Ensure your model supports tool calling before using it with MyCoder
- Run on a machine with a dedicated GPU for better performance
- Consider using a cloud provider's API for resource-intensive tasks if local hardware is insufficient

## Troubleshooting

If you encounter issues with Ollama:

- Verify the Ollama service is running (`ollama serve`)
- Check that you've pulled the correct model
- Ensure the model supports tool calling
- Verify your hardware meets the minimum requirements
- Check Ollama logs for specific error messages
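
The first two checks can be scripted against Ollama's `GET /api/tags` endpoint, which lists the locally pulled models. A minimal sketch (`hasModel` and `checkOllamaModel` are hypothetical helper names, not part of Ollama or MyCoder):

```javascript
// Check whether a model has been pulled, using Ollama's GET /api/tags
// endpoint, which returns JSON shaped like { models: [{ name: '...' }, ...] }.
function hasModel(tagsResponse, name) {
  return (tagsResponse.models ?? []).some((m) => m.name === name);
}

async function checkOllamaModel(name) {
  const baseUrl = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';
  const res = await fetch(`${baseUrl}/api/tags`); // throws if the service is down
  return hasModel(await res.json(), name);
}
```

A failed `fetch` here means the Ollama service is not running; a `false` result means the model has not been pulled yet.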

For more information, visit the [Ollama Documentation](https://github.com/ollama/ollama/tree/main/docs).

docs/providers/openai.md

---
sidebar_position: 3
---

# OpenAI

[OpenAI](https://openai.com/) provides a suite of powerful language models, including the GPT family, which offer strong capabilities for code generation, analysis, and tool use.

## Setup

To use OpenAI models with MyCoder, you need an OpenAI API key:

1. Create an account at [OpenAI Platform](https://platform.openai.com/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=your_api_key_here
```

Optionally, if you're using an organization-based account:

```bash
export OPENAI_ORGANIZATION=your_organization_id
```

### Configuration

Configure MyCoder to use OpenAI in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'openai',
  model: 'gpt-4o',

  // Optional: Set API key directly (environment variable is preferred)
  // openaiApiKey: 'your_api_key_here',
  // openaiOrganization: 'your_organization_id',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Supported Models

OpenAI offers several models with different capabilities:

- `gpt-4o` (recommended) - Latest model with strong reasoning and tool-calling capabilities
- `gpt-4-turbo` - Strong performance with 128K context window
- `gpt-4` - Original GPT-4 model with 8K context window
- `gpt-3.5-turbo` - More affordable option for simpler tasks

## Best Practices

- GPT-4o provides the best balance of performance and cost for most MyCoder tasks
- For complex programming tasks, use GPT-4 models rather than GPT-3.5
- The tool-calling capabilities in GPT-4o are particularly strong for MyCoder workflows
- Use the JSON response format for structured outputs when needed
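
To illustrate the JSON-response tip, the Chat Completions API accepts `response_format: { type: 'json_object' }`; note that JSON mode also requires the word "JSON" to appear somewhere in the messages. Only the request body is sketched here; sending it requires your API key and an HTTP client or the `openai` package:

```javascript
// Request body for JSON-mode output from the OpenAI Chat Completions API.
// JSON mode requires the word "JSON" somewhere in the messages.
const body = {
  model: 'gpt-4o',
  response_format: { type: 'json_object' },
  messages: [
    { role: 'system', content: 'Reply with JSON: {"files": [...]}.' },
    { role: 'user', content: 'Which config files should I create?' },
  ],
};
```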

## Troubleshooting

If you encounter issues with OpenAI:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For rate limit issues, implement exponential backoff in your requests
- Monitor your token usage to avoid unexpected costs
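
The backoff advice can be sketched as a small retry wrapper (a hypothetical helper, not part of MyCoder or the OpenAI SDK): wait `baseMs * 2^attempt` between attempts and give up after a fixed number of retries:

```javascript
// Retry an async call with exponential backoff: 500ms, 1s, 2s, 4s, ...
// `fn` is any async function that may throw (e.g. on an HTTP 429).
async function withBackoff(fn, { retries = 5, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
}
```

In production you would typically also honor the `Retry-After` response header and add random jitter to the delay.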

For more information, visit the [OpenAI Documentation](https://platform.openai.com/docs/).
