
Conversation

@williamjameshandley
Contributor

feat: Add support for xAI's grok

This PR adds support for xAI's Grok models via their OpenAI-compatible API. Since xAI's API works with the OpenAI SDK, this implementation extends the existing OpenAI provider.

The xAI documentation provides an example of using the OpenAI Python SDK:

import os
from openai import OpenAI

XAI_API_KEY = os.getenv("XAI_API_KEY")
client = OpenAI(
    api_key=XAI_API_KEY,
    base_url="https://api.x.ai/v1",
)

completion = client.chat.completions.create(
    model="grok-2-latest",
    messages=[
        {"role": "system", "content": "You are Grok, a chatbot inspired by the Hitchhiker's Guide to the Galaxy."},
        {"role": "user", "content": "What is the meaning of life, the universe, and everything?"},
    ],
)

print(completion.choices[0].message.content)

To facilitate code reuse and accommodate xAI's pricing structure, the OpenAI provider interface was slightly modified to move pricing logic into the class itself.

This change allows for seamless integration with xAI while maintaining compatibility with existing OpenAI models. Pricing for grok-beta and grok-2 models has been included.
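The refactoring described above can be sketched as follows. This is an illustrative outline, not gpt-cli's actual provider interface: the class and attribute names (`OpenAICompatibleProvider`, `Pricing`, `cost`) are invented, and the per-token prices are placeholders rather than xAI's real rates.

```python
# Hypothetical sketch: pricing logic lives on the provider class, so an xAI
# subclass only overrides the base URL and its own price table.
# Names and prices are illustrative, not gpt-cli's real interface or xAI's rates.
from dataclasses import dataclass


@dataclass
class Pricing:
    prompt: float      # USD per 1M input tokens (placeholder values below)
    completion: float  # USD per 1M output tokens


class OpenAICompatibleProvider:
    base_url = "https://api.openai.com/v1"
    # model name -> pricing; subclasses supply their own table
    pricing: dict[str, Pricing] = {}

    def cost(self, model: str, prompt_tokens: int, completion_tokens: int) -> float:
        """Compute the cost of a completion from the class's own price table."""
        p = self.pricing[model]
        return (prompt_tokens * p.prompt + completion_tokens * p.completion) / 1e6


class XAIProvider(OpenAICompatibleProvider):
    base_url = "https://api.x.ai/v1"
    pricing = {
        "grok-beta": Pricing(prompt=5.0, completion=15.0),
        "grok-2-latest": Pricing(prompt=2.0, completion=10.0),
    }
```

With this shape, adding another OpenAI-compatible vendor is a subclass with a URL and a price table, and the shared request/response code stays in the base class.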

Tested and working (see attached screenshot).

@kharvd
Owner

kharvd commented Jan 21, 2025

I really don't want to have dozens of OpenAI-compatible providers. Instead, it would be neat to set the OpenAI API URL per model config (it is already possible to set it globally: https://github.com/kharvd/gpt-cli?tab=readme-ov-file#customize-openai-api-url)
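For reference, the global override mentioned above looks roughly like this in the gpt-cli YAML config. The exact key name is assumed from the README anchor linked above, so treat this as a sketch rather than verified syntax:

```yaml
# Sketch of the existing global override (key name assumed from the README):
# points every OpenAI request at xAI's compatible endpoint.
openai_base_url: https://api.x.ai/v1
openai_api_key: $XAI_API_KEY
```

Being global, this reroutes all OpenAI-prefixed models at once, which is why a per-model setting is being discussed here.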

@kharvd
Owner

kharvd commented Jan 21, 2025

#106

@williamjameshandley
Contributor Author

Agree that proliferation of interfaces should be avoided. How would this handle the different model names/prices, though?

@kharvd
Owner

kharvd commented Jan 21, 2025

Model names are currently handled through the oai-compat: prefix. Pricing is not supported yet, unfortunately. This makes me wonder if we should add model configs instead of assistant configs for this. That is, you should be able to define a new OpenAI-compatible model in the config file, specify an API key, base URL, and price per input/output token, and then use it in assistants or through the CLI args. Let me know if you want to give that a go
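A config section along the lines proposed above might look like this. This is purely hypothetical: the `models:` section and all key names are invented to illustrate the idea, not an existing gpt-cli feature, and the prices are placeholders:

```yaml
# Hypothetical model-config sketch (not an existing gpt-cli feature):
# each model carries its own endpoint, key, and pricing.
models:
  grok-2-latest:
    base_url: https://api.x.ai/v1
    api_key: $XAI_API_KEY
    pricing:
      prompt: 2.0        # placeholder: USD per 1M input tokens
      completion: 10.0   # placeholder: USD per 1M output tokens

assistants:
  grok:
    model: grok-2-latest
```

This would keep one OpenAI-compatible provider in the code while letting users declare any number of compatible vendors in config.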

