
Conversation


@amotl (Member) commented Jul 28, 2025

  • gemma3:1b looks very promising when used with LlamaIndex's NLSQLTableQueryEngine. Validated with a basic Text-to-SQL job, it responds in approx. 8 seconds on a lousy CPU, while OpenAI GPT-4.1 via SaaS takes up to 5 seconds for the same job.
  • Don't explicitly specify table names to NLSQLTableQueryEngine for generic use cases (see the sketch below).
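
For reference, a minimal sketch of the evaluated setup follows. It is not the verbatim demo code; the connection string and table name are assumptions.

```python
import sqlalchemy as sa
from llama_index.core import Settings, SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine
from llama_index.llms.ollama import Ollama

# Local Ollama backend with the small gemma3:1b model.
Settings.llm = Ollama(model="gemma3:1b", request_timeout=120.0)

# Hypothetical CrateDB connection and table name; adjust to your environment.
engine = sa.create_engine("crate://crate@localhost:4200")
sql_database = SQLDatabase(engine, include_tables=["time_series_data"])

# No explicit `tables=` argument: the engine falls back to the tables
# scoped by `include_tables` above, which suits generic use cases.
query_engine = NLSQLTableQueryEngine(sql_database=sql_database)
print(query_engine.query("What is the average value per sensor?"))
```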


coderabbitai bot commented Jul 28, 2025

Warning

Rate limit exceeded

@amotl has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 12 minutes and 44 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 9dba64a and e1f5a73.

📒 Files selected for processing (1)
  • topic/machine-learning/llama-index/boot.py (1 hunks)

"""

Walkthrough

The changes refactor the LLM configuration to use LLM_BACKEND and LLM_MODEL environment variables, add support for the "ollama" backend, and update the environment files accordingly. A dependency for Ollama is introduced, and minor docstring edits are made in test functions. The tables parameter in the SQL query engine instantiation is commented out.

Changes

  • LLM Configuration Refactor & Ollama Support (topic/machine-learning/llama-index/boot.py): Refactored LLM setup to use LLM_BACKEND and LLM_MODEL env vars; added Ollama backend support; removed direct OpenAI API assignments; updated error messaging; embedding configuration now uses env vars. A sketch of this dispatch pattern follows after this list.
  • SQL Query Engine Parameter Change (topic/machine-learning/llama-index/demo_nlsql.py): Commented out the tables parameter in the NLSQLTableQueryEngine instantiation; the engine is now constructed without an explicit table list.
  • Azure Environment Variable Additions (topic/machine-learning/llama-index/env.azure): Added LLM_BACKEND=azure and LLM_MODEL=gpt-4.1 environment variables to the Azure config.
  • Standalone Environment Variable Additions & Examples (topic/machine-learning/llama-index/env.standalone): Added LLM_BACKEND=openai and LLM_MODEL=gpt-4.1 env vars; included commented-out Ollama backend/model examples; reorganized the OpenAI API key line.
  • Ollama Dependency Addition (topic/machine-learning/llama-index/requirements.txt): Added llama-index-llms-ollama<0.7 to requirements.
  • Test Docstring Edits (topic/machine-learning/llama-index/test.py): Updated docstrings in test functions to add "the" before "outcome"; no logic changes.
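
As a rough illustration of the boot.py refactor described above, here is a minimal sketch of the env-var dispatch. The function name and parameter choices are assumptions based on the review excerpts further below; the actual implementation may differ.

```python
import os

from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.llms.ollama import Ollama
from llama_index.llms.openai import OpenAI


def configure_llm():
    """Select the LLM backend and model from environment variables."""
    llm_backend = os.getenv("LLM_BACKEND")
    llm_model = os.getenv("LLM_MODEL")
    if not llm_backend or not llm_model:
        raise ValueError("LLM_BACKEND and LLM_MODEL environment variables are required")

    if llm_backend == "openai":
        return OpenAI(model=llm_model, temperature=0.0)
    elif llm_backend == "azure":
        return AzureOpenAI(
            model=llm_model,
            engine=llm_model,  # assumption: deployment name matches the model name
            azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
        )
    elif llm_backend == "ollama":
        return Ollama(
            model=llm_model,
            temperature=0.0,
            request_timeout=120.0,
            keep_alive=-1,
        )
    else:
        raise ValueError(f"LLM backend undefined or invalid: {llm_backend}")
```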

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • kneth

Poem

A rabbit hopped through fields of code,
Where LLMs and backends now freely abode.
With Ollama and Azure, the models align,
Environment variables set—oh, how divine!
Dependencies added, tests polished anew,
This patch is a meadow where features grew.
🐇✨
"""




@coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (2)
topic/machine-learning/llama-index/test.py (1)

36-36: Action Required: Add fallback and default for llm_backend in configure_llm

The failures (ValueError: Open AI API type not defined or invalid: None) occur because LLM_BACKEND isn’t set in your env.standalone, so llm_backend is None and falls through to the else branch. You can fix this by picking up the existing OPENAI_API_TYPE env var (if present) and defaulting to "openai":

– In topic/machine-learning/llama-index/boot.py, replace the current assignment

llm_backend = os.getenv("LLM_BACKEND")

with:

llm_backend = (
    os.getenv("LLM_BACKEND")
    or os.getenv("OPENAI_API_TYPE")
    or "openai"
)

– Update the error message to reference llm_backend rather than openai.api_type:

-    else:
-        raise ValueError(f"Open AI API type not defined or invalid: {openai.api_type}")
+    else:
+        raise ValueError(f"LLM backend undefined or invalid: {llm_backend}")

This ensures:

• Tests continue to work when only OPENAI_API_KEY (and optionally OPENAI_API_TYPE) are defined in env.standalone.
• New code paths (LLM_BACKEND) are still supported.
• A sensible default ("openai") prevents unintended None values.

topic/machine-learning/llama-index/boot.py (1)

54-64: Missing embedding support for Ollama backend.

The embedding model configuration only handles "openai" and "azure" backends, but doesn't provide embedding support for the new "ollama" backend. This could cause issues when the Ollama backend is used.

Consider adding Ollama embedding support or handling the case explicitly:

     if llm_backend == "openai":
         embed_model = LangchainEmbedding(OpenAIEmbeddings(model=llm_model))
     elif llm_backend == "azure":
         embed_model = LangchainEmbedding(
             AzureOpenAIEmbeddings(
                 azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
                 model=llm_model,
             )
         )
+    elif llm_backend == "ollama":
+        # Ollama doesn't have built-in embeddings, could use a fallback or raise an error
+        embed_model = None  # Or implement Ollama-specific embedding solution
     else:
         embed_model = None
♻️ Duplicate comments (1)
topic/machine-learning/llama-index/env.standalone (1)

11-12: Verify the model name "gpt-4.1" is valid.

Same issue as in env.azure - please verify that "gpt-4.1" is a valid OpenAI model name. OpenAI typically uses names like "gpt-4" or "gpt-4-turbo".

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e47a8a and 8cf0bfd.

📒 Files selected for processing (6)
  • topic/machine-learning/llama-index/boot.py (1 hunks)
  • topic/machine-learning/llama-index/demo_nlsql.py (1 hunks)
  • topic/machine-learning/llama-index/env.azure (1 hunks)
  • topic/machine-learning/llama-index/env.standalone (1 hunks)
  • topic/machine-learning/llama-index/requirements.txt (1 hunks)
  • topic/machine-learning/llama-index/test.py (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
topic/machine-learning/llama-index/env.standalone (1)

Learnt from: amotl
PR: #1038
File: application/open-webui/compose.yml:44-48
Timestamp: 2025-07-23T22:00:51.593Z
Learning: In the cratedb-examples repository, hard-coded credentials like "crate:crate" in Docker Compose files are acceptable for demonstration purposes to maintain simplicity and avoid unnecessary layers of indirection, even when flagged by security tools like Checkov.

🪛 GitHub Actions: LlamaIndex
topic/machine-learning/llama-index/test.py

[error] 36-36: Test 'test_nlsql' failed with ValueError: Open AI API type not defined or invalid: None


[error] 54-54: Test 'test_mcp' failed with ValueError: Open AI API type not defined or invalid: None

topic/machine-learning/llama-index/demo_nlsql.py

[error] 23-23: ValueError: Open AI API type not defined or invalid: None. This error occurs in the configure_llm() function when the OpenAI API type is not set or invalid.

🪛 Gitleaks (8.27.2)
topic/machine-learning/llama-index/env.standalone

13-13: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🔇 Additional comments (10)
topic/machine-learning/llama-index/requirements.txt (1)

6-6: LGTM! Dependency addition supports new Ollama backend.

The addition of llama-index-llms-ollama<0.7 correctly introduces the required dependency for Ollama LLM backend support, with a version constraint consistent with other llama-index packages in the project.

topic/machine-learning/llama-index/env.azure (1)

1-3: LLM Model Name Validated

The additions in topic/machine-learning/llama-index/env.azure (lines 1–3) correctly introduce LLM_BACKEND=azure and LLM_MODEL=gpt-4.1. The model name “gpt-4.1” is indeed an official OpenAI GPT-4 series variant. No further changes needed.

topic/machine-learning/llama-index/test.py (1)

27-27: LGTM! Minor docstring improvement.

The addition of "the" before "outcome" improves the grammatical clarity of the docstrings.

Also applies to: 45-45

topic/machine-learning/llama-index/demo_nlsql.py (1)

34-34: Removal of explicit tables parameter is safe

  • Confirmed that NLSQLTableQueryEngine defaults to using the tables already scoped by SQLDatabase(include_tables=…), so commenting out the tables argument has no adverse effect.
  • The CI is still failing on the existing configure_llm() OpenAI API type error; please address that in a follow-up.
topic/machine-learning/llama-index/env.standalone (2)

1-9: LGTM! Good examples of Ollama model configurations.

The commented-out Ollama backend configuration provides helpful examples of various model options (phi4-mini, deepseek-r1, llama3.2, qwen3, gemma3) that users can easily uncomment and use for local LLM deployment.
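
Based on the model names listed above and the env vars described in the change summary, the relevant part of env.standalone presumably resembles this sketch (exact model tags and ordering are assumptions):

```
# Ollama backend examples (uncomment to use a local model).
# LLM_BACKEND=ollama
# LLM_MODEL=phi4-mini
# LLM_MODEL=deepseek-r1
# LLM_MODEL=llama3.2
# LLM_MODEL=qwen3
# LLM_MODEL=gemma3:1b

# Default: OpenAI backend.
LLM_BACKEND=openai
LLM_MODEL=gpt-4.1
```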


13-13: Static analysis false positive - API key is clearly marked as invalid.

The flagged API key contains "-invalid-" in the string and is commented out, making it clearly a placeholder for demonstration purposes. This aligns with the project's approach of using demo credentials for examples.

topic/machine-learning/llama-index/boot.py (4)

11-11: LGTM! Ollama import added correctly.

The import for Ollama is properly added to support the new backend functionality.


28-33: LGTM! OpenAI configuration updated correctly.

The refactoring to use environment variables for the OpenAI backend is implemented correctly, maintaining the same functionality while improving configurability.


34-42: LGTM! Azure configuration updated correctly.

The Azure OpenAI configuration properly uses the environment variable for the model while maintaining all necessary Azure-specific parameters.


43-50: Ollama configuration looks good with appropriate timeout settings.

The Ollama backend configuration includes sensible defaults (collected in the sketch after this list):

  • temperature=0.0 for consistent outputs
  • request_timeout=120.0 for reasonable timeout handling
  • keep_alive=-1 to keep the model loaded
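
Collected as a sketch, under the assumption that the model name arrives via LLM_MODEL:

```python
import os

from llama_index.llms.ollama import Ollama

llm_model = os.getenv("LLM_MODEL", "gemma3:1b")
llm = Ollama(
    model=llm_model,
    temperature=0.0,        # deterministic outputs for Text-to-SQL
    request_timeout=120.0,  # generous timeout for slow CPUs
    keep_alive=-1,          # keep the model loaded between requests
)
```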

@amotl force-pushed the llamaindex-ollama branch from 8cf0bfd to eb20750 on July 28, 2025 21:21
@amotl force-pushed the llamaindex-ollama branch from eb20750 to 9dba64a on July 28, 2025 21:23
@amotl requested review from WalBeh and surister on July 28, 2025 21:24

@coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
topic/machine-learning/llama-index/boot.py (2)

21-27: LGTM! Environment variable validation implemented.

The validation logic properly addresses the past review comment concerns by checking for required environment variables and providing clear error messages.
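
A minimal sketch of such a validation pattern, assuming variable names; the actual boot.py code may differ:

```python
import os

llm_backend = os.getenv("LLM_BACKEND")
llm_model = os.getenv("LLM_MODEL")
if llm_backend is None:
    raise ValueError("LLM_BACKEND environment variable is not set")
if llm_model is None:
    raise ValueError("LLM_MODEL environment variable is not set")
```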


57-57: LGTM! Error message correctly references the variable.

The error message fix properly addresses the past review comment by referencing llm_backend instead of the old openai.api_type.

🧹 Nitpick comments (1)
topic/machine-learning/llama-index/boot.py (1)

59-69: Consider adding embedding model support for Ollama backend.

The current implementation returns None for embedding models when using the Ollama backend, which may cause issues in downstream code that expects an embedding model.

Consider adding Ollama embedding support or documenting this limitation:

+    elif llm_backend == "ollama":
+        # Note: Ollama embeddings could be added here if needed
+        # For now, returning None - ensure downstream code handles this
+        embed_model = None
     else:
         embed_model = None

Alternatively, you could use a local embedding model that works well with Ollama:

+    elif llm_backend == "ollama":
+        from llama_index.embeddings.huggingface import HuggingFaceEmbedding
+        embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eb20750 and 9dba64a.

📒 Files selected for processing (6)
  • topic/machine-learning/llama-index/boot.py (1 hunks)
  • topic/machine-learning/llama-index/demo_nlsql.py (1 hunks)
  • topic/machine-learning/llama-index/env.azure (1 hunks)
  • topic/machine-learning/llama-index/env.standalone (1 hunks)
  • topic/machine-learning/llama-index/requirements.txt (1 hunks)
  • topic/machine-learning/llama-index/test.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • topic/machine-learning/llama-index/demo_nlsql.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • topic/machine-learning/llama-index/requirements.txt
  • topic/machine-learning/llama-index/test.py
  • topic/machine-learning/llama-index/env.azure

🔇 Additional comments (5)
topic/machine-learning/llama-index/env.standalone (3)

1-9: LGTM! Well-structured Ollama backend configuration.

The commented configuration examples provide clear options for various Ollama models, including the target gemma3:1b model mentioned in the PR objectives. The structure makes it easy to switch between different models for evaluation.


13-13: Static analysis false positive - API key is safe.

The flagged API key is commented out and marked as "invalid", making it a placeholder for demonstration purposes. Based on retrieved learnings, demo credentials are acceptable in this repository.


10-13: Model Name Validation – No Action Required

Upon verification, gpt-4.1 is an officially supported OpenAI model (introduced April 2025). No change to the LLM_MODEL setting is needed.

Likely an incorrect or invalid review comment.

topic/machine-learning/llama-index/boot.py (2)

11-11: LGTM! Proper import for Ollama backend.

The import is correctly added to support the new Ollama LLM backend functionality.


48-55: LGTM! Well-configured Ollama backend implementation.

The Ollama LLM configuration follows best practices with:

  • Appropriate temperature setting (0.0 for deterministic results)
  • Reasonable timeout (120 seconds)
  • Keep-alive setting (-1 for persistent connection)

This aligns with the PR objective of evaluating Ollama with smaller models.

@amotl marked this pull request as ready for review on July 28, 2025 21:31
@amotl merged commit f58fc1f into main on Jul 29, 2025
3 checks passed
@amotl deleted the llamaindex-ollama branch on July 29, 2025 15:55
