Conversation

@amotl amotl commented Jul 29, 2025

About

It looks like gemma3:1b can handle Text-to-SQL reasonably well, at least for basic inquiries, when given good instructions; in this case the prompts come from LlamaIndex, which in turn derived them from LangChain.

coderabbitai bot commented Jul 29, 2025

Walkthrough

This set of changes updates documentation, configuration, and code for the LlamaIndex-based NLSQLTableQueryEngine demo, focusing on improved environment variable handling for LLM backends (OpenAI GPT and Ollama). It also introduces new prompt instruction files, updates requirements, adds a test block for the Gemma model, and removes explicit table filtering in SQLDatabase instantiation.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **LLM Backend Environment Configuration**<br>`topic/machine-learning/llama-index/README.md`, `topic/machine-learning/llama-index/env.standalone` | Updated documentation and environment files to clarify and expand instructions for setting environment variables for the OpenAI GPT and Ollama backends, including examples for `OLLAMA_BASE_URL` and new model options. |
| **Ollama Backend Base URL Support**<br>`topic/machine-learning/llama-index/boot.py` | Modified Ollama backend initialization to accept a `base_url` parameter from the `OLLAMA_BASE_URL` environment variable, defaulting to an empty string if unset. |
| **SQLDatabase Table Filtering**<br>`topic/machine-learning/llama-index/demo_nlsql.py` | Removed explicit table name filtering from `SQLDatabase` instantiation, allowing all tables to be considered by default. |
| **Prompt Instruction Files**<br>`topic/machine-learning/llm/sql-request.txt`, `topic/machine-learning/llm/sql-response.txt` | Added new instruction files: one for generating CrateDB SQL queries from questions, and one for synthesizing responses from query results. |
| **Requirements Update**<br>`topic/machine-learning/llm/requirements.txt` | Added `llm-ollama<0.14` as a new dependency. |
| **LLM Prompt Testing**<br>`topic/machine-learning/llm/test.xsh` | Cleaned up formatting and added a new, disabled test block for the Google Gemma (`gemma3:1b`) model, demonstrating a Text-to-SQL prompt and assertion. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DemoScript
    participant EnvVars
    participant LLM as LLMBackend (OpenAI/Ollama)
    participant SQLDatabase

    User->>DemoScript: Run NLSQLTableQueryEngine demo
    DemoScript->>EnvVars: Read LLM backend config (API keys, model, base_url)
    DemoScript->>LLM: Initialize backend with config
    DemoScript->>SQLDatabase: Instantiate (now with all tables)
    User->>DemoScript: Input natural language question
    DemoScript->>LLM: Send prompt/instruction
    LLM->>DemoScript: Return SQL query
    DemoScript->>SQLDatabase: Execute SQL query
    SQLDatabase->>DemoScript: Return results
    DemoScript->>LLM: Synthesize response (optional)
    DemoScript->>User: Present answer
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Suggested reviewers

  • surister
  • WalBeh

Poem

In the warren of code, a rabbit hops,
Tweaking configs and backend stops.
New prompts appear, instructions neat,
All tables now, no filter cheat.
LLMs ready, Ollama too,
With Gemma waiting for its debut!
🐇✨



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c9a778 and fa3b9e3.

📒 Files selected for processing (4)
  • topic/machine-learning/llm/requirements.txt (1 hunks)
  • topic/machine-learning/llm/sql-request.txt (1 hunks)
  • topic/machine-learning/llm/sql-response.txt (1 hunks)
  • topic/machine-learning/llm/test.xsh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • topic/machine-learning/llm/sql-response.txt
  • topic/machine-learning/llm/requirements.txt
  • topic/machine-learning/llm/test.xsh
  • topic/machine-learning/llm/sql-request.txt


@amotl amotl requested review from WalBeh and surister July 29, 2025 23:47

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
topic/machine-learning/llama-index/demo_nlsql.py (1)

31-34: LGTM! Increased flexibility for multi-table scenarios.

Removing the table filtering allows the demo to work with databases containing multiple tables, which is more realistic and flexible. The preserved commented code provides a clear reference for how to restrict to specific tables when needed.

Consider documenting this trade-off: while removing table filtering increases flexibility, it may impact performance with large databases and could make LLM responses less focused. You might want to add a comment explaining when to use table filtering:

```diff
 sql_database = SQLDatabase(
     engine_crate,
+    # Uncomment to restrict to specific tables for better performance/focus:
     #include_tables=[os.getenv("CRATEDB_TABLE_NAME")],
 )
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f58fc1f and 7c9a778.

📒 Files selected for processing (8)
  • topic/machine-learning/llama-index/README.md (1 hunks)
  • topic/machine-learning/llama-index/boot.py (1 hunks)
  • topic/machine-learning/llama-index/demo_nlsql.py (1 hunks)
  • topic/machine-learning/llama-index/env.standalone (1 hunks)
  • topic/machine-learning/llm/requirements.txt (1 hunks)
  • topic/machine-learning/llm/sql-request.txt (1 hunks)
  • topic/machine-learning/llm/sql-response.txt (1 hunks)
  • topic/machine-learning/llm/test.xsh (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: llm
topic/machine-learning/llm/test.xsh

[error] 40-40: SyntaxError: Invalid syntax at line 40 with assignment '$OLLAMA_HOST=http://100.127.86.113:11434/'.
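In xonsh, the right-hand side of an environment-variable assignment is evaluated as a Python expression, so a bare URL is a syntax error. A likely fix for the failing line is simply to quote the value (illustrative, not taken from the merged change):

```xonsh
# xonsh evaluates the RHS as Python, so the URL must be a string:
$OLLAMA_HOST = "http://100.127.86.113:11434/"
```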

🔇 Additional comments (8)
topic/machine-learning/llm/requirements.txt (1)

3-3: Confirm dependency addition and monitoring recommendation

The llm-ollama<0.14 constraint is consistent with existing patterns and safely allows patch/minor updates. According to PyPI, the latest stable release is 0.12.0, and no known vulnerabilities have been reported to date. However, since llm-ollama currently lacks a published security policy, please:

  • Continue to monitor the package for new releases and advisories.
  • Consider tightening the upper bound (e.g., llm-ollama<0.13) if you prefer locking to the latest known stable release.

No changes are strictly required at this time.

topic/machine-learning/llm/sql-response.txt (1)

1-3: LGTM! Clean and focused prompt instruction.

This prompt instruction file provides clear guidance for LLMs to synthesize responses from query results. The placeholder "Query:" line appropriately indicates where query results will be inserted during runtime.

topic/machine-learning/llama-index/boot.py (1)

51-51: LGTM! Environment variable integration follows established pattern.

The addition of the base_url parameter using os.getenv("OLLAMA_BASE_URL", "") is correctly implemented and consistent with other environment variable usage in the file. The empty string default is appropriate as the Ollama client will use its default URL when not specified.

topic/machine-learning/llama-index/env.standalone (2)

3-4: LGTM! Good configuration examples for different deployment scenarios.

The two OLLAMA_BASE_URL options provide helpful examples for both cloud (RunPod proxy) and local deployment scenarios.


11-12: LGTM! Relevant model options for Text-to-SQL tasks.

The addition of sqlcoder:7b and duckdb-nsql:7b models is excellent as these are specialized for SQL generation tasks, which aligns perfectly with the NLSQL demo's purpose.
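A hedged sketch of how such an env file might look; the `OLLAMA_BASE_URL` values echo the two deployment scenarios above, while the model variable name is illustrative rather than copied from env.standalone:

```ini
# Cloud deployment via RunPod proxy, or local Ollama:
#OLLAMA_BASE_URL=https://<pod-id>-11434.proxy.runpod.net
OLLAMA_BASE_URL=http://localhost:11434/

# SQL-specialized model options (variable name illustrative):
#MODEL=sqlcoder:7b
#MODEL=duckdb-nsql:7b
```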

topic/machine-learning/llama-index/README.md (1)

99-113: Excellent documentation improvements for LLM backend configuration.

The added configuration examples clearly demonstrate how to set up both OpenAI GPT and Ollama backends, with practical examples for different deployment scenarios (runpod, local). This directly supports the PR objective of evaluating the gemma3:1b model and provides users with clear setup instructions.

topic/machine-learning/llm/test.xsh (1)

35-48: Well-structured test for Gemma model Text-to-SQL capability.

The test follows the established pattern and includes appropriate assertions for validating SQL query generation. The use of the sql-request.txt fragment and system prompt with table schema provides proper context for the model.
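A hedged sketch of what such an invocation might look like with the `llm` CLI, which supports fragments via `-f` and a system prompt via `-s`; the schema text and question here are illustrative, not copied from test.xsh:

```shell
llm -m gemma3:1b \
    -f sql-request.txt \
    -s "Table schema: CREATE TABLE doc.testdrive (time TIMESTAMP, value DOUBLE)" \
    "How many records are in the table?"
```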

topic/machine-learning/llm/sql-request.txt (1)

1-15: Comprehensive SQL generation instructions.

The instruction file provides clear guidance for CrateDB SQL query generation, addressing key best practices like column validation, table qualification, and proper output formatting. This will help ensure consistent and accurate SQL generation across different LLM models.

Gemma3 works well for basic Text-to-SQL at least.
@amotl amotl marked this pull request as ready for review July 29, 2025 23:52
@amotl amotl requested a review from hammerhead July 30, 2025 08:37
@amotl amotl requested a review from surister July 30, 2025 11:21
Copy link
Member

@surister surister left a comment


lgtm

@amotl amotl merged commit 26b8341 into main Jul 30, 2025
4 checks passed
@amotl amotl deleted the llm-gemma3 branch July 30, 2025 13:04