
fix(memory): complete FalkorDB to LadybugDB migration in memory system#1834

Open
AndyMik90 wants to merge 5 commits into develop from fix/memory-falkordb-to-ladybug

Conversation


@AndyMik90 AndyMik90 commented Feb 14, 2026

Summary

  • Fix a critical bug where get_graphiti_status() still imported the FalkorDB driver, causing the memory system to always fall back to file-based storage
  • Replace the FalkorDB import check with a LadybugDB (real_ladybug)/kuzu fallback pattern matching the existing runtime integration
  • Update all test fixtures and mocks from FalkorDB to kuzu driver references
  • Clean up stale FalkorDB documentation references across test files and runners

Test plan

  • Verify get_graphiti_status() returns available: True when LadybugDB/kuzu is installed
  • Verify memory system initializes with Graphiti instead of falling back to file-based storage
  • Run graphiti test suite: pytest apps/backend/integrations/graphiti/tests/ -v (670 tests passing)
  • Verify agent tasks can read/write to the memory system during execution
  • Confirm no FalkorDB import errors appear in logs

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes

    • Improved memory integration detection with a stepped fallback check and clearer availability/reason reporting to avoid false-positive availability.
  • Documentation

    • CLI help/examples updated to reflect current backend naming.
  • Tests

    • Test fixtures and cases updated to align with backend naming and better simulate provider error and configuration scenarios.
  • Chores

    • Cleaned up infrastructure metadata in roadmap configuration.

## Problem
The memory system was still checking for FalkorDB imports in `config.py`,
causing it to always report as unavailable and fall back to file-based
storage, despite LadybugDB being the configured and installed database.

Error in logs:
```
Graphiti packages not installed: falkordb is required for FalkorDri...
```

## Root Cause
In `get_graphiti_status()` at line 638, the code tried to import:
```python
from graphiti_core.driver.falkordb_driver import FalkorDriver
```
This import failed because FalkorDB was removed when migrating to LadybugDB.

## Changes Made

### Priority 1 — Critical Bug Fix
**File**: `apps/backend/integrations/graphiti/config.py` (lines 634-646)
- Replaced FalkorDB import check with LadybugDB/kuzu import check
- Now tries `real_ladybug` first (Python 3.12+), falls back to `kuzu`
- Removed unreachable pragma: no cover comment (line now executes)

### Priority 2 — Test Infrastructure Updates
1. **`conftest.py`** (lines 84-103)
   - Renamed fixture: `mock_falkor_driver` → `mock_kuzu_driver`
   - Updated docstring and patch path to reference KuzuDriver

2. **`test_config.py`** (lines 1056-1070, 1092-1094)
   - Updated test to reflect new behavior: `available=True` when packages
     installed, even with embedder validation errors (embedder is optional)
   - Updated comment from "falkordb" to "LadybugDB/kuzu"

3. **`test_memory.py`** (lines 267, 278)
   - Updated variable name: `mock_falkordb_driver` → `mock_kuzu_driver`
   - Updated sys.modules patch path to use kuzu_driver instead of falkordb_driver

### Priority 3 — Documentation Updates
4. **`test_memory_facade.py`** (line 163)
   - Updated comment: "remote FalkorDB" → "remote database"

5. **`spec_runner.py`** (line 139)
   - Updated example: "FalkorDB" → "LadybugDB"

## Testing
All 670 graphiti tests pass:
```
apps/backend/.venv/bin/pytest apps/backend/integrations/graphiti/tests/ -v
========== 670 passed, 6 skipped, 112 deselected, 4 warnings in 2.10s ==========
```

## Impact
- Memory system now correctly detects LadybugDB as available
- No more false negatives causing fallback to file-based storage
- All existing functionality preserved
- No breaking changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions github-actions bot added area/backend This is backend only bug Something isn't working size/S Small (10-99 lines) labels Feb 14, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @AndyMik90, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue preventing the Graphiti memory system from properly utilizing LadybugDB or Kuzu by fixing an erroneous FalkorDB import. The changes ensure that the system correctly detects and initializes with the preferred graph database drivers, enhancing the reliability and functionality of the memory integration. It also updates the testing infrastructure and documentation to reflect this migration.

Highlights

  • Critical Bug Fix: Fixed a critical bug in get_graphiti_status() that incorrectly imported the FalkorDB driver, preventing the Graphiti memory system from initializing correctly and causing a fallback to file-based storage.
  • Database Driver Migration: Implemented a robust import mechanism for the Graphiti memory system, prioritizing LadybugDB and falling back to Kuzu if LadybugDB is not available, ensuring proper detection and initialization of the preferred graph database drivers.
  • Test Suite Updates: Migrated all test fixtures and mocks from referencing FalkorDB to the new Kuzu driver, ensuring test suite compatibility with the updated memory system.
  • Documentation Cleanup: Removed outdated FalkorDB documentation references and comments across various test files and runners for improved code clarity and accuracy.
Changelog
  • apps/backend/integrations/graphiti/config.py
    • Removed direct import of FalkorDriver.
    • Added a try-except block to import real_ladybug first, then kuzu as a fallback, to determine status["available"].
  • apps/backend/integrations/graphiti/tests/conftest.py
    • Renamed mock_falkor_driver fixture to mock_kuzu_driver.
    • Updated the patch target from falkordb_driver.FalkorDriver to kuzu_driver.KuzuDriver.
    • Updated the docstring to reflect the change to LadybugDB/kuzu.
  • apps/backend/integrations/graphiti/tests/test_config.py
    • Updated the docstring for test_get_graphiti_status_invalid_config_sets_reason to clarify that available should be True even with embedder config errors.
    • Modified assertion to check status["available"] is True instead of False in the test_get_graphiti_status_invalid_config_sets_reason test.
    • Updated a comment to reflect the new LadybugDB/kuzu dependency for status["available"] = True.
  • apps/backend/integrations/graphiti/tests/test_memory.py
    • Changed mock_falkordb_driver variable name to mock_kuzu_driver.
    • Updated the patched module path from falkordb_driver to kuzu_driver.
  • apps/backend/integrations/graphiti/tests/test_memory_facade.py
    • Updated a comment to generalize "remote FalkorDB" to "remote database".
  • apps/backend/runners/spec_runner.py
    • Updated an example task description from "Add Graphiti memory integration with FalkorDB" to "Add Graphiti memory integration with LadybugDB".
Activity
  • The pull request was generated using Claude Code.

@coderabbitai
Contributor

coderabbitai bot commented Feb 14, 2026

📝 Walkthrough

Switches Graphiti backend detection to import graphiti_core then try real_ladybug (preferred on Python 3.12+) and fall back to kuzu; availability is true only if a supported backend imports. Tests, fixtures, and help text updated to reference Kuzu/Ladybug; project metadata infrastructure block emptied.

Changes

  • Driver Import Strategy — `apps/backend/integrations/graphiti/config.py`: Replaces the unconditional FalkorDriver import with nested imports: require graphiti_core, try real_ladybug, then kuzu; sets available only when a backend import succeeds and provides an explicit failure reason otherwise.
  • Test Fixtures & Mocks — `apps/backend/integrations/graphiti/tests/conftest.py`, `apps/backend/integrations/graphiti/tests/test_memory.py`: Renames the fixture mock_falkor_driver → mock_kuzu_driver; updates patched targets and sys.modules mocks to reference kuzu_driver paths and KuzuDriver.
  • Test Assertions & Behavior — `apps/backend/integrations/graphiti/tests/test_config.py`: Adds a MagicMock import; adjusts tests to mock graphiti_core/real_ladybug in sys.modules; adds a test for no graph backend present (expects available=False with a reason); ensures validation errors include VOYAGE_API_KEY.
  • Docs / Help Text — `apps/backend/integrations/graphiti/tests/test_memory_facade.py`, `apps/backend/runners/spec_runner.py`: Updates descriptive text and CLI help examples to reference LadybugDB/Kuzu and generic remote-DB wording instead of FalkorDB.
  • Project Metadata — `apps/backend/runners/roadmap/project_index.json`: Replaces a populated infrastructure object (docker_compose/docker_services) with an empty infrastructure object.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

test, size/M

Poem

🐰
I hopped through imports, quick and sly,
Ladybug first, then Kuzu nearby.
If neither answers, availability sighs,
Tests tuck mocks where the error lies.
A nibble of joy — the rabbit hops high.

🚥 Pre-merge checks | ✅ 3 | ❌ 1
❌ Failed checks (1 warning)
  • Merge Conflict Detection — ⚠️ Warning: merge conflicts detected (8 files):

    ⚔️ README.md (content)
    ⚔️ apps/backend/integrations/graphiti/config.py (content)
    ⚔️ apps/backend/integrations/graphiti/tests/conftest.py (content)
    ⚔️ apps/backend/integrations/graphiti/tests/test_config.py (content)
    ⚔️ apps/backend/integrations/graphiti/tests/test_memory.py (content)
    ⚔️ apps/backend/integrations/graphiti/tests/test_memory_facade.py (content)
    ⚔️ apps/backend/runners/roadmap/project_index.json (content)
    ⚔️ apps/backend/runners/spec_runner.py (content)

    These conflicts must be resolved before merging into develop. Resolution: resolve the conflicts locally and push to this branch.

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the pull request title accurately and specifically describes the main change: completing a migration from FalkorDB to LadybugDB in the memory system, with clear context about what was fixed.
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, which meets the required threshold of 80.00%.


No actionable comments were generated in the recent review. 🎉



@AndyMik90 AndyMik90 self-assigned this Feb 14, 2026
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully completes the migration from FalkorDB to LadybugDB/kuzu for the memory system, fixing a critical bug that caused the system to fall back to file-based storage. The changes are well-executed, updating the core logic, tests, mocks, and documentation consistently. I have one suggestion to improve the robustness of a test case by mocking external dependencies, which will make it less fragile and independent of the test environment's setup.

Comment on lines 1056 to 1073
```diff
 def test_get_graphiti_status_invalid_config_sets_reason(self, clean_env):
-    """Test get_graphiti_status sets reason when config is invalid.
+    """Test get_graphiti_status with validation errors (embedder misconfigured).

-    This tests lines 628-629 where the reason is set from validation errors.
+    When packages are installed but embedder config has errors, available should
+    still be True (embedder is optional - keyword search fallback exists).
+    Validation errors are reported in the errors list for informational purposes.
     """
     os.environ["GRAPHITI_ENABLED"] = "true"
     os.environ["GRAPHITI_EMBEDDER_PROVIDER"] = "voyage"

     status = get_graphiti_status()

     assert status["enabled"] is True
-    assert status["available"] is False
-    # When config is invalid, reason should be set from errors
-    assert status["reason"] != ""
+    # With LadybugDB/kuzu installed, available should be True
+    assert status["available"] is True
+    # Validation errors are informational (embedder is optional)
+    assert len(status["errors"]) > 0
+    assert "VOYAGE_API_KEY" in status["errors"][0]
```

Severity: medium

This test is fragile as it depends on graphiti_core and real_ladybug/kuzu being installed in the test environment. If they are not, the test will fail because get_graphiti_status will correctly report available: False.

To make the test more robust and isolate the logic under test (that available is True when packages are present, even with other configuration errors), you should mock the necessary imports.

Please also ensure MagicMock is imported from unittest.mock at the top of the file.

```python
def test_get_graphiti_status_invalid_config_sets_reason(self, clean_env):
    """Test get_graphiti_status with validation errors (embedder misconfigured).

    When packages are installed but embedder config has errors, available should
    still be True (embedder is optional - keyword search fallback exists).
    Validation errors are reported in the errors list for informational purposes.
    """
    os.environ["GRAPHITI_ENABLED"] = "true"
    os.environ["GRAPHITI_EMBEDDER_PROVIDER"] = "voyage"

    # Mock imports to ensure test is independent of environment
    with patch.dict(
        "sys.modules",
        {"graphiti_core": MagicMock(), "real_ladybug": MagicMock()},
    ):
        status = get_graphiti_status()

    assert status["enabled"] is True
    # With LadybugDB/kuzu installed, available should be True
    assert status["available"] is True
    # Validation errors are informational (embedder is optional)
    assert len(status["errors"]) > 0
    assert "VOYAGE_API_KEY" in status["errors"][0]
```

- Remove FalkorDB docker service reference from project_index.json (docker-compose.yml no longer exists)
- Correct line number reference in test_config.py comment (line 644 not 641)

Code review findings - no functional changes, just metadata cleanup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@apps/backend/integrations/graphiti/config.py`:
- Around line 639-647: The current nested import logic can produce a misleading
message when graphiti_core imports but both backends fail; wrap the inner
backend import (the try that attempts to import real_ladybug and falls back to
kuzu) in its own except ImportError handler so you can set status["available"] =
False and a clear status["reason"] that names the missing graph DB backend
(referencing the inner ImportError variable) instead of the outer "Graphiti
packages not installed" message; keep the outer try around the graphiti_core
import (if present) and only set status["available"] = True after a successful
backend import, using the unique symbols real_ladybug, kuzu,
status["available"], and status["reason"] to locate and update the logic.

In `@apps/backend/integrations/graphiti/tests/test_config.py`:
- Around line 1056-1073: The failing assertion in
test_get_graphiti_status_invalid_config_sets_reason checks status["available"]
but availability depends on installed packages; update the test to make it
deterministic by either skipping when the optional runtime deps aren't present
or by mocking imports: add a pytest.importorskip("graphiti_core") /
pytest.importorskip("real_ladybug") (or the actual package names used in
config.py) at the top of the test to skip when packages are missing, or
alternatively mock the import checks used by get_graphiti_status (reference
get_graphiti_status in config.py) so status["available"] is forced True for the
test; ensure the test still asserts enabled True and that errors contain
"VOYAGE_API_KEY" when using your chosen approach.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/backend/runners/roadmap/project_index.json (1)

1-7: ⚠️ Potential issue | 🟡 Minor

Remove hardcoded developer-local path from committed configuration.

This file contains /Users/andremikalsen/Documents/Coding/autonomous-coding on line 2, a machine-specific absolute path that will be incorrect for every other developer and CI environments. Since the ProjectIndexPhase in apps/backend/runners/roadmap/phases.py regenerates this file via analyzer.py when needed, either add apps/backend/runners/roadmap/project_index.json to .gitignore (like .auto-claude/ already is), or remove the hardcoded path and use a placeholder or relative reference.


@AndyMik90 AndyMik90 left a comment


🤖 Auto Claude PR Review

Merge Verdict: 🟠 NEEDS REVISION

🟠 Needs revision - 1 issue(s) require attention.

1 issue(s) must be addressed (0 required, 1 recommended)

Risk Assessment

| Factor | Level | Notes |
|---|---|---|
| Complexity | Low | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |

Findings Summary

  • Medium: 1 issue(s)

Generated by Auto Claude PR Review

Findings (1 selected of 1 total)

🟡 [029d7f5ba930] [MEDIUM] Environment-dependent test assertion may fail on Python < 3.12

📁 apps/backend/integrations/graphiti/tests/test_config.py:1070

The test test_get_graphiti_status_invalid_config_sets_reason (line 1070) now hard-asserts status['available'] is True, but this depends on real_ladybug or kuzu being importable in the test environment. real_ladybug is only a dependency for Python 3.12+ per requirements.txt (line 17: real_ladybug>=0.13.0; python_version >= "3.12"), and kuzu is not in requirements.txt at all.

This is inconsistent with two other tests in the exact same class:

  • test_get_graphiti_status_enabled (line 1038): explicitly says "We can't assert on 'available' since it depends on test environment"
  • test_get_graphiti_status_with_graphiti_installed (line 1095): says "Since LadybugDB/kuzu may not be installed in all test environments"

If CI ever runs on Python < 3.12, or if real_ladybug/kuzu are not installed, this test will fail.

Suggested fix:

Either: (1) Add a `pytest.mark.skipif` guard like `@pytest.mark.skipif(not _has_kuzu_or_ladybug(), reason='requires LadybugDB/kuzu')`, or (2) Avoid asserting on `available` and instead check the structure like the neighboring `test_get_graphiti_status_enabled` test does. For example:
```python
# available depends on whether packages are installed
assert 'available' in status
assert len(status['errors']) > 0
assert 'VOYAGE_API_KEY' in status['errors'][0]
```

---
*This review was generated by Auto Claude.*

- Add nested try-except in config.py for clearer error messages when graph DB backend is missing
- Mock imports in test_config.py to make test environment-independent
- Ensure test passes regardless of whether graphiti_core/real_ladybug/kuzu are installed

Resolves CodeRabbit and Gemini review comments on PR #1834.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/backend/integrations/graphiti/tests/test_config.py`:
- Around line 1056-1078: Add a new unit test that exercises the branch in
get_graphiti_status where graphiti_core is importable but neither graph DB
backend is present: e.g., create test_get_graphiti_status_no_graph_backend that
sets GRAPHITI_ENABLED="true" and uses patch.dict("sys.modules",
{"graphiti_core": MagicMock(), "real_ladybug": None, "kuzu": None}) to simulate
the environment; call get_graphiti_status() and assert status["enabled"] is
True, status["available"] is False, and that status["reason"] or an error string
mentions "real_ladybug or kuzu" to validate the intended message.
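A self-contained sketch of that test follows. Since the real `get_graphiti_status` lives in `config.py` and isn't quoted here, a stand-in mirroring the nested-import logic described in this PR is used; only the `patch.dict` mechanics and assertions reflect CodeRabbit's suggestion:

```python
import sys
from unittest.mock import MagicMock, patch


def get_graphiti_status():
    """Stand-in for the real config.py function, for illustration only."""
    status = {"enabled": True, "available": False, "reason": ""}
    try:
        import graphiti_core  # noqa: F401
        try:
            import real_ladybug  # noqa: F401  (preferred backend)
        except ImportError:
            import kuzu  # noqa: F401  (fallback backend)
        status["available"] = True
    except ImportError:
        status["reason"] = (
            "No graph DB backend installed: real_ladybug or kuzu required"
        )
    return status


def test_get_graphiti_status_no_graph_backend():
    # A None entry in sys.modules makes the corresponding import raise
    # ModuleNotFoundError, simulating the backend being absent even when
    # the package is installed in the test environment.
    with patch.dict(
        sys.modules,
        {"graphiti_core": MagicMock(), "real_ladybug": None, "kuzu": None},
    ):
        status = get_graphiti_status()
    assert status["enabled"] is True
    assert status["available"] is False
    assert "real_ladybug or kuzu" in status["reason"]


test_get_graphiti_status_no_graph_backend()
```

Because both backend entries are forced to None, the test is deterministic regardless of what is installed locally.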

AndyMik90 and others added 2 commits February 15, 2026 20:28
Add test_get_graphiti_status_no_graph_backend to verify error handling when
graphiti_core imports successfully but neither real_ladybug nor kuzu are
available. This addresses CodeRabbit's recommendation to test the error path
in config.py lines 645-650.

Addresses CodeRabbit review comment on PR #1834.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@AndyMik90

🤖 Auto Claude PR Review

🟠 Follow-up Review: Needs Revision

🟠 Needs revision - 1 blocking issue(s) require fixes.

Resolution Status

  • Resolved: 1 previous findings addressed
  • Unresolved: 0 previous findings remain
  • 🆕 New Issues: 0 new findings in recent changes

Finding Validation

  • 🔍 Dismissed as False Positives: 1 findings were re-investigated and found to be incorrect
  • Confirmed Valid: 0 findings verified as genuine issues
  • 👤 Needs Human Review: 0 findings require manual verification

🚨 Blocking Issues

  • Branch Out of Date: PR branch is behind the base branch and needs to be updated

Verdict

Branch is out of date with base branch. Update branch first - if no conflicts arise, you can merge. If merge conflicts arise, resolve them and run follow-up review again.

Review Process

Agents invoked: resolution-verifier, new-code-reviewer, comment-analyzer, finding-validator


This is an AI-generated follow-up review using parallel specialist analysis with finding validation.


This review identified blockers that must be resolved before merge. Generated by Auto Claude.

